From gavin at doughtie.com Sun Jun 1 20:13:41 2003 From: gavin at doughtie.com (Gavin Doughtie) Date: Sun, 01 Jun 2003 11:13:41 -0700 Subject: [C++-sig] Custom exceptions In-Reply-To: <3ED7D02A.7030900@anim.dreamworks.com> References: <3ED7D02A.7030900@anim.dreamworks.com> Message-ID: <3EDA4255.4070401@doughtie.com> Just a note -- I did in fact realize that this is what the "scope" function is for. Now I'm writing a templated exception generator class for my custom exceptions. Once that's working (i.e. once I fix my boost::bind calls to not crash gcc 2.96) I'll post it. Gavin Doughtie wrote: > So, let's say I want to register a custom exception which python code > can catch, thusly: > > try: > doSomething() > except mymodule.MyException e: > print e > > Is there anything in boost python that makes setting this up easy? I've > got a working exception translator and everything, but I don't see > anything equivalent to putting > "PyErr_NewException("mymodule.MyException", NULL, NULL)" into the system > dictionary during module initialization. > > Or am I just working too hard again? > From dave at boost-consulting.com Sun Jun 1 21:13:38 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 01 Jun 2003 15:13:38 -0400 Subject: [C++-sig] Re: Custom exceptions References: <3ED7D02A.7030900@anim.dreamworks.com> <3EDA4255.4070401@doughtie.com> Message-ID: Gavin Doughtie writes: > Gavin Doughtie wrote: >> So, let's say I want to register a custom exception which python >> code can catch, thusly: >> try: >> doSomething() >> except mymodule.MyException e: >> print e >> Is there anything in boost python that makes setting this up easy? >> I've got a working exception translator and everything, but I don't >> see anything equivalent to putting >> "PyErr_NewException("mymodule.MyException", NULL, NULL)" into the >> system dictionary during module initialization. >> Or am I just working too hard again? Nope, thatt didn't make it into any release yet, though I've been meaning to do it for some time. > Just a note -- I did in fact realize that this is what the "scope" > function is for. scope is a class; did you look at http://www.boost.org/libs/python/doc/v2/scope.html? > Now I'm writing a templated exception generator class for my custom > exceptions. Once that's working (i.e. once I fix my boost::bind > calls to not crash gcc 2.96) I'll post it. Wonderful! I'd love to have something like that in the library! -- Dave Abrahams Boost Consulting www.boost-consulting.com From Vincenzo.Innocente at cern.ch Mon Jun 2 08:28:32 2003 From: Vincenzo.Innocente at cern.ch (Vincenzo Innocente) Date: Mon, 2 Jun 2003 08:28:32 +0200 (MEST) Subject: [C++-sig] problems in parsing boost header with pyste In-Reply-To: <20030601160012.24812.60656.Mailman@mail.python.org> Message-ID: Hi, pyste (cvs version downloaded around May 15) fails in parsing this header file #include struct foo{ void f(){} }; using a pyste file containing just bha = Class("bha","foo.hh") (in the real case my class was using iterators_adaptors but the problem seems not to be in parsing the user code but just in parsing the xml corresponding to the boost header...) Is this a bug or missing Include? cheers, Vincenzo From giulio.eulisse at cern.ch Mon Jun 2 09:48:53 2003 From: giulio.eulisse at cern.ch (Giulio Eulisse) Date: 02 Jun 2003 09:48:53 +0200 Subject: [C++-sig] missing boost::ref in pyste generated code? 
In-Reply-To: <3ED8115E.7060605@globalite.com.br> References: <1053960018.16477.166.camel@lxcms25> <3ED8115E.7060605@globalite.com.br> Message-ID: <1054540134.9570.16.camel@lxcms25> > >as it seem to pass Ev by value, instead of passing it by reference. > >We fixed it by adding boost::ref where indicated in the sourcecode as > >otherwise it tries to pass things by value. Is this a bug or am I doing > >something wrong? > > Sorry for missing this post, Giulio! > > Pyste doesn't do it because it's not safe, otherwise Boost.Python would > do that by default. Dave can elaborate better on that. Ok, but how do I do it then? Modifing generated .cc code? In my case that is not an option as generated C++ code is temporary and any modification to it is lost after "make clean" (and I cannot change this behaviour). It would be very nice (not to say "essential", in my case) to have some way to force the use of boost::ref. Maybe it is dangerous and pyste/boost cannot tell whether someone will destroy the referent or not, but having it as an option would still be useful. Let me decide if I want to take the risk, please...;-)And no, I cannot modify the interfaces to pass a shared_ptr. BTW, some time ago I send you a simple snippet of code with which I work around unnamed enum problem, have you had any opportunity to look at it? Ciao, Giulio From camelo at esss.com.br Mon Jun 2 16:21:59 2003 From: camelo at esss.com.br (Marcelo A. Camelo) Date: Mon, 2 Jun 2003 11:21:59 -0300 Subject: [C++-sig] Re: Attempted a typeid of NULL pointer! In-Reply-To: Message-ID: <000001c32912$54299fb0$0d00000a@esss.com.br> > By the way... This is no way to report > such a simple bug (...) It's not much better > than the previous post which gave no > information about how to reproduce the problem. Mea culpa: I didn't think it was a bug. I thought it was more like I was doing something wrong and that someone would quickly identify the error from my (admittedly vague) explanation. I promise to do my homework next time. :-) Sorry and thanks for the prompt fix. []'s Marcelo A. Camelo, M. Eng. - Project Leader ESSS - Engineering Simulation and Scientific Software E-mail: camelo at esss.com.br Phone: +55-48-239-2226 From greglandrum at mindspring.com Mon Jun 2 20:44:37 2003 From: greglandrum at mindspring.com (greg Landrum) Date: Mon, 02 Jun 2003 11:44:37 -0700 Subject: [C++-sig] Problems with functions taking string arguments on Windows Message-ID: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> [Boost 1.30, Win2K, MSVC v6] Building the attached extension module and running the script results in crashes every time GetVal attempts to return. The problem (as demonstrated by GetVal2) appears to be due to the fact that it's taking an std::string as an argument. I haven't tested this exact code fragment, but analogous code works just fine on linux (g++ 3.2). Is this: a) BPL bug b) MSVC v6 bug c) Me bug Additional information: Just to verify that this is not due to my suggested changes in builtin_converters.cpp::string_rvalue_from_python(), I switched that code back to its previous form (without the PyString_Size argument to the std::string constructor); this does not change the behavior I'm observing (i.e. it still crashes ever time). -greg -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: strings_crash.cpp URL: -------------- next part -------------- import rdchem_new print 'get val2:' a = rdchem_new.GetVal2('foo') print 'done:',a print 'get val:' a = rdchem_new.GetVal('foo') print 'done:',a From greglandrum at mindspring.com Mon Jun 2 22:25:08 2003 From: greglandrum at mindspring.com (greg Landrum) Date: Mon, 02 Jun 2003 13:25:08 -0700 Subject: [C++-sig] Problems with functions taking string arguments on Windows In-Reply-To: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> Message-ID: <5.1.0.14.2.20030602130754.03c38a10@mail.mindspring.com> At 11:44 AM 6/2/2003, greg Landrum wrote: >[Boost 1.30, Win2K, MSVC v6] > >Additional information: >Just to verify that this is not due to my suggested changes in >builtin_converters.cpp::string_rvalue_from_python(), I switched that code >back to its previous form (without the PyString_Size argument to the >std::string constructor); this does not change the behavior I'm observing >(i.e. it still crashes ever time). I've managed to figure out a bit more of what's going on. The conversion from PyObject* -> std::string is working just fine. The crash is happening when ~rvalue_from_python_data() calls python::detail::destroy_referent(). This ends up (via the type_traits::value_destroyer stuff) calling the destructor of the std::string which was built for the argument. I'm guessing that this is the problem. I've attached relevant bits of the stack trace, if that's at all helpful. -greg -------------- next part -------------- _CrtIsValidHeapPointer(const void * 0x00770460) line 1697 _free_dbg_lk(void * 0x00770460, int 1) line 1044 + 9 bytes _free_dbg(void * 0x00770460, int 1) line 1001 + 13 bytes free(void * 0x00770460) line 956 + 11 bytes operator delete(void * 0x00770460) line 7 + 10 bytes std::allocator::deallocate(void * 0x00770460, unsigned int 33) line 64 + 16 bytes std::basic_string,std::allocator >::_Tidy(unsigned char 1) line 592 std::basic_string,std::allocator >::~basic_string,std::allocator >() line 59 + 17 bytes boost::python::detail::value_destroyer<0,0>::execute(const std::basic_string,std::allocator > * 0x0012f9f0 {0x00770461 "foo"}) line 22 + 11 bytes boost::python::detail::destroy_referent_impl(void * 0x0012f9f0, std::basic_string,std::allocator > & (void)* 0x00000000) line 72 + 9 bytes boost::python::detail::destroy_referent(void * 0x0012f9f0, std::basic_string,std::allocator > & (void)* 0x00000000) line 78 + 11 bytes boost::python::converter::rvalue_from_python_data,std::allocator > &>::~rvalue_from_python_data,std::allocator > &>() line 136 + 14 bytes boost::python::converter::arg_rvalue_from_python,std::allocator > >::~arg_rvalue_from_python,std::allocator > >() + 22 bytes boost::python::arg_from_python,std::allocator > >::~arg_from_python,std::allocator > >() + 22 bytes boost::python::detail::nullary,std::allocator > > >::~nullary,std::allocator > > >() + 22 bytes boost::python::detail::caller_arity<1>::impl,std::allocator >),boost::python::detail::args_from_python,boost::python::default_call_policies,boost::mpl::list2,std::allocator >),boost::python::detail::args_from_python,boost::python::default_call_policies,boost::m3fc2b75d(boost::detail::function::any_pointer {...}, _object * 0x0077c0c8, _object * 0x00000000) line 118 From nicodemus at globalite.com.br Tue Jun 3 06:03:53 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Mon, 02 Jun 2003 20:03:53 -0800 Subject: [C++-sig] missing boost::ref in pyste generated code? 
In-Reply-To: <1054540134.9570.16.camel@lxcms25>
References: <1053960018.16477.166.camel@lxcms25> <3ED8115E.7060605@globalite.com.br> <1054540134.9570.16.camel@lxcms25>
Message-ID: <3EDC1E29.5080604@globalite.com.br>

Giulio Eulisse wrote:

>>>as it seem to pass Ev by value, instead of passing it by reference.
>>>We fixed it by adding boost::ref where indicated in the sourcecode as
>>>otherwise it tries to pass things by value. Is this a bug or am I doing
>>>something wrong?
>>
>>Sorry for missing this post, Giulio!
>>
>>Pyste doesn't do it because it's not safe, otherwise Boost.Python would
>>do that by default. Dave can elaborate better on that.
>
>Ok, but how do I do it then? Modifing generated .cc code? In my case
>that is not an option as generated C++ code is temporary and any
>modification to it is lost after "make clean" (and I cannot change this
>behaviour).

Of course, you're right.

>It would be very nice (not to say "essential", in my case)
>to have some way to force the use of boost::ref. Maybe it is dangerous
>and pyste/boost cannot tell whether someone will destroy the referent or
>not, but having it as an option would still be useful. Let me decide if
>I want to take the risk, please...;-)And no, I cannot modify the
>interfaces to pass a shared_ptr.

How about something like this:

C = Class('C', 'C.h')
by_ref(C.foo)
by_ref(C.bar)

?

>BTW, some time ago I send you a simple snippet of code with which I work
>around unnamed enum problem, have you had any opportunity to look at it?

Sorry, I must have misplaced it! Could you send it again, please?

>Ciao,
>Giulio

Just curious, which language is "Ciao"? Italian? 8)

Regards,
Nicodemus.

From gdoughtie at anim.dreamworks.com Tue Jun 3 01:30:00 2003
From: gdoughtie at anim.dreamworks.com (Gavin Doughtie)
Date: Mon, 02 Jun 2003 16:30:00 -0700
Subject: [C++-sig] First pass at exception translator template
Message-ID: <3EDBDDF8.9030105@anim.dreamworks.com>

So, this is probably gross or non thread-safe in some manner that I'm not
clued into yet. I'd appreciate any elegant refactorings, but this seems to
work and will allow me to just specify a bunch of "REGISTER_EXCEPTION"
lines in my module.

--------8<-------------------------------
template <class ExceptionType>
struct ExceptionTranslator
{
  typedef ExceptionTranslator type;

  static PyObject *pyException;

  static void RegisterExceptionTranslator(scope & scope, const char* moduleName,
                                          const char* name)
  {
    // Add the exception to the module scope
    std::strstream exName;
    exName << moduleName << "." << name << '\0';
    pyException = PyErr_NewException(exName.str(), NULL, NULL);
    handle<> instanceException(pyException);
    scope.attr(name) = object(instanceException);

    // Register a translator for the type
    register_exception_translator< ExceptionType >(
        &ExceptionTranslator::translateException);
  }

  static void translateException(const ExceptionType& ex)
  {
    PyErr_SetString(pyException, ex.getMessage().ptr());
  }
};

template <class ExceptionType>
PyObject* ExceptionTranslator<ExceptionType>::pyException;

// Convenience macro
#define REGISTER_EXCEPTION(scopeRef, moduleName, className) \
  ExceptionTranslator<className>::RegisterExceptionTranslator(scopeRef, moduleName, #className)

// Module ======================================================================
BOOST_PYTHON_MODULE(my_module)
{
  scope moduleScope;
  REGISTER_EXCEPTION(moduleScope, "my_module", InstanceException);
  .....
} -- Gavin Doughtie DreamWorks SKG From nicodemus at globalite.com.br Tue Jun 3 08:32:50 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Mon, 02 Jun 2003 22:32:50 -0800 Subject: [C++-sig] Re: problems in parsing boost header with pyste In-Reply-To: References: Message-ID: <3EDC4112.3010309@globalite.com.br> Vincenzo Innocente wrote: > Hi, >pyste (cvs version downloaded around May 15) >fails in parsing this header file > >#include >struct foo{ > void f(){} >}; > >using a pyste file containing just >bha = Class("bha","foo.hh") > >(in the real case my class was using iterators_adaptors >but the problem seems not to be in parsing the user code but just in >parsing the xml corresponding to the boost header...) > >Is this a bug or missing Include? > > cheers, > Vincenzo > > Most likely it's a bug. Could you send me the resulting xml (run pyste with --debug) and the exception trace? Thanks! Regards, Nicodemus. From nicodemus at globalite.com.br Tue Jun 3 08:57:59 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Mon, 02 Jun 2003 22:57:59 -0800 Subject: [C++-sig] set_wrapper In-Reply-To: <3ED41C1A.5000303@globalite.com.br> References: <1054055040.19592.27.camel@lxcms25> <3ED41C1A.5000303@globalite.com.br> Message-ID: <3EDC46F7.5020409@globalite.com.br> Nicodemus wrote: > Yeah, it doesn't have support to add methods to classes yet... I will > look into it. You find the "set_wrapper" syntax good enough for this > purpose, or perhaps an "add_method(name, func)" would be better? > > Note also that the method you are trying to add must receive as first > argument a reference or a pointer to the class, so your f_A function > should be: > > void f_A(A&); > > Regards, > Nicodemus. Done! Refer to the documentation for usage information. From giulio.eulisse at cern.ch Tue Jun 3 09:13:36 2003 From: giulio.eulisse at cern.ch (Giulio Eulisse) Date: 03 Jun 2003 09:13:36 +0200 Subject: [C++-sig] set_wrapper In-Reply-To: <3EDC46F7.5020409@globalite.com.br> References: <1054055040.19592.27.camel@lxcms25> <3ED41C1A.5000303@globalite.com.br> <3EDC46F7.5020409@globalite.com.br> Message-ID: <1054624417.11607.1.camel@lxcms25> > Done! Refer to the documentation for usage information. THANKS A LOT!!!! Ciao, Giulio From giulio.eulisse at cern.ch Tue Jun 3 09:35:33 2003 From: giulio.eulisse at cern.ch (Giulio Eulisse) Date: 03 Jun 2003 09:35:33 +0200 Subject: [C++-sig] missing boost::ref in pyste generated code? In-Reply-To: <3EDC1E29.5080604@globalite.com.br> References: <1053960018.16477.166.camel@lxcms25> <3ED8115E.7060605@globalite.com.br> <1054540134.9570.16.camel@lxcms25> <3EDC1E29.5080604@globalite.com.br> Message-ID: <1054625733.11628.21.camel@lxcms25> > How about something like this: > > C = Class('C', 'C.h') > by_ref(C.foo) > by_ref(C.bar) > > ? That would be great, IMHO. > >BTW, some time ago I send you a simple snippet of code with which I work > >around unnamed enum problem, have you had any opportunity to look at it? > Sorry, I must have misplaced it! Could you send it again, please? > > >Ciao, > >Giulio > > > > > Just curious, which language is "Ciao"? Italian? 8) Yes. Actually it used to be slang from the province of Venice but now it's considered a proper Italian word. It's very close in meaning to the German/Austrian "Servus", since "Sciao" (or something like that) used to be the venician word for "servant", just like "Servus" is the latin word for it. So, the meaning in both cases is something like: "I'm your servant", to be intended as a form of salutation and respect. 
Nowadays, however, very few people know the original meaning and it is used as an informal/quick salutation per se. Ciao, ;-) Giulio From gilles.orazi at varianinc.com Tue Jun 3 13:48:31 2003 From: gilles.orazi at varianinc.com (Gilles Orazi) Date: Tue, 3 Jun 2003 13:48:31 +0200 Subject: [C++-sig] RE : [newbie] Failing to test boost.python under cygwin In-Reply-To: Message-ID: <001101c329c6$11922640$6dc8be0a@NICOIS> Thanks a lot for your fast answer. I am now able to build some python modules. Regards, --- Gilles From dimour at mail.ru Tue Jun 3 17:12:58 2003 From: dimour at mail.ru (Dmitri Mouromtsev) Date: Tue, 3 Jun 2003 19:12:58 +0400 Subject: [C++-sig] Problem with Extract conversion Message-ID: <006201c329e2$9edc47d0$7300a8c0@dima> Hello all! I am embedding Python in my application by using BOOST.PYTHON. I tried to compile in MSVC 7.0 the example with extract<> conversion: boost::python::str test_str("test"); char const* c_str = extract(test_str); If I run this code an unhandled exception rise when the function throw_no_lvalue_from_python compose error message "No registered converter was able to extract a C++...". What I need to do and what is my mistake. Thanks Dmitri PS The same things I 've with double, long etc. instead of string type From dave at boost-consulting.com Tue Jun 3 18:53:45 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 03 Jun 2003 12:53:45 -0400 Subject: [C++-sig] Re: Attempted a typeid of NULL pointer! References: <000001c32912$54299fb0$0d00000a@esss.com.br> Message-ID: "Marcelo A. Camelo" writes: >> By the way... This is no way to report >> such a simple bug (...) It's not much better >> than the previous post which gave no >> information about how to reproduce the problem. > > Mea culpa: I didn't think it was a bug. I thought > it was more like I was doing something wrong and > that someone would quickly identify the error from > my (admittedly vague) explanation. No problem, but all the same rules apply *especially* if you just need help. > I promise to do my homework next time. :-) > > Sorry and thanks for the prompt fix. You're welcome! -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Tue Jun 3 19:00:36 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 03 Jun 2003 13:00:36 -0400 Subject: [C++-sig] Re: Problem with Extract conversion References: <006201c329e2$9edc47d0$7300a8c0@dima> Message-ID: "Dmitri Mouromtsev" writes: > Hello all! > I am embedding Python in my application by using BOOST.PYTHON. I tried > to compile in MSVC 7.0 the example with extract<> conversion: > > boost::python::str test_str("test"); > char const* c_str = extract(test_str); > > If I run this code an unhandled exception rise when the function > throw_no_lvalue_from_python compose error message "No registered converter > was able to extract a C++...". > > What I need to do and what is my mistake. Please post a small, complete program which reproduces your problem. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Tue Jun 3 18:52:45 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 03 Jun 2003 12:52:45 -0400 Subject: [C++-sig] Re: missing boost::ref in pyste generated code? 
References: <1053960018.16477.166.camel@lxcms25> <3ED8115E.7060605@globalite.com.br> <1054540134.9570.16.camel@lxcms25> <3EDC1E29.5080604@globalite.com.br> Message-ID: Nicodemus writes: > How about something like this: > > C = Class('C', 'C.h') > by_ref(C.foo) > by_ref(C.bar) Don't you need control on an argument-by-argument basis? -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Wed Jun 4 02:02:07 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 03 Jun 2003 16:02:07 -0800 Subject: [C++-sig] Re: missing boost::ref in pyste generated code? In-Reply-To: References: <1053960018.16477.166.camel@lxcms25> <3ED8115E.7060605@globalite.com.br> <1054540134.9570.16.camel@lxcms25> <3EDC1E29.5080604@globalite.com.br> Message-ID: <3EDD36FF.4010306@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > > >>How about something like this: >> >>C = Class('C', 'C.h') >>by_ref(C.foo) >>by_ref(C.bar) >> >> > >Don't you need control on an argument-by-argument basis? > > I don't know, I thought by method would be sufficient. Giulio? From Giulio.Eulisse at cern.ch Tue Jun 3 23:53:00 2003 From: Giulio.Eulisse at cern.ch (Giulio Eulisse) Date: Tue, 3 Jun 2003 23:53:00 +0200 (CEST) Subject: [C++-sig] Re: missing boost::ref in pyste generated code? In-Reply-To: <3EDD36FF.4010306@globalite.com.br> Message-ID: > >>How about something like this: > >>C = Class('C', 'C.h') > >>by_ref(C.foo) > >>by_ref(C.bar) > >Don't you need control on an argument-by-argument basis? > I don't know, I thought by method would be sufficient. Giulio? In my case, control by method would be probably sufficient, as we should have control on the all the object passed(actually I would suggest to add a "by_ref(C)" method), but I agree that argument-by-argument would be a more correct approach. Ciao, Giulio From nicodemus at globalite.com.br Wed Jun 4 04:58:15 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 03 Jun 2003 18:58:15 -0800 Subject: [C++-sig] Re: missing boost::ref in pyste generated code? In-Reply-To: References: Message-ID: <3EDD6047.6040403@globalite.com.br> Giulio Eulisse wrote: >>>>How about something like this: >>>>C = Class('C', 'C.h') >>>>by_ref(C.foo) >>>>by_ref(C.bar) >>>> >>>> >>>Don't you need control on an argument-by-argument basis? >>> >>> >>I don't know, I thought by method would be sufficient. Giulio? >> >> > >In my case, control by method would be probably sufficient, as we should >have control on the all the object passed(actually I would suggest to add a >"by_ref(C)" method), but I agree that argument-by-argument would be a more >correct approach. > Ok, how about this: C = Class(...) by_ref(C.foo[0], C.foo[1]) # first and second argument passed using boost::ref by_ref(C.bar[2]) # third argument by ref From nicodemus at globalite.com.br Wed Jun 4 06:37:24 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 03 Jun 2003 20:37:24 -0800 Subject: [C++-sig] Re: Projects Page Reminder In-Reply-To: References: <3ED663FD.3070303@globalite.com.br> Message-ID: <3EDD7784.6010402@globalite.com.br> David Abrahams wrote: >Definitely. The page is not exclusively about open-source projects. > (Sorry for not posting this earlier: I was waiting for my boss aproval, and he was travelling) ESSS (Engineering Simulation and Scientific Software) provides engineering solutions and acts in the brazilian and south-american market providing products and services related to Computational Fluid Dynamics and Image Analysis. 
Recently we moved our work from working exclusively with C++ to a
hybrid-language approach, using Python and C++, with Boost.Python providing
the layer between the two.

Two projects have been developed so far with this technology:

Simba provides 3D visualization of geological formations gathered from
simulation of the evolution of oil systems, allowing the user to analyse
various aspects of the simulation, like deformation, pressure and fluids,
over time.

Aero aims to construct a CFD code with Brazilian technology, which involves
various companies and universities. ESSS is responsible for several of the
application modules, including the GUI and post-processing of results.

From dave at boost-consulting.com Wed Jun 4 04:16:36 2003
From: dave at boost-consulting.com (David Abrahams)
Date: Tue, 03 Jun 2003 22:16:36 -0400
Subject: [C++-sig] Re: Projects Page Reminder
References: <3ED663FD.3070303@globalite.com.br> <3EDD7784.6010402@globalite.com.br>
Message-ID: 

Nicodemus writes:

> David Abrahams wrote:
>
>>Definitely. The page is not exclusively about open-source projects.
>
> (Sorry for not posting this earlier: I was waiting for my boss
> aproval, and he was travelling)

This is great stuff, Bruno. Why don't you add an entry to the projects
page yourself and check it in?

> ESSS (Engineering Simulation and Scientific Software) provides
> engineering solutions and acts in the brazilian and south-american
> market providing products and services related to Computational Fluid
> Dynamics and Image Analysis.
>
> Recently we moved our work from working exclusively with C++ to an
> hybrid-language approach, using Python and C++, with Boost.Python
> providing the layer between the two.
>
> Two projects have been developed so far with this technology:
>
> Simba provides 3D visualization of geological formations gattered from
> simulation of the evolution of oil systems, allowing the user to
> analyse various aspects of the simulation, like deformation, pressure and
> fluids, along the time.
>
> Aero aims to construct a CFD with brazilian technology, which involves
> various companies and universities. ESSS is responsible for various of
> the application modules, including GUI and post-processing of results.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com

From dimour at mail.ru Wed Jun 4 09:14:05 2003
From: dimour at mail.ru (Dmitri Mouromtsev)
Date: Wed, 4 Jun 2003 11:14:05 +0400
Subject: [C++-sig] Re: Problem with Extract conversion
Message-ID: <000501c32a68$e2ebdea0$7300a8c0@dima>

>> Hello all!
>> I am embedding Python in my application by using BOOST.PYTHON. I tried
>> to compile in MSVC 7.0 the example with extract<> conversion:
>>
>> boost::python::str test_str("test");
>> char const* c_str = extract(test_str);
>>
>> If I run this code an unhandled exception rise when the function
>> throw_no_lvalue_from_python compose error message "No registered converter
>> was able to extract a C++...".
>>
>> What I need to do and what is my mistake.
>
>Please post a small, complete program which reproduces your problem.

Here is a test program reproducing my problem (or bug):

#include "Python.h"
#include "boost\python.hpp"
#include <iostream>

using namespace boost::python;

int main()
{
    Py_Initialize();
    boost::python::str test_str("test");
    char const* c_str = extract<char const*>(test_str);
    std::cout << c_str << '\n';
    return 0;
}

The exception is raised at the same point as I described before.

If I use another conversion:

char const* c_str = PyString_AsString(test_str.ptr());

it's all right.
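For reference, extract<> also has a checked form that reports whether a
converter is available instead of throwing. The following minimal sketch
follows the program above but extracts to std::string; it is illustrative
only and says nothing about the cause of the crash being reported:

#include <boost/python.hpp>
#include <iostream>
#include <string>

int main()
{
    Py_Initialize();
    boost::python::str test_str("test");

    // Checked extraction: check() tells us whether a converter is
    // registered for the requested target type before we commit.
    boost::python::extract<std::string> get_string(test_str);
    if (get_string.check())
        std::cout << get_string() << '\n';
    else
        std::cout << "no converter for this target type\n";

    return 0;
}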
And when I build this program I've got a warning "'argument' : conversion
from 'size_t' to 'int', possible loss of data" on the line

BOOST_PYTHON_TO_PYTHON_BY_VALUE(std::string,
    PyString_FromStringAndSize(x.c_str(),x.size()))

in the file builtin_converters.hpp.

From ejoy at 163.com Wed Jun 4 09:19:31 2003
From: ejoy at 163.com (Zhang Le)
Date: Wed, 4 Jun 2003 15:19:31 +0800
Subject: [C++-sig] wrap global variable (def not work)
Message-ID: 

Hello,
  I want to wrap a single int variable in a Python module but failed.
The code is:

int verbose = 1;
BOOST_PYTHON_MODULE(foo)
{
    def("verbose", &verbose);
}

Error message:
...
/usr/local/include/boost/python/make_function.hpp: In function `boost::python::api::object boost::python::make_function(F) [with F = int*]':
/usr/local/include/boost/python/def.hpp:81: instantiated from `boost::python::api::object boost::python::detail::make_function1(T, ...) [with T = int*]'
/usr/local/include/boost/python/def.hpp:90: instantiated from `void boost::python::def(const char*, Fn) [with Fn = int*]'
foo.cpp:50: instantiated from here
/usr/local/include/boost/python/make_function.hpp:76: no matching function for call to `get_signature(int*&)'
...

It seems def can only be used on class members. Any tips?

-- 
Sincerely yours,
Zhang Le

From nicodemus at globalite.com.br Wed Jun 4 17:54:01 2003
From: nicodemus at globalite.com.br (Nicodemus)
Date: Wed, 04 Jun 2003 07:54:01 -0800
Subject: [C++-sig] wrap global variable (def not work)
In-Reply-To: 
References: 
Message-ID: <3EDE1619.10103@globalite.com.br>

Zhang Le wrote:

>Hello,
>  I want to wrap a single int variable in a Python module but failed.
>The code is:
>
>int verbose = 1;
>BOOST_PYTHON_MODULE(foo)
>{
>    def("verbose", &verbose);
>}
>
>It seems def can only be used on class members. Any tips?

Try this:

int verbose = 1;

int get_verbose() { return verbose; }

BOOST_PYTHON_MODULE(foo)
{
    scope().attr("verbose") = verbose;
    def("get_verbose", &get_verbose);
}

But there's one problem:

>>> import foo
>>> foo.verbose
1
>>> foo.verbose = 2
>>> foo.verbose
2
>>> foo.get_verbose()
1

I.e., changes to the global variable made from Python won't be seen in C++,
so this technique works only for constants. If you *really* need to be able
to change the global variable from Python, you will have to create accessor
functions and use those. That's not really Boost.Python's fault: in Python
there is simply no way to know when a module variable is *rebound* (note,
not *changed*).

HTH,
Nicodemus.

From brett.calcott at paradise.net.nz Wed Jun 4 13:19:27 2003
From: brett.calcott at paradise.net.nz (Brett Calcott)
Date: Wed, 4 Jun 2003 23:19:27 +1200
Subject: [C++-sig] playing with pygame
Message-ID: 

Pygame (www.pygame.org) is a standard "C"-type extension for Python based on
the Simple DirectMedia Layer. It is broken into several modules and exposes
the C objects to each other via a simple mechanism using an array of "slots"
that are initialised via Python.
Here is the simplest class, a rect, snipped from the header pygame.h: /* RECT */ #define PYGAMEAPI_RECT_FIRSTSLOT 20 #define PYGAMEAPI_RECT_NUMSLOTS 4 typedef struct { short x, y; short w, h; }GAME_Rect; typedef struct { PyObject_HEAD GAME_Rect r; } PyRectObject; #define PyRect_AsRect(x) (((PyRectObject*)x)->r) #ifndef PYGAMEAPI_RECT_INTERNAL #define PyRect_Check(x) ((x)->ob_type == (PyTypeObject*)PyGAME_C_API[PYGAMEAPI_RECT_FIRSTSLOT + 0]) #define PyRect_Type (*(PyTypeObject*)PyGAME_C_API[PYGAMEAPI_RECT_FIRSTSLOT + 0]) #define PyRect_New (*(PyObject*(*)(GAME_Rect*))PyGAME_C_API[PYGAMEAPI_RECT_FIRSTSLOT + 1]) #define PyRect_New4 \ (*(PyObject*(*)(short,short,short,short))PyGAME_C_API[PYGAMEAPI_RECT_FIRSTSL OT + 2]) #define GameRect_FromObject \ (*(GAME_Rect*(*)(PyObject*, GAME_Rect*))PyGAME_C_API[PYGAMEAPI_RECT_FIRSTSLOT + 3]) Here is the bit that exposes this to other modules: #define import_pygame_rect() { \ PyObject *module = PyImport_ImportModule("pygame.rect"); \ if (module != NULL) { \ PyObject *dict = PyModule_GetDict(module); \ PyObject *c_api = PyDict_GetItemString(dict, PYGAMEAPI_LOCAL_ENTRY); \ if(PyCObject_Check(c_api)) {\ int i; void** localptr = (void**)PyCObject_AsVoidPtr(c_api); \ for(i = 0; i < PYGAMEAPI_RECT_NUMSLOTS; ++i) \ PyGAME_C_API[i + PYGAMEAPI_RECT_FIRSTSLOT] = localptr[i]; \ } Py_DECREF(module); } } #endif where this appears later on: #ifndef NO_PYGAME_C_API #define PYGAMEAPI_TOTALSLOTS 60 static void* PyGAME_C_API[PYGAMEAPI_TOTALSLOTS] = {NULL}; #endif (Hope that makes sense) So, I can do this: #include <...boostpythonstuff...> #include void look_at_rect(object o) { PyObject *p = o.ptr(); if (PyRect_Check(p)) { GAME_Rect &r = PyRect_AsRect(p); std::cout << r.x << ',' << r.y; } } BOOST_PYTHON_MODULE(pygame_boost) { import_pygame_rect(); def("look_at_rect", look_at_rect); } Now the QUESTION: I should be able to register a conversion though. I got as far as this: lvalue_from_pytype, &PyRect_Type>(); But, this won't compile as: 'python_type' : invalid template argument for 'boost::python::lvalue_from_pytype', constant expression expected I think I understand the problem, but what is the solution? Cheers, Brett From warkid at hotbox.ru Wed Jun 4 13:27:49 2003 From: warkid at hotbox.ru (Kerim Borchaev) Date: Wed, 4 Jun 2003 15:27:49 +0400 Subject: [C++-sig] Compile times using Boost::python. Message-ID: <261303885125.20030604152749@hotbox.ru> Hello! System P4-1800, 512Mb. Project consists of 12 files(~12 classes, 100-200 methods). Compiling it with MSVC7 takes 2 minutes. Isn't in too long? What are your numbers? How can I speed it up? (I'm already using pre-compiled header with boost/python.hpp included in it) Thanks. Kerim mailto:warkid at hotbox.ru From dave at boost-consulting.com Wed Jun 4 14:01:21 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 08:01:21 -0400 Subject: [C++-sig] Re: playing with pygame References: Message-ID: "Brett Calcott" writes: > But, this won't compile as: > 'python_type' : invalid template argument for > 'boost::python::lvalue_from_pytype', constant expression expected > > I think I understand the problem, but what is the solution? lvalue_from_pytype is a simple template; take a look. You could write an extract function which does the same thing, but which uses your non-constant PyTypeObject* expression in place of the template parameter, and register that just the way the lvalue_from_pytype constructor does. 
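A sketch of what Dave describes -- an extract function that tests against the
run-time PyRect_Type slot and is registered the same way the lvalue_from_pytype
constructor registers its own extractor -- might look roughly like this. It
assumes the pygame.h declarations quoted above, and the registry call mirrors
what lvalue_from_pytype does internally, so treat it as an illustration rather
than a drop-in:

#include <boost/python.hpp>
#include <boost/python/converter/registry.hpp>
#include <boost/python/type_id.hpp>
#include <iostream>
#include "pygame.h"

// Return a pointer to the GAME_Rect inside a pygame Rect, or 0 if the
// Python object is not a Rect. The check uses the run-time PyRect_Type
// slot instead of a compile-time template parameter.
static void* extract_game_rect(PyObject* op)
{
    return PyRect_Check(op) ? &PyRect_AsRect(op) : 0;
}

void look_at_rect(GAME_Rect& r)
{
    std::cout << r.x << ',' << r.y;
}

BOOST_PYTHON_MODULE(pygame_boost)
{
    using namespace boost::python;

    import_pygame_rect();   // fills PyGAME_C_API, so PyRect_Check works

    // Register the lvalue converter, mimicking the lvalue_from_pytype
    // constructor.
    converter::registry::insert(&extract_game_rect, type_id<GAME_Rect>());

    // look_at_rect can now take a pygame Rect argument directly.
    def("look_at_rect", look_at_rect);
}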
HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 4 15:07:44 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 09:07:44 -0400 Subject: [C++-sig] Re: Problems with functions taking string arguments on Windows References: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> <5.1.0.14.2.20030602130754.03c38a10@mail.mindspring.com> Message-ID: greg Landrum writes: > At 11:44 AM 6/2/2003, greg Landrum wrote: > >>[Boost 1.30, Win2K, MSVC v6] >> >>Additional information: >> Just to verify that this is not due to my suggested changes in >> builtin_converters.cpp::string_rvalue_from_python(), I switched that >> code back to its previous form (without the PyString_Size argument >> to the std::string constructor); this does not change the behavior >> I'm observing (i.e. it still crashes ever time). > > I've managed to figure out a bit more of what's going on. The > conversion from PyObject* -> std::string is working just fine. The > crash is happening when ~rvalue_from_python_data() calls > python::detail::destroy_referent(). This ends up (via the > type_traits::value_destroyer stuff) calling the destructor of the > std::string which was built for the argument. I'm guessing that this > is the problem. > > > I've attached relevant bits of the stack trace, if that's at all helpful. Your example works perfectly for me with vc7.1 vc7 and vc6.5; maybe you need a service pack? bjam -sBUILD=debug-python -sTOOLS="vc7.1 vc7 msvc" test BD Software STL Message Decryptor v2.38 for gcc ...found 2081 targets... ...updating 3 targets... python-test-target c:\build\libs\python\user\bin\foo.test\vc7.1\debug-python\runtime-link- dynamic\foo.test enter: foo enter: foo get val2: done: 4 get val: done: 4 Adding parser accelerators ... Done. [3574 refs] python-test-target c:\build\libs\python\user\bin\foo.test\vc7\debug-python\runtime-link- dynamic\foo.test enter: foo enter: foo get val2: done: 4 get val: done: 4 Adding parser accelerators ... Done. [3574 refs] python-test-target c:\build\libs\python\user\bin\foo.test\msvc\debug-python\runtime-link- dynamic\foo.test get val2: Adding parser accelerators ... Done. enter: foo done: 4 get val: enter: foo done: 4 [3574 refs] ...updated 3 targets... Compilation finished at Wed Jun 04 09:06:35 -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 4 15:19:24 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 09:19:24 -0400 Subject: [C++-sig] Re: Problem with Extract conversion References: <000501c32a68$e2ebdea0$7300a8c0@dima> Message-ID: "Dmitri Mouromtsev" writes: >>> Hello all! >>> I am embedding Python in my application by using BOOST.PYTHON. I tried >>> to compile in MSVC 7.0 the example with extract<> conversion: >>> >>> boost::python::str test_str("test"); >>> char const* c_str = extract(test_str); >>> >>> If I run this code an unhandled exception rise when the function >>> throw_no_lvalue_from_python compose error message "No registered > converter >>> was able to extract a C++...". >>> >>> What I need to do and what is my mistake. >> >>Please post a small, complete program which reproduces your problem. 
> > Here is test program reproducing my problem (or bug): > #include "Python.h" > > #include "boost\python.hpp" > > #include > > using namespace boost::python; > > int main() > > { > > Py_Initialize(); > > boost::python::str test_str("test"); > > char const* c_str = extract(test_str); > > std::cout << c_str << '\n'; > > return 0; > > } > > The exception rise in the same point as I describe blow. > > If I use another conversoin: > > char const* c_str = PyString_AsString(test_str.ptr()); > > it's all right. > > And when I build this program I've got warning "'argument' : conversion from > 'size_t' to 'int', possible loss of data" to line > > BOOST_PYTHON_TO_PYTHON_BY_VALUE(std::string, > PyString_FromStringAndSize(x.c_str(),x.size())) > > in the file builtin_converters.hpp. Your example works perfectly well for me and builds with no warnings: bjam -sBUILD=debug-python -sTOOLS="vc7.1" --verbose-test test ...found 2016 targets... ...updating 3 targets... execute-test c:\build\libs\python\user\bin\test.test\vc7.1\debug-python\runtime-link-dynamic\test.run 1 file(s) copied. ====== BEGIN OUTPUT ====== test ====== END OUTPUT ====== **passed ** c:\build\libs\python\user\bin\test.test\vc7.1\debug-python\runtime-link-dynamic\test.test ...updated 3 targets... The Jamfile looks like: subproject libs/python/user ; # bring in the rules for python SEARCH on python.jam = $(BOOST_BUILD_PATH) ; include python.jam ; # bring in rules for testing SEARCH on testing.jam = $(BOOST_BUILD_PATH) ; include testing.jam ; run test.cpp ../build/boost_python : # program args : # input files : # requirements $(PYTHON_PROPERTIES) $(PYTHON_LIB_PATH) <$(gcc-compilers)>$(CYGWIN_PYTHON_DEBUG_DLL_PATH) <$(gcc-compilers)><*>$(CYGWIN_PYTHON_DLL_PATH) $(PYTHON_EMBEDDED_LIBRARY) ; -- Dave Abrahams Boost Consulting www.boost-consulting.com From nectar at celabo.org Wed Jun 4 19:51:53 2003 From: nectar at celabo.org (Jacques A. Vidrine) Date: Wed, 4 Jun 2003 12:51:53 -0500 Subject: [C++-sig] boost::python and exceptions, seg fault Message-ID: <20030604175153.GA15401@madman.celabo.org> [I would be much obliged if responses are cc'd to me directly, as I am not yet subscribed. Thanks!] Hello, I've pulled some of my hair out over this. I can't seem to get Boost.Python (from Boost 1.29.0) to handle exceptions. Instead, they seem to cause the python interpreter to abort. A very simple example is below, along with a backtrace. cd .../boost_1_29_0/libs/python/test && bjam ... exception_translator.run runs to completion. Any clues would be much appreciated! % g++ -v Using built-in specs. Configured with: FreeBSD/i386 system compiler Thread model: posix gcc version 3.2.2 [FreeBSD] 20030205 (release) ---- begin ---- #include #include void example_func() { throw std::exception(); } BOOST_PYTHON_MODULE(example) { boost::python::def("example", example_func); } ---- end ---- % python Python 2.2.2 (#1, Feb 11 2003, 21:28:43) [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5 Type "help", "copyright", "credits" or "license" for more information. >>> import example >>> example.example() zsh: 15134 abort (core dumped) python % gdb =python python.core GNU gdb 5.2.1 (FreeBSD) [...] 
#0  0x28224357 in kill () from /usr/lib/libc.so.5
(gdb) up
#1  0x2828420e in abort () from /usr/lib/libc.so.5
(gdb) up
#2  0x28194e0a in __cxxabiv1::__terminate(void (*)()) () from /usr/lib/libstdc++.so.4
(gdb) up
#3  0x28194e50 in __cxxabiv1::__unexpected(void (*)()) () from /usr/lib/libstdc++.so.4
(gdb) up
#4  0x28194db5 in __cxa_throw () from /usr/lib/libstdc++.so.4
(gdb) up
#5  0x28341344 in example_func() () from ./example.so
(gdb) quit

Cheers,
-- 
Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal
nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se

From gideon at computer.org Wed Jun 4 20:07:10 2003
From: gideon at computer.org (gideon may)
Date: Wed, 04 Jun 2003 20:07:10 +0200
Subject: [C++-sig] Memory leak when using return_internal_reference !?
Message-ID: <47500592.1054757230@localhost>

Hi Dave,

I'm experiencing a memory leak when using the return_internal_reference
policy. It's probably best explained using the test_pointer_adoption_ext
module and a little test program:

----------- leak.py -----------------
from test_pointer_adoption_ext import *

a = create("leak")
while 1:
    innards = a.get_inner()
    innards = None
    print ".",
--------------------------------------

When running this program, there is a serious memory leak. When I take out
the line 'innards = a.get_inner()' everything is OK.

I tried to hunt it down and it seems that the life_support system created in
life_support.cpp is never deleted, i.e. life_support_dealloc is never called.

Do you have any idea what could be the problem? I'm using
return_internal_reference quite extensively and couldn't live without it :-)

ciao,
gideon

From rwgk at yahoo.com Wed Jun 4 21:04:10 2003
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Wed, 4 Jun 2003 12:04:10 -0700 (PDT)
Subject: [C++-sig] boost::python and exceptions, seg fault
In-Reply-To: <20030604175153.GA15401@madman.celabo.org>
Message-ID: <20030604190410.65136.qmail@web20204.mail.yahoo.com>

--- "Jacques A. Vidrine" wrote:
> % python
> Python 2.2.2 (#1, Feb 11 2003, 21:28:43)
> [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import example
> >>> example.example()
> zsh: 15134 abort (core dumped) python

Try this:

import sys
sys.setdlopenflags(0x100|0x2)
import example
example.example()

Please let me know if this works and I will explain.

If it doesn't work it could be that the bit flags are different under your
OS (the above work for Linux). In that case try "man dlopen" for more
information and look at /usr/include/dlfcn.h (and possibly the files that
are included from there) to find out what the bit flags are for RTLD_GLOBAL
and RTLD_NOW.

Ralf

From nectar at celabo.org Wed Jun 4 21:17:51 2003
From: nectar at celabo.org (Jacques A. Vidrine)
Date: Wed, 4 Jun 2003 14:17:51 -0500
Subject: [C++-sig] boost::python and exceptions, seg fault
In-Reply-To: <20030604190410.65136.qmail@web20204.mail.yahoo.com>
References: <20030604175153.GA15401@madman.celabo.org> <20030604190410.65136.qmail@web20204.mail.yahoo.com>
Message-ID: <20030604191751.GA15781@madman.celabo.org>

On Wed, Jun 04, 2003 at 12:04:10PM -0700, Ralf W. Grosse-Kunstleve wrote:
> --- "Jacques A.
Vidrine" wrote: > > % python > > Python 2.2.2 (#1, Feb 11 2003, 21:28:43) > > [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import example > > >>> example.example() > > zsh: 15134 abort (core dumped) python > > Try this: > > import sys > sys.setdlopenflags(0x100|0x2) > import example > example.example() > > Please let me know if this works and I will explain. > > If it doesn't work it could be that the bit flags are > different under your OS (the above work for Linux). > In that case try "man dlopen" for more information > and look at /usr/include/dlfcn.h (and possibly the files > that are included from there) to find out what the bit flags > are for RTLD_GLOBAL and RTLD_NOW. Thanks for the suggestion, but no joy. (0x102 is RTLD_GLOBAL|RTLD_NOW on my platform also, BTW.) Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se From dave at boost-consulting.com Wed Jun 4 22:48:50 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 16:48:50 -0400 Subject: [C++-sig] Re: Memory leak when using return_internal_reference !? References: <47500592.1054757230@localhost> Message-ID: gideon may writes: > Hi Dave, > > I'm experiencing a memory leak when using the > return_internal_reference policy. > > It's probably best explained using the the test_pointer_adoption_ext > module and a > little test program : > > ----------- leak.py ----------------- > > from test_pointer_adoption_ext import * > > a = create("leak") > while 1: > innards = a.get_inner() > innards = None > print ".", > -------------------------------------- > > When running this program, there is a serious memory leak. When I take out > the line 'innards = a.get_inner()' everything is OK. > I tried to hunt it down and it seems that life_support system created > in life_support.cpp is never deleted, i.e. life_support_dealloc is > never called. > > Do you have any idea what could be the problem ? Perhaps... a bug in Boost.Python??! Fixed in CVS, thanks for reporting it. Extra bonus points if you can supply a patch to the test that will detect this bug! > I'm using return_internal_reference quite extensively and couldn't live > without it :-) Doubtless that's true. I can't believe I never saw this one before. That's what you get for not having tests :( -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 4 22:33:23 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 16:33:23 -0400 Subject: [C++-sig] Re: boost::python and exceptions, seg fault References: <20030604175153.GA15401@madman.celabo.org> Message-ID: "Jacques A. Vidrine" writes: > Hello, > > I've pulled some of my hair out over this. I can't seem to get > Boost.Python (from Boost 1.29.0) to handle exceptions. Instead, they > seem to cause the python interpreter to abort. > > A very simple example is below, along with a backtrace. > > > cd .../boost_1_29_0/libs/python/test && bjam ... exception_translator.run > runs to completion. > > Any clues would be much appreciated! 1. What happens if you build and test your example under bjam? 2. Suppose you add a simple function which doesn't throw to your module and try calling that first, from Python. Does it work? 3. Can you slowly mutate the exception_translator example into your example and find out when it breaks? 
-- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Wed Jun 4 23:48:48 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 4 Jun 2003 14:48:48 -0700 (PDT) Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <261303885125.20030604152749@hotbox.ru> Message-ID: <20030604214848.40405.qmail@web20201.mail.yahoo.com> --- Kerim Borchaev wrote: > System P4-1800, 512Mb. > Project consists of 12 files(~12 classes, 100-200 methods). > Compiling it with MSVC7 takes 2 minutes. > > Isn't in too long? I also wish compilation would go faster, but if you look at a preprocessed file you will not be surprised anymore. > What are your numbers? Similar. > How can I speed it up? The way I view it: 1. Boost.Python is highly efficient in minimizing the work for the programmer (expensive in terms of money) by using the available computing resources (cheap in comparison). 2. Once you have your (run)time-consuming core algorithms implemented in C++ and wrapped with Boost.Python you can spend most of your time working with the much more pleasant Python language. I am sometimes going for weeks without recompiling. If I have to recompile I am using SCons which supports parallel builds (the -j option). Amazingly, our latest dual-CPU PC (for <$3k) allows me to use -j 4 and is indeed about 3.8 times faster than compiling with one CPU. I hear this is due to hyper-threading. Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From gideon at computer.org Wed Jun 4 23:50:58 2003 From: gideon at computer.org (gideon may) Date: Wed, 04 Jun 2003 23:50:58 +0200 Subject: [C++-sig] Re: Memory leak when using return_internal_reference !? In-Reply-To: References: <47500592.1054757230@localhost> Message-ID: <3982065.1054770658@localhost> Thanks for the fix! --On Wednesday, June 04, 2003 4:48 PM -0400 David Abrahams wrote: > gideon may writes: > > Perhaps... a bug in Boost.Python??! > > Fixed in CVS, thanks for reporting it. Extra bonus points if you can > supply a patch to the test that will detect this bug! Thanks, will try it immediately. I'm afraid I have to let the bonus points pass, since there is no way to track if objects have really been deleted; except if you define COUNT_ALLOCS while building python :( > >> I'm using return_internal_reference quite extensively and couldn't live >> without it :-) > > Doubtless that's true. I can't believe I never saw this one before. > That's what you get for not having tests :( Hmm, still hard to test this one, the garbage collector reports all objects deleted, even if the weak references are not. ciao, gideon From gideon at computer.org Wed Jun 4 23:59:01 2003 From: gideon at computer.org (gideon may) Date: Wed, 04 Jun 2003 23:59:01 +0200 Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <20030604214848.40405.qmail@web20201.mail.yahoo.com> References: <20030604214848.40405.qmail@web20201.mail.yahoo.com> Message-ID: <4465120.1054771141@localhost> Regarding compilation on Linux in debug mode (gcc 3.2), I have extremely long linking phases, sometimes up to an hour with my application :(. Is there a way to speed this up ? Linking without debug info is much faster. Must say the MS VC7 is much faster in this respect, at least 5 times. I've got a 1GHz pentium with 256 Mb, which should be enough (no swapping occurs). --On Wednesday, June 04, 2003 2:48 PM -0700 "Ralf W. 
Grosse-Kunstleve" wrote: > --- Kerim Borchaev wrote: >> System P4-1800, 512Mb. >> Project consists of 12 files(~12 classes, 100-200 methods). >> Compiling it with MSVC7 takes 2 minutes. >> >> Isn't in too long? > > I also wish compilation would go faster, but if you look at a > preprocessed file you will not be surprised anymore. > >> What are your numbers? > > Similar. > >> How can I speed it up? > > The way I view it: > > 1. Boost.Python is highly efficient in minimizing the work for the > programmer (expensive in terms of money) by using the available computing > resources (cheap in comparison). That's absolutely right, and leaves the programmer some time to read mail and newsgroups ;) > > 2. Once you have your (run)time-consuming core algorithms implemented in > C++ and wrapped with Boost.Python you can spend most of your time working > with the much more pleasant Python language. I am sometimes going for > weeks without recompiling. Except if you're actively developing the wrapper library and looking for bugs in your code. > > If I have to recompile I am using SCons which supports parallel builds > (the -j option). Amazingly, our latest dual-CPU PC (for <$3k) allows me > to use -j 4 and is indeed about 3.8 times faster than compiling with one > CPU. I hear this is due to hyper-threading. Ah, waiting for the laptop with dual P4, like to sit on the balcony while coding. Especially when it's really nice weather outside. ciao, gideon From nicodemus at globalite.com.br Thu Jun 5 00:10:13 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Wed, 04 Jun 2003 14:10:13 -0800 Subject: [C++-sig] Re: problems in parsing boost header with pyste In-Reply-To: <3EDC4112.3010309@globalite.com.br> References: <3EDC4112.3010309@globalite.com.br> Message-ID: <3EDE6E45.4050300@globalite.com.br> Nicodemus wrote: > Vincenzo Innocente wrote: > >> Hi, >> pyste (cvs version downloaded around May 15) >> fails in parsing this header file > > Most likely it's a bug. Could you send me the resulting xml (run pyste > with --debug) and the exception trace? Thanks! Vicenzo has sent me the files, and I've fixed it in CVS. Thaks for the bug report! Regards, Nicodemus. From dave at boost-consulting.com Thu Jun 5 00:24:36 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 18:24:36 -0400 Subject: [C++-sig] Re: Memory leak when using return_internal_reference !? References: <47500592.1054757230@localhost> <3982065.1054770658@localhost> Message-ID: gideon may writes: > Thanks for the fix! > > --On Wednesday, June 04, 2003 4:48 PM -0400 David Abrahams > wrote: > >> gideon may writes: > > > >> >> Perhaps... a bug in Boost.Python??! >> >> Fixed in CVS, thanks for reporting it. Extra bonus points if you can >> supply a patch to the test that will detect this bug! > > Thanks, will try it immediately. I'm afraid I have to let the bonus > points pass, since there is no way to track if objects have really been > deleted; except if you define COUNT_ALLOCS while building python :( I regularly test with a debug build which has COUNT_ALLOCS defined, so if you can make a test which uses that fact you will reap the bonus points. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Thu Jun 5 00:26:52 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 18:26:52 -0400 Subject: [C++-sig] Re: Compile times using Boost::python. 
References: <20030604214848.40405.qmail@web20201.mail.yahoo.com> <4465120.1054771141@localhost> Message-ID: gideon may writes: > Regarding compilation on Linux in debug mode (gcc 3.2), > I have extremely long linking phases, sometimes up to an hour > with my application :(. Is there a way to speed this up ? Yes, report the problem to the GCC/ld developers and wait for them to fix it. ;-/ I'm really not kidding. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Thu Jun 5 00:25:39 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 04 Jun 2003 18:25:39 -0400 Subject: [C++-sig] Re: Compile times using Boost::python. References: <261303885125.20030604152749@hotbox.ru> <20030604214848.40405.qmail@web20201.mail.yahoo.com> Message-ID: "Ralf W. Grosse-Kunstleve" writes: > If I have to recompile I am using SCons which supports parallel builds (the -j > option). So does Boost.Jam. The -j option again. -- Dave Abrahams Boost Consulting www.boost-consulting.com From greglandrum at mindspring.com Thu Jun 5 01:03:40 2003 From: greglandrum at mindspring.com (greg Landrum) Date: Wed, 04 Jun 2003 16:03:40 -0700 Subject: [C++-sig] Re: Problems with functions taking string arguments on Windows In-Reply-To: References: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> <5.1.0.14.2.20030602130754.03c38a10@mail.mindspring.com> Message-ID: <5.1.0.14.2.20030604160150.0468fc00@mail.mindspring.com> At 06:07 AM 6/4/2003, David Abrahams wrote: >Your example works perfectly for me with vc7.1 vc7 and vc6.5; maybe >you need a service pack? After building with bjam, I realized that the problem was that I was using the release build of the boost_python DLL from an debug mode extension module. This appears to work sometimes, but not always. Using the proper DLL cleared up the strings problem. Now I have another one though, but that's for another message... -greg From rwgk at yahoo.com Thu Jun 5 01:35:09 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 4 Jun 2003 16:35:09 -0700 (PDT) Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <4465120.1054771141@localhost> Message-ID: <20030604233509.12968.qmail@web20204.mail.yahoo.com> --- gideon may wrote: > Regarding compilation on Linux in debug mode (gcc 3.2), > I have extremely long linking phases, sometimes up to an hour > with my application :(. Is there a way to speed this up ? > Linking without debug info is much faster. I am always using -O0. If and only if I really need the debug symbols I recompile with -g (with SCons and bjam you can do this without interferring with your -O0 built). > > 2. Once you have your (run)time-consuming core algorithms implemented in > > C++ and wrapped with Boost.Python you can spend most of your time working > > with the much more pleasant Python language. I am sometimes going for > > weeks without recompiling. > > Except if you're actively developing the wrapper library and looking > for bugs in your code. True, but: - Parallel builts really help. - Good build systems like SCons and bjam always only recompile what is really needed (i.e. no "make clean" necessary ever). - You can maximize the benefits of parallel builts and good build systems by modularizing your code, which is also good for other reasons. - While developing you can use the fastest platform available. Test on slower platforms later. Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). 
http://calendar.yahoo.com From ejoy at 163.com Wed Jun 4 15:24:47 2003 From: ejoy at 163.com (Zhang Le) Date: Wed, 4 Jun 2003 21:24:47 +0800 Subject: [C++-sig] Building Boost.Python on MingW(win98) In-Reply-To: <20030530090902.13638.66227.Mailman@mail.python.org> References: <20030530090902.13638.66227.Mailman@mail.python.org> Message-ID: Hello, I have some problems with building BPL on a win98 with mingw. After the boost_python.lib is built, bjam failed to build boost_python.dll with the following command (bjam -d3) g++ -Wl,--exclude-symbols,_bss_end__:_bss_start__:_data_end__:_data_start__ -Wl,--enable-auto-image-base -Wl,--out-implib,..\..\..\libs\python\build\bin\boost_python.dll\mingw\release\runtime-link-dynamic\boost_python.lib -s -shared -o "..\..\..\libs\python\build\bin\boost_python.dll\mingw\release\runtime-link-dynamic\boost_python.dll" -Lc:\Python22\libs "...some obj file name here" \object_operators.obj" -lpython22 -Wl, -rpath-link, . c:\MinGW\bin\..\lib\gcc-lib\mingw32\3.2\..\..\..\..\mingw32\bin\ld.exe: cannot find -lpython22 But I have seen "-Lc:\Python22\libs" in the g++ command line. And even after I copy python22.lib from c:\Python22\libs to the building directory the error is still there. Am I missing something? I use python 2.2.3 with mingw2.0(gcc 3.2). -- Sincerely yours, Zhang Le From prabhu at aero.iitm.ernet.in Thu Jun 5 08:39:37 2003 From: prabhu at aero.iitm.ernet.in (Prabhu Ramachandran) Date: Thu, 5 Jun 2003 12:09:37 +0530 Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <20030604233509.12968.qmail@web20204.mail.yahoo.com> References: <4465120.1054771141@localhost> <20030604233509.12968.qmail@web20204.mail.yahoo.com> Message-ID: <16094.58793.55915.462884@monster.linux.in> >>>>> "RWGK" == Ralf W Grosse-Kunstleve writes: >> Except if you're actively developing the wrapper library and >> looking for bugs in your code. RWGK> True, but: RWGK> - Parallel builds really help. I guess distcc will also be useful for parallel builds spread over several machines. http://distcc.samba.org/ cheers, prabhu From milind_patil at hotmail.com Thu Jun 5 09:02:35 2003 From: milind_patil at hotmail.com (Milind Patil) Date: Thu, 5 Jun 2003 00:02:35 -0700 Subject: [C++-sig] Re: long long unsigned issue... References: Message-ID: "David Abrahams" wrote in message news:uy90p3nmp.fsf at boost-consulting.com... > "Milind Patil" writes: > > My point is that this is a *very* big toy, with lots of constructors > which look like they could potentially contend for the same input > arguments. Can't you reduce the problem you're having a little bit > more? > Yes, I realize that the example class had lots of constructors. But the original class that I am wrapping is huge. The constructor contention is part of my problem as I struggle to get types across C++ and python to match up. I want my python wrapper to behave like the C++ class I am wrapping with very few caveats. > Do you need *implicit* conversion capability? Remember that implicit > conversions are generally un-Pythonic. > I was trying to save myself the tiresome chore of writing operator .defs for the matrix of different types and operators that the class supports. For example .def( self + self) with implicitly_convertible<int, Y>(); implicitly_convertible<unsigned int, Y>(); implicitly_convertible<char*, Y>(); would save me .def(self + other<int>()) .def(other<int>() + self) .def(self + other<unsigned int>()) .def(other<unsigned int>() + self) .def(self + other<char*>()) ...
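A compilable sketch of the pattern being described here, for reference. The class Y below, its constructors and the conversion types are stand-ins inferred from this thread, not the poster's real code:

#include <boost/python.hpp>

struct Y
{
    Y(int v) : value(v) {}
    Y(char const* s) : value(s && *s ? *s : 0) {}
    int value;
};

Y operator+(Y const& a, Y const& b) { return Y(a.value + b.value); }

BOOST_PYTHON_MODULE(y_sketch)
{
    using namespace boost::python;

    class_<Y>("Y", init<int>())
        .def(init<char const*>())
        .def(self + self)           // Y + anything convertible to Y
        .def(other<int>() + self)   // reflected form still spelled out: int + Y
        ;

    // one registration per type that should convert to Y on demand
    implicitly_convertible<int, Y>();
    implicitly_convertible<char const*, Y>();
}

With the implicitly_convertible registrations in place, y + 3 and y + "a" go through the single .def(self + self), because the right-hand argument is converted to a temporary Y before the call; the reflected 3 + y still needs its own def, and when more than one registered conversion could accept the same Python object, the one registered first wins.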
Isn't this a common use case scenario for boost python -- Mapping C++ class which behaves like numeric types to python numeric type like behaviour? > > Now that we have a long_ to Y_Wrapper constructor > > and that class_ Y has Y_Wrapper as one of the bases > > You got that backwards. Y is the base of Y_Wrapper. > My fault. I mistakenly thought that the "class_< Y, Y_Wrapper >("Y", init< >())" part of module definition meant Y, and Y_Wrapper are bases of the python Y class. > The right solution to this problem is to provide for something Ralf > has been requesting for some time: the ability to inject new > constructors into a class, just the way we can inject methods that > aren't built from member functions. Something like: > > Y Y_from_pylong(long_ y) > { > return Y(extract(y)); > } > > ... > .def("__init__", constructor(Y_from_pylong)) > I think such a class constructor injector will really be useful for doing converters in a simple way. > > a) Expose C++ classes as python classes alone. User will not derive > > from the exposed classes. > > b) Expose C++ classes as derivable classes in python. > > a and b are equivalent as far as the library is concerned. > Isn't there a difference in how a user wraps a class with virtual functions and one without virtual functions -- in the scenario where the user expects the python class to be extended? My interpretation of docs was that a Y_Wrapper class holding a PyObject* as a member is needed to wrap the Y class with virtual methods. > I'm confused. Why do you want more info on c) if you don't use > Boost.Python that way? I do use the embedded + extension scenario. I don't have to reference or derive from the python classes in the C++ that embeds the python extension and I don't have to derive from the c++ classes in python. Milind From gideon at computer.org Thu Jun 5 09:19:14 2003 From: gideon at computer.org (gideon may) Date: Thu, 05 Jun 2003 09:19:14 +0200 Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <20030604233509.12968.qmail@web20204.mail.yahoo.com> References: <20030604233509.12968.qmail@web20204.mail.yahoo.com> Message-ID: <38076420.1054804754@[10.0.0.9]> --On Wednesday, June 04, 2003 4:35 PM -0700 "Ralf W. Grosse-Kunstleve" wrote: > --- gideon may wrote: >> Regarding compilation on Linux in debug mode (gcc 3.2), >> I have extremely long linking phases, sometimes up to an hour >> with my application :(. Is there a way to speed this up ? >> Linking without debug info is much faster. > > I am always using -O0. If and only if I really need the debug symbols I > recompile with -g (with SCons and bjam you can do this without > interferring with your -O0 built). I agree with you, normally I do build release versions (am using bjam), but in the case that I do need debugging info I normally take lunch :) > >> > 2. Once you have your (run)time-consuming core algorithms implemented >> > in C++ and wrapped with Boost.Python you can spend most of your time >> > working with the much more pleasant Python language. I am sometimes >> > going for weeks without recompiling. >> >> Except if you're actively developing the wrapper library and looking >> for bugs in your code. > > True, but: > > - Parallel builts really help. Agree, if you have a multiprocessor system. And parallel builds don't speed up the linking phase, which is definitely the bottleneck in the edit-compile-test loop. > - Good build systems like SCons and bjam always only recompile what is > really needed (i.e. no "make clean" necessary ever). 
what is make clean or bjam clean ? > - You can maximize the benefits of parallel builts and good build systems > by modularizing your code, which is also good for other reasons. Again, the compilation is fast in comparison with the linking. Unfortunately I do need to link after I change a single source file. BTW, my library consists of about 100 source files, thus I could say the code is pretty modular. > - While developing you can use the fastest platform available. Test on > slower platforms later. With me they are all one and the same platform :( Guess I will do some profiling of the gcc linker and file a report, as Dave mentioned ciao gideon From ejoy at 163.com Thu Jun 5 09:22:21 2003 From: ejoy at 163.com (Zhang Le) Date: Thu, 5 Jun 2003 15:22:21 +0800 Subject: [C++-sig] Re: C++-sig digest, Vol 1 #559 - 16 msgs In-Reply-To: <20030604132207.22132.809.Mailman@mail.python.org> References: <20030604132207.22132.809.Mailman@mail.python.org> Message-ID: > Date: Wed, 04 Jun 2003 07:54:01 -0800 > From: Nicodemus > To: c++-sig at python.org > Subject: Re: [C++-sig] wrap global variable (def not work) > Reply-To: c++-sig at python.org > Try this: > > int verbose = 1; > int get_verbose() { return verbose; } > BOOST_PYTHON_MODULE(foo) > { > scope().attr("verbose") = verbose; > def("get_verbose", &get_verbose); > } > > Ie, changes in a global variable won't be seen in C++, so this > technique works only for constants. If you *really* need to be able to > change the global variable from Python, you will have to create > accessor functions and use these. That's not Boost.Python fault > really, is just that in Python there's no way to know when a > module-variable is *rebound* (note, not *changed*). From rwgk at yahoo.com Thu Jun 5 10:46:17 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Thu, 5 Jun 2003 01:46:17 -0700 (PDT) Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <38076420.1054804754@[10.0.0.9]> Message-ID: <20030605084617.85486.qmail@web20206.mail.yahoo.com> --- gideon may wrote: > With me they are all one and the same platform :( It's off topic, but without a pass through a recent EDG based compiler (e.g. Intel) I am generally not convinced that a given piece of code is clean. The EDG diagnostics are very good and have saved me days of debugging. > Guess I will do some profiling of the gcc linker and file a report, as Dave > mentioned This is definitely a good idea. For the short term: the first time I noticed the excessive link times was with gcc 3.2. IIRC gcc 3.1 is fine. I am absolutely sure that gcc 3.0.4 is fine. I.e. for your debug builds you could use an older compiler. Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From dimour at mail.ru Thu Jun 5 11:01:01 2003 From: dimour at mail.ru (Dmitri Mouromtsev) Date: Thu, 5 Jun 2003 13:01:01 +0400 Subject: [C++-sig] Re: Problem with Extract conversion Message-ID: <000501c32b40$fcfb6120$7300a8c0@dima> >>>> Hello all! >>>> I am embedding Python in my application by using BOOST.PYTHON. I tried >>>> to compile in MSVC 7.0 the example with extract<> conversion: >>>> boost::python::str test_str("test"); >>>> char const* c_str = extract(test_str); >>>> If I run this code an unhandled exception rise when the function >>>> throw_no_lvalue_from_python compose error message "No registered >> converter >>>> was able to extract a C++...". >>>> What I need to do and what is my mistake. 
>>> >>>Please post a small, complete program which reproduces your problem. >> >> Here is test program reproducing my problem (or bug): >> #include "Python.h" >> #include "boost\python.hpp" > >#include > >using namespace boost::python; >> int main() >> { >> Py_Initialize(); >> boost::python::str test_str("test"); >> char const* c_str = extract(test_str); >> std::cout << c_str << '\n'; >> return 0; >> } >> The exception rise in the same point as I describe blow. >> If I use another conversoin: >> char const* c_str = PyString_AsString(test_str.ptr()); >> it's all right. >> And when I build this program I've got warning "'argument' : conversion from >> 'size_t' to 'int', possible loss of data" to line >> BOOST_PYTHON_TO_PYTHON_BY_VALUE(std::string, > PyString_FromStringAndSize(x.c_str(),x.size())) >> in the file builtin_converters.hpp. > >Your example works perfectly well for me and builds with no warnings: I tested this example in debug and release modes. In release mode it's really work, but when i tried to run it in debug mode it fail. And warnings appear every time. What are your suggestions? From nicodemus at globalite.com.br Fri Jun 6 02:05:17 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Thu, 05 Jun 2003 21:05:17 -0300 Subject: [C++-sig] Re: Projects Page Reminder In-Reply-To: References: <3ED663FD.3070303@globalite.com.br> <3EDD7784.6010402@globalite.com.br> Message-ID: <3EDFDABD.6080403@globalite.com.br> David Abrahams wrote: >This is great stuff, Bruno. Why don't you add an entry to the >projects page yourself and check it in? > > Done! From dave at boost-consulting.com Fri Jun 6 05:03:25 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 05 Jun 2003 23:03:25 -0400 Subject: [C++-sig] Re: boost::python and exceptions, seg fault In-Reply-To: <20030606005801.GA30027@madman.celabo.org> (Jacques A. Vidrine's message of "Thu, 5 Jun 2003 19:58:01 -0500") References: <20030604175153.GA15401@madman.celabo.org> <20030604190410.65136.qmail@web20204.mail.yahoo.com> <20030604191751.GA15781@madman.celabo.org> <20030606005801.GA30027@madman.celabo.org> Message-ID: Please keep this dissussion on the C++-sig rather than emailing me personally. "Jacques A. Vidrine" writes: > On Wed, Jun 04, 2003 at 04:49:58PM -0400, David Abrahams wrote: >> "Jacques A. Vidrine" writes: >> >> > Hello, >> > >> > I've pulled some of my hair out over this. I can't seem to get >> > Boost.Python (from Boost 1.29.0) to handle exceptions. Instead, they >> > seem to cause the python interpreter to abort. >> > >> > A very simple example is below, along with a backtrace. >> > >> > >> > cd .../boost_1_29_0/libs/python/test && bjam ... exception_translator.run >> > runs to completion. >> > >> > Any clues would be much appreciated! >> >> 1. What happens if you build and test your example under bjam? >> >> 2. Suppose you add a simple function which doesn't throw to your >> module and try calling that first, from Python. Does it work? >> >> 3. Can you slowly mutate the exception_translator example into your >> example and find out when it breaks? > > Sorry for the delayed reply. > > I haven't had a chance to figure out how to use bjam outside of the > boost tree. So do it inside the Boost tree. > But, the problem is reproduceable in the exception_translator > example anyway. I guess I don't know what `bjam ... > exception_translator.run' is supposed to do, It's supposed to run the test. > but it apparently didn't run the test :-) How do you know it didn't run the test? 
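For context, the mechanism that test exercises is register_exception_translator. A minimal sketch follows; the exception type, the message and the module name are invented here, they are not the code from the test suite:

#include <boost/python.hpp>
#include <boost/python/exception_translator.hpp>
#include <exception>

struct my_error : std::exception
{
    char const* what() const throw() { return "something went wrong"; }
};

// turn the C++ exception into a Python RuntimeError
void translate_my_error(my_error const& e)
{
    PyErr_SetString(PyExc_RuntimeError, e.what());
}

void throw_error() { throw my_error(); }

BOOST_PYTHON_MODULE(translator_sketch)
{
    using namespace boost::python;
    register_exception_translator<my_error>(&translate_my_error);
    def("throw_error", &throw_error);
}

With the translator registered, calling translator_sketch.throw_error() from Python raises RuntimeError instead of taking the interpreter down, which is roughly what the exception_translator test verifies.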
> When I actually load and call the exception_translator_ext, the > exception is uncaught as in my previous simple example. > > % pwd > ~/boost_1_29_0/libs/python/test > % cd bin/exception_translator_ext.so/gcc/debug/runtime-link-dynamic/shared-linkable-true/ > % env PYTHONPATH="$PWD" python > Python 2.2.2 (#1, Feb 11 2003, 21:28:43) > [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5 > Type "help", "copyright", "credits" or "license" for more information. > >>> import exception_translator_ext > >>> exception_translator_ext.throw_error() > zsh: 30040 abort (core dumped) env PYTHONPATH="$PWD" python This doesn't prove much. The point of running the example under bjam is that it handles setting up the environment properly. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 6 05:22:58 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 05 Jun 2003 23:22:58 -0400 Subject: [C++-sig] Re: Problems with functions taking string arguments on Windows References: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> <5.1.0.14.2.20030602130754.03c38a10@mail.mindspring.com> <5.1.0.14.2.20030604160150.0468fc00@mail.mindspring.com> Message-ID: greg Landrum writes: >>Your example works perfectly for me with vc7.1 vc7 and vc6.5; maybe >>you need a service pack? > > After building with bjam, I realized that the problem was that I was > using the release build of the boost_python DLL from an debug mode > extension module. This appears to work sometimes, but not always. > Using the proper DLL cleared up the strings problem. > > Now I have another one though, but that's for another message... Does your other one work with bjam? There's a reason we set up this build system, you know. It's not just that way because we wanted to reinvent the wheel. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 6 05:23:33 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 05 Jun 2003 23:23:33 -0400 Subject: [C++-sig] Re: First pass at exception translator template References: <3EDBDDF8.9030105@anim.dreamworks.com> Message-ID: Gavin Doughtie writes: > So, this is probably gross or non thread-safe in some manner that I'm > not clued into yet. I'd appreciate any elegant refactorings, but this > seems to work and will allow me to just specify a bunch of > "REGISTER_EXCEPTION" lines in my module. Gavin, Thanks for your post; I'll try to look at it in the next couple of days. -- Dave Abrahams Boost Consulting www.boost-consulting.com From greglandrum at mindspring.com Fri Jun 6 05:42:20 2003 From: greglandrum at mindspring.com (greg Landrum) Date: Thu, 05 Jun 2003 20:42:20 -0700 Subject: [C++-sig] Re: Problems with functions taking string arguments on Windows In-Reply-To: References: <5.1.0.14.2.20030602112751.03df18b8@mail.earthlink.net> <5.1.0.14.2.20030602130754.03c38a10@mail.mindspring.com> <5.1.0.14.2.20030604160150.0468fc00@mail.mindspring.com> Message-ID: <5.1.0.14.2.20030605203358.03bfcb30@mail.mindspring.com> At 08:22 PM 6/5/2003, David Abrahams wrote: >greg Landrum writes: > > > > > Now I have another one though, but that's for another message... > >Does your other one work with bjam? The other problem has to do with the type_converter registry (registering converters for the same types in different extension modules without causing crashes). Since I have a work around in place, I'm going to let it stew for a while. >There's a reason we set up this build system, you know. 
It's not just >that way because we wanted to reinvent the wheel. I'm sure that there are a variety of great arguments for bjam; I would probably come out ahead in the end if I could start using it. However, I have a large base of code that I have to use and support that is currently built with "standard" tools like make (on unix) and Visual Studio (on windows). I doubt that I will ever have time to port that stuff over to a new build system. Even if I could do that, I would then have to convince every potential client that they should also drink the bjam kool-aid so that I could deliver tools to them. Maybe it'll happen some day, but it's not likely to be soon. I will, however, attempt to build all problematic examples with bjam before posting them to the list. -greg From dave at boost-consulting.com Fri Jun 6 05:27:29 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 05 Jun 2003 23:27:29 -0400 Subject: [C++-sig] Re: Problem with Extract conversion References: <000501c32b40$fcfb6120$7300a8c0@dima> Message-ID: "Dmitri Mouromtsev" writes: >>Your example works perfectly well for me and builds with no warnings: > > > I tested this example in debug and release modes. In release mode it's > really work, but when i tried to run it in debug mode it fail. And warnings > appear every time. My test was run in debug mode. > What are your suggestions? "Use bjam to build and test" is the best I can offer. -- Dave Abrahams Boost Consulting www.boost-consulting.com From brett.calcott at paradise.net.nz Fri Jun 6 09:18:56 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Fri, 6 Jun 2003 19:18:56 +1200 Subject: [C++-sig] Re: playing with pygame References: Message-ID: > > lvalue_from_pytype is a simple template; take a look. You could write > an extract function which does the same thing, but which uses your > non-constant PyTypeObject* expression in place of the template > parameter, and register that just the way the lvalue_from_pytype > constructor does. > Uh yeah. That wasn't so hard. A bit of messing around with your template got me this: template struct lvalue_from_nonconst_pytype { lvalue_from_nonconst_pytype(PyTypeObject *type) { // assume this is called only once m_type = type; converter::registry::insert( &extract, detail::extractor_type_id(&Extractor::execute)); } private: static PyTypeObject *m_type; static void* extract(PyObject* op) { return PyObject_TypeCheck(op, m_type) ? const_cast( static_cast( detail::normalize(&Extractor::execute).execute(op))) : 0 ; } }; template PyTypeObject * lvalue_from_nonconst_pytype::m_type = 0; BOOST_PYTHON_MODULE(test) { import_pygame_rect(); lvalue_from_nonconst_pytype > need_this_or_msvc_barfs(&PyRect_Type); ...etc... } Finding out what the msvc problem was took more time than changing the template... I am having great fun :) Thanks again, Brett From dave at boost-consulting.com Fri Jun 6 13:08:10 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 06 Jun 2003 07:08:10 -0400 Subject: [C++-sig] Re: Building Boost.Python on MingW(win98) References: <20030530090902.13638.66227.Mailman@mail.python.org> Message-ID: Zhang Le writes: > Hello, > I have some problems with building BPL on a win98 with mingw. 
> After the boost_python.lib is built, bjam failed to build > boost_python.dll with the following command (bjam -d3) > > g++ > -Wl,--exclude-symbols,_bss_end__:_bss_start__:_data_end__:_data_start__ > -Wl,--enable-auto-image-base -W > l,--out-implib,..\..\..\libs\python\build\bin\boost_python.dll\mingw\release\runtime-link-dynamic\boost_python > .lib -s -shared -o > "..\..\..\libs\python\build\bin\boost_python.dll\mingw\release\runtime-link-dynamic\boost > _python.dll" -Lc:\Python22\libs "...some obj file name here" > \object_operators.obj" -lpython22 -Wl, -rpath-link, . > > > c:\MinGW\bin\..\lib\gcc-lib\mingw32\3.2\..\..\..\..\mingw32\bin\ld.exe: > cannot find -lpython22 > > But I have seen "-Lc:\Python22\libs" in the g++ command line. And even after I copy > python22.lib from c:\Python22\libs to the building directory the error > is still there. > > Am I missing something? I use python 2.2.3 with mingw2.0(gcc 3.2). You need to create libpython22.a with the first two steps detailed here: http://www.python.org/doc/current/inst/non-ms-compilers.html#SECTION000312000000000000000 HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 6 13:22:34 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 06 Jun 2003 07:22:34 -0400 Subject: [C++-sig] Re: playing with pygame References: Message-ID: "Brett Calcott" writes: >> >> lvalue_from_pytype is a simple template; take a look. You could write >> an extract function which does the same thing, but which uses your >> non-constant PyTypeObject* expression in place of the template >> parameter, and register that just the way the lvalue_from_pytype >> constructor does. >> > > Uh yeah. That wasn't so hard. > > A bit of messing around with your template got me this: > > template > struct lvalue_from_nonconst_pytype > { > lvalue_from_nonconst_pytype(PyTypeObject *type) > { > // assume this is called only once > m_type = type; > converter::registry::insert( > &extract, detail::extractor_type_id(&Extractor::execute)); > } > private: > static PyTypeObject *m_type; > > static void* extract(PyObject* op) > { > return PyObject_TypeCheck(op, m_type) > ? const_cast( > static_cast( > > detail::normalize(&Extractor::execute).execute(op))) > : 0 > ; > } > }; > > template PyTypeObject * > lvalue_from_nonconst_pytype::m_type = 0; > > > BOOST_PYTHON_MODULE(test) > { > import_pygame_rect(); > lvalue_from_nonconst_pytype > > need_this_or_msvc_barfs(&PyRect_Type); > > ...etc... > > } Wow, for some reason I couldn't see a generic solution and didn't think of doing it that way. That's totally cool! Maybe I should change the implementation in the library to allow non-const PyTypeObjects. It's more flexible, after all. > I am having great fun :) I'm very pleased :) -- Dave Abrahams Boost Consulting www.boost-consulting.com From nectar at celabo.org Fri Jun 6 14:38:24 2003 From: nectar at celabo.org (Jacques A. Vidrine) Date: Fri, 6 Jun 2003 07:38:24 -0500 Subject: [C++-sig] Re: boost::python and exceptions, seg fault In-Reply-To: References: <20030604175153.GA15401@madman.celabo.org> <20030604190410.65136.qmail@web20204.mail.yahoo.com> <20030604191751.GA15781@madman.celabo.org> <20030606005801.GA30027@madman.celabo.org> Message-ID: <20030606123823.GA60456@madman.celabo.org> On Thu, Jun 05, 2003 at 11:03:25PM -0400, David Abrahams wrote: > > Please keep this dissussion on the C++-sig rather than emailing me > personally. I would be happy to. 
For some reason your message to which I was replying (Message-ID: ) did not include c++-sig in the headers. [1] > "Jacques A. Vidrine" writes: [...] > > but it apparently didn't run the test :-) > > How do you know it didn't run the test? Sorry, I made a bad assumption. Double-checking, the test actually /did/ run. > > When I actually load and call the exception_translator_ext, the > > exception is uncaught as in my previous simple example. > > > > % pwd > > ~/boost_1_29_0/libs/python/test > > % cd bin/exception_translator_ext.so/gcc/debug/runtime-link-dynamic/shared-linkable-true/ > > % env PYTHONPATH="$PWD" python > > Python 2.2.2 (#1, Feb 11 2003, 21:28:43) > > [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import exception_translator_ext > > >>> exception_translator_ext.throw_error() > > zsh: 30040 abort (core dumped) env PYTHONPATH="$PWD" python > > This doesn't prove much. The point of running the example under bjam > is that it handles setting up the environment properly. I guess it means a lot to me. Like you said, clearly there is something different in the runtime environment. After some further investigating, it appears that the `release' build library (i.e. .../gcc/release/.../libboost_python.so) may be the culprit. The `release' library was installed, but the `debug' library was used in the tests. The test crashes when run against the `release' library, but completes successfully when run against the `debug' library. My own applications work with the `debug' library. I'm afraid I don't know enough about the Boost build system at this time to indicate what exactly is the difference between the `debug' and `release' built bits. If someone could point out where to look for these things, I would be happy to continue to try to track down the issue. It seems likely to be some (over)optimization flag or such that tickles a gcc bug. It seems this issue has come up in the c++-sig archives at least two other times this year, but it has not seemed to be resolved (on the list). http://mail.python.org/pipermail/c++-sig/2003-March/003636.html http://mail.python.org/pipermail/c++-sig/2003-January/003152.html I suspect it can be readily reproduced with gcc 3.2.x, at least. I wonder if the environment used for testing is not flawed if it does not exercise the `release' build? Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se [1] Although I /do/ see it in the archives now, *shrug*. From phil at freehackers.org Fri Jun 6 15:09:29 2003 From: phil at freehackers.org (Philippe Fremy) Date: Fri, 6 Jun 2003 15:09:29 +0200 Subject: [C++-sig] Compile times using Boost::python. In-Reply-To: <38076420.1054804754@[10.0.0.9]> References: <20030604233509.12968.qmail@web20204.mail.yahoo.com> <38076420.1054804754@[10.0.0.9]> Message-ID: <200306061509.29687.phil@freehackers.org> Hi, To improve build time, I have found ccache quite useful. It can rebuild files that have not changed in no time. Theorically, however, if your make program (bjam, tmake, ...) is good and really only the files that need rebuilding are rebuilt, ccache does not help. Tmake is not very good so ccache does help a lot. What I really enjoy is being able to do a "make clean; make" that lasts less than 10 seconds. It does not help for the linking phase though. 
regards, Philippe -- "The difference between theory and practice is that in theory, there is no difference between theory and practice." From dave at boost-consulting.com Fri Jun 6 18:46:29 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 06 Jun 2003 12:46:29 -0400 Subject: [C++-sig] Re: boost::python and exceptions, seg fault References: <20030604175153.GA15401@madman.celabo.org> <20030604190410.65136.qmail@web20204.mail.yahoo.com> <20030604191751.GA15781@madman.celabo.org> <20030606005801.GA30027@madman.celabo.org> <20030606123823.GA60456@madman.celabo.org> Message-ID: "Jacques A. Vidrine" writes: > On Thu, Jun 05, 2003 at 11:03:25PM -0400, David Abrahams wrote: >> >> Please keep this dissussion on the C++-sig rather than emailing me >> personally. > > I would be happy to. For some reason your message to which I was > replying (Message-ID: ) did not > include c++-sig in the headers. [1] > >> "Jacques A. Vidrine" writes: > [...] >> > but it apparently didn't run the test :-) >> >> How do you know it didn't run the test? > > Sorry, I made a bad assumption. Double-checking, the test actually > /did/ run. > >> > When I actually load and call the exception_translator_ext, the >> > exception is uncaught as in my previous simple example. >> > >> > % pwd >> > ~/boost_1_29_0/libs/python/test >> > % cd bin/exception_translator_ext.so/gcc/debug/runtime-link-dynamic/shared-linkable-true/ >> > % env PYTHONPATH="$PWD" python >> > Python 2.2.2 (#1, Feb 11 2003, 21:28:43) >> > [GCC 3.2.2 [FreeBSD] 20030205 (release)] on freebsd5 >> > Type "help", "copyright", "credits" or "license" for more information. >> > >>> import exception_translator_ext >> > >>> exception_translator_ext.throw_error() >> > zsh: 30040 abort (core dumped) env PYTHONPATH="$PWD" python >> >> This doesn't prove much. The point of running the example under bjam >> is that it handles setting up the environment properly. > > I guess it means a lot to me. Like you said, clearly there is > something different in the runtime environment. > > After some further investigating, it appears that the `release' > build library (i.e. .../gcc/release/.../libboost_python.so) may be > the culprit. The `release' library was installed, but the `debug' > library was used in the tests. The test crashes when run against the > `release' library, but completes successfully when run against the > `debug' library. > > My own applications work with the `debug' library. I'm afraid I don't > know enough about the Boost build system at this time to indicate what > exactly is the difference between the `debug' and `release' built > bits. I don't know what you're asking exactly. You can run the tests in 'release' mode by adding -sBUILD=release to the bjam invocation. You can see exactly what build commands will be executed by passing the -n flag at the beginning of the bjam command line. You can see *all* the actions for everything, including targets that are not out-of-date, by passing -n -a at the beginning of the command-line. > If someone could point out where to look for these things, I > would be happy to continue to try to track down the issue. It seems > likely to be some (over)optimization flag or such that tickles a gcc > bug. > > It seems this issue has come up in the c++-sig archives at least two > other times this year, but it has not seemed to be resolved (on the > list). 
> > http://mail.python.org/pipermail/c++-sig/2003-March/003636.html > http://mail.python.org/pipermail/c++-sig/2003-January/003152.html > > I suspect it can be readily reproduced with gcc 3.2.x, at least. > > I wonder if the environment used for testing is not flawed if it does > not exercise the `release' build? There's only so much time for testing ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com From brett.calcott at paradise.net.nz Sat Jun 7 06:03:29 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Sat, 7 Jun 2003 16:03:29 +1200 Subject: [C++-sig] Re: playing with pygame References: Message-ID: > > Wow, for some reason I couldn't see a generic solution and didn't > think of doing it that way. That's totally cool! Maybe I should > change the implementation in the library to allow non-const > PyTypeObjects. It's more flexible, after all. > Yup - that makes sense. The only caveat is the requirement (at least with VC6) to put a variable name in now: > > lvalue_from_nonconst_pytype > > > need_this_or_msvc_barfs(&PyRect_Type); Cheers, Brett From dave at boost-consulting.com Sun Jun 8 13:54:29 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 07:54:29 -0400 Subject: [C++-sig] Re: playing with pygame References: Message-ID: "Brett Calcott" writes: >> >> Wow, for some reason I couldn't see a generic solution and didn't >> think of doing it that way. That's totally cool! Maybe I should >> change the implementation in the library to allow non-const >> PyTypeObjects. It's more flexible, after all. >> > > Yup - that makes sense. The only caveat is the requirement (at least with > VC6) to put a variable name in now: > >> > lvalue_from_nonconst_pytype > >> > need_this_or_msvc_barfs(&PyRect_Type); I think making it into a function template would solve that ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 8 14:11:05 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 08:11:05 -0400 Subject: [C++-sig] Re: long long unsigned issue... References: Message-ID: "Milind Patil" writes: > "David Abrahams" wrote in message > news:uy90p3nmp.fsf at boost-consulting.com... >> "Milind Patil" writes: >> >> My point is that this is a *very* big toy, with lots of >> constructors which look like they could potentially contend for the >> same input arguments. Can't you reduce the problem you're having a >> little bit more? > > Yes, realize that the example class had lots of constructors. But the > original class that I am wrapping is huge. The constructor contention > is part of my problem as I struggle to get types accross C++ and > python to match up. I want my python wrapper to behave like the > C++ class I am wrapping with very few caveats. A noble goal, but I want to caution you that Python users generally have different expectations from C++ users. >> Do you need *implicit* conversion capability? Remember that implicit >> conversions are generally un-Pythonic. >> > > I was trying to save myself the tiresome chore of writing operator .defs > for the matrix of different types and operators that the class supports. > > For example > > .def( self + self) > > with > > implicity_convertible(); > implicity_convertible(); > implicity_convertible(); Wow, your thing is really convertible from char* and int? Spooky! Note that on the Python side, int and unsigned int are both represented by the same type of object, so getting both working is generally not useful. 
It will always pick the same one, depending on which is registered first. > would save me > > .def(self + other()) > .def(other() + self) > .def(self + other()) > .def(other + self) > .def(self + other()) > ... Yeah, but I guess there's a lot of ambiguity possible. The behavior will depend a lot on the order with which you register these conversions. > Isn't this a common use case scenario for boost python -- Mapping > C++ class which behaves like numeric types to python numeric > type like behaviour? Yep, but the Python numeric types generally don't behave that way. >> > Now that we have a long_ to Y_Wrapper constructor >> > and that class_ Y has Y_Wrapper as one of the bases >> >> You got that backwards. Y is the base of Y_Wrapper. >> > > My fault. I mistakenly thought that the "class_< Y, Y_Wrapper >("Y", init< >>())" > part of module definition meant Y, and Y_Wrapper are bases of the python > Y class. A C++ class can't really be a base of a Python class. What it means is that when you construct a Python Y object from Python by invoking Y(...), it will contain an instance of Y_Wrapper but be convertible to Y, Y&, Y*,... on the C++ side in addition to Y_Wrapper, Y_Wrapper&, ... If you didn't supply that Y_Wrapper argument, Python Y objects constructed from Python would contain just a plain C++ Y object. >> The right solution to this problem is to provide for something Ralf >> has been requesting for some time: the ability to inject new >> constructors into a class, just the way we can inject methods that >> aren't built from member functions. Something like: >> >> Y Y_from_pylong(long_ y) >> { >> return Y(extract(y)); >> } >> >> ... >> .def("__init__", constructor(Y_from_pylong)) >> > > I think such a class constructor injector will really be useful for > doing converters in a simple way. We just need to arrange for time and/or funding to do it... >> > a) Expose C++ classes as python classes alone. User will not derive >> > from the exposed classes. >> > b) Expose C++ classes as derivable classes in python. >> >> a and b are equivalent as far as the library is concerned. >> > > Isn't there a difference in how a user wraps a class with virtual > functions and one without virtual functions Yes, but you didn't mention virtual functions above. You can derive new Python classes from any Boost.Python class. > -- in the scenario where the user expects the python class to be > extended? My interpretation of docs was that a Y_Wrapper class > holding a PyObject* as a member is needed to wrap the Y class with > virtual methods. > >> I'm confused. Why do you want more info on c) if you don't use >> Boost.Python that way? > > I do use the embedded + extension scenario. OK, I was confused because you said your use case was a) and d). > I don't have to > reference or derive from the python classes in the C++ that embeds > the python extension and I don't have to derive from the c++ classes > in python. Well, I'm not sure I understand what you need, but if you ask specific questions I can try to help. -- Dave Abrahams Boost Consulting www.boost-consulting.com From milind_patil at hotmail.com Sun Jun 8 19:59:15 2003 From: milind_patil at hotmail.com (Milind Patil) Date: Sun, 8 Jun 2003 10:59:15 -0700 Subject: [C++-sig] return value policy for returning same python object... Message-ID: I have a scenario where a C++ method that performs inplace operation on the object is wrapped using boost python. I would like the corresponding python operation to result in the same object instead of a new python object. 
What is the return policy that will allow me to do that? eg. in Y_Wrapper class of some Y class... const Y& do_iadd (int other) { *this += Y_FromInt(other); return *this } BOOST_PYTHON_MODULE(hello) { ... .def("__iadd__", (const Y& Y_Wrapper::*) &do_iadd(int), return_value_policy<>(??? which one ???)) ... } in python... x = y.hello(0) a = x x += 1 assert( a is x) I tried return_existing_object and the copy_... policies. Obviuosly, because they all return a new python object, that failed. Any pointers as to how I can get the desired behaviour? (Notice I cannot do .def (self += other()) because there is no += in the Y class for Y& and int defined.) Thanks, Milind From rwgk at yahoo.com Sun Jun 8 22:13:11 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Sun, 8 Jun 2003 13:13:11 -0700 (PDT) Subject: [C++-sig] return value policy for returning same python object... In-Reply-To: Message-ID: <20030608201311.85622.qmail@web20209.mail.yahoo.com> --- Milind Patil wrote: > const Y& > do_iadd (int other) > { > *this += Y_FromInt(other); > return *this > } > > BOOST_PYTHON_MODULE(hello) > { > ... > .def("__iadd__", (const Y& Y_Wrapper::*) &do_iadd(int), > return_value_policy<>(??? which one ???)) > ... > } I am not sure if there is a return policy for doing what you want, but I am fairly sure you can do it a different way. Along the lines of: boost::python::object do_iadd(boost::python::object const& o, int) { // use extract or similar to get a reference to the wrapped C++ object // manipulate the extracted object return o; } Note that this version of do_iadd() is not a member function. If you want to keep it inside your Y_wrapper make the do_iadd() function static. Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From dave at boost-consulting.com Sun Jun 8 22:40:08 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 16:40:08 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: Message-ID: "Milind Patil" writes: > I have a scenario where a C++ method that performs inplace > operation on the object is wrapped using boost python. I > would like the corresponding python operation to result in the same object > instead of a new python object. What is the return policy that > will allow me to do that? > > eg. in Y_Wrapper class of some Y class... > > const Y& > do_iadd (int other) > { > *this += Y_FromInt(other); > return *this > } > > BOOST_PYTHON_MODULE(hello) > { > ... > .def("__iadd__", (const Y& Y_Wrapper::*) &do_iadd(int), ^^^^^^^^^^^^^^^^^^^^^^^ This C-style cast is dangerous, and in fact you got it wrong, which is part of your problem. No cast should be needed. > return_value_policy<>(??? which one ???)) > ... > } > > in python... > > x = y.hello(0) > a = x > x += 1 > > assert( a is x) > > I tried return_existing_object and the copy_... policies. Obviuosly, > because they all return a new python object, that failed. It's not obvious to me, though return_existing_object should be unsafe I would expect it to work. > Any pointers as to how I can get the desired behaviour? Well, you are returning a reference to an internal object of the Python Y object, so return_internal_reference seems entirely appropriate. > (Notice I cannot do > .def (self += other()) > because there is no += in the Y class for Y& and int defined.) Well, that *should* be easily fixable! 
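To make the "easily fixable" part concrete, here is a minimal sketch of one way to do it, with a stand-in Y rather than the class from this thread: give the wrapped type a += taking int, and the in-place operator can be def'd directly, so no hand-written __iadd__ is needed.

#include <boost/python.hpp>

struct Y
{
    Y(int v) : value(v) {}
    Y& operator+=(int rhs) { value += rhs; return *this; }  // the missing +=
    int value;
};

BOOST_PYTHON_MODULE(iadd_sketch)
{
    using namespace boost::python;

    class_<Y>("Y", init<int>())
        .def(self += other<int>())   // generates __iadd__
        ;
}

In Python, y = iadd_sketch.Y(1); a = y; y += 2 then leaves a is y true, because the built-in operator support hands back the original Python object.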
> Thanks, > Milind -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 8 22:53:40 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 16:53:40 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: Message-ID: David Abrahams writes: > Well, you are returning a reference to an internal object of the > Python Y object, so return_internal_reference seems entirely appropriate. Actually, you're much better off writing: const object do_iadd (back_reference self, int other) { self.get() += Y_FromInt(other); return self.source(); } because it will return the original Python object instead of a new one which refers to the same C++ data. That's what the builtin operator support does. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 8 23:16:16 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 17:16:16 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: Message-ID: David Abrahams writes: >> Well, you are returning a reference to an internal object of the >> Python Y object, so return_internal_reference seems entirely appropriate. > > Actually, you're much better off writing: > > const object > do_iadd (back_reference self, int other) > { > self.get() += Y_FromInt(other); > return self.source(); > } > > because it will return the original Python object instead of a new > one which refers to the same C++ data. That's what the builtin > operator support does. In fact, the other technique will work sometimes but will fail miserably whenever the Python Y object does not wrap a C++ Y that was constructed from Python (e.g. when it is just pointing to an actual C++ Y returned, e.g. via return_internal_reference). In that case, the Y object can't be converted into a Y_Wrapper to use for the this argument... because, of course, it isn't one. -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Sun Jun 8 23:50:03 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Sun, 8 Jun 2003 14:50:03 -0700 (PDT) Subject: [C++-sig] Re: return value policy for returning same python object... In-Reply-To: Message-ID: <20030608215003.97433.qmail@web20208.mail.yahoo.com> --- David Abrahams wrote: > David Abrahams writes: > > const object > > do_iadd (back_reference self, int other) > > { > > self.get() += Y_FromInt(other); > > return self.source(); > > } I always thought that returning a const temporary doesn't make sense. Do you mean "const object&" above? Or is there something new for me to learn? Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From dave at boost-consulting.com Mon Jun 9 00:06:56 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 08 Jun 2003 18:06:56 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: "Ralf W. Grosse-Kunstleve" writes: > --- David Abrahams wrote: >> David Abrahams writes: >> > const object >> > do_iadd (back_reference self, int other) >> > { >> > self.get() += Y_FromInt(other); >> > return self.source(); >> > } > > I always thought that returning a const temporary doesn't make sense. Do you > mean "const object&" above? No, that would be a disaster (return a reference to a local object). 
Actually, I just meant "object". -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Mon Jun 9 15:58:57 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 09 Jun 2003 09:58:57 -0400 Subject: [C++-sig] Re: boost python question References: <200306090839.04106.nbecker@hns.com> Message-ID: "Neal D. Becker" writes: > Is this the right place to ask boost python questions? This is OK; the C++-sig (http://www.python.org/sigs/c++-sig/) is better. I'm cross-posting there. > I'm interested in getting started with boost python. I would need a > way to comminicate containers of data between python and c++. My > domain is signal processing. Sounds good so far. > In c++, algorithms usually have an STLsyle iterator interface, and > concrete containers are usually std::vector. In python numarray > might be used. Uh-huh. > What options are available to interface between an (efficient) > python-accessible container and c++ stl-style containers? What kind of interface did you have in mind? I mean, there are a lot of options in C++ and Python . You can fairly easily expose STL containers to Python. Then you can use them together with numarray arrays from Python. Perhaps you had something else in mind? > I noticed the boost::python tutorial has a brief discussion of > iterators. I guess this would suggest a strategy of using vector > for containers with an iterator interface for python to access them. > But this doesn't give any way for python to create or manage the > containers. Hmm? vector for any X is just a class you can wrap like any other, and then you can construct it from Python like any other. Am I missing something? > Is the reverse a feasible solution? Use python numarray containers and pass > iterators to c++ algorithms for computation? Do you know how to get iterators out of a numarray? Probably you can easily get pointers, which are fine. So, sure, you can do that. You'd want to expose thin wrapper functions which accept boost::python::array arguments and then unpackage the iterators to pass on to the algorithms. HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Mon Jun 9 18:52:48 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Mon, 9 Jun 2003 09:52:48 -0700 (PDT) Subject: [C++-sig] Re: [boost] boost python question In-Reply-To: <200306090839.04106.nbecker@hns.com> Message-ID: <20030609165248.13235.qmail@web20210.mail.yahoo.com> --- "Neal D. Becker" wrote: > I'm interested in getting started with boost python. I would need a way to > comminicate containers of data between python and c++. My domain is signal > processing. > > In c++, algorithms usually have an STLsyle iterator interface, and concrete > containers are usually std::vector. In python numarray might be used. > > What options are available to interface between an (efficient) > python-accessible container and c++ stl-style containers? The core of the "scitbx" is designed to solve exactly this problem. For a very high-level overview look for "array_family" in this document: http://cctbx.sourceforge.net/current_cvs/tour.html Also look for "numpy." I am embarrassed to admit again that I still have not documented the array family, but appart from that it is really very useful, mature, extensively tested, highly portable, and I will answer questions. Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). 
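Putting the suggestions in this thread together (wrap the concrete container as an ordinary class, and give Python an iterator over it), a minimal sketch might look like the following. The module and class names are invented, and a real signal processing module would add the algorithms that operate on the vector:

#include <boost/python.hpp>
#include <vector>

namespace {
    // small helper so we do not take the address of push_back directly
    void append(std::vector<double>& v, double x) { v.push_back(x); }
}

BOOST_PYTHON_MODULE(vec_sketch)
{
    using namespace boost::python;

    class_<std::vector<double> >("DoubleVector")
        .def("append", &append)
        .def("__len__", &std::vector<double>::size)
        .def("__iter__", iterator<std::vector<double> >())
        ;
}

From Python this behaves like a small sequence (v = vec_sketch.DoubleVector(); v.append(1.5); list(v)), while C++ code can take the wrapped std::vector by reference and use its iterators as usual.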
http://calendar.yahoo.com From nectar at celabo.org Mon Jun 9 19:00:25 2003 From: nectar at celabo.org (Jacques A. Vidrine) Date: Mon, 9 Jun 2003 12:00:25 -0500 Subject: [C++-sig] converters Message-ID: <20030609170025.GA98250@madman.celabo.org> Hello, All! I'm having some difficulty due to the fact that conversions between the Python string type and std::string have NUL-terminated string semantics rather than the expected `opaque byte sequence' semantics. For my immediate needs, I thought I could simply substitute vector<char> and my own converters instead. (Indeed, the API I'm wrapping uses vector<char> rather than std::string.) I found that this was quite easy to do for returning values to Python: struct voc_to_str { static PyObject *convert(const vector<char> v) { return PyString_FromStringAndSize(&v[0], v.size()); } }; to_python_converter<vector<char>, voc_to_str>(); but I don't quite see how one arranges for the opposite transformation (vector<char> -> python::str). Naturally, I also wonder whether it is possible to override the `default' builtin converters for std::string so that they behave more Python- and C++- like. Any pointers would be much appreciated! Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se From rwgk at yahoo.com Mon Jun 9 19:27:24 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Mon, 9 Jun 2003 10:27:24 -0700 (PDT) Subject: [C++-sig] converters In-Reply-To: <20030609170025.GA98250@madman.celabo.org> Message-ID: <20030609172724.79262.qmail@web20202.mail.yahoo.com> --- "Jacques A. Vidrine" wrote: > I'm having some difficulty due to the fact that conversions between > the Python string type and std::string have NUL-terminated string > semantics rather than the expected `opaque byte sequence' semantics. What version of boost are you using? I thought this was fixed in 1.30.0. > For my immediate needs, I thought I could simply substitute > vector<char> and my own converters instead. (Indeed, the API I'm > wrapping uses vector<char> rather than std::string.) I found that > this was quite easy to do for returning values to Python: > > struct voc_to_str { > static PyObject *convert(const vector<char> v) { > return PyString_FromStringAndSize(&v[0], v.size()); > } > }; > > to_python_converter<vector<char>, voc_to_str>(); > > but I don't quite see how one arranges for the opposite transformation > (vector<char> -> python::str). I am confused by "vector<char> -> python::str". Anyway, maybe this is useful: http://mail.python.org/pipermail/c++-sig/2003-May/004133.html A bit tedious but very powerful. See also the Boost.Python FAQ (I believe question 2). Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From milind_patil at hotmail.com Tue Jun 10 01:58:41 2003 From: milind_patil at hotmail.com (Milind Patil) Date: Mon, 9 Jun 2003 16:58:41 -0700 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: "David Abrahams" wrote in message news:uy90cnrzz.fsf at boost-consulting.com... > "Ralf W. Grosse-Kunstleve" writes: > > > --- David Abrahams wrote: > >> David Abrahams writes: > >> > const object > >> > do_iadd (back_reference<Y&> self, int other) > >> > { > >> > self.get() += Y_FromInt(other); > >> > return self.source(); > >> > } > > Thank you both for the solution.
object do_iadd (back_reference<Y&> self, int other) { self.get() += Y_FromInt(other); return self.source(); } and .def("__iadd__", (object (*)(back_reference<Y&>, int))&do_iadd) works for me perfectly! Appreciate the quick and accurate help, -- Milind From gdoughtie at anim.dreamworks.com Tue Jun 10 02:28:08 2003 From: gdoughtie at anim.dreamworks.com (Gavin Doughtie) Date: Mon, 09 Jun 2003 17:28:08 -0700 Subject: [C++-sig] static const object access? In-Reply-To: References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: <3EE52618.5020609@anim.dreamworks.com>
> but in python I get a "tuple index out of range" error. > > Enlightenment, anyone? Well, there are no arguments! // wrap this instead. object getNullA() { static object nullA(ref(B::getNullA())); return nullA; } HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From gdoughtie at anim.dreamworks.com Tue Jun 10 19:54:05 2003 From: gdoughtie at anim.dreamworks.com (Gavin Doughtie) Date: Tue, 10 Jun 2003 10:54:05 -0700 Subject: [C++-sig] Re: static const object access? In-Reply-To: References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EE52618.5020609@anim.dreamworks.com> Message-ID: <3EE61B3D.6070906@anim.dreamworks.com> This does in fact work, but now I get "Fatal Python error: PyThreadState_Get: no current thread" when the module unloads. David Abrahams wrote: > // wrap this instead. > object getNullA() > { > static object nullA(ref(B::getNullA())); > return nullA; > } > > HTH, -- Gavin Doughtie DreamWorks SKG (818) 695-3821 From dave at boost-consulting.com Tue Jun 10 20:26:39 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 10 Jun 2003 14:26:39 -0400 Subject: [C++-sig] Re: static const object access? References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EE52618.5020609@anim.dreamworks.com> <3EE61B3D.6070906@anim.dreamworks.com> Message-ID: Gavin Doughtie writes: > This does in fact work, but now I get "Fatal Python error: > PyThreadState_Get: no current thread" when the module unloads. > > David Abrahams wrote: > >> // wrap this instead. >> object getNullA() >> { >> static object nullA(ref(B::getNullA())); >> return nullA; >> } >> HTH, Python extension modules don't unload, AFAIK. Do you mean PyFinalize()? Please see: http://aspn.activestate.com/ASPN/Mail/Message/1446109 Dirk Gerrits has been working on solving that issue; I'm sure he'd appreciate your help. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dirk at gerrits.homeip.net Tue Jun 10 21:24:40 2003 From: dirk at gerrits.homeip.net (Dirk Gerrits) Date: Tue, 10 Jun 2003 21:24:40 +0200 Subject: [C++-sig] Re: static const object access? In-Reply-To: References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EE52618.5020609@anim.dreamworks.com> <3EE61B3D.6070906@anim.dreamworks.com> Message-ID: David Abrahams wrote: > Gavin Doughtie writes: > > >>This does in fact work, but now I get "Fatal Python error: >>PyThreadState_Get: no current thread" when the module unloads. >> [snip] > > Python extension modules don't unload, AFAIK. Do you mean > PyFinalize()? > > Please see: http://aspn.activestate.com/ASPN/Mail/Message/1446109 > > Dirk Gerrits has been working on solving that issue; I'm sure he'd > appreciate your help. > Well I can't really look into the issue anymore for the coming 3 weeks because of final exams. But after that it'll be my top priority. ;) You're most welcome to join in the effort Gavin if you'd like. The thread Dave just mentioned explains where the problem is coming from. Regards, Dirk Gerrits From gdoughtie at anim.dreamworks.com Tue Jun 10 22:15:25 2003 From: gdoughtie at anim.dreamworks.com (Gavin Doughtie) Date: Tue, 10 Jun 2003 13:15:25 -0700 Subject: [C++-sig] Re: static const object access? In-Reply-To: References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EE52618.5020609@anim.dreamworks.com> <3EE61B3D.6070906@anim.dreamworks.com> Message-ID: <3EE63C5D.9010200@anim.dreamworks.com> Happy to help -- I'm not calling PyFinalize() myself. 
I assume that when Python exits, my static boost::python::object attempts to get the interpreter lock in its destructor, but the interpreter has already shut down and kablooie... My current fix is to allocate the static object on the heap and let it leak. I quite hate this approach! Gavin Dirk Gerrits wrote: > David Abrahams wrote: > >> Gavin Doughtie writes: >> >> >>> This does in fact work, but now I get "Fatal Python error: >>> PyThreadState_Get: no current thread" when the module unloads. >>> > [snip] > >> >> Python extension modules don't unload, AFAIK. Do you mean >> PyFinalize()? >> >> Please see: http://aspn.activestate.com/ASPN/Mail/Message/1446109 >> >> Dirk Gerrits has been working on solving that issue; I'm sure he'd >> appreciate your help. >> > > Well I can't really look into the issue anymore for the coming 3 weeks > because of final exams. But after that it'll be my top priority. ;) > You're most welcome to join in the effort Gavin if you'd like. The > thread Dave just mentioned explains where the problem is coming from. > > Regards, > Dirk Gerrits > > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig > -- Gavin Doughtie DreamWorks SKG (818) 695-3821 From milind_patil at hotmail.com Tue Jun 10 22:23:46 2003 From: milind_patil at hotmail.com (Milind Patil) Date: Tue, 10 Jun 2003 13:23:46 -0700 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: "David Abrahams" wrote in message news:uwufud67g.fsf at boost-consulting.com... > > .def("__iadd__", (object (*)(back_reference, int) )&do_iadd) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > Why this dangerous cast? It's completely unneccessary, and I > cautioned you against it. What if this function is overloaded eg. also have do_iadd (back_reference self, char* other) { self.get() += Y_FromString(other); return self.source(); } Having a single .def("__iadd__", &do_iadd) invokes compilation error... :: no matching function for call to `boost::python::class_::Y_Wrapper, boost::python::detail::not_specified, boost::python::detail::not_specified>::def(const char[9], )' So, is this the recommended thing: explicit cast if overloaded, else implicit? Thanks, Milind PS: Pyste does explicit casts for overloaded methods. From rwgk at yahoo.com Tue Jun 10 22:59:38 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Tue, 10 Jun 2003 13:59:38 -0700 (PDT) Subject: [C++-sig] Re: return value policy for returning same python object... In-Reply-To: Message-ID: <20030610205938.91695.qmail@web20210.mail.yahoo.com> --- Milind Patil wrote: > What if this function is overloaded eg. also have > > do_iadd (back_reference self, char* other) > { > self.get() += Y_FromString(other); > return self.source(); > } > > Having a single > > .def("__iadd__", &do_iadd) > > invokes compilation error... Sometimes casts are difficult to avoid, but if you have Boost.Python specific functions anyway (e.g. back_reference<> in the signature) you can easily avoid the ugly and error-prone casts by naming the functions differently, e.g. do_iadd_1(...) do_iadd_2(...) .def("__iadd__", &do_iadd_1) .def("__iadd__", &do_iadd_2) Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). 
http://calendar.yahoo.com From dave at boost-consulting.com Wed Jun 11 02:05:54 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 10 Jun 2003 20:05:54 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: "Milind Patil" writes: > "David Abrahams" wrote in message > news:uwufud67g.fsf at boost-consulting.com... > >> > .def("__iadd__", (object (*)(back_reference, int) )&do_iadd) >> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ >> Why this dangerous cast? It's completely unneccessary, and I >> cautioned you against it. > > What if this function is overloaded eg. also have > > do_iadd (back_reference self, char* other) > { > self.get() += Y_FromString(other); > return self.source(); > } > > Having a single > > .def("__iadd__", &do_iadd) > > invokes compilation error... > > :: no matching function for call to `boost::python::class_::Y_Wrapper, > boost::python::detail::not_specified, boost::python::detail::not_specified>::def(const char[9], > )' > > So, is this the recommended thing: explicit cast if overloaded, else implicit? No, see the tutorial http://www.boost.org/libs/python/doc/tutorial/doc/overloading.html, or you can use the (undocumented) boost::implicit_cast<...> form which is also safe. > Thanks, > Milind > > PS: Pyste does explicit casts for overloaded methods. Pyste probably should use implicit_cast instead, because some people will use its output as a way of getting started with hand-written wrappers, but it's not really a big problem since Pyste has the advantage of a C++ compiler behind it: it always gets the cast right. Humans are much more fallible. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dimour at mail.ru Wed Jun 11 14:34:52 2003 From: dimour at mail.ru (Dmitri Mouromtsev) Date: Wed, 11 Jun 2003 16:34:52 +0400 Subject: [C++-sig] Re: Problem with Extract conversion Message-ID: <000501c33015$dcd048f0$7300a8c0@dima> Hi! I tested my example with Python v.2.2.2, v.2.2.3a and 2.2.3b. And I've got next: - with 2.2.2 it works well - with 2.3a it fails - with 2.3b I can't build BOOST.PYTHON at all Does BOOST work with Python 2.2.3 or not? Thanks, Dmitri From gideon at computer.org Wed Jun 11 14:48:02 2003 From: gideon at computer.org (gideon may) Date: Wed, 11 Jun 2003 14:48:02 +0200 Subject: [C++-sig] Re: Problem with Extract conversion In-Reply-To: <000501c33015$dcd048f0$7300a8c0@dima> References: <000501c33015$dcd048f0$7300a8c0@dima> Message-ID: <75469739.1055342882@localhost> --On Wednesday, June 11, 2003 4:34 PM +0400 Dmitri Mouromtsev wrote: > Hi! > > I tested my example with Python v.2.2.2, v.2.2.3a and 2.2.3b. > And I've got next: > - with 2.2.2 it works well > - with 2.3a it fails > - with 2.3b I can't build BOOST.PYTHON at all It would help if you supply more information on your platform and the error messages you get while compiling your library. I've had the same experience with boost-1.30.0 and python 2.3b, it has to do with redefinition of the PY_LONG_LONG macro as far as i can remember. This is solved with the cvs version of boost-python. > > Does BOOST work with Python 2.2.3 or not? Yes it does. 
> Thanks, > Dmitri ciao, gideon From stefan.seefeld at orthosoft.ca Wed Jun 11 17:35:47 2003 From: stefan.seefeld at orthosoft.ca (Stefan Seefeld) Date: Wed, 11 Jun 2003 11:35:47 -0400 Subject: [C++-sig] compiling test/embedding.cpp Message-ID: <4594b71bdb357a858d439f24fce0b85c3ee748fc@Orthosoft.ca> hi there, I'm having some troubles with unresolved symbols on IRIX, so I wanted to try out boosts own test files to see whether / how they work. However, that failed... I stepped into test/ and ran bjam "-sTOOLS=mipspro", which failed for the embedding target because -lutil isn't found on IRIX. Running the linker command manually, I get an unresolved symbol boost::python::converter::shared_ptr_deleter::__dt(void). Did anybody else succeed in compiling and testing test/embedding.cpp, specifically on IRIX ? Thanks, Stefan From rwgk at yahoo.com Wed Jun 11 18:42:26 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 11 Jun 2003 09:42:26 -0700 (PDT) Subject: [C++-sig] compiling test/embedding.cpp In-Reply-To: <4594b71bdb357a858d439f24fce0b85c3ee748fc@Orthosoft.ca> Message-ID: <20030611164226.72671.qmail@web20210.mail.yahoo.com> --- Stefan Seefeld wrote: > Did anybody else succeed in compiling and testing test/embedding.cpp, > specifically on IRIX ? I am trying (very hard!) to maintain IRIX/MIPSpro compatibility, but I didn't have time to look into the embedding.cpp failures (we don't use embedding). > I'm having some troubles with unresolved symbols on IRIX, Does this happen while linking extension modules, or only while linking applications that embed Python? Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From seefeld at sympatico.ca Wed Jun 11 18:57:38 2003 From: seefeld at sympatico.ca (Stefan Seefeld) Date: Wed, 11 Jun 2003 12:57:38 -0400 Subject: [C++-sig] compiling test/embedding.cpp References: <20030611164226.72671.qmail@web20210.mail.yahoo.com> Message-ID: <6b1f0e7950a3a2e5711b829cd74e60203ee75c28@Orthosoft.ca> Ralf W. Grosse-Kunstleve wrote: >>I'm having some troubles with unresolved symbols on IRIX, > > > Does this happen while linking extension modules, or only while linking > applications that embed Python? embedding is special as it is a stand-alone application, not an extension. This means it uses its own build rule, which seems to be broken for IRIX (notably the -lutil flag which is wrong for IRIX). When linking manually I had to add other libraries such as -lm and -lpthread to get all symbols resolved. The missing symbol boost::python::converter::shared_ptr_deleter::__dt(void) is reported at runtime, i.e. when I try to execute 'embedding'. Thanks, Stefan From rwgk at yahoo.com Wed Jun 11 19:47:01 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 11 Jun 2003 10:47:01 -0700 (PDT) Subject: [C++-sig] compiling test/embedding.cpp In-Reply-To: <6b1f0e7950a3a2e5711b829cd74e60203ee75c28@Orthosoft.ca> Message-ID: <20030611174701.54522.qmail@web20202.mail.yahoo.com> --- Stefan Seefeld wrote: > This means it uses its own build rule, which seems to > be broken for IRIX (notably the -lutil flag which is wrong for IRIX). You are the first to pay attention. > When linking manually I had to add other libraries such as -lm and > -lpthread to get all symbols resolved. If you post a complete list of libraries required I am sure Rene will update the IRIX toolset. (E.g. you could post the complete link line that you used.) 
> The missing symbol > > boost::python::converter::shared_ptr_deleter::__dt(void) > > is reported at runtime, i.e. when I try to execute 'embedding'. A quick find/grep in the Boost.Python sources (current CVS) suggests that ~shared_ptr_deleter(); is declared but not defined. David, what's the best way to resolve this? -- I guess as a quick fix Stefan could edit ./converter/shared_ptr_deleter.hpp and replace a semicolon by {}: ~shared_ptr_deleter() {} Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From seefeld at sympatico.ca Wed Jun 11 20:57:31 2003 From: seefeld at sympatico.ca (Stefan Seefeld) Date: Wed, 11 Jun 2003 14:57:31 -0400 Subject: [C++-sig] compiling test/embedding.cpp References: <20030611174701.54522.qmail@web20202.mail.yahoo.com> Message-ID: Ralf W. Grosse-Kunstleve wrote: > If you post a complete list of libraries required I am sure Rene will update > the IRIX toolset. (E.g. you could post the complete link line that you used.) ok. The error is that tools/python.jam: line 64 should *not* contain 'util' (for IRIX, that is, in linux it's fine), but instead 'm pthread' >>The missing symbol >> >>boost::python::converter::shared_ptr_deleter::__dt(void) >> >>is reported at runtime, i.e. when I try to execute 'embedding'. > > > A quick find/grep in the Boost.Python sources (current CVS) suggests that > ~shared_ptr_deleter(); is declared but not defined. David, what's the best way > to resolve this? -- I guess as a quick fix Stefan could edit > ./converter/shared_ptr_deleter.hpp and replace a semicolon by {}: > > ~shared_ptr_deleter() {} did that, and it works now, however it fails a bit later. Here is the output of 'embedding': bash-2.05a$ bin/embedding.test/mipspro/debug/embedding Hello from C++! TypeError: No registered converter was able to extract a C++ reference to type Base from this Python object of type PythonDerived Thanks a lot ! Stefan From dave at boost-consulting.com Wed Jun 11 20:52:49 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 11 Jun 2003 14:52:49 -0400 Subject: [C++-sig] Re: compiling test/embedding.cpp References: <6b1f0e7950a3a2e5711b829cd74e60203ee75c28@Orthosoft.ca> <20030611174701.54522.qmail@web20202.mail.yahoo.com> Message-ID: "Ralf W. Grosse-Kunstleve" writes: > --- Stefan Seefeld wrote: >> This means it uses its own build rule, which seems to >> be broken for IRIX (notably the -lutil flag which is wrong for IRIX). > > You are the first to pay attention. > >> When linking manually I had to add other libraries such as -lm and >> -lpthread to get all symbols resolved. > > If you post a complete list of libraries required I am sure Rene will update > the IRIX toolset. (E.g. you could post the complete link line that you used.) > >> The missing symbol >> >> boost::python::converter::shared_ptr_deleter::__dt(void) >> >> is reported at runtime, i.e. when I try to execute 'embedding'. > > A quick find/grep in the Boost.Python sources (current CVS) suggests that > ~shared_ptr_deleter(); is declared but not defined. It's in builtin_converters.cpp. > David, what's the best way to resolve this? Fix the compiler/linker? > -- I guess as a quick fix Stefan could edit > ./converter/shared_ptr_deleter.hpp and replace a semicolon by {}: > > ~shared_ptr_deleter() {} That might turn out to be a workaround for IRIX, but it will fail in general. 
-- Dave Abrahams Boost Consulting www.boost-consulting.com From grafik666 at redshift-software.com Wed Jun 11 21:18:08 2003 From: grafik666 at redshift-software.com (Rene Rivera) Date: Wed, 11 Jun 2003 14:18:08 -0500 Subject: [C++-sig] Re: compiling test/embedding.cpp In-Reply-To: Message-ID: <20030611141809-r01010800-0fc31632-0860-0108@12.100.89.43> [2003-06-11] David Abrahams wrote: >"Ralf W. Grosse-Kunstleve" writes: > >> --- Stefan Seefeld wrote: >>> This means it uses its own build rule, which seems to >>> be broken for IRIX (notably the -lutil flag which is wrong for IRIX). >> >> You are the first to pay attention. >> >>> When linking manually I had to add other libraries such as -lm and >>> -lpthread to get all symbols resolved. >> >> If you post a complete list of libraries required I am sure Rene will update >> the IRIX toolset. (E.g. you could post the complete link line that you used.) The link command used would be wonderfull to have :-) -- grafik - Don't Assume Anything -- rrivera at acm.org - grafik at redshift-software.com -- 102708583 at icq From gdoughtie at anim.dreamworks.com Thu Jun 12 00:39:55 2003 From: gdoughtie at anim.dreamworks.com (Gavin Doughtie) Date: Wed, 11 Jun 2003 15:39:55 -0700 Subject: [C++-sig] multilanguage debugging in emacs Message-ID: <3EE7AFBB.5050308@anim.dreamworks.com> In my private fantasy land, I'd be able to run emacs and somehow invoke pdb with pdbtrack to do source-level debugging of my python code, then automagically step into gdb when the python calls out to C++ code I've written via boost::python. Does anybody do anything like this? To my shame, I've gotten used to such luxuries in certain non-linux environments... -- Gavin Doughtie DreamWorks SKG From rwgk at yahoo.com Thu Jun 12 01:17:50 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 11 Jun 2003 16:17:50 -0700 (PDT) Subject: [C++-sig] Re: compiling test/embedding.cpp In-Reply-To: Message-ID: <20030611231750.673.qmail@web20201.mail.yahoo.com> --- David Abrahams wrote: > > A quick find/grep in the Boost.Python sources (current CVS) suggests that > > ~shared_ptr_deleter(); is declared but not defined. > > It's in builtin_converters.cpp. Sorry, I missed this due to a typo in my find/grep command. > David, what's the best way to resolve this? > > Fix the compiler/linker? The commands issued by the mipspro toolset result in the use of precompiled headers and the prelinker. A while ago we stopped using the prelinker because the builds go a lot faster and there is no significant runtime penalty. Stefan, if you'd like to try if disabling the prelinker fixes your problem, here are the options that we are using: Compiling: CC -n32 -mips4 -LANG:std -LANG:pch=OFF -no_prelink -ptused -FE:template_in_elf_section -FE:eliminate_duplicate_inline_copies -woff 1001,1234,1311,1682,3439 -DNDEBUG -O2 -OPT:Olimit=0 -I... -c file.cpp Linking: CC -n32 -mips4 -LANG:std -LANG:pch=OFF -no_prelink -ptused -FE:template_in_elf_section -FE:eliminate_duplicate_inline_copies -shared -LD_MSG:off=15,84 -o libany.so *.o -lm Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). 
http://calendar.yahoo.com From dave at boost-consulting.com Thu Jun 12 14:32:54 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 12 Jun 2003 08:32:54 -0400 Subject: [C++-sig] Re: multilanguage debugging in emacs References: <3EE7AFBB.5050308@anim.dreamworks.com> Message-ID: Gavin Doughtie writes: > In my private fantasy land, I'd be able to run emacs and somehow > invoke pdb with pdbtrack to do source-level debugging of my python > code, then automagically step into gdb when the python calls out to > C++ code I've written via boost::python. > > Does anybody do anything like this? To my shame, I've gotten used to > such luxuries in certain non-linux environments... See http://systems.cs.uchicago.edu/wad/ Dave Beazley is a sick man... -- Dave Abrahams Boost Consulting www.boost-consulting.com From v.rey at laposte.net Thu Jun 12 15:46:44 2003 From: v.rey at laposte.net (Vincent Rey) Date: Thu, 12 Jun 2003 15:46:44 +0200 Subject: [C++-sig] Exposing dll and exe classes Message-ID: Here are the 3 modules of my test : - Framework.dll : embeds the python interpreter and other classes. - Python_api.dll : non intrusive exposition of Framework and Main classes using Boost.python. - Main.exe : uses Framework.dll and provides other classes. In order to build Python_api and use Main classes through python script, I had to link Main as an exe AND as a lib. Is it the way to do such a thing, or is it possible to avoid this double link work ? Thank you From xavier.warin at der.edfgdf.fr Thu Jun 12 17:18:34 2003 From: xavier.warin at der.edfgdf.fr (Xavier WARIN(Compte LOCAL) - I23) Date: Thu, 12 Jun 2003 17:18:34 +0200 Subject: [C++-sig] Maximum number of arguments in constructor Message-ID: <3EE899CA.5040309@der.edfgdf.fr> Hi, Some classes I want to wrap use constructors with 16 or more arguments, and when I try and define "def( init< item0, item1, ..., item15 >() )" the compiler produces this error message : /myPath/boost_1_30_0/boost/python/init.hpp:66: provided for `template class boost::python::init' ../BPL/Boost_myclass.cc:line: wrong number of template arguments (16, should be 15) So, my question is : where and how can I define another maximum number of arguments, say 20 ? or is there a way to circumvent this default parameter ? Thank you very much Xavier Warin From rwgk at yahoo.com Thu Jun 12 17:53:00 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Thu, 12 Jun 2003 08:53:00 -0700 (PDT) Subject: [C++-sig] Maximum number of arguments in constructor In-Reply-To: <3EE899CA.5040309@der.edfgdf.fr> Message-ID: <20030612155300.86189.qmail@web20210.mail.yahoo.com> --- "Xavier WARIN(Compte LOCAL) - I23" wrote: > So, my question is : where and how can I define another maximum number > of arguments, say 20 ? or is there a way to circumvent this default > parameter ? http://www.boost.org/libs/python/doc/v2/configuration.html Look for BOOST_PYTHON_MAX_ARITY Ralf __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From roman_sulzhyk at yahoo.com Thu Jun 12 23:15:01 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Thu, 12 Jun 2003 14:15:01 -0700 (PDT) Subject: [C++-sig] Boost.Python: few thoughts and questions... Message-ID: <20030612211501.5779.qmail@web41108.mail.yahoo.com> All: First, want to say thank you for terrific work on boost.python - first time I've touched it in a few years and am very impressed by version 2.0 and especially pyste. 
Played around with it for a couple days, and have some questions, didn't find ready answers when skimming through archives, so I apologize if they've been voiced before. To give a bit of a background, I'm trying to reflect a third-party C++ API into Python, so am kind of stuck with class definitions, and am trying to do as little work as possible hence leveraging pyste. 1. Virtual functions The automatic overloading of virtual functions is great, however I see situations where it would be nice to be able to disable it. For example, I'm trying to expose a derived class which has some virtual members inherited from its sub-classes, yet I have no intention to overload them in python and hence would like to save on the wrapper code generation. Also, when I'm exposing a class, I think it would be nice to expose all of the virtual methods inherited from base classes, even if they're not explicitly overloaded in the class itself - do you think that would be useful functionality to add? 2. Virtual functions declaring exceptions Basically, a simple construct like this struct foo {}; struct World { virtual const std::string &hello() throw (foo) { return msg; } }; causes pyste generated code to choke, because the exception declaration of the wrapper function is looser than the original one (gcc 3.2) 3. Specifying the set_policy() for virtual functions It appears that the policy setting is ignored for virtual function declarations. Anyway, I've started playing around with it and worked around some of these issues, updating pyste to take into account policy for virtual functions. For the throw specifier problem, it appears that there is no easy solution, as GCCXML does not generate throw specifier data as far as I can tell, requiring changes to gccxml for full solution. I've also added an 'unvirtual' function to pyste, kind of like 'exclude', which allows a user to specify that a function should be treated like a regular rather than virtual method when creating a reflection - that works around the 'looser throw specifier' problem I've mentioned. Anyway, would like to sync up on what you think about these issues and whether it's worthwhile to package the changes and submit a patch. Thanks, Roman Sulzhyk __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From dave at boost-consulting.com Fri Jun 13 00:55:42 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 12 Jun 2003 18:55:42 -0400 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: Roman Sulzhyk writes: > All: > > First, want to say thank you for terrific work on boost.python - first > time I've touched it in a few years and am very impressed by version > 2.0 Thanks! > and especially pyste. Thanks to Nicodemus and his willingness to keep an open mind! > Played around with it for a couple days, and have some questions, > didn't find ready answers when skimming through archives, so I > apologize if they've been voiced before. > > To give a bit of a background, I'm trying to reflect a third-party C++ > API into Python, so am kind of stuck with class definitions, and am > trying to do as little work as possible hence leveraging pyste. > > 1. Virtual functions > > The automatic overloading of virtual functions is great, however I see > situations where it would be nice to be able to disable it. 
For > example, I'm trying to expose a derived class which has some virtual > members inherited from its sub-classes, yet I have no intention to > overload them in python and hence would like to save on the wrapper > code generation. I think you're confusing overloading and overriding. It sounds like you don't plan to override these virtual functions, and you'd like Pyste not to generate the dispatching code and perhaps not generate the derived wrapper class when there's no dispatching code at all. Do you also want Pyste to *not* wrap some some specific function overloads? > Also, when I'm exposing a class, I think it would be nice to expose all > of the virtual methods inherited from base classes, even if they're not > explicitly overloaded in the class itself ^^^^^^^^^^ "Overridden" again? And why only virtual functions? I'd be really surprised if Pyste failed to expose *all* of the functions publicly inherited from base classes but if that's the case, it should probably be fixed. BTW, "methods" is a Python term; the C++ term is "function" or "member function". When discussing two languages at once it gets really confusing if you're not very careful with terminology. > 2. Virtual functions declaring exceptions > > Basically, a simple construct like this > > struct foo {}; > > > struct World > { > virtual const std::string &hello() throw (foo) { return msg; } > }; > > causes pyste generated code to choke, because the exception declaration > of the wrapper function is looser than the original one (gcc 3.2) That's called an "exception specification", not an "exception declaration". Pyste couldn't handle it due to a missing feature in GCC_XML, which has just been added today. I think this will probably be handled very soon. > 3. Specifying the set_policy() for virtual functions > > It appears that the policy setting is ignored for virtual function > declarations. What makes you think so? -- Dave Abrahams Boost Consulting www.boost-consulting.com From paul.rudin at ntlworld.com Fri Jun 13 06:43:12 2003 From: paul.rudin at ntlworld.com (Paul Rudin) Date: 13 Jun 2003 05:43:12 +0100 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: David Abrahams writes: > Roman Sulzhyk writes: > > > 3. Specifying the set_policy() for virtual functions > > > > It appears that the policy setting is ignored for virtual function > > declarations. > > What makes you think so? 
> foo.h is: ____________________ class foo { public: virtual foo* bar(); }; _____________________ foo.pyste is: ______________________ foo = AllFromHeader('./foo.h') set_policy(foo.foo.bar,return_internal_reference()) ________________________ then (current cvs) pyste gives you: ______________________________ // Includes ==================================================================== #include #include <./foo.h> // Using ======================================================================= using namespace boost::python; // Declarations ================================================================ namespace { struct foo_Wrapper: foo { foo_Wrapper(PyObject* self_, const foo & p0): foo(p0), self(self_) {} foo_Wrapper(PyObject* self_): foo(), self(self_) {} foo * bar() { return call_method< foo * >(self, "bar"); } foo * default_bar() { return foo::bar(); } PyObject* self; }; }// namespace // Module ====================================================================== BOOST_PYTHON_MODULE(foo) { class_< foo, foo_Wrapper >("foo", init< >()) .def(init< const foo & >()) .def("bar", &foo::bar, &foo_Wrapper::default_bar) ; } ______________________________ From g-evjen at online.no Fri Jun 13 10:02:00 2003 From: g-evjen at online.no (Geir Arne Evjen) Date: 13 Jun 2003 10:02:00 +0200 Subject: [C++-sig] Double arrays from python to c++ Message-ID: <1055491319.1268.12.camel@localhost.localdomain> I've started to test this superb boost::python package . However, I have problems getting the following code to work. #include #include struct EatDoubleArray { EatDoubleArray(const double * const parray, const int size) : m_parray(parray), m_size(size) { } const double * const m_parray; const int m_size; }; std::ostream& operator<<(std::ostream& in, const EatDoubleArray& obj) { for (int i = 0; i < obj.m_size; ++i) in << obj.m_parray[i]; in << std::endl; return in; } using namespace boost::python; struct ListToArrayConverter { ListToArrayConverter() { converter::registry::insert(&ListToArray_convertible, &ListToArray_construct, boost::python::type_id()); } static void ListToArray_construct(PyObject* obj, boost::python::converter::rvalue_from_python_stage1_data* data) { boost::python::list l; if (PyList_Check(obj)) { l = extract(obj); } else return; // Allocate the bytes void* storage = ((converter::rvalue_from_python_storage*)data)->storage.bytes; int n = extract(l.attr("__len__")); new (storage) double[n]; for (int i = 0; i < n; ++i) { *((double*)storage + i) = extract(l[i]); } data->convertible = storage; } static void *ListToArray_convertible(PyObject *p) { return p; } }; // Python requires an exported function called init in every // extension module. This is where we build the module contents. BOOST_PYTHON_MODULE(pydoubletest) { ListToArrayConverter(); class_("EatDoubleArray", init()) .def(self_ns::str(self)) ; } When I'm testing this from python I get the following message: Traceback (most recent call last): File "doubletest.py", line 4, in ? a = EatDoubleArray(A, 4) TypeError: bad argument type for built-in operation Basically I just want to create an array in Python and use it in C++ . Any ideas what I have done wrong, or misunderstood. Thanks in advance Geir Arne Evjen From dave at boost-consulting.com Fri Jun 13 15:08:28 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 09:08:28 -0400 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... 
References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: Paul Rudin writes: > David Abrahams writes: > >> Roman Sulzhyk writes: >> >> > 3. Specifying the set_policy() for virtual functions >> > >> > It appears that the policy setting is ignored for virtual function >> > declarations. >> >> What makes you think so? OK, it looks like Pyste has a bit of a bug here! Thanks to Roman for reporting it and to Paul for following up. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 13 15:16:44 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 09:16:44 -0400 Subject: [C++-sig] Re: Double arrays from python to c++ References: <1055491319.1268.12.camel@localhost.localdomain> Message-ID: Geir Arne Evjen writes: > When I'm testing this from python I get the following message: > > Traceback (most recent call last): > File "doubletest.py", line 4, in ? > a = EatDoubleArray(A, 4) > TypeError: bad argument type for built-in operation > > Basically I just want to create an array in Python and use it in C++ . > Any ideas what I have done wrong, or misunderstood. What is 'A' above? What you've probably misunderstood is that when converting to a T* or T (non-const)& argument, Boost.Python uses an lvalue converter for T and any rvalue converters for T* are ignored. Another major problem you have is that you've registered this rvalue converter for double*, so the rvalue_from_python_data has enough aligned storage for you to construct a double* there, but there's no place to store double[N]. What your converter is doing invokes undefined behavior. It's probably better to write thin wrapper functions which accept boost::python::object or boost::python::list, and manage the memory for the array internally to those functions. HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From roman_sulzhyk at yahoo.com Fri Jun 13 16:19:26 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Fri, 13 Jun 2003 07:19:26 -0700 (PDT) Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: Message-ID: <20030613141926.65234.qmail@web41113.mail.yahoo.com> David, Paul, thanks for following up. David, sorry about confused terminology in the original email, I'm a bit rusty. Anyway, here's a more complete example of what I meant. ==== world.h ==== #include #include #include struct useless {}; class BaseWorld { public: // Should be exposed but isn't! 
virtual const char *uber_greet() const { return "uber greet"; } virtual const std::string &greet() throw (useless) = NULL; }; class World : public BaseWorld { public: World(const std::string &msg): msg(msg) {} void set(const std::string &msg) { this->msg = msg; } virtual const std::string &greet() throw (useless) { return msg; } // Pseudo overload // const char *uber_greet() { return this->BaseWorld::uber_greet(); } private: std::string msg; }; std::string test_world(void) { World w("greet from derived"); std::ostringstream os; os << w.uber_greet() << " " << w.greet(); return os.str(); } ==== world.pyste ===== world = Class("World", "world.h") Function ( "test_world", "world.h" ) unvirtual ( world.greet ) set_policy ( world.greet, return_value_policy(copy_const_reference) ) ==== world.cpp ======= [roman at mholden ~/src/pysymphony]$ cat world.cpp // Includes ====================================================================#include #include // Using =======================================================================using namespace boost::python; // Module ======================================================================BOOST_PYTHON_MODULE(world) { def("test_world", &test_world); class_< World >("World", init< const World & >()) .def(init< const std::basic_string,std::allocator > & >()) .def("set", &World::set) .def("greet", &World::greet, return_value_policy< copy_const_reference >()) ; } ========================== This example demonstrates all of the original points. Notice that I'm using my patched version of 1.3.0 pyste, which has 'unvirtual' - if you try the example with regular pyste it will fail on throw(useless), removing that should work fine. However, I do believe it's a useful feature to be able to disable automatic wrapping of classes which have virtual functions. In any case, the current implementation has a bug whereas 'exclude' is not taken into account when generating wrappers - i.e. if a class has virtual functions and you 'exclude' all of them, a blank wrapper will still be generated. A bigger problem is that public methods of the BaseWorld class are not exported. As far as I can tell it should be possible to add it, all of required info appears to be generated by gccxml. Glad to hear extraction of exception specifications is added to GCCXML. I'm planning to fix the problems I've mentioned since I'd like to use this functionality, is it worthwhile submitting a patch? Thanks! Roman --- David Abrahams wrote: > Paul Rudin writes: > > > David Abrahams writes: > > > >> Roman Sulzhyk writes: > >> > >> > 3. Specifying the set_policy() for virtual functions > >> > > >> > It appears that the policy setting is ignored for virtual > function > >> > declarations. > >> > >> What makes you think so? > > policy settings> > > OK, it looks like Pyste has a bit of a bug here! Thanks to Roman for > reporting it and to Paul for following up. > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From nbecker at hns.com Fri Jun 13 16:48:54 2003 From: nbecker at hns.com (Neal D. Becker) Date: Fri, 13 Jun 2003 10:48:54 -0400 Subject: [C++-sig] Howto wrap operator[]? (boost::python) Message-ID: I am playing with exposing std::vector. 
So far I got init OK, now I want to expose operator[]. I don't see anything in the tutorial about this. Any hints? From dave at boost-consulting.com Fri Jun 13 18:45:22 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 12:45:22 -0400 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... References: <20030613141926.65234.qmail@web41113.mail.yahoo.com> Message-ID: Roman Sulzhyk writes: > I'm planning to fix the problems I've mentioned since I'd like to use > this functionality, is it worthwhile submitting a patch? Sure, though I'm not sure where Nicodemus is at the moment; I haven't heard anything from him recently. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 13 21:44:20 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 15:44:20 -0400 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) References: Message-ID: "Neal D. Becker" writes: > I am playing with exposing std::vector. So far I got init > OK, now I want to expose operator[]. I don't see anything in the > tutorial about this. Any hints? You just need to wrap some functions as __getitem__ and __setitem__. There's a whole thread about pitfalls with this here: http://aspn.activestate.com/ASPN/Mail/Message/c++-sig/1652155 You're not subject to most of the problems, because double isn't a class type. -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Fri Jun 13 22:33:46 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Fri, 13 Jun 2003 13:33:46 -0700 (PDT) Subject: [C++-sig] Howto wrap operator[]? (boost::python) In-Reply-To: Message-ID: <20030613203346.91812.qmail@web20208.mail.yahoo.com> --- "Neal D. Becker" wrote: > I am playing with exposing std::vector. So far I got init OK, now I > want to expose operator[]. I don't see anything in the tutorial about > this. Any hints? http://www.python.org/cgi-bin/moinmoin/boost_2epython_2fStlContainers __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From nicodemus at globalite.com.br Fri Jun 13 22:52:30 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 17:52:30 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: References: <20030613141926.65234.qmail@web41113.mail.yahoo.com> Message-ID: <3EEA398E.2020403@globalite.com.br> David Abrahams wrote: >Roman Sulzhyk writes: > > > >>I'm planning to fix the problems I've mentioned since I'd like to use >>this functionality, is it worthwhile submitting a patch? >> >> > >Sure, though I'm not sure where Nicodemus is at the moment; I haven't >heard anything from him recently. > Sorry about that: I got a hard drive crash, and I am now at a temporary machine. From nicodemus at globalite.com.br Fri Jun 13 23:00:07 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 18:00:07 -0300 Subject: [C++-sig] Boost.Python: few thoughts and questions... In-Reply-To: <20030612211501.5779.qmail@web41108.mail.yahoo.com> References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: <3EEA3B57.2040702@globalite.com.br> Roman Sulzhyk wrote: >All: > >First, want to say thank you for terrific work on boost.python - first >time I've touched it in a few years and am very impressed by version >2.0 and especially pyste. > > Thanks a lot! >1. 
Virtual functions > >The automatic overloading of virtual functions is great, however I see >situations where it would be nice to be able to disable it. For >example, I'm trying to expose a derived class which has some virtual >members inherited from its sub-classes, yet I have no intention to >overload them in python and hence would like to save on the wrapper >code generation. > > Certainly, that's easy to add... I was aware that this need might come up. >Also, when I'm exposing a class, I think it would be nice to expose all >of the virtual methods inherited from base classes, even if they're not >explicitly overloaded in the class itself - do you think that would be >useful functionality to add? > > I don't know... why don't you export the base class itself? That way the methods will be avaiable in the derived class thanks to Boost.Python. >2. Virtual functions declaring exceptions > >Basically, a simple construct like this > >struct foo {}; > > >struct World >{ > virtual const std::string &hello() throw (foo) { return msg; } >}; > >causes pyste generated code to choke, because the exception declaration >of the wrapper function is looser than the original one (gcc 3.2) > Yes, that issue has been brought up in the past here in the list, but not until recently Brad King has added support for it in GCCXML. As soon as I can I will implement this. >3. Specifying the set_policy() for virtual functions > >It appears that the policy setting is ignored for virtual function >declarations. > > That's a bug. I fix it as soon as I can. >Anyway, I've started playing around with it and worked around some of >these issues, updating pyste to take into account policy for virtual >functions. For the throw specifier problem, it appears that there is no >easy solution, as GCCXML does not generate throw specifier data as far >as I can tell, requiring changes to gccxml for full solution. > >I've also added an 'unvirtual' function to pyste, kind of like >'exclude', which allows a user to specify that a function should be >treated like a regular rather than virtual method when creating a >reflection - that works around the 'looser throw specifier' problem >I've mentioned. > > >Anyway, would like to sync up on what you think about these issues and >whether it's worthwhile to package the changes and submit a patch. > Certainly, that would be great. Thanks a lot! Unfortunately, my hard drive went dead on me and I am at a temporary machine, so I don't know if I will be able to make any changes in Pyste for now. I should be back up only next week. Sorry about that. >Thanks, > >Roman Sulzhyk > > Regards, Nicodemus. From nicodemus at globalite.com.br Fri Jun 13 23:04:22 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 18:04:22 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: <3EEA3C56.3070006@globalite.com.br> David Abrahams wrote: >Roman Sulzhyk writes: > > >>Also, when I'm exposing a class, I think it would be nice to expose all >>of the virtual methods inherited from base classes, even if they're not >>explicitly overloaded in the class itself >> >> > ^^^^^^^^^^ >"Overridden" again? > >And why only virtual functions? I'd be really surprised if Pyste >failed to expose *all* of the functions publicly inherited from base >classes but if that's the case, it should probably be fixed. > > Pyste does not export the base's member functions in the derived class. 
If the user wants functions from the base class, he should export the base class too. Perhaps this behaviour should change? >BTW, "methods" is a Python term; the C++ term is "function" or "member >function". When discussing two languages at once it gets really >confusing if you're not very careful with terminology. > > > >>2. Virtual functions declaring exceptions >> >>Basically, a simple construct like this >> >>struct foo {}; >> >> >>struct World >>{ >> virtual const std::string &hello() throw (foo) { return msg; } >>}; >> >>causes pyste generated code to choke, because the exception declaration >>of the wrapper function is looser than the original one (gcc 3.2) >> >> > >That's called an "exception specification", not an "exception >declaration". Pyste couldn't handle it due to a missing feature in >GCC_XML, which has just been added today. I think this will probably >be handled very soon. > As soon as I can get my system up again. 8) >>3. Specifying the set_policy() for virtual functions >> >>It appears that the policy setting is ignored for virtual function >>declarations. >> >> > >What makes you think so? > That's a bug, thanks for the report Roman! Regards, Nicodemus. From nicodemus at globalite.com.br Fri Jun 13 23:09:11 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 18:09:11 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: <20030613141926.65234.qmail@web41113.mail.yahoo.com> References: <20030613141926.65234.qmail@web41113.mail.yahoo.com> Message-ID: <3EEA3D77.9080005@globalite.com.br> Roman Sulzhyk wrote: > > > >This example demonstrates all of the original points. > >Notice that I'm using my patched version of 1.3.0 pyste, which has >'unvirtual' - if you try the example with regular pyste it will fail on >throw(useless), removing that should work fine. > >However, I do believe it's a useful feature to be able to disable >automatic wrapping of classes which have virtual functions. In any >case, the current implementation has a bug whereas 'exclude' is not >taken into account when generating wrappers - i.e. if a class has >virtual functions and you 'exclude' all of them, a blank wrapper will >still be generated. > >A bigger problem is that public methods of the BaseWorld class are not >exported. As far as I can tell it should be possible to add it, all of >required info appears to be generated by gccxml. > >Glad to hear extraction of exception specifications is added to GCCXML. > >I'm planning to fix the problems I've mentioned since I'd like to use >this functionality, is it worthwhile submitting a patch? > Sure, it would be great! But as I said in another post, I won't be able to touch Pyste until next week. 8/ >Thanks! > Thanks to you for taking time to adress this issues and discussing them here in the list! Is because of people like you that Pyste has evolved into a nearly-useful tool. ;) Regards, Nicodemus. From nicodemus at globalite.com.br Fri Jun 13 23:19:29 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 18:19:29 -0300 Subject: [C++-sig] Howto wrap operator[]? (boost::python) In-Reply-To: References: Message-ID: <3EEA3FE1.3080601@globalite.com.br> Neal D. Becker wrote: >I am playing with exposing std::vector. So far I got init OK, now I >want to expose operator[]. I don't see anything in the tutorial about >this. Any hints? > > > Currently you will have to expose them by hand. 
Python has two especial methods, named __setitem__ and __getitem__, that allows a class to define the behaviour of the [] operator. All you have to do is export this especial methods as members of your class. Here is some code (untested): template void vector_setitem(std::vector& v, int index, T value) { if (index >= 0 && index < v.size()) { v[index] = value; } else { PyErr_SetString(PyExc_IndexError, "index out of range"); throw_error_already_set(); } } template T vector_getitem(std::vector &v, int index) { if (index >= 0 && index < v.size()) { return v[index]; } else { PyErr_SetString(PyExc_IndexError, "index out of range"); throw_error_already_set(); } } BOOST_PYTHON_MODULE(std) { class_ >(...) ... .def("__getitem__", &vector_getitem) .def("__setitem__", &vector_setitem) ; } From dave at boost-consulting.com Fri Jun 13 23:13:28 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 17:13:28 -0400 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) References: <20030613203346.91812.qmail@web20208.mail.yahoo.com> Message-ID: "Ralf W. Grosse-Kunstleve" writes: > --- "Neal D. Becker" wrote: >> I am playing with exposing std::vector. So far I got init OK, now I >> want to expose operator[]. I don't see anything in the tutorial about >> this. Any hints? > > http://www.python.org/cgi-bin/moinmoin/boost_2epython_2fStlContainers I should point out that wrapping iterators explicitly is unneccessary (and less safe) then letting Python generate iterators automatically using your __getitem__ for zero-based integer-indexed containers. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 13 23:15:42 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 13 Jun 2003 17:15:42 -0400 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> <3EEA3C56.3070006@globalite.com.br> Message-ID: Nicodemus writes: > David Abrahams wrote: > >>Roman Sulzhyk writes: >> >>>Also, when I'm exposing a class, I think it would be nice to expose all >>>of the virtual methods inherited from base classes, even if they're not >>>explicitly overloaded in the class itself >>> >> ^^^^^^^^^^ >>"Overridden" again? >> >>And why only virtual functions? I'd be really surprised if Pyste >>failed to expose *all* of the functions publicly inherited from base >>classes but if that's the case, it should probably be fixed. >> > > Pyste does not export the base's member functions in the derived > class. If the user wants functions from the base class, he should > export the base class too. Perhaps this behaviour should change? Often base class partitioning is merely an implementation detail. It seems to me that if public bases are not explicitly exported they should be exported implicitly, unless explicitly suppressed. But I might be wrong about that. >>BTW, "methods" is a Python term; the C++ term is "function" or "member >>function". When discussing two languages at once it gets really >>confusing if you're not very careful with terminology. So you might want to tweak the Pyste docs accordingly ;-> -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Fri Jun 13 23:35:43 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 18:35:43 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... 
In-Reply-To: References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> <3EEA3C56.3070006@globalite.com.br> Message-ID: <3EEA43AF.1040405@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > > >>David Abrahams wrote: >> >> >> >>>Roman Sulzhyk writes: >>> >>> >>> >>>>Also, when I'm exposing a class, I think it would be nice to expose all >>>>of the virtual methods inherited from base classes, even if they're not >>>>explicitly overloaded in the class itself >>>> >>>> >>>> >>> ^^^^^^^^^^ >>>"Overridden" again? >>> >>>And why only virtual functions? I'd be really surprised if Pyste >>>failed to expose *all* of the functions publicly inherited from base >>>classes but if that's the case, it should probably be fixed. >>> >>> >>> >>Pyste does not export the base's member functions in the derived >>class. If the user wants functions from the base class, he should >>export the base class too. Perhaps this behaviour should change? >> >> > >Often base class partitioning is merely an implementation detail. It >seems to me that if public bases are not explicitly exported they >should be exported implicitly, unless explicitly suppressed. > >But I might be wrong about that. > It is certainly do-able, we just have to decide if this feature is wanted or not. Opinions, everyone? >>>BTW, "methods" is a Python term; the C++ term is "function" or "member >>>function". When discussing two languages at once it gets really >>>confusing if you're not very careful with terminology. >>> >>> > >So you might want to tweak the Pyste docs accordingly ;-> > oops! ;) Will do, thanks for pointing it out. From roman_sulzhyk at yahoo.com Sat Jun 14 00:32:19 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Fri, 13 Jun 2003 15:32:19 -0700 (PDT) Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... [PATCH] In-Reply-To: Message-ID: <20030613223219.69200.qmail@web41113.mail.yahoo.com> Guys: Anyway, here's a patch with some rather raw hacks so far, if this functionality is useful to Nicodemus I can definitely clean this up. The patch is against stock 1.3.0 pyste. Basically I've added 'unvirtual' functionality, to treat virtual functions as if they were regular, added policy honouring to virtual member functions and extended the member function generation to include all of the functions publicly inherited from base classes. BTW, I also have a few questions, maybe we can have a discussion: 1) Is it worthwhile to add a 'default reference policy' to pyste, i.e. a default conversion (like copy_const_reference) which will be used in case one isn't specified explicitely. This will be useful for situations where one has an API which returns a bunch of const std::string &, to avoid having to specify them function by function. 2) How about adding 'converter' functionality to pyste, i.e. ability to register default converters using to_python_converter<> ? Thanks! Roman --- David Abrahams wrote: > Roman Sulzhyk writes: > > > I'm planning to fix the problems I've mentioned since I'd like to > use > > this functionality, is it worthwhile submitting a patch? > > Sure, though I'm not sure where Nicodemus is at the moment; I haven't > heard anything from him recently. > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). 
http://calendar.yahoo.com -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: pyste.patch.txt URL: From roman_sulzhyk at yahoo.com Sat Jun 14 00:38:42 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Fri, 13 Jun 2003 15:38:42 -0700 (PDT) Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: Message-ID: <20030613223842.50289.qmail@web41114.mail.yahoo.com> --- David Abrahams wrote: > Nicodemus writes: > > > David Abrahams wrote: > > [snip] > > > > Pyste does not export the base's member functions in the derived > > class. If the user wants functions from the base class, he should > > export the base class too. Perhaps this behaviour should change? > > Often base class partitioning is merely an implementation detail. It > seems to me that if public bases are not explicitly exported they > should be exported implicitly, unless explicitly suppressed. > > But I might be wrong about that. David, I'd like to side with you on this - sometimes people do partitioning of base classes as an implementation detail, and you're just interested in exporting a top level class to python, without worrying where its member functions are originating. To me it seems that exposing all of the base classes to python, like Nicodemus is suggesting, is viable but less elegant. > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From nicodemus at globalite.com.br Sat Jun 14 00:42:29 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 19:42:29 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... [PATCH] In-Reply-To: <20030613223219.69200.qmail@web41113.mail.yahoo.com> References: <20030613223219.69200.qmail@web41113.mail.yahoo.com> Message-ID: <3EEA5355.8020308@globalite.com.br> Roman Sulzhyk wrote: >Guys: > >Anyway, here's a patch with some rather raw hacks so far, if this >functionality is useful to Nicodemus I can definitely clean this up. >The patch is against stock 1.3.0 pyste. > > Looks great, thanks a lot Roman! I will apply the patch as soon as I can. >1) Is it worthwhile to add a 'default reference policy' to pyste, i.e. >a default conversion (like copy_const_reference) which will be used in >case one isn't specified explicitely. This will be useful for >situations where one has an API which returns a bunch of const >std::string &, to avoid having to specify them function by function. > No, there isn't. But Pyste automatically uses copy_const_reference if a member function returns something by constant reference... try using the latest CVS. >2) How about adding 'converter' functionality to pyste, i.e. ability to >register default converters using to_python_converter<> ? > That is indeed something that is missing. For pyste it is simple, we just need to provide a function to the user that inserts the specific code into the BOOST_PYTHON_MODULE: Include("my_converters.h") to_python_converter("converters::vector_to_list", "PyListType") Would generate: #include BOOST_PYTHON_MODULE(...) { to_python_converter(); // from the top of my head ... } Something like this would do? Regards, Nicodemus. 
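For concreteness, the kind of converter that "my_converters.h" would supply, together with the registration Nicodemus sketches above, might look roughly like this. Only a sketch: the converters::vector_to_list name is taken from the proposal, while the double element type and the module name are made up for illustration.

#include <boost/python.hpp>
#include <vector>

namespace converters {

    // Builds a new Python list from a std::vector<T>.
    template <class T>
    struct vector_to_list
    {
        static PyObject* convert(std::vector<T> const& v)
        {
            boost::python::list result;
            for (unsigned i = 0; i < v.size(); ++i)
                result.append(v[i]);
            return boost::python::incref(result.ptr());
        }
    };

} // namespace converters

BOOST_PYTHON_MODULE(example)
{
    // What the proposed Pyste directive would be expected to emit:
    boost::python::to_python_converter<
        std::vector<double>, converters::vector_to_list<double> >();
}

Once registered this way, any wrapped function that returns std::vector<double> by value hands a genuine Python list back to the caller.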
From nicodemus at globalite.com.br Sat Jun 14 00:45:15 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 19:45:15 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: <20030613223842.50289.qmail@web41114.mail.yahoo.com> References: <20030613223842.50289.qmail@web41114.mail.yahoo.com> Message-ID: <3EEA53FB.2000308@globalite.com.br> Roman Sulzhyk wrote: >--- David Abrahams wrote: > > >>>David Abrahams wrote: >>> >>> >>Often base class partitioning is merely an implementation detail. It >>seems to me that if public bases are not explicitly exported they >>should be exported implicitly, unless explicitly suppressed. >> >>But I might be wrong about that. >> >> > >David, I'd like to side with you on this - sometimes people do >partitioning of base classes as an implementation detail, and you're >just interested in exporting a top level class to python, without >worrying where its member functions are originating. To me it seems >that exposing all of the base classes to python, like Nicodemus is >suggesting, is viable but less elegant. > Hmm, no problem then. Your patch already deals with this, right Roman? If not, I will implement it. Thanks for the discussion David and Roman. Regards, Nicodemus. From roman_sulzhyk at yahoo.com Sat Jun 14 00:55:30 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Fri, 13 Jun 2003 15:55:30 -0700 (PDT) Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... [PATCH] In-Reply-To: <3EEA5355.8020308@globalite.com.br> Message-ID: <20030613225530.95280.qmail@web41115.mail.yahoo.com> --- Nicodemus wrote: > Roman Sulzhyk wrote: > > >Guys: > > > >Anyway, here's a patch with some rather raw hacks so far, if this > >functionality is useful to Nicodemus I can definitely clean this up. > >The patch is against stock 1.3.0 pyste. > > > > > > Looks great, thanks a lot Roman! I will apply the patch as soon as I > can. Just make sure to clean it up, it's a bit raw. I do think however that it's probably worthwhile to change pyste internally to treat base classes similarly to regular classes, because if you start exposing all of the base classes member functions / member variables implicitely you would have to change the way 'IsUnique()', 'members', and some other things are handled... > > >1) Is it worthwhile to add a 'default reference policy' to pyste, > i.e. > >a default conversion (like copy_const_reference) which will be used > in > >case one isn't specified explicitely. This will be useful for > >situations where one has an API which returns a bunch of const > >std::string &, to avoid having to specify them function by function. > > > > No, there isn't. But Pyste automatically uses copy_const_reference if > a > member function returns something by constant reference... try using > the > latest CVS. Cool, that's great. > >2) How about adding 'converter' functionality to pyste, i.e. ability > to > >register default converters using to_python_converter<> ? > > > > That is indeed something that is missing. For pyste it is simple, we > just need to provide a function to the user that inserts the specific > > code into the BOOST_PYTHON_MODULE: > > Include("my_converters.h") > to_python_converter("converters::vector_to_list", "PyListType") > > Would generate: > > #include > > BOOST_PYTHON_MODULE(...) > { > to_python_converter(); // > > from the top of my head > ... > } > > Something like this would do? Yep, something like this will do great. > Regards, > Nicodemus. 
> > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig Thanks! Roman __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From roman_sulzhyk at yahoo.com Sat Jun 14 01:09:28 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Fri, 13 Jun 2003 16:09:28 -0700 (PDT) Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: <3EEA53FB.2000308@globalite.com.br> Message-ID: <20030613230928.2404.qmail@web41104.mail.yahoo.com> Nicodemus: The patch would work, it has a couple problems though - treats inherited virtual functions not overloaded at top level by default as 'unvirtual', i.e. doesn't generate a wrapper for them, and it also spews some warnings in 'Class.IsUnique()' however ignoring them works fine. So feel free to use this as a basis for your implementation. I'll also take a look at the CVS version when I get a chance, I'm behind a firewall at work. I think your virtual wrapper generation code is becoming a bit of a hack and may need some cleaning up (second half of ClassExporter.py), but overall the quality of the implementation is very impressive. Thanks, Roman --- Nicodemus wrote: > Roman Sulzhyk wrote: > > >--- David Abrahams wrote: > > > > > >>>David Abrahams wrote: > >>> > >>> > >>Often base class partitioning is merely an implementation detail. > It > >>seems to me that if public bases are not explicitly exported they > >>should be exported implicitly, unless explicitly suppressed. > >> > >>But I might be wrong about that. > >> > >> > > > >David, I'd like to side with you on this - sometimes people do > >partitioning of base classes as an implementation detail, and you're > >just interested in exporting a top level class to python, without > >worrying where its member functions are originating. To me it seems > >that exposing all of the base classes to python, like Nicodemus is > >suggesting, is viable but less elegant. > > > > Hmm, no problem then. Your patch already deals with this, right > Roman? > If not, I will implement it. Thanks for the discussion David and > Roman. > > Regards, > Nicodemus. > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? Yahoo! Calendar - Free online calendar with sync to Outlook(TM). http://calendar.yahoo.com From nicodemus at globalite.com.br Sat Jun 14 01:17:00 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 20:17:00 -0300 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... In-Reply-To: <20030613230928.2404.qmail@web41104.mail.yahoo.com> References: <20030613230928.2404.qmail@web41104.mail.yahoo.com> Message-ID: <3EEA5B6C.7040005@globalite.com.br> Roman Sulzhyk wrote: >Nicodemus: > >The patch would work, it has a couple problems though - treats >inherited virtual functions not overloaded at top level by default as >'unvirtual', i.e. doesn't generate a wrapper for them, and it also >spews some warnings in 'Class.IsUnique()' however ignoring them works >fine. > > Hmm, I will look into these, thanks. >So feel free to use this as a basis for your implementation. I'll also >take a look at the CVS version when I get a chance, I'm behind a >firewall at work. 
I think your virtual wrapper generation code is >becoming a bit of a hack and may need some cleaning up (second half of >ClassExporter.py), but overall the quality of the implementation is >very impressive. > I have this feeling too... I will have to refactor that code some time, probably move it to a separate file too. And thanks for the compliment! Regards, Nicodemus. From mike at bindkey.com Sat Jun 14 01:30:55 2003 From: mike at bindkey.com (Mike Rovner) Date: Fri, 13 Jun 2003 16:30:55 -0700 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) References: <3EEA3FE1.3080601@globalite.com.br> Message-ID: "Nicodemus" wrote in message news:3EEA3FE1.3080601 at globalite.com.br... > template > void vector_setitem(std::vector& v, int index, T value) > { > if (index >= 0 && index < v.size()) { > v[index] = value; > } > else { > PyErr_SetString(PyExc_IndexError, "index out of range"); > throw_error_already_set(); > } > } That will forbid very useful Python feature - negative index :( So better include if( index < 0 ) index+=v.size(); before your if. Regards, Mike From mike at bindkey.com Sat Jun 14 01:35:55 2003 From: mike at bindkey.com (Mike Rovner) Date: Fri, 13 Jun 2003 16:35:55 -0700 Subject: [C++-sig] Re: Re: Boost.Python: few thoughts and questions... References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> <3EEA3C56.3070006@globalite.com.br> <3EEA43AF.1040405@globalite.com.br> Message-ID: "Nicodemus" wrote in message news:3EEA43AF.1040405 at globalite.com.br... > David Abrahams wrote: > >>Pyste does not export the base's member functions in the derived > >>class. If the user wants functions from the base class, he should > >>export the base class too. Perhaps this behaviour should change? > >Often base class partitioning is merely an implementation detail. It > >seems to me that if public bases are not explicitly exported they > >should be exported implicitly, unless explicitly suppressed. > It is certainly do-able, we just have to decide if this feature is > wanted or not. Opinions, everyone? It will be nice to have. I still do all wrapping by hand, so to move to pyste it's a must be feature for me. Mike From nicodemus at globalite.com.br Sat Jun 14 01:43:58 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Fri, 13 Jun 2003 20:43:58 -0300 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) In-Reply-To: References: <3EEA3FE1.3080601@globalite.com.br> Message-ID: <3EEA61BE.2050306@globalite.com.br> Mike Rovner wrote: >"Nicodemus" wrote in message >news:3EEA3FE1.3080601 at globalite.com.br... > > > >>template >>void vector_setitem(std::vector& v, int index, T value) >>{ >> if (index >= 0 && index < v.size()) { >> v[index] = value; >> } >> else { >> PyErr_SetString(PyExc_IndexError, "index out of range"); >> throw_error_already_set(); >> } >>} >> >> > >That will forbid very useful Python feature - negative index :( >So better include > > if( index < 0 ) index+=v.size(); > >before your if. > > You are right, thanks for the remainder! Regards, Nicodemus. From franke at ableton.com Sat Jun 14 03:21:59 2003 From: franke at ableton.com (Stefan Franke) Date: Sat, 14 Jun 2003 03:21:59 +0200 Subject: [C++-sig] Simple rvalue_from_python_data usage example availabe? Message-ID: I'm currently trying to implement a bidirectional conversion from Python unicode strings to our library's own unicode string type, called 'TString'. The 'TString -> Python Unicode Object' part was easy. Just had to register a to_python_converter as shown in the manual examples. 
Unfortunately, being a BPL newbie, I'm unable to figure out the 'Python Unicode Object -> TString' part. At least I have spotted the rvalue_from_python_data mechanism which I believe is the right one for the job. But I'm clueless how to set it up for correct usage. I've checked the reference to the "scitbx" in the FAQ, but there it's done in a totally generic way, from which I'm too stupid to derive my special case. Dear SIG, can you help me? This is what I have so far: struct TStringToPythonUnicode { static PyObject* convert(const TString& s) { return PyUnicode_FromWideChar(s.FirstChar(), s.Length()); } }; struct PythonUnicodeToTString { static TString extract(PyObject* o) { // Test string without any actual PyUnicode_Type instance access return ToString("extracted"); } }; BOOST_PYTHON_MODULE(TStringModule) { to_python_converter(); rvalue_from_python_data( ??? ); // What to place here? ^^^^^^^ } -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwgk at yahoo.com Sat Jun 14 05:31:45 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Fri, 13 Jun 2003 20:31:45 -0700 (PDT) Subject: [C++-sig] Simple rvalue_from_python_data usage example availabe? In-Reply-To: Message-ID: <20030614033145.42747.qmail@web20208.mail.yahoo.com> --- Stefan Franke wrote: > I'm currently trying to implement a bidirectional conversion > from Python unicode strings to our library's own unicode string > type, called 'TString'. > ... > But I'm clueless how to set it up for correct usage. I've checked > the reference to the "scitbx" in the FAQ, but there it's done in a > totally generic way, from which I'm too stupid to derive my special > case. I believe this is very similar to what you want: http://mail.python.org/pipermail/c++-sig/2003-May/004133.html I'll add this to the FAQ when I get a chance. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From rwgk at yahoo.com Sat Jun 14 05:42:16 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Fri, 13 Jun 2003 20:42:16 -0700 (PDT) Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) In-Reply-To: Message-ID: <20030614034216.93973.qmail@web20202.mail.yahoo.com> --- Mike Rovner wrote: > > "Nicodemus" wrote in message > news:3EEA3FE1.3080601 at globalite.com.br... > > > template > > void vector_setitem(std::vector& v, int index, T value) > > { > > if (index >= 0 && index < v.size()) { > > v[index] = value; > > } > > else { > > PyErr_SetString(PyExc_IndexError, "index out of range"); > > throw_error_already_set(); > > } > > } > > That will forbid very useful Python feature - negative index :( > So better include > > if( index < 0 ) index+=v.size(); > > before your if. One more nit: replace int by long to support very large arrays. FWIW: here is my little helper function, carefully tailored to mirror indexing of builtin lists: inline std::size_t positive_getitem_index(long i, std::size_t size) { if (i >= 0) { if (i >= size) raise_index_error(); return i; } if (-i > size) raise_index_error(); return size + i; } The size parameter is the size of the indexed object. raise_index_error() is another tiny helper equivalent to the two lines for rasing the exception in the quoted section above. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! 
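Pulling the pieces of this sub-thread together: a __getitem__/__setitem__ pair for a std::vector<double> (double as in the original question), with bounds checking and negative-index support, might look roughly as follows. This only restates the snippets posted above in one self-contained sketch; the module and helper names are illustrative.

#include <boost/python.hpp>
#include <vector>
#include <cstddef>

namespace {

    void raise_index_error()
    {
        PyErr_SetString(PyExc_IndexError, "index out of range");
        boost::python::throw_error_already_set();
    }

    // Map a possibly negative Python index onto [0, size); IndexError otherwise.
    std::size_t positive_index(long i, std::size_t size)
    {
        if (i < 0) i += static_cast<long>(size);
        if (i < 0 || static_cast<std::size_t>(i) >= size) raise_index_error();
        return static_cast<std::size_t>(i);
    }

    double vector_getitem(std::vector<double> const& v, long i)
    {
        return v[positive_index(i, v.size())];
    }

    void vector_setitem(std::vector<double>& v, long i, double value)
    {
        v[positive_index(i, v.size())] = value;
    }
}

BOOST_PYTHON_MODULE(vec_example)
{
    using namespace boost::python;
    class_<std::vector<double> >("vector_double")
        .def("__len__", &std::vector<double>::size)
        .def("__getitem__", vector_getitem)
        .def("__setitem__", vector_setitem);
}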
http://sbc.yahoo.com From djowel at gmx.co.uk Sat Jun 14 06:10:40 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Sat, 14 Jun 2003 12:10:40 +0800 Subject: [C++-sig] Howto wrap operator[]? (boost::python) References: <3EEA3FE1.3080601@globalite.com.br> Message-ID: <001a01c3322a$edf111c0$0100a8c0@godzilla> Nicodemus wrote: > Currently you will have to expose them by hand. Python has two > especial methods, named __setitem__ and __getitem__, that allows a > class to > define the behaviour of the [] operator. All you have to do is export > this especial methods as members of your class. Here is some code > (untested): [snip] > template > T vector_getitem(std::vector &v, int index) FYI, I am currently working on a solution to get this working properly. The problem with the code above, as Dave mentioned in a separate thread (and a problem that I am facing right now) is that if the returned item is by value, it has no link to the original element in the vector. If for example T has a non-const member function, say "set(int)", we run into problems like: >>> vec[n].set(v) where we are trying to set v of a temporary T, thus making the vector elements through the [] operator immutable except through assignment (through __setitem__): >> vec[n] = vec[n].set(v) It might be tempting to return a reference. OTOH, we are in danger of dangling references once the vector changes (resizes for instance). I'm now experimenting with some wrapper code to fix this situation while at the same time making the wrapping of __getitem__ and its friends easier. Regards, -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From dave at boost-consulting.com Sat Jun 14 11:52:21 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 14 Jun 2003 05:52:21 -0400 Subject: [C++-sig] Re: Boost.Python: few thoughts and questions... [PATCH] References: <20030613223219.69200.qmail@web41113.mail.yahoo.com> Message-ID: Roman Sulzhyk writes: > Guys: > > Anyway, here's a patch with some rather raw hacks so far, if this > functionality is useful to Nicodemus I can definitely clean this up. > The patch is against stock 1.3.0 pyste. > > Basically I've added 'unvirtual' functionality, to treat virtual > functions as if they were regular Strange name, though. Does "final" have the wrong connotations? > added policy honouring to virtual member functions and extended the > member function generation to include all of the functions publicly > inherited from base classes. > > BTW, I also have a few questions, maybe we can have a discussion: > > 1) Is it worthwhile to add a 'default reference policy' to pyste, i.e. > a default conversion (like copy_const_reference) which will be used in > case one isn't specified explicitely. This will be useful for > situations where one has an API which returns a bunch of const > std::string &, to avoid having to specify them function by function. I think it might be worthwhile having that *per type*, so that you can say "when std::string is returned by const&, just copy it". -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sat Jun 14 12:24:49 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 14 Jun 2003 06:24:49 -0400 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) References: <3EEA3FE1.3080601@globalite.com.br> <001a01c3322a$edf111c0$0100a8c0@godzilla> Message-ID: "Joel de Guzman" writes: > Nicodemus wrote: > >> Currently you will have to expose them by hand. 
Python has two >> especial methods, named __setitem__ and __getitem__, that allows a >> class to >> define the behaviour of the [] operator. All you have to do is export >> this especial methods as members of your class. Here is some code >> (untested): > > [snip] > >> template >> T vector_getitem(std::vector &v, int index) > > FYI, I am currently working on a solution to get this working properly. > The problem with the code above, as Dave mentioned in a separate > thread (and a problem that I am facing right now) is that if the returned > item is by value, it has no link to the original element in the vector. If > for example T has a non-const member function, say "set(int)", we run > into problems like: > > >>> vec[n].set(v) > > where we are trying to set v of a temporary T, thus making the vector > elements through the [] operator immutable except through assignment > (through __setitem__): > > >> vec[n] = vec[n].set(v) > > It might be tempting to return a reference. OTOH, we are in danger > of dangling references once the vector changes (resizes for instance). But note that the OP was wrapping a container of double, which isn't subject to many of the same problems. > I'm now experimenting with some wrapper code to fix this situation > while at the same time making the wrapping of __getitem__ and its friends > easier. The "easier" part will be a big help for everyone. -- Dave Abrahams Boost Consulting www.boost-consulting.com From djowel at gmx.co.uk Sat Jun 14 15:36:48 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Sat, 14 Jun 2003 21:36:48 +0800 Subject: [C++-sig] Re: Howto wrap operator[]? (boost::python) References: <3EEA3FE1.3080601@globalite.com.br> <001a01c3322a$edf111c0$0100a8c0@godzilla> Message-ID: <00ed01c3327a$0424b7e0$0100a8c0@godzilla> David Abrahams wrote: > "Joel de Guzman" writes: > > But note that the OP was wrapping a container of double, which isn't > subject to many of the same problems. > >> I'm now experimenting with some wrapper code to fix this situation >> while at the same time making the wrapping of __getitem__ and its >> friends easier. > > The "easier" part will be a big help for everyone. Right. Pardon the noise. -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From franke at ableton.com Sat Jun 14 20:46:30 2003 From: franke at ableton.com (Stefan Franke) Date: Sat, 14 Jun 2003 20:46:30 +0200 Subject: [C++-sig] Simple rvalue_from_python_data usage example availabe? Message-ID: Ralf, thanks a lot for the quick answer. It was indeed exactly what I needed. Stefan > -----Original Message----- > From: c++-sig-admin at python.org [mailto:c++-sig-admin at python.org]On > Behalf Of Ralf W. Grosse-Kunstleve > Sent: Saturday, June 14, 2003 5:32 AM > To: c++-sig at python.org > Subject: Re: [C++-sig] Simple rvalue_from_python_data usage example > availabe? > > > --- Stefan Franke wrote: > > I'm currently trying to implement a bidirectional conversion > > from Python unicode strings to our library's own unicode string > > type, called 'TString'. > > ... > > But I'm clueless how to set it up for correct usage. I've checked > > the reference to the "scitbx" in the FAQ, but there it's done in a > > totally generic way, from which I'm too stupid to derive my special > > case. > > I believe this is very similar to what you want: > > http://mail.python.org/pipermail/c++-sig/2003-May/004133.html > > I'll add this to the FAQ when I get a chance. > Ralf > > > __________________________________ > Do you Yahoo!? 
> SBC Yahoo! DSL - Now only $29.95 per month! > http://sbc.yahoo.com > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franke at ableton.com Sat Jun 14 21:58:55 2003 From: franke at ableton.com (Stefan Franke) Date: Sat, 14 Jun 2003 21:58:55 +0200 Subject: [C++-sig] Automatic downcast question Message-ID: After observing some strange behaviour I've extracted the toy example below. We have a simple inheritance relation A < B < C. The classes A and B are wrapped, C remains unwrapped. struct A { virtual std::string classname() { return "A"; } }; struct B : A { virtual std::string classname() { return "B"; } }; struct C : B { virtual std::string classname() { return "C"; } }; A* getA() { static A* pA = new A; return pA; } A* getB() { static A* pA = new B; return pA; } A* getC() { static A* pA = new C; return pA; } BOOST_PYTHON_MODULE(Test) { def("getA", getA, return_value_policy()); def("getB", getB, return_value_policy()); def("getC", getC, return_value_policy()); class_("A").def("classname", &A::classname); class_("B", bases); // C remains unwrapped } ---------------------- Now, in Python: >>> from Test import * >>> getA() >>> getB() // <<< (Superb feature, anyway!) >>> getC() // <<< Why not Test.B? >>>getC().classname() 'C' Is there a way to make getC() return a wrapped B object instead of a wrapped A? This is important for my intended use of the BPL, since A and B (and in fact many more) are wrapped API classes, whereas C (and C1, C2, ..) are client classes that can't be exposed to BPL in advance. Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From franke at ableton.com Sat Jun 14 22:05:24 2003 From: franke at ableton.com (Stefan Franke) Date: Sat, 14 Jun 2003 22:05:24 +0200 Subject: [C++-sig] Automatic downcast question Message-ID: Oops, > class_("B", bases); Should have been class_ >("B"); but this doesn't change anything. Stefan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/ms-tnef Size: 3514 bytes Desc: not available URL: From dave at boost-consulting.com Sun Jun 15 00:04:19 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 14 Jun 2003 18:04:19 -0400 Subject: [C++-sig] Re: Automatic downcast question References: Message-ID: "Stefan Franke" writes: > class_("A").def("classname", &A::classname); > class_("B", bases); > // C remains unwrapped > } > > ---------------------- > > Now, in Python: > >>>> from Test import * >>>> getA() > >>>> getB() > // <<< (Superb feature, anyway!) >>>> getC() > // <<< Why not Test.B? > >>>>getC().classname() > 'C' > > > Is there a way to make getC() return a wrapped B object instead of > a wrapped A? There is a way, but it's too expensive, IMO. I think it's reasonable to try to convert directly to the actual type of the pointee (C), but I don't think it's reasonable to try downcasting to every known derived class of the pointee's static type. We could consider implementing that as a new return value policy, though, so you can ask for it when you need it. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nickm at sitius.com Sun Jun 15 04:25:24 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sat, 14 Jun 2003 22:25:24 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... 
References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> Message-ID: <3EEBD914.FDEBAE3E@sitius.com> What do you think about the following? namespace boost { namespace python { struct return_self : default_call_policies { static PyObject* postcall(PyObject *args, PyObject* ){ return incref(PyTuple_GetItem(args,0)); } struct result_converter { template struct apply{ struct type{ static bool convertible() {return true;} PyObject *operator()(T) const {return 0;} }; }; }; }; }} From dave at boost-consulting.com Sun Jun 15 17:59:40 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 11:59:40 -0400 Subject: [C++-sig] Threads and Boost.Python In-Reply-To: <3EA0310A.4070604@vrac.iastate.edu> (Patrick Hartling's message of "Fri, 18 Apr 2003 12:08:26 -0500") References: <3EA0310A.4070604@vrac.iastate.edu> Message-ID: Thomas Witt writes: > Hi, > > There is a name inconsistency in the iterator papers. Whereas David > Abrahams is used for the author, the copyright holder is Dave Abrahams. > We might want to fix this if we do a revision. > > Thomas Good point. In copyrights and other legal text I should be "David". -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 15 18:14:33 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 12:14:33 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EEBD914.FDEBAE3E@sitius.com> Message-ID: Nikolay Mladenov writes: > What do you think about the following? > > namespace boost { namespace python > { > struct return_self : default_call_policies > { > static PyObject* postcall(PyObject *args, PyObject* ){ > return incref(PyTuple_GetItem(args,0)); > } > struct result_converter > { > template struct apply{ > struct type{ > static bool convertible() {return true;} > PyObject *operator()(T) const {return 0;} ^ I'd like to see ---------------------------^ typename add_reference::type>::type right here. > }; > }; > }; > }; > }} Otherwise, it looks great! If you'd like to write an HTML page for the reference manual and modify one of the tests to exercise it, I'd be happy to add it to the system. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 15 18:15:21 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 12:15:21 -0400 Subject: [C++-sig] Re: Threads and Boost.Python References: <3EA0310A.4070604@vrac.iastate.edu> Message-ID: David Abrahams writes: > Thomas Witt writes: > >> Hi, >> >> There is a name inconsistency in the iterator papers. Whereas David >> Abrahams is used for the author, the copyright holder is Dave Abrahams. >> We might want to fix this if we do a revision. >> >> Thomas > > Good point. In copyrights and other legal text I should be "David". Weird. I have no idea how this ended up on the C++-sig. Sorry everyone. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 15 19:00:44 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 13:00:44 -0400 Subject: [C++-sig] Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... Message-ID: A number of you (who have my sympathy!) have begun to dig in to Boost.Python's implementation to some degree, usually so you can accomplish something that isn't exposed in the documented interface. 
I'm sure most of you have noticed that it's not always easy to navigate the internal structure of the library. I'd like to address some of that and at the same time give us the tools to solve some other problems, which I'll discuss at the end. I propose to divide the library's implementation into several namespace layers with corresponding subdirectories of boost/python. These are just rough divisions and I would welcome suggestions for finer-grained layering, or better names, or... These layers are generally ordered from dependencies to dependents. core (for lack of a better name) - This is a Python-independent layer which contains the framework of the type-converter registry, inheritance.hpp which manages base<->derived class conversions, the exception translator framework, boost/python/type_id.hpp, and possibly a few components from the current boost/python/detail. converter - This stuff which handles Python-specific conversion mechanics is mostly already segregated in boost/python/converter, but it could be better organized. Almost everything else in the library is built upon these capabilities function - Wrapping of (member) (function) pointers into Python callable objects. Maybe this should be called "callable". callback - Invocation of Python callable objects from C++, e.g. call_method<...>, call<...> api - various namespace-scope functions such as del(), getattr(), etc. objects - object, str, dict, tuple, list, long_ ... classes - specific support for class wrapping, including instance_holders, support for smart pointers, etc. I see great potential in this reorganization. One thing I'd like to see happen in the near term is that Pyste might be altered to use some lower-level components of Boost.Python directly to improve compile times. For example, instead of relying on Boost.Python to generate wrappers for functions and member functions, Pyste might instead generate function wrappers directly using Boost.Python's type converters. I have no doubt that Python executes faster than the template engine in most C++ implementations! A longer-term goal is to move enough of the code into the core that we could easily re-use it to support other interpreted languages besides Python. Comments? -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Sun Jun 15 19:44:39 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Sun, 15 Jun 2003 14:44:39 -0300 Subject: [C++-sig] Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... In-Reply-To: References: Message-ID: <3EECB087.2090904@globalite.com.br> David Abrahams wrote: >I propose to divide the library's implementation into several >namespace layers with corresponding subdirectories of boost/python. >These are just rough divisions and I would welcome suggestions for >finer-grained layering, or better names, or... These layers are >generally ordered from dependencies to dependents. > > core (for lack of a better name) - This is a Python-independent > layer which contains the framework of the type-converter > registry, inheritance.hpp which manages base<->derived class > conversions, the exception translator framework, > boost/python/type_id.hpp, and possibly a few components from the > current boost/python/detail. > > converter - This stuff which handles Python-specific conversion > mechanics is mostly already segregated in boost/python/converter, > but it could be better organized. 
Almost everything else in the > library is built upon these capabilities > > function - Wrapping of (member) (function) pointers into Python > callable objects. Maybe this should be called "callable". > > callback - Invocation of Python callable objects from C++, > e.g. call_method<...>, call<...> > > api - various namespace-scope functions such as del(), getattr(), > etc. > > objects - object, str, dict, tuple, list, long_ ... > > classes - specific support for class wrapping, including > instance_holders, support for smart pointers, etc. > > Looks great, but perhaps inheritance.hpp should be included in the classes namespace? >I see great potential in this reorganization. One thing I'd like to >see happen in the near term is that Pyste might be altered to use some >lower-level components of Boost.Python directly to improve compile >times. For example, instead of relying on Boost.Python to generate >wrappers for functions and member functions, Pyste might instead >generate function wrappers directly using Boost.Python's type >converters. I have no doubt that Python executes faster than the >template engine in most C++ implementations! > > I will of course be glad to implement any modification in Pyste in that direction, with Dave's help. 8) >A longer-term goal is to move enough of the code into the core that we >could easily re-use it to support other interpreted languages besides >Python. > > Yeah, but what would we call the library then? Boost::Interpreted? ;) Regards, Nicodemus. From nickm at sitius.com Sun Jun 15 21:56:30 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sun, 15 Jun 2003 15:56:30 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EEBD914.FDEBAE3E@sitius.com> Message-ID: <3EECCF6D.8A6F3715@sitius.com> An HTML attachment was scrubbed... URL: From dave at boost-consulting.com Sun Jun 15 22:21:16 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 16:21:16 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EEBD914.FDEBAE3E@sitius.com> <3EECCF6D.8A6F3715@sitius.com> Message-ID: Nikolay Mladenov writes: > > > > David Abrahams wrote: > Nikolay Mladenov writes: > > What do you think about the following? > > > > namespace boost { namespace python > > { > > struct return_self : default_call_policies > > { > > static PyObject* > postcall(PyObject *args, PyObject* ){ > > > return incref(PyTuple_GetItem(args,0)); > > } > > struct result_converter > > { > > > template struct apply{ > > > struct type{ > > > static bool convertible() {return true;} > > > PyObject *operator()(T) const {return 0;} > > ^ > I'd like to see ---------------------------^ > typename add_reference::type>::type > right here. > What is the reason for that? Shouldn't this be optimised away anyway? > Not that I mind it. > > > > }; > > > }; > > }; > > }; > > }} > Otherwise, it looks great! If you'd like to write an HTML > page for > the reference manual and modify one of the tests to exercise it, > I'd > be happy to add it to the system. > Sure. > No offense, but "ick!" Care to repost as plain text? 
-- Dave Abrahams Boost Consulting www.boost-consulting.com From franke at ableton.com Sun Jun 15 22:33:34 2003 From: franke at ableton.com (Stefan Franke) Date: Sun, 15 Jun 2003 22:33:34 +0200 Subject: [C++-sig] Re: Automatic downcast question Message-ID: > -----Original Message----- > From: c++-sig-admin at python.org [mailto:c++-sig-admin at python.org]On > Behalf Of David Abrahams ... > I think it's reasonable to try to convert directly to the actual type > of the pointee (C), but I don't think it's reasonable to try > downcasting to every known derived class of the pointee's static type. But "every known derived class" is not what I was asking for. Rather "the lowermost class on the inheritance graph from the pointee's static type to its actual type that has been registered to BPL via bases<>". (Sorry if my English is weird). This is typically exactly one class (B), unless with diamond-shaped inheritance. In this case a non-ambivalent base class could be chosen. > We could consider implementing that as a new return value policy, > though, so you can ask for it when you need it. Another option (at least for me) would be to be able to dynamically cast a wrapped A to a wrapped C instance. Is that possible? Stefan Franke www.ableton.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nickm at sitius.com Sun Jun 15 22:40:21 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sun, 15 Jun 2003 16:40:21 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EEBD914.FDEBAE3E@sitius.com> Message-ID: <3EECD9B5.8D4F7D01@sitius.com> Sorry... David Abrahams wrote: > > Nikolay Mladenov writes: > > > What do you think about the following? > > > > namespace boost { namespace python > > { > > struct return_self : default_call_policies > > { > > static PyObject* postcall(PyObject *args, PyObject* ){ > > return incref(PyTuple_GetItem(args,0)); > > } > > struct result_converter > > { > > template struct apply{ > > struct type{ > > static bool convertible() {return true;} > > PyObject *operator()(T) const {return 0;} > ^ > I'd like to see ---------------------------^ > > typename add_reference::type>::type > > right here. What is the reason for that? Shouldn't the parameter passing be optimised away anyway? Not that I mind it. > > > }; > > }; > > }; > > }; > > }} > > Otherwise, it looks great! If you'd like to write an HTML page for > the reference manual and modify one of the tests to exercise it, I'd > be happy to add it to the system. Sure. > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com From dave at boost-consulting.com Mon Jun 16 02:05:49 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 20:05:49 -0400 Subject: [C++-sig] Re: return value policy for returning same python object... References: <20030608215003.97433.qmail@web20208.mail.yahoo.com> <3EEBD914.FDEBAE3E@sitius.com> <3EECD9B5.8D4F7D01@sitius.com> Message-ID: Nikolay Mladenov writes: > Sorry... > > David Abrahams wrote: >> >> Nikolay Mladenov writes: >> >> > What do you think about the following? 
>> > >> > namespace boost { namespace python >> > { >> > struct return_self : default_call_policies >> > { >> > static PyObject* postcall(PyObject *args, PyObject* ){ >> > return incref(PyTuple_GetItem(args,0)); >> > } >> > struct result_converter >> > { >> > template struct apply{ >> > struct type{ >> > static bool convertible() {return true;} >> > PyObject *operator()(T) const {return 0;} >> ^ >> I'd like to see ---------------------------^ >> >> typename add_reference::type>::type >> >> right here. > > What is the reason for that? Shouldn't the parameter passing be > optimised away anyway? Not if the copy ctor has observable side-effects. There's no allowed value-arg-optimization, so compilers will often not optimize away copies in cases like this. Try it yourself with a std::vector. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Mon Jun 16 02:16:00 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 15 Jun 2003 20:16:00 -0400 Subject: [C++-sig] Re: Automatic downcast question References: Message-ID: "Stefan Franke" writes: >> -----Original Message----- >> From: c++-sig-admin at python.org [mailto:c++-sig-admin at python.org]On >> Behalf Of David Abrahams > ... > >> I think it's reasonable to try to convert directly to the actual type >> of the pointee (C), but I don't think it's reasonable to try >> downcasting to every known derived class of the pointee's static type. > > But "every known derived class" is not what I was asking for. Rather > "the lowermost class on the inheritance graph from the pointee's static > type to its actual type that has been registered to BPL via > bases<>". > (Sorry if my English is weird). Nothing wrong with your English, but: 1. That type may not exist: A / \ B C \ / D <- not registered 2. There's no way to implement that in C++ without potentially trying conversions to every known (i.e. registered) derived class of the pointee's static type. > This is typically exactly one class (B), unless with diamond-shaped > inheritance. In this case a non-ambivalent base class could be > chosen. You must mean non-ambiguous. If you can provide a reasonably-efficient implementation, I'm all for it. However #1, I'm fairly sure that there is no implementation that's reasonably efficient in general (meaning only has to do a small number of dynamic_cast<>s. However #2, I'm not sure that people care all that much about efficiency in cases like these. However #3, it's possible to do the inefficient search for a "most-derived wrapped type" once and cache the result. Option 3 would integrate best with what's already happening in inheritance.hpp/.cpp, but I don't think I have time to implement it right now. >> We could consider implementing that as a new return value policy, >> though, so you can ask for it when you need it. > > Another option (at least for me) would be to be able to dynamically > cast a wrapped A to a wrapped C instance. Is that possible? ?? I thought the whole point of this was that C wasn't wrapped? Dynamically casting A->B is pretty simple; just wrap this: object asB(back_reference b) { return b.source(); } -- Dave Abrahams Boost Consulting www.boost-consulting.com From brett.calcott at paradise.net.nz Mon Jun 16 12:49:01 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Mon, 16 Jun 2003 22:49:01 +1200 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... 
References: Message-ID: > > converter - This stuff which handles Python-specific conversion > mechanics is mostly already segregated in boost/python/converter, > but it could be better organized. Almost everything else in the > library is built upon these capabilities > As somebody who has tried to go through the library, this bit is the part I'd really like to understand. I'm sure the reorganisation would help, but I'd really love to read a short doc on how this actually all hangs together. The registry, how the templates automates the construction of the PyObject and Type, how conversions are looked up (what's that graph stuff in there?). Though brilliant, the combination of templates and preprocessor makes it really (really, really) hard going. I've read through a fair bit of c/c++ code before, and I can say that this stuff is the hardest I've ever tried to decipher - and I know it is well written, not like most of the other stuff I've looked at. Even stepping through it in the debugger is not that enlightening. I think I could use the library better and assist more in extending it if I could put all the bits together in my head.(I have in mind something like the big text file that Joel wrote for Phoenix.) Just putting in a bid for your time Dave... Cheers, Brett From djowel at gmx.co.uk Mon Jun 16 13:17:05 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Mon, 16 Jun 2003 19:17:05 +0800 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: <026f01c333f8$e2aab490$0100a8c0@godzilla> Brett Calcott wrote: >> converter - This stuff which handles Python-specific conversion >> mechanics is mostly already segregated in >> boost/python/converter, but it could be better organized. >> Almost everything else in the library is built upon these >> capabilities >> > > As somebody who has tried to go through the library, this bit is the > part I'd really like to understand. I'm sure the reorganisation would > help, but I'd > really love to read a short doc on how this actually all hangs > together. The registry, how the templates automates the construction > of the PyObject and Type, how conversions are looked up (what's that > graph stuff in there?). > > Though brilliant, the combination of templates and preprocessor makes > it really (really, really) hard going. I've read through a fair bit > of c/c++ code before, and I can say that this stuff is the hardest > I've ever tried to decipher - and I know it is well written, not like > most of the other stuff I've looked at. Even stepping through it in > the debugger is not that enlightening. > > I think I could use the library better and assist more in extending > it if I could put all the bits together in my head.(I have in mind > something like the big text file that Joel wrote for Phoenix.) Which text file was that? Indeed, the preprocessor really gets in the way. Unfortunately, I had to bite the bullet too and the next Phoenix/LL code will have to be preprocessor driven. > Just putting in a bid for your time Dave... Some implementation notes would really help. Anyway, I promised Dave to help with the reorganization. I'm not at all quite familiar with the code but I sure could help in shuffling things around into modules, to begin with. 
Cheers, -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From roman_sulzhyk at yahoo.com Mon Jun 16 17:43:46 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Mon, 16 Jun 2003 08:43:46 -0700 (PDT) Subject: [C++-sig] Pyste bug - static member functions... In-Reply-To: <3EECB087.2090904@globalite.com.br> Message-ID: <20030616154346.44668.qmail@web41101.mail.yahoo.com> Nicodemus: Another thing I've noticed, that for static member functions the code generated defines them in class scope instead of global scope, causing errors. Basically, you need to change the 'PointerDeclaration' of Method class to look more like this: # If a method is static, don't need the class specifier if self.static: scope = '*' else: scope = '%s::*' % self.class_ if self.const: const = 'const' return '(%s (%s)(%s) %s)&%s' %\ (result, scope, params, const, self.FullName()) That fixes the problem. I also like Dave's suggestion to use the term 'final' to denote virtual functions that are not expected to be overloadable from Python. Thanks! Roman __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Mon Jun 16 20:56:52 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 16 Jun 2003 14:56:52 -0400 Subject: [C++-sig] Re: Pyste bug - static member functions... References: <3EECB087.2090904@globalite.com.br> <20030616154346.44668.qmail@web41101.mail.yahoo.com> Message-ID: Roman Sulzhyk writes: > I also like Dave's suggestion to use the term 'final' to denote virtual > functions that are not expected to be overloadable from Python. That's "overridable". The only thing about 'final' which makes me nervous is that IIUC it usually implies enforcement: "thou shalt not try to declare this name in a derived class or the system will slap you on the wrist". But I don't pretend to know Java; maybe I'm wrong? -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Tue Jun 17 03:55:44 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Mon, 16 Jun 2003 22:55:44 -0300 Subject: [C++-sig] Boost.Python: few thoughts and questions... In-Reply-To: <20030612211501.5779.qmail@web41108.mail.yahoo.com> References: <20030612211501.5779.qmail@web41108.mail.yahoo.com> Message-ID: <3EEE7520.40001@globalite.com.br> Hi people, I implemented the changes we've discussed: 1) Disabling the generation of virtual wrappers Done. I called the function no_override (because the user does not want to override it in Python). I think another candidate would be final (as suggested by Dave). I think "unvirtual" sounds a little werid. 8P 2) Policies for virtual member functions were being ignored Fixed too, thanks to Roman for the patch! 3) Derived classes inheriting the bases' attributes even if the base class is not exported. Suppose you have a hierarchy like this: A -> B -> C -> D (ie, A is at the top of the hierarchy) And you only want to export B and D. Then B will inherit A's methods and attributes, D will inherit C's attributes and in Python the hierarchy will be: B -> D as you would expect. 4) Exception specifiers in virtual member functions. I will implement in the next couple of days, sorry! Any comments or criticisms are welcome! Thanks a lot to Dave and Roman for the discussion. And thanks Roman for the patches and insights about the implementation, they were very useful. Regards, Nicodemus. 
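To make item 3 concrete for readers wrapping by hand: when a base class is not exported, its public members can simply be def'd on the derived class_ directly, using the same kind of member-pointer cast that the pointer declarations discussed above rely on. The sketch below is only an illustration of the idea, not Pyste output; A, B and the module name are made-up.

#include <boost/python.hpp>

// A is deliberately not exported; B re-exposes what it inherits from A.
struct A
{
    int from_base() const { return 1; }
};

struct B : A
{
    int from_derived() const { return 2; }
};

BOOST_PYTHON_MODULE(hierarchy_example)
{
    using namespace boost::python;
    // note: no class_<A> and no bases<A> here, since A is not exported;
    // the cast makes the exposed function take B itself as its first argument
    class_<B>("B")
        .def("from_base", (int (B::*)() const)&B::from_base)
        .def("from_derived", &B::from_derived);
}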
From nicodemus at globalite.com.br Tue Jun 17 08:56:29 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 03:56:29 -0300 Subject: [C++-sig] Pyste bug - static member functions... In-Reply-To: <20030616154346.44668.qmail@web41101.mail.yahoo.com> References: <20030616154346.44668.qmail@web41101.mail.yahoo.com> Message-ID: <3EEEBB9D.4060809@globalite.com.br> Roman Sulzhyk wrote: >Nicodemus: > >Another thing I've noticed, that for static member functions the code >generated defines them in class scope instead of global scope, causing >errors. > >Basically, you need to change the 'PointerDeclaration' of Method class >to look more like this: > > > # If a method is static, don't need the class specifier > if self.static: > scope = '*' > else: > scope = '%s::*' % self.class_ > > > if self.const: > const = 'const' > return '(%s (%s)(%s) %s)&%s' %\ > (result, scope, params, const, self.FullName()) > >That fixes the problem. > > You do not seem to be using the latest CVS, Pyste uses staticmethod to declare static member functions: struct C { static int foo() { return 0; } }; // Module ====================================================================== BOOST_PYTHON_MODULE(test) { class_< C >("C", init< >()) .def(init< const C & >()) .def("foo", &C::foo) .staticmethod("foo") ; } If you are not using the latest CVS, then I might be misunderstanding the problem, but I *stronly* suggest that you use the latest CVS from both Boost.Python and Pyste... Pyste specifically has had lots of bug fixes and some new features since the 1.30.0 release of Boost. ;) >I also like Dave's suggestion to use the term 'final' to denote virtual >functions that are not expected to be overloadable from Python. > > I considered it too, but I thought that "no_override" was more clear. What you guys think? Regards, Nicodemus. From dave at boost-consulting.com Tue Jun 17 13:01:53 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 07:01:53 -0400 Subject: [C++-sig] Re: Pyste bug - static member functions... References: <20030616154346.44668.qmail@web41101.mail.yahoo.com> <3EEEBB9D.4060809@globalite.com.br> Message-ID: Nicodemus writes: >>I also like Dave's suggestion to use the term 'final' to denote virtual >>functions that are not expected to be overloadable from Python. >> > > I considered it too, but I thought that "no_override" was more > clear. What you guys think? WWJD - What Would Java Do? I think it depends on whether "final" on a Java method means you can write a new (non-virtually-dispatched) one of that name or not. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Tue Jun 17 13:39:52 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 08:39:52 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: References: <20030616154346.44668.qmail@web41101.mail.yahoo.com> <3EEEBB9D.4060809@globalite.com.br> Message-ID: <3EEEFE08.9020709@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > > >>>I also like Dave's suggestion to use the term 'final' to denote virtual >>>functions that are not expected to be overloadable from Python. >>> >>> >>> >>I considered it too, but I thought that "no_override" was more >>clear. What you guys think? >> >> > >WWJD - What Would Java Do? > >I think it depends on whether "final" on a Java method means you can >write a new (non-virtually-dispatched) one of that name or not. 
> I believe that in this context, "final" has the same meaning as in Java... I do not like it much because the meaning is not obvious from the word alone, but perhaps using a familiar term to some other programmers might be better than coming up with a new one? From roman_sulzhyk at yahoo.com Tue Jun 17 16:26:19 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Tue, 17 Jun 2003 07:26:19 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEEFE08.9020709@globalite.com.br> Message-ID: <20030617142619.6203.qmail@web41109.mail.yahoo.com> Nicodemus, Dave: Either one, no_override or final seem fine, doesn't matter so much. Nicodemus, you're right, I am using stock 1.3.0, so I guess the bug I've mentioned is irrelevant. Do you guys know, is there a way to get a CVS snapshot without actually using CVS, i.e. are nightly snapshots available for download somewhere, I couldn't find it on sourceforge... Thanks! Roman --- Nicodemus wrote: > David Abrahams wrote: > > >Nicodemus writes: > > > > > > > >>>I also like Dave's suggestion to use the term 'final' to denote > virtual > >>>functions that are not expected to be overloadable from Python. > >>> > >>> > >>> > >>I considered it too, but I thought that "no_override" was more > >>clear. What you guys think? > >> > >> > > > >WWJD - What Would Java Do? > > > >I think it depends on whether "final" on a Java method means you can > >write a new (non-virtually-dispatched) one of that name or not. > > > > I believe that in this context, "final" has the same meaning as in > Java... I do not like it much because the meaning is not obvious from > > the word alone, but perhaps using a familiar term to some other > programmers might be better than coming up with a new one? > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Tue Jun 17 16:25:07 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 10:25:07 -0400 Subject: [C++-sig] Re: Pyste bug - static member functions... References: <20030616154346.44668.qmail@web41101.mail.yahoo.com> <3EEEBB9D.4060809@globalite.com.br> <3EEEFE08.9020709@globalite.com.br> Message-ID: Nicodemus writes: > David Abrahams wrote: > >>Nicodemus writes: >> >> >>>>I also like Dave's suggestion to use the term 'final' to denote virtual >>>>functions that are not expected to be overloadable from Python. >>>> >>>I considered it too, but I thought that "no_override" was more >>>clear. What you guys think? >>> >> >>WWJD - What Would Java Do? >> >>I think it depends on whether "final" on a Java method means you can >>write a new (non-virtually-dispatched) one of that name or not. >> > > I believe that in this context, "final" has the same meaning as in > Java... I do not like it much because the meaning is not obvious from > the word alone, but perhaps using a familiar term to some other > programmers might be better than coming up with a new one? Absolutely. If that's right, I think we should go with "final". It's a sensible semantics, too. There's no reason, once that name has been "sealed off" from the overriding mechanism, not to allow people to reuse it. 
BTW, it seems likely that people will eventually want to do things like: # finalize any functions X inherited from base classes for b in X.bases: for f in b.member_functions: final(f) Pyste probably ought to expose a slick programmatic interface to the XML info underneath it all... or does it do that already? -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Tue Jun 17 16:27:12 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 10:27:12 -0400 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: <3EECB087.2090904@globalite.com.br> Message-ID: Nicodemus writes: >>I propose to divide the library's implementation into several >>namespace layers with corresponding subdirectories of boost/python. >>These are just rough divisions and I would welcome suggestions for >>finer-grained layering, or better names, or... These layers are >>generally ordered from dependencies to dependents. >> >> core (for lack of a better name) - This is a Python-independent >> layer which contains the framework of the type-converter >> registry, inheritance.hpp which manages base<->derived class >> conversions, the exception translator framework, >> boost/python/type_id.hpp, and possibly a few components from the >> current boost/python/detail. >> >> converter - This stuff which handles Python-specific conversion >> mechanics is mostly already segregated in boost/python/converter, >> but it could be better organized. Almost everything else in the >> library is built upon these capabilities >> >> function - Wrapping of (member) (function) pointers into Python >> callable objects. Maybe this should be called "callable". >> >> callback - Invocation of Python callable objects from C++, >> e.g. call_method<...>, call<...> >> >> api - various namespace-scope functions such as del(), getattr(), >> etc. >> >> objects - object, str, dict, tuple, list, long_ ... >> >> classes - specific support for class wrapping, including >> instance_holders, support for smart pointers, etc. >> > > Looks great, but perhaps inheritance.hpp should be included in the > classes namespace? Well, it's an interesting issue. The stuff in inheritance.[ch]pp is entirely independent of Python, so it should really be in a layer that's language-independent. Maybe the core needs to be further subdivided? -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Tue Jun 17 16:39:35 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Tue, 17 Jun 2003 07:39:35 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <20030617142619.6203.qmail@web41109.mail.yahoo.com> Message-ID: <20030617143935.3099.qmail@web20206.mail.yahoo.com> --- Roman Sulzhyk wrote: > Do you guys know, is there a way to get a CVS snapshot without actually > using CVS, i.e. are nightly snapshots available for download somewhere, > I couldn't find it on sourceforge... In case it is useful: a full, untainted snapshot of the boost CVS tree from a few hours ago is in this file: http://cci.lbl.gov/cctbx_build/results/2003_06_17_0136/cctbx_bundle.tar.gz Subdirectory cctbx_sources/boost. Simply delete the rest. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! 
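For anyone wondering what the "final"/"no_override" treatment discussed above amounts to in the generated code: the virtual function is exposed like an ordinary member, with no wrapper class and no call_method hook, so overrides written in Python are never reached through C++ virtual dispatch. A minimal hand-written sketch of that treatment (Base, value() and the module name are illustrative):

#include <boost/python.hpp>

struct Base
{
    virtual ~Base() {}
    virtual int value() const { return 42; }
};

BOOST_PYTHON_MODULE(final_example)
{
    using namespace boost::python;
    // no Base_Wrapper dispatching back via call_method<int>("value"):
    // a plain def, so a Python subclass may redefine value(), but C++
    // calls through a Base* will never see that redefinition
    class_<Base>("Base")
        .def("value", &Base::value);
}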
http://sbc.yahoo.com From dave at boost-consulting.com Tue Jun 17 16:32:56 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 10:32:56 -0400 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: "Brett Calcott" writes: >> >> converter - This stuff which handles Python-specific conversion >> mechanics is mostly already segregated in boost/python/converter, >> but it could be better organized. Almost everything else in the >> library is built upon these capabilities >> > > As somebody who has tried to go through the library, this bit is the > part I'd really like to understand. I'm sure the reorganisation > would help, but I'd really love to read a short doc on how this > actually all hangs together. The registry, how the templates > automates the construction of the PyObject and Type, how conversions > are looked up (what's that graph stuff in there?). The graph stuff is not in this layer, it's in the core. It implements something like this: dst_ptr = dynamic_cast(src_void_ptr, src_type_id, dst_type_id) > Though brilliant, the combination of templates and preprocessor makes it > really (really, really) hard going. I've read through a fair bit of c/c++ > code before, and I can say that this stuff is the hardest I've ever tried to > decipher - and I know it is well written, not like most of the other stuff > I've looked at. Even stepping through it in the debugger is not that > enlightening. > > I think I could use the library better and assist more in extending it if I > could put all the bits together in my head.(I have in mind something like > the big text file that Joel wrote for Phoenix.) > > Just putting in a bid for your time Dave... Well, I'd be happy to collaborate with you on an implementation document. It's not something I can afford to do by myself, in part because I don't know what questions to answer. If you would agree to have a conversation with me about it here, and from that create a RestructuredText document which we'll include in the CVS, I'd be more than happy to type out details at length. I think you might have to dig into the code a little, too. Deal? -- Dave Abrahams Boost Consulting www.boost-consulting.com From Paul_Kunz at SLAC.Stanford.EDU Tue Jun 17 17:15:19 2003 From: Paul_Kunz at SLAC.Stanford.EDU (Paul F. Kunz) Date: Tue, 17 Jun 2003 08:15:19 -0700 Subject: [C++-sig] Documentation suggestion Message-ID: <200306171515.h5HFFJs21576@libra3.slac.stanford.edu> Speaking of documentation, may I make a suggestion for an addition to the tutorial. I would have liked to have seen a section on the handling of singleton class. This would bring into the tutorial two issues I don't believe are currently in the tutorial. How to handle a class where the constructors, i.e. default constructor and copy constructor, are private and how to handle static methods. After many hours of trying things and reading the reference manual, I came ups with this.. class_ < NTupleController, bases<>, NTupleController, boost::noncopyable > ( "NTupleController", no_init ) .def ( "instance", &NTupleController::instance, return_value_policy < reference_existing_object > () ) .staticmethod( "instance" ) Did I get it right? From dave at boost-consulting.com Tue Jun 17 17:56:30 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 11:56:30 -0400 Subject: [C++-sig] Re: Documentation suggestion References: <200306171515.h5HFFJs21576@libra3.slac.stanford.edu> Message-ID: "Paul F. 
Kunz" writes: > After many hours of trying things and reading the reference manual, > I came ups with this.. > > class_ < NTupleController, bases<>, > NTupleController, boost::noncopyable > ( "NTupleController", > no_init ) > .def ( "instance", &NTupleController::instance, > return_value_policy < reference_existing_object > () ) > .staticmethod( "instance" ) > > Did I get it right? Yup. Singletons are a very specific case which combines a number of issues. It might be good to use as an example, but I think I wouldn't want the subject to be "how to wrap a singleton". Maybe something more like "handling private constructors"? -- Dave Abrahams Boost Consulting www.boost-consulting.com From Paul_Kunz at SLAC.Stanford.EDU Tue Jun 17 18:21:55 2003 From: Paul_Kunz at SLAC.Stanford.EDU (Paul F. Kunz) Date: Tue, 17 Jun 2003 09:21:55 -0700 Subject: [C++-sig] Re: Documentation suggestion In-Reply-To: References: <200306171515.h5HFFJs21576@libra3.slac.stanford.edu> Message-ID: <200306171621.h5HGLtA21790@libra3.slac.stanford.edu> >>>>> On Tue, 17 Jun 2003 11:56:30 -0400, David Abrahams said: > Singletons are a very specific case which combines a number of > issues. It might be good to use as an example, but I think I > wouldn't want the subject to be "how to wrap a singleton". Maybe > something more like "handling private constructors"? Or "Handling private constructors and singletons", in order to bring staticmethod() into tutorial. From mike at bindkey.com Tue Jun 17 22:40:15 2003 From: mike at bindkey.com (Mike Rovner) Date: Tue, 17 Jun 2003 13:40:15 -0700 Subject: [C++-sig] iter(std::map) Message-ID: Hi all, I want to wrap a std::map container, which maps some pointer to wrapped class to another wrapped class. typedef std::map Map; In order to support 'for x in Map():' construct I need to implement __iter__. As I did for keys() call, I'd like to return ptr(it->first) as an iteration result. What is more simple (or recommended) way to do it? - Write and wrap helper iterator object with __iter__ and next methods like struct Map_iter { Map_iter(const Map& m) : it(m.begin()), itend(m.end()) {} Map_iter& identity(Map_iter& self) {return self;} object next() { if( it!=itend ) return ptr(it->first); PyErr_SetString(PyExc_StopIteration,""); throw_error_already_set(); } private: Map::const_iterator it, itend; }; Map_iter get_iterator(const Map& m) { return Map_iter(m); } and then class_("_iter") .def("__iter__", &Map_iter::identity) .def("next", &Map_iter::next) ; //... and in Map wrapper .def("__iter__", get_iterator) or - Use iterator<>() with special return policy like .def("__iter__", iterator >()) If later, I can't figure out the body of apply for my special return policy: struct copy_map_key_ptr { template struct apply { typedef to_python_value< ??? > type; }; }; Any suggestions? Mike From nicodemus at globalite.com.br Wed Jun 18 01:15:21 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 20:15:21 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: References: <20030616154346.44668.qmail@web41101.mail.yahoo.com> <3EEEBB9D.4060809@globalite.com.br> <3EEEFE08.9020709@globalite.com.br> Message-ID: <3EEFA109.4050701@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > >>I believe that in this context, "final" has the same meaning as in >>Java... I do not like it much because the meaning is not obvious from >>the word alone, but perhaps using a familiar term to some other >>programmers might be better than coming up with a new one? 
>> >> > >Absolutely. If that's right, I think we should go with "final". >It's a sensible semantics, too. There's no reason, once that name >has been "sealed off" from the overriding mechanism, not to allow >people to reuse it. > From "Java Programmer's SourceBook: Thinking in Java": There are two reasons for *final* methods. The first is to put a ?lock? on the method to prevent any inheriting class from changing its meaning. This is done for design reasons when you want to make sure that a method?s behavior is retained during inheritance and cannot be overridden. The second reason for *final* methods is efficiency. The first definition seems to be what we intend by the mechanism. I wil change it in CVS. >BTW, it seems likely that people will eventually want to do things like: > > # finalize any functions X inherited from base classes > for b in X.bases: > for f in b.member_functions: > final(f) > >Pyste probably ought to expose a slick programmatic interface to the >XML info underneath it all... or does it do that already? > It does not. Currently, the headers are parsed after the user scripts have been executed, so what the user is manipulating in the scripts (ie, pyste files) is not gccxml information at all. I agree with you, this is indeed *very* nice to have, and I will put it in my todo list. From roman_sulzhyk at yahoo.com Wed Jun 18 01:46:29 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Tue, 17 Jun 2003 16:46:29 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEFA109.4050701@globalite.com.br> Message-ID: <20030617234629.65123.qmail@web41113.mail.yahoo.com> > >BTW, it seems likely that people will eventually want to do things > like: > > > > # finalize any functions X inherited from base classes > > for b in X.bases: > > for f in b.member_functions: > > final(f) > > > >Pyste probably ought to expose a slick programmatic interface to the > >XML info underneath it all... or does it do that already? > > > > It does not. Currently, the headers are parsed after the user scripts > > have been executed, so what the user is manipulating in the scripts > (ie, > pyste files) is not gccxml information at all. I agree with you, this > is > indeed *very* nice to have, and I will put it in my todo list. > Yep, I arrived to similar conclusions also - currently the mechanism is somewhat raw, it would be *very* nice to expose the parsed declarations in the scope of the pyste script, to allow people to mutate them. Talking about todo lists, another useful thing would be to be able to add a command line option to take XML file already pre-generated - that'll simplify pyste script development some, because with G++ 3.x series it takes considerable amounts of time to generate XML from C++ and hence making iterative changes is complex. Roman > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From nicodemus at globalite.com.br Wed Jun 18 01:57:07 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 20:57:07 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... 
In-Reply-To: <20030617234629.65123.qmail@web41113.mail.yahoo.com> References: <20030617234629.65123.qmail@web41113.mail.yahoo.com> Message-ID: <3EEFAAD3.1080306@globalite.com.br> Roman Sulzhyk wrote: >Talking about todo lists, another useful thing would be to be able to >add a command line option to take XML file already pre-generated - >that'll simplify pyste script development some, because with G++ 3.x >series it takes considerable amounts of time to generate XML from C++ >and hence making iterative changes is complex. > > That is a good idea. But passing individual filenames in the command line does not seem pratical, because you have to specify a xml file *per header file* that will be parsed. Perhaps a flag like "--xml-dir" where you indicate where the xml files will be? That way, before Pyste calls gccxml in the file "test.h", it checks if "test.xml" is present in the xml-dir, and use that if present, or parses it otherwise. What do you think? From dave at boost-consulting.com Wed Jun 18 02:14:39 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 17 Jun 2003 20:14:39 -0400 Subject: [C++-sig] Re: Pyste bug - static member functions... References: <20030617234629.65123.qmail@web41113.mail.yahoo.com> <3EEFAAD3.1080306@globalite.com.br> Message-ID: Nicodemus writes: > Roman Sulzhyk wrote: > >>Talking about todo lists, another useful thing would be to be able to >>add a command line option to take XML file already pre-generated - >>that'll simplify pyste script development some, because with G++ 3.x >>series it takes considerable amounts of time to generate XML from C++ >>and hence making iterative changes is complex. >> > > That is a good idea. But passing individual filenames in the command > line does not seem pratical, because you have to specify a xml file > *per header file* that will be parsed. Perhaps a flag like "--xml-dir" > where you indicate where the xml files will be? That way, before Pyste > calls gccxml in the file "test.h", it checks if "test.xml" is present > in the xml-dir, and use that if present, or parses it otherwise. What > do you think? What about reading the XML and producing a pickled representation, then re-reading from the XML whenever it's outdated? Then we could easily integrate it with a build system. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Wed Jun 18 02:25:30 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 21:25:30 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: References: <20030617234629.65123.qmail@web41113.mail.yahoo.com> <3EEFAAD3.1080306@globalite.com.br> Message-ID: <3EEFB17A.9030206@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > > >>Roman Sulzhyk wrote: >> >> >> >>>Talking about todo lists, another useful thing would be to be able to >>>add a command line option to take XML file already pre-generated - >>>that'll simplify pyste script development some, because with G++ 3.x >>>series it takes considerable amounts of time to generate XML from C++ >>>and hence making iterative changes is complex. >>> >>> >>> >>That is a good idea. But passing individual filenames in the command >>line does not seem pratical, because you have to specify a xml file >>*per header file* that will be parsed. Perhaps a flag like "--xml-dir" >>where you indicate where the xml files will be? 
That way, before Pyste >>calls gccxml in the file "test.h", it checks if "test.xml" is present >>in the xml-dir, and use that if present, or parses it otherwise. What >>do you think? >> >> > >What about reading the XML and producing a pickled representation, >then re-reading from the XML whenever it's outdated? Then we could >easily integrate it with a build system. > Unforunately it is not that simple, because of header dependecies: B.h includes A.h. Class B from B.h is exported, so B.xml is generated. User adds a new method to A, and expects it to reflect in the wrapper for B, but with a simplistic approach Pyste would not be able to note that B.h is outdated. I rather let this problem to build systems already out there, like SCons: the user can easily extend it to generate gccxml files from the headers, with dependency analysis built-in. From roman_sulzhyk at yahoo.com Wed Jun 18 02:48:41 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Tue, 17 Jun 2003 17:48:41 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEFB17A.9030206@globalite.com.br> Message-ID: <20030618004841.26717.qmail@web41106.mail.yahoo.com> --- Nicodemus wrote: > David Abrahams wrote: > > >Nicodemus writes: > > > > > > > >>Roman Sulzhyk wrote: > >> > >> > >> > >>>Talking about todo lists, another useful thing would be to be able > to > >>>add a command line option to take XML file already pre-generated - > >>>that'll simplify pyste script development some, because with G++ > 3.x > >>>series it takes considerable amounts of time to generate XML from > C++ > >>>and hence making iterative changes is complex. > >>> > >>> > >>> > >>That is a good idea. But passing individual filenames in the > command > >>line does not seem pratical, because you have to specify a xml file > >>*per header file* that will be parsed. Perhaps a flag like > "--xml-dir" > >>where you indicate where the xml files will be? That way, before > Pyste > >>calls gccxml in the file "test.h", it checks if "test.xml" is > present > >>in the xml-dir, and use that if present, or parses it otherwise. > What > >>do you think? > >> > >> > > > >What about reading the XML and producing a pickled representation, > >then re-reading from the XML whenever it's outdated? Then we could > >easily integrate it with a build system. > > > > Unforunately it is not that simple, because of header dependecies: > B.h > includes A.h. Class B from B.h is exported, so B.xml is generated. > User > adds a new method to A, and expects it to reflect in the wrapper for > B, > but with a simplistic approach Pyste would not be able to note that > B.h > is outdated. > I rather let this problem to build systems already out there, like > SCons: the user can easily extend it to generate gccxml files from > the > headers, with dependency analysis built-in. > I see, it's not that simple to make it automagic... I do think that the parsing step needs to be optionally exposed, so people can use it in their makefiles to generate xml explicitely, and pass it in on the command line, e.g. pyste --cache="{foo.h:foo.xml, faa.h:faa.xml}" or something like that. Let's make it least intrusive to pyste for now and move the responsibility to the build system. > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! 
http://sbc.yahoo.com From nicodemus at globalite.com.br Wed Jun 18 02:54:34 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 21:54:34 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <20030618004841.26717.qmail@web41106.mail.yahoo.com> References: <20030618004841.26717.qmail@web41106.mail.yahoo.com> Message-ID: <3EEFB84A.5000808@globalite.com.br> Roman Sulzhyk wrote: >I see, it's not that simple to make it automagic... I do think that the >parsing step needs to be optionally exposed, so people can use it in >their makefiles to generate xml explicitely, and pass it in on the >command line, e.g. > >pyste --cache="{foo.h:foo.xml, faa.h:faa.xml}" > >or something like that. > What about my suggestion about --xml-dir? From roman_sulzhyk at yahoo.com Wed Jun 18 03:29:02 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Tue, 17 Jun 2003 18:29:02 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEFB84A.5000808@globalite.com.br> Message-ID: <20030618012902.82746.qmail@web41113.mail.yahoo.com> --- Nicodemus wrote: > > > Roman Sulzhyk wrote: > > >I see, it's not that simple to make it automagic... I do think that > the > >parsing step needs to be optionally exposed, so people can use it in > >their makefiles to generate xml explicitely, and pass it in on the > >command line, e.g. > > > >pyste --cache="{foo.h:foo.xml, faa.h:faa.xml}" > > > >or something like that. > > > > What about my suggestion about --xml-dir? > Well, directory is fine also, however in my example passing and mapping of pre-generated files to header files is explicit, hence build system is responsible for checking appropriate expirations and re-generating files as required. If pyste is to look them up implicitely in the xml-cache directory, it's harder to communicate when they become outdated. I basically approached it from the perspective that build system knows better about dependencies between files and when something needs to be refreshed. However, maybe it's an overcomplication :) Either way is fine with me. > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From nicodemus at globalite.com.br Wed Jun 18 04:03:15 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 17 Jun 2003 23:03:15 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <20030618012902.82746.qmail@web41113.mail.yahoo.com> References: <20030618012902.82746.qmail@web41113.mail.yahoo.com> Message-ID: <3EEFC863.4020003@globalite.com.br> Roman Sulzhyk wrote: >--- Nicodemus wrote: > > >>What about my suggestion about --xml-dir? >> >> >Well, directory is fine also, however in my example passing and mapping >of pre-generated files to header files is explicit, hence build system >is responsible for checking appropriate expirations and re-generating >files as required. If pyste is to look them up implicitely in the >xml-cache directory, it's harder to communicate when they become >outdated. I basically approached it from the perspective that build >system knows better about dependencies between files and when something >needs to be refreshed. > > From my experience with SCons, it would actually simpler the other way. You make your build system generate the gccxml files and the pyste files. 
Whenever a header changes, the related gccxml file will be rebuilt, and consequently the pyste file will be rebuilt also. With --cache, you would have to make your build system generate the command line with the dictionary-like syntax, which I believe would be more complicated than a static command line "python pyste.py --module=foo --xml-dir=xml-cache bar.pyste bah.pyste"? >However, maybe it's an overcomplication :) Either way is fine with me. > I think --xml-dir is a better solution, unless I am missing something, in which case I would be thankful if you could enlighten me. 8) From roman_sulzhyk at yahoo.com Wed Jun 18 04:37:40 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Tue, 17 Jun 2003 19:37:40 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEFC863.4020003@globalite.com.br> Message-ID: <20030618023740.42431.qmail@web41107.mail.yahoo.com> > I think --xml-dir is a better solution, unless I am missing > something, > in which case I would be thankful if you could enlighten me. 8) > I agree, let's do --xml-dir. > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From rwgk at yahoo.com Wed Jun 18 06:29:53 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Tue, 17 Jun 2003 21:29:53 -0700 (PDT) Subject: [C++-sig] Re: Pyste bug - static member functions... In-Reply-To: <3EEFC863.4020003@globalite.com.br> Message-ID: <20030618042953.72289.qmail@web20205.mail.yahoo.com> --- Nicodemus wrote: > From my experience with SCons, it would actually simpler the other way. > You make your build system generate the gccxml files and the pyste > files. Whenever a header changes, the related gccxml file will be > rebuilt, and consequently the pyste file will be rebuilt also. With > --cache, you would have to make your build system generate the command > line with the dictionary-like syntax, which I believe would be more > complicated than a static command line "python pyste.py --module=foo > --xml-dir=xml-cache bar.pyste bah.pyste"? Sorry if this is a stupid suggestion (I have not used pyste ever although I find it very exciting): it seems to me what you really need a .pyste scanner, analog to the dependency scanner for .cpp files. The scanner would recursively search all files that a .pyste file depends on. Then you define an action that determines what to do if any of the dependencies has changed. The user will never see these details, but simply specify: BoostPythonExtension(target="foo", sources=["bar.pyste", "bah.pyste"]) I am guessing one could mix .pyste and manually coded .cpp files without having to do anything special: BoostPythonExtension(target="foo", sources=["bar.pyste", "bah.pyste", "custom.cpp"]) The intermediate xml files would automatically stay around just like .o files until the user runs scons --clean. Optionally combine this with Scons' Repository() feature to keep the source code trees free of derived files at all times. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From brett.calcott at paradise.net.nz Wed Jun 18 09:47:11 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Wed, 18 Jun 2003 19:47:11 +1200 Subject: [C++-sig] Re: Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... 
References: <026f01c333f8$e2aab490$0100a8c0@godzilla> Message-ID: > > > > I think I could use the library better and assist more in extending > > it if I could put all the bits together in my head.(I have in mind > > something like the big text file that Joel wrote for Phoenix.) > > Which text file was that? Indeed, the preprocessor really gets in the > way. Unfortunately, I had to bite the bullet too and the next Phoenix/LL > code will have to be preprocessor driven. > Maybe my memory serves me badly. Before it made it to boost, there was a single text files documenting the different layers and how they fitted together. I thought of it as it stepped piece by piece through things, and why they were build that way (well, that is what I remember anyway). Maybe I misremembered the whole thing. What year is it again..? Brett From brett.calcott at paradise.net.nz Wed Jun 18 10:25:49 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Wed, 18 Jun 2003 20:25:49 +1200 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: > > > > As somebody who has tried to go through the library, this bit is the > > part I'd really like to understand. I'm sure the reorganisation > > would help, but I'd really love to read a short doc on how this > > actually all hangs together. The registry, how the templates > > automates the construction of the PyObject and Type, how conversions > > are looked up (what's that graph stuff in there?). > > The graph stuff is not in this layer, it's in the core. It > implements something like this: > > dst_ptr = dynamic_cast(src_void_ptr, src_type_id, dst_type_id) > Ok. So I think I need to know about the core as well then. > > > > I think I could use the library better and assist more in extending it if I > > could put all the bits together in my head.(I have in mind something like > > the big text file that Joel wrote for Phoenix.) > > > > Just putting in a bid for your time Dave... > > Well, I'd be happy to collaborate with you on an implementation > document. It's not something I can afford to do by myself, in part > because I don't know what questions to answer. If you would agree to > have a conversation with me about it here, and from that create a > RestructuredText document which we'll include in the CVS, I'd be more > than happy to type out details at length. I think you might have to > dig into the code a little, too. Deal? > Deal. It be a bit slow at first as I am relocating to another country in the next 10 days (NZ -> Aussie), so my internet connection will be sporadic, and I'll be busy finding a new home and settling in. I'd like to approach it as though we are trying to write a simplified version of boost.python, ignoring compiler workarounds, and not using the preproc. We'll show a coded python C extension, an equivalent in boost.python syntax, then go through the structures needed for the boost.python code to generate the equivalently operating C extension code that we initially showed. We could work through examples like the ones on this page : http://starship.python.net/crew/arcege/extwriting/pyext.html ie. python extension: ================= PyObject *python_add(self, args).... ... ... .. PyMethodDef methods[] = { { "add", MyCommand, METH_VARARGS}, {NULL, NULL}, }; void initexample() { (void)Py_InitModule("add", methods); } equivalent in boost ================== ... 
def("add", python_add); Explain how the boost example generated equivalent code to above using very cut-down techniques that capture the 'essence' of the boost.python implementation, but not the detail. ================= 1. The module def 2. how is the methods array created (or simulated) 3. How does argument parsing happen.. ...etc.. Does this make sense? Comments (from anyone)? Cheers, Brett From dave at boost-consulting.com Wed Jun 18 12:56:01 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 06:56:01 -0400 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: Please know that I take my own response with a grain of salt; if potential contributors really think they want the presentation you describe, who am I to argue? "Brett Calcott" writes: >> Well, I'd be happy to collaborate with you on an implementation >> document. It's not something I can afford to do by myself, in part >> because I don't know what questions to answer. If you would agree >> to have a conversation with me about it here, and from that create >> a RestructuredText document which we'll include in the CVS, I'd be >> more than happy to type out details at length. I think you might >> have to dig into the code a little, too. Deal? > > Deal. It be a bit slow at first as I am relocating to another country in the > next 10 days (NZ -> Aussie) Interesting change. How come? > so my internet connection will be sporadic, and I'll be busy > finding a new home and settling in. OK, I'm patient. > I'd like to approach it as though we are trying to write a simplified > version of boost.python, ignoring compiler workarounds, and not using the > preproc. We'll show a coded python C extension, an equivalent in > boost.python syntax, then go through the structures needed for the > boost.python code to generate the equivalently operating C extension code > that we initially showed. I think that sounds like it will cover a lot of details that nobody needs to see, and in particular there are many things supported by Boost.Python (derived<->base conversions, overloading, even simple stuff like classes and static method support) that we don't know how to do in 'C' without replicating large swaths of Boost.Python functionality so the document would end up being as much about 'C' extension writing as about Boost.Python internals. I'm hoping that this document will be useful to other people who want to be involved with the project (and *me* as well), and I worry that by starting from the top level, what you're describing will fail to illuminate the architecture of Boost.Python, which IMO is the real obstacle to understanding the implementation. > We could work through examples like the ones on this page : > > http://starship.python.net/crew/arcege/extwriting/pyext.html > > > ie. > > python extension: > ================= > PyObject *python_add(self, args).... > ... > ... > .. > > > PyMethodDef methods[] = { > { > "add", MyCommand, METH_VARARGS}, > {NULL, NULL}, > }; > > > void initexample() > { > (void)Py_InitModule("add", methods); > } > > > equivalent in boost > ================== > ... > def("add", python_add); > > > Explain how the boost example generated equivalent code to above using very > cut-down techniques that capture the 'essence' of the boost.python > implementation, but not the detail. a. I think you need to understand much of the detail. b. The code generated by Boost.Python isn't equivalent to the above in any trivial sense. 
Almost everything in traditional 'C' extension building is geared towards initializing arrays of (function) pointers and constants which the Python core treats almost as a program, interpreting elements of the array and generating Python objects (e.g. callables) from them. Boost.Python doesn't/can't take advantage of these mechanisms, so it builds the Python objects directly. I guess what I'm saying is that the mapping is not very direct so I don't see a comparison with the 'C' API extension is going to help. > ================= > 1. The module def > 2. how is the methods array created (or simulated) > 3. How does argument parsing happen.. > ...etc.. > > > Does this make sense? > Comments (from anyone)? If you are convinced this is the right way to go, I'm still happy to help and answer questions. I would prefer to do a bottom-up description, starting with the core and working outward... especially because it would help the luabind people who are interested in integration with Boost.Python. A top-down description will always be describing new things in terms of their components which are concepts nobody understands yet. I'm not going to argue too strongly with anyone who's volunteering, though! -- Dave Abrahams Boost Consulting www.boost-consulting.com From djowel at gmx.co.uk Wed Jun 18 13:22:08 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Wed, 18 Jun 2003 19:22:08 +0800 Subject: [C++-sig] Re: Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: <026f01c333f8$e2aab490$0100a8c0@godzilla> Message-ID: <047e01c3358b$dd919de0$0100a8c0@godzilla> Brett Calcott wrote: >>> I think I could use the library better and assist more in extending >>> it if I could put all the bits together in my head.(I have in mind >>> something like the big text file that Joel wrote for Phoenix.) >> >> Which text file was that? Indeed, the preprocessor really gets in the >> way. Unfortunately, I had to bite the bullet too and the next >> Phoenix/LL code will have to be preprocessor driven. >> > > Maybe my memory serves me badly. Before it made it to boost, there > was a single text files documenting the different layers and how they > fitted together. I thought of it as it stepped piece by piece through > things, and why they were build that way (well, that is what I > remember anyway). > > Maybe I misremembered the whole thing. What year is it again..? 2003? Oh yeah, I remember now. Yeh, you have a good memory! -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From djowel at gmx.co.uk Wed Jun 18 13:40:08 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Wed, 18 Jun 2003 19:40:08 +0800 Subject: [C++-sig] Re: [Boost-Users] Python and hello.World class example References: Message-ID: <04b501c3358e$61a4bc00$0100a8c0@godzilla> David Brownell wrote: > I am trying to run the hello.World class example in the tutorial and > am running into problems. I can compile the dll fine, and then copy > the dll (and the boost_python.dll) into my python directory, and then > start my Python interpreter. The library will import, but when I > call either the greet or set methods, I get the following errors from > within the Python interpreter: > > [greet]: > Traceback (most recent call last): > File "", line 1, in ? > TypeError: unbound method Boost.Python.function object must be called > with World instance as first argument (got str instance instead) > > [set]: > Traceback (most recent call last): > File "", line 1, in ? 
> TypeError: unbound method Boost.Python.function object must be called > with World instance as first argument (got nothing instead) > > I am failrly new to Python and even newer to Boost.Python, so there > may be something very basic that I am missing. I am using boost > 1.30, bjam to compile, MSVC 7.1, Python 2.2, and Windows XP. Note > that I can compile and successfully execute the inital Hello World > example (hello.cpp in the tutorial subdir). Hi David, FYI, Boost.Python has its own mailing list. It would be nice to see you there ;-) http://mail.python.org/mailman/listinfo/c++-sig Anyway, I'm a bit lost. Could you please copy and paste the exact Python session? Have you tried the example session in the tutorial?: >>> import hello >>> planet = hello.World() >>> planet.set('howdy') >>> planet.greet() 'howdy' Regards, -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From dave at boost-consulting.com Wed Jun 18 15:03:47 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 09:03:47 -0400 Subject: [C++-sig] Re: iter(std::map) References: Message-ID: "Mike Rovner" writes: > Hi all, > > I want to wrap a std::map container, which maps some pointer to wrapped > class to another wrapped class. > typedef std::map Map; > > In order to support 'for x in Map():' construct I need to implement > __iter__. > As I did for keys() call, I'd like to return ptr(it->first) as an iteration > result. OK... > What is more simple (or recommended) way to do it? > > - Write and wrap helper iterator object with __iter__ and next methods like > struct Map_iter > { > Map_iter(const Map& m) : it(m.begin()), itend(m.end()) {} > Map_iter& identity(Map_iter& self) {return self;} > object next() { > if( it!=itend ) return ptr(it->first); > PyErr_SetString(PyExc_StopIteration,""); > throw_error_already_set(); > } > private: > Map::const_iterator it, itend; > }; > Map_iter get_iterator(const Map& m) { return Map_iter(m); } > and then > class_("_iter") > .def("__iter__", &Map_iter::identity) > .def("next", &Map_iter::next) > ; > //... and in Map wrapper > .def("__iter__", get_iterator) > > or > - Use iterator<>() with special return policy like > .def("__iter__", iterator >()) I guess whichever one is shorter. The 2nd one looks pretty good. > If later, I can't figure out the body of apply for my special return policy: > > struct copy_map_key_ptr > { > template > struct apply > { > typedef to_python_value< ??? > type; > }; > }; > > Any suggestions? Well, to_python_value is the wrong thing there, unless you want the stds::pair referenced by the iterator to be copied into a new Python object. I don't think you've wrapped std::pair have you? http://www.boost.org/libs/python/doc/v2/ResultConverter.html describes the requirements for apply::type. Its convertible() function should return true and its function-call operator should produce a PyObject* (owned reference). Something like PyObject* operator()(std::pair const& x) const { return python::incref( python::object(ptr(x.first)).ptr() ); } ought to work. HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 18 15:40:20 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 09:40:20 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef04c52.5666.16838@student.umu.se> Message-ID: Moving this to the C++-sig as it's a more appropriate forum... 
"dalwan01" writes: >> Daniel Wallin writes: >> >> > At 18:03 2003-06-17, you wrote: >> >>http://aspn.activestate.com/ASPN/Mail/Message/c++-sig/1673338 is >> >>more recent and also relevant to your question. >> >> >> >>In short, I'd love to see luabind in Boost, and I'd hate to see >> >>it happen without substantial code sharing with Boost.Python. >> > >> > I agree. It seems like a lot of code could be shared. For >> > instance, the conversion system between base<->derived should >> > work exactly the same and we could probably plug in BPL's system >> > for this without much trouble. >> > >> > It would also be really nice if we could share most of the >> > front-end code (declaration of scopes, classes and functions). >> > >> > Note however that there are quite a few differences in design, >> > for instance for our scope's we have been experimenting with >> > expressions ala phoenix: >> > >> > namespace_("foo") >> > [ >> > def(..), >> > def(..) >> > ]; >> >> I considered this syntax but I am not convinced it is an advantage. >> It seems to have quite a few downsides and no upsides. Am I >> missing something? > > For us it has several upsides: > > * We can easily nest namespaces IMO, it optimizes for the wrong case, since namespaces are typically flat rather than deeply nested (see the Zen of Python), nor are they represented explicitly in Python code, but inferred from file boundaries. > * We like the syntax :) It is nice for C++ programmers, but Python programmers at least are very much more comfortable without the brackets. > * We can remove the lua_State* parameter from > all calls to def()/class_() I'm not sure what that is. We handle global state in Boost.Python by simply keeping track of the current module ("state") in a global variable. Works a treat. > What do you consider the downsides to be? In addition to what I cited above, a. since methods and module-scope functions need to be wrapped differently, you need to build up a data structure which stores the arguments to def(...) out of the comma-separated items with a complex expression-template type and then interpret that type using a metaprogram when the operator[]s are applied. This can only increase compile times, which is already a problem. b. You don't get any order-of-evaluation guarantees. Things like staticmethod() need to operate on an existing function object in the class' dictionary [http://www.boost.org/libs/python/doc/v2/class.html#class_-spec-modifiers] and if you can't guarantee that it gets executed after a def() call you need to further complicate your expression template to delay evaluation of staticmethod() I guess these two are essentially the same issue. >> > Also, we don't have a type-converter registry; we make all >> > choices on what converter to use at compile time. >> >> I used to do that, but it doesn't support >> component-based-development has other serious problems. Are you >> sure your code is actually conformant? When converters are >> determined at compile-time, the only viable and conformant way >> AFAICT is with template specializations, and that means clients >> have to be highly conscious of ordering issues. > > I think it's conformant, but I wouldn't swear on it. > We strip all qualifiers from the types and specialize on > > by_cref<..> > by_ref<..> > by_ptr<..> > by_value<..> > > types. How do people define specialized converters for particular types? > It works on all compilers we have tried it on (vc 6-7.1, > codewarrior, gcc2.95.3+, intel). 
Codewarrior Pro8.x, explicitly using the '-iso-templates on' option? All the others support several common nonconformance bugs, many of which I was exploiting in Boost.Python v1. > For us it doesn't seem like an option to dispatch the converters at > runtime, since performance is a really high priority for our users. What we're doing in Boost.Python turns out to be very efficient, well below the threshold that anyone would notice IIUC. Eric Jones did a test comparing its speed to SWIG and to my great surprise, Boost.Python won. -- Dave Abrahams Boost Consulting www.boost-consulting.com From jh at web.de Wed Jun 18 16:39:11 2003 From: jh at web.de (Juergen Hermann) Date: Wed, 18 Jun 2003 16:39:11 +0200 Subject: [C++-sig] Doc bugs Message-ID: Hi! The following glitches are in the docs: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://www.boost.org/libs/python/doc/tutorial/doc/default_arguments.html BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS ... Like BOOST_PYTHON_FUNCTION_OVERLOADS, BOOST_PYTHON_FUNCTION_OVERLOADS may be used to automatically create the thin wrappers for wrapping member functions. Let's have an example: must read Like BOOST_PYTHON_FUNCTION_OVERLOADS, BOOST_PYTHON_MEMBER_FUNCTION_OVERLOADS ... ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://www.boost.org/libs/python/doc/tutorial/doc/using_the_interpreter. html main_namespace dict(handle<>(borrowed( PyModule_GetDict(main_module.get ()) ))); should be dict main_namespace(handle<>(borrowed( PyModule_GetDict(main_module.get ()) ))); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://www.boost.org/libs/python/pyste/doc/exporting_all_declarations_fr om_a_header.html exclude(hello.World.set, "Set") ==> exclude(hello.World.set) Ciao, J?rgen -- J?rgen Hermann, Developer WEB.DE AG, http://webde-ag.de/ From grafik.list at redshift-software.com Wed Jun 18 17:03:53 2003 From: grafik.list at redshift-software.com (Rene Rivera) Date: Wed, 18 Jun 2003 10:03:53 -0500 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: Message-ID: <20030618100354-r01010800-9baa58be-0860-0108@12.100.89.43> [2003-06-18] David Abrahams wrote: > >Moving this to the C++-sig as it's a more appropriate forum... > >"dalwan01" writes: > >>> Daniel Wallin writes: >>> >>> > namespace_("foo") >>> > [ >>> > def(..), >>> > def(..) >>> > ]; >>> >>> I considered this syntax but I am not convinced it is an advantage. >>> It seems to have quite a few downsides and no upsides. Am I >>> missing something? >> >> For us it has several upsides: >> >> * We can easily nest namespaces > >IMO, it optimizes for the wrong case, since namespaces are typically flat >rather than deeply nested (see the Zen of Python), nor are they >represented explicitly in Python code, but inferred from file >boundaries. I must be atipical. I make heavy, nested, use of namespaces in my C++ code. So having an easy way to represent that would be nice. >> * We like the syntax :) > >It is nice for C++ programmers, but Python programmers at least are >very much more comfortable without the brackets. > >> * We can remove the lua_State* parameter from >> all calls to def()/class_() > >I'm not sure what that is. We handle global state in Boost.Python by >simply keeping track of the current module ("state") in a global >variable. Works a treat. It's not global state. Unlike Python Lua can handle multiple "instances" of an interpreter by keeping all the interpreter state in one object. So having a single global var for that is not an option. 
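For illustration, a minimal sketch of what explicit state passing looks like on the Lua side, assuming the Lua 5.0-era C API (lua_open, lua_register, lua_pushstring and lua_CFunction are the real entry points; greet and bind_greet are invented names used only for this sketch, not code from luabind or Boost.Python):

    extern "C" {
    #include <lua.h>
    }

    // A trivial lua_CFunction: pushes one string result.
    static int greet(lua_State* L)
    {
        lua_pushstring(L, "howdy");
        return 1;                         // number of results left on the stack
    }

    // A 'def'-style registration helper: the target interpreter state is an
    // explicit parameter, because several states may be alive at once.
    void bind_greet(lua_State* L)
    {
        lua_register(L, "greet", greet);  // registers 'greet' in *this* state only
    }

    // With two live interpreters there is no single "current" one to default to:
    //   lua_State* a = lua_open();
    //   lua_State* b = lua_open();
    //   bind_greet(a);                   // 'greet' exists in a, not in b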
It needs to get passed around explicitly or implicitly. I imagine Lua is not the only interpreter that does this. So it's something to consider carefully as we'll run into it again (I fact if I remember correctly Java JNI does the same thing). >> For us it doesn't seem like an option to dispatch the converters at >> runtime, since performance is a really high priority for our users. > >What we're doing in Boost.Python turns out to be very efficient, well >below the threshold that anyone would notice IIUC. Eric Jones did a >test comparing its speed to SWIG and to my great surprise, >Boost.Python won. It's a somewhat different audience that uses Lua. The kind of audience that looks at the assembly generated to make sure it's efficient. People like game developers, embeded developers, etc. so having a choice between compile time and runtime they, and I, would choose compile time. But perhaps the important thing about this is to consider how to support both models. -- grafik - Don't Assume Anything -- rrivera (at) acm.org - grafik (at) redshift-software.com -- 102708583 (at) icq From roman_sulzhyk at yahoo.com Wed Jun 18 17:22:26 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Wed, 18 Jun 2003 08:22:26 -0700 (PDT) Subject: [C++-sig] Getting base C++ class from a derived python class... Message-ID: <20030618152226.53218.qmail@web41104.mail.yahoo.com> Guys: Sorry to bother this forum, but I've been stuck at this for two days now and can't seem to make any progress. I'm basically trying to follow the example in tests/embedding.cpp, which works just fine for me, but my little program fails with dangling pointer error! [roman at mholden ~/src/pysymphony]$ ./smallexample returning service... Casting the service... Some exception caught! ReferenceError: Attempt to return dangling pointer to object of type: c Aborted I'm using pyste to generate wrapper code, for the example purposes I've piled it all in one file. Also, it didn't look as ugly to begin with, I hacked it to make it closer to the embedding example and yet it still doesn't work. One thing I can see different is that pyste generated code produces class_< PyService, boost::noncopyable, PyService_Wrapper >("PyService", init< >()) whereas in the embedding example it's more like class_< PyService, PyService_Wrapper, boost::noncopyable>("PyService", init< >()) however moving things around didn't seem to help :) Any insight would be greatly appreciated! Roman __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com -------------- next part -------------- A non-text attachment was scrubbed... Name: smallexample.cpp Type: application/octet-stream Size: 3257 bytes Desc: smallexample.cpp URL: From djowel at gmx.co.uk Wed Jun 18 17:15:58 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Wed, 18 Jun 2003 23:15:58 +0800 Subject: [C++-sig] Re: Interest in luabind References: <3ef04c52.5666.16838@student.umu.se> Message-ID: <067c01c335af$bbc896e0$0100a8c0@godzilla> David Abrahams wrote: [ snip [] syntax ] >> * We like the syntax :) > > It is nice for C++ programmers, but Python programmers at least are > very much more comfortable without the brackets. 
FWIW, I like the syntax ;-) But then of course I'm biased :o) Regards, -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From roman_sulzhyk at yahoo.com Wed Jun 18 17:57:00 2003 From: roman_sulzhyk at yahoo.com (Roman Sulzhyk) Date: Wed, 18 Jun 2003 08:57:00 -0700 (PDT) Subject: [C++-sig] Getting base C++ class from a derived python class... In-Reply-To: <20030618152226.53218.qmail@web41104.mail.yahoo.com> Message-ID: <20030618155700.78332.qmail@web41101.mail.yahoo.com> Ooops, never mind guys, I thought it was the dangling pointer in the casting of the Python class to base C++ class, but I've just realized it was actually complaining about the next line, when it returns the char * as the result of the testme() call, so I'm past the problem bit and I'll be able to deal with it now. Thanks, sorry... Roman --- Roman Sulzhyk wrote: > Guys: > > Sorry to bother this forum, but I've been stuck at this for two days > now and can't seem to make any progress. I'm basically trying to > follow > the example in tests/embedding.cpp, which works just fine for me, but > my little program fails with dangling pointer error! > > [roman at mholden ~/src/pysymphony]$ ./smallexample > returning service... > Casting the service... > Some exception caught! > ReferenceError: Attempt to return dangling pointer to object of type: > c > Aborted > > I'm using pyste to generate wrapper code, for the example purposes > I've > piled it all in one file. Also, it didn't look as ugly to begin with, > I > hacked it to make it closer to the embedding example and yet it still > doesn't work. > > One thing I can see different is that pyste generated code produces > > class_< PyService, boost::noncopyable, PyService_Wrapper > >("PyService", init< >()) > > whereas in the embedding example it's more like > > class_< PyService, PyService_Wrapper, > boost::noncopyable>("PyService", > init< >()) > > however moving things around didn't seem to help :) > > Any insight would be greatly appreciated! > > Roman > > __________________________________ > Do you Yahoo!? > SBC Yahoo! DSL - Now only $29.95 per month! > http://sbc.yahoo.com > ATTACHMENT part 2 application/octet-stream name=smallexample.cpp __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From david_brownell at hotmail.com Wed Jun 18 16:42:20 2003 From: david_brownell at hotmail.com (David Brownell) Date: Wed, 18 Jun 2003 07:42:20 -0700 Subject: [C++-sig] Re: [Boost-Users] Python and hello.World class example References: <04b501c3358e$61a4bc00$0100a8c0@godzilla> Message-ID: Joel, thank you for your reply - I am now a member of the Boost Python mailing list as well. I am actually quite embarrassed as to what my problem was, and it is painful to admit it here in public :) I wasn't creating a new instance of the object before I tried to access the object's methods. Sometimes python's syntax is so easy, I expect it to read my mind! Adding that "fix" makes the quite logical errors go away. "Joel de Guzman" wrote in message news:04b501c3358e$61a4bc00$0100a8c0 at godzilla... > David Brownell wrote: > > I am trying to run the hello.World class example in the tutorial and > > am running into problems. I can compile the dll fine, and then copy > > the dll (and the boost_python.dll) into my python directory, and then > > start my Python interpreter. 
The library will import, but when I > > call either the greet or set methods, I get the following errors from > > within the Python interpreter: > > > > [greet]: > > Traceback (most recent call last): > > File "", line 1, in ? > > TypeError: unbound method Boost.Python.function object must be called > > with World instance as first argument (got str instance instead) > > > > [set]: > > Traceback (most recent call last): > > File "", line 1, in ? > > TypeError: unbound method Boost.Python.function object must be called > > with World instance as first argument (got nothing instead) > > > > I am failrly new to Python and even newer to Boost.Python, so there > > may be something very basic that I am missing. I am using boost > > 1.30, bjam to compile, MSVC 7.1, Python 2.2, and Windows XP. Note > > that I can compile and successfully execute the inital Hello World > > example (hello.cpp in the tutorial subdir). > > Hi David, > > FYI, Boost.Python has its own mailing list. It would be nice to see you there ;-) > http://mail.python.org/mailman/listinfo/c++-sig > > Anyway, I'm a bit lost. Could you please copy and paste the exact Python > session? Have you tried the example session in the tutorial?: > > >>> import hello > >>> planet = hello.World() > >>> planet.set('howdy') > >>> planet.greet() > 'howdy' > > Regards, > -- > Joel de Guzman > joel at boost-consulting.com > http://www.boost-consulting.com > http://spirit.sf.net From nicodemus at globalite.com.br Wed Jun 18 19:34:18 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Wed, 18 Jun 2003 14:34:18 -0300 Subject: [C++-sig] Re: Pyste bug - static member functions... References: <20030618042953.72289.qmail@web20205.mail.yahoo.com> Message-ID: <004f01c335c0$35ec56e0$0101a8c0@sergio> ----- Original Message ----- From: "Ralf W. Grosse-Kunstleve" To: Sent: Wednesday, June 18, 2003 1:29 AM Subject: Re: [C++-sig] Re: Pyste bug - static member functions... > --- Nicodemus wrote: > > From my experience with SCons, it would actually simpler the other way. > > You make your build system generate the gccxml files and the pyste > > files. Whenever a header changes, the related gccxml file will be > > rebuilt, and consequently the pyste file will be rebuilt also. With > > --cache, you would have to make your build system generate the command > > line with the dictionary-like syntax, which I believe would be more > > complicated than a static command line "python pyste.py --module=foo > > --xml-dir=xml-cache bar.pyste bah.pyste"? > > Sorry if this is a stupid suggestion (I have not used pyste ever although I > find it very exciting): it seems to me what you really need a .pyste scanner, > analog to the dependency scanner for .cpp files. The scanner would recursively > search all files that a .pyste file depends on. Then you define an action that > determines what to do if any of the dependencies has changed. The user will > never see these details, but simply specify: > > BoostPythonExtension(target="foo", sources=["bar.pyste", "bah.pyste"]) > > I am guessing one could mix .pyste and manually coded .cpp files without having > to do anything special: > > BoostPythonExtension(target="foo", sources=["bar.pyste", "bah.pyste", > "custom.cpp"]) > > The intermediate xml files would automatically stay around just like .o files > until the user runs scons --clean. Optionally combine this with Scons' > Repository() feature to keep the source code trees free of derived files at all > times. Yes, you are right Ralf, thanks for the remainder! 
I have not used bare SCons (we use a wrapper around it at work) in some time, so this detail escaped my mind. From dave at boost-consulting.com Wed Jun 18 19:35:43 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 13:35:43 -0400 Subject: [C++-sig] Re: Interest in luabind References: <20030618100354-r01010800-9baa58be-0860-0108@12.100.89.43> Message-ID: Rene Rivera writes: > [2003-06-18] David Abrahams wrote: > >> >>Moving this to the C++-sig as it's a more appropriate forum... >> >>"dalwan01" writes: >> >>>> Daniel Wallin writes: >>>> >>>> > namespace_("foo") >>>> > [ >>>> > def(..), >>>> > def(..) >>>> > ]; >>>> >>>> I considered this syntax but I am not convinced it is an advantage. >>>> It seems to have quite a few downsides and no upsides. Am I >>>> missing something? >>> >>> For us it has several upsides: >>> >>> * We can easily nest namespaces >> >>IMO, it optimizes for the wrong case, since namespaces are typically flat >>rather than deeply nested (see the Zen of Python), nor are they >>represented explicitly in Python code, but inferred from file >>boundaries. > > I must be atipical. I make heavy, nested, use of namespaces in my C++ code. > So having an easy way to represent that would be nice. > >>> * We like the syntax :) >> >>It is nice for C++ programmers, but Python programmers at least are >>very much more comfortable without the brackets. >> >>> * We can remove the lua_State* parameter from >>> all calls to def()/class_() >> >>I'm not sure what that is. We handle global state in Boost.Python by >>simply keeping track of the current module ("state") in a global >>variable. Works a treat. > > It's not global state. Unlike Python Lua can handle multiple "instances" of > an interpreter by keeping all the interpreter state in one object. Python can handle multiple interpreter instances too, but hardly anyone does that. In any case, it still seems to me to be a handle to global state. > So having a single global var for that is not an option. Why not? I don't get it. Normally any module's initialization code will be operating on a single interpreter, right? Why not store its identity in a global variable? > It needs to get passed around explicitly or implicitly. I imagine > Lua is not the only interpreter that does this. So it's something to > consider carefully as we'll run into it again (I fact if I remember > correctly Java JNI does the same thing). As long as modules don't initialize concurrently, I don't see how there could be a problem. Of course, if they *do* initialize concurrently, everything I've said about the viability of globals is wrong. For that case you'd need TLS if you wanted to effectively hide the state :(. >>> For us it doesn't seem like an option to dispatch the converters at >>> runtime, since performance is a really high priority for our users. >> >>What we're doing in Boost.Python turns out to be very efficient, well >>below the threshold that anyone would notice IIUC. Eric Jones did a >>test comparing its speed to SWIG and to my great surprise, >>Boost.Python won. > > It's a somewhat different audience that uses Lua. The kind of audience that > looks at the assembly generated to make sure it's efficient. People like > game developers, embeded developers, etc. That audience tends to be superstitious about cycles, rather than measuring, and I think this concern is almost always misplaced when applied at the boundary between interpreted and compiled languages. 
The whole point of binding C++ into an interpreter, when you're concerned with performance, is to capture a large chunk of high-performance execution behind a single function call in the interpreter. I would think that once you are willing to use a language like lua you're not going to be that parsimonious with the execution of lua code adjacent to the call into C++, and it's easy for an extra instruction or two in the interpreter to swamp the cost of dynamic type conversion. Furthermore, purely compile-time lookups can have costs in code size, which is another important concern for this audience. > so having a choice between compile time and runtime they, and I, > would choose compile time. But perhaps the important thing about > this is to consider how to support both models. I know it can be a hard sell to that group, but I'd want to see some convincing numbers before deciding to support both models. Boost.Python used to use static converter lookups, but the advantages of doing it dynamically are so huge that I'm highly reluctant to complicate the codebase by supporting both without scientific justification. Oh, and BTW: I think people have vastly overestimated the amount of avoidable dynamic lookup, and the amount of code actually executed in the dynamic converter lookup. There is no map indexing or anything like that, except at the time the module is loaded, when references to converter registry for each type are initialized. The converter registry for a given type generally contains only one converter (in each direction), so there is no cost for searching for an appropriate converter. When C++ classes are extracted from wrapped class objects, a *static* procedure for finding the C++ class is tried before consulting the registry. Finally, if you care about derived <==> base class conversions (and I think you do), there will always be some dynamic type manipulation and/or RTTI, leading to some dynamic dispatching, because that's the only way to implement it. I am not trying to be difficult here. If there are significant technical advantages to purely-static converter lookups, I will be the first to support the idea. In all, however, I believe it's not an accident that Boost.Python evolved from purely-static to a model which supports dynamic conversions, not just because of usability concerns, but also because of correctness and real efficiency. So, let's keep the conversation open, and try to hammer on it until we reach consensus. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dirk at gerrits.homeip.net Wed Jun 18 19:38:16 2003 From: dirk at gerrits.homeip.net (Dirk Gerrits) Date: Wed, 18 Jun 2003 19:38:16 +0200 Subject: [C++-sig] Re: Doc bugs In-Reply-To: References: Message-ID: Juergen Hermann wrote: > http://www.boost.org/libs/python/doc/tutorial/doc/using_the_interpreter. > html > > > main_namespace dict(handle<>(borrowed( PyModule_GetDict(main_module.get > ()) ))); > > should be > > dict main_namespace(handle<>(borrowed( PyModule_GetDict(main_module.get > ()) ))); This one has already been fixed in CVS some time ago. But thanks for the report. 
Regards, Dirk Gerrits From nicodemus at globalite.com.br Wed Jun 18 19:46:17 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Wed, 18 Jun 2003 14:46:17 -0300 Subject: [C++-sig] Doc bugs References: Message-ID: <009f01c335c1$953080d0$0101a8c0@sergio> From: "Juergen Hermann" > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > http://www.boost.org/libs/python/pyste/doc/exporting_all_declarations_fr > om_a_header.html > > exclude(hello.World.set, "Set") > > ==> > > exclude(hello.World.set) > Thanks for the report Juergen! I will fix it in CVS. From rwgk at yahoo.com Wed Jun 18 19:51:42 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 18 Jun 2003 10:51:42 -0700 (PDT) Subject: [C++-sig] Re: Interest in luabind In-Reply-To: Message-ID: <20030618175142.86389.qmail@web20205.mail.yahoo.com> --- David Abrahams wrote: > As long as modules don't initialize concurrently, I don't see how > there could be a problem. Of course, if they *do* initialize > concurrently, everything I've said about the viability of globals is > wrong. For that case you'd need TLS if you wanted to effectively hide > the state :(. Excuse my ignorance, but what is TLS? Tender Loving S??? Thanks, Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dirk at gerrits.homeip.net Wed Jun 18 19:52:26 2003 From: dirk at gerrits.homeip.net (Dirk Gerrits) Date: Wed, 18 Jun 2003 19:52:26 +0200 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... In-Reply-To: References: Message-ID: David Abrahams wrote: > b. The code generated by Boost.Python isn't equivalent to the above in > any trivial sense. Almost everything in traditional 'C' extension > building is geared towards initializing arrays of (function) > pointers and constants which the Python core treats almost as a > program, interpreting elements of the array and generating Python > objects (e.g. callables) from them. Boost.Python doesn't/can't > take advantage of these mechanisms, so it builds the Python objects > directly. I guess what I'm saying is that the mapping is not very > direct so I don't see a comparison with the 'C' API extension is > going to help. Could just be me, but I think the above is exactly the kind of thing we'd want in such a document. :) I'd be happy to pitch in with the internal documentation after my exams btw. I'll have to dig deeper into Boost.Python to examine the Py_Finalize bug anyway. Regards, Dirk Gerrits From grafik666 at redshift-software.com Wed Jun 18 19:58:32 2003 From: grafik666 at redshift-software.com (Rene Rivera) Date: Wed, 18 Jun 2003 12:58:32 -0500 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <20030618175142.86389.qmail@web20205.mail.yahoo.com> Message-ID: <20030618125833-r01010800-1c2a1d86-0860-0108@12.100.89.43> [2003-06-18] Ralf W. Grosse-Kunstleve wrote: >--- David Abrahams wrote: >> As long as modules don't initialize concurrently, I don't see how >> there could be a problem. Of course, if they *do* initialize >> concurrently, everything I've said about the viability of globals is >> wrong. For that case you'd need TLS if you wanted to effectively hide >> the state :(. > >Excuse my ignorance, but what is TLS? Tender Loving S??? Thread Local Storage. 
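[Editor's sketch (not part of the original message): one way to hide per-thread interpreter state in thread-local storage, as David alludes to above, using boost::thread_specific_ptr. The state type and function names are invented; the pointer stored per thread is deliberately not owned by the TLS slot.

    #include <boost/thread/tss.hpp>

    struct interpreter_state;   // stand-in for e.g. a lua_State

    // One "current state" slot per thread instead of a single global.
    // Storing a pointer-to-pointer means thread cleanup deletes only the
    // small holder, never the interpreter state itself.
    static boost::thread_specific_ptr<interpreter_state*> current_state;

    void set_current_state(interpreter_state* s)
    {
        if (current_state.get() == 0)
            current_state.reset(new interpreter_state*(s));
        else
            *current_state = s;
    }

    interpreter_state* get_current_state()
    {
        return current_state.get() ? *current_state.get() : 0;
    }
]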
>Thanks, Welcome :-) -- grafik - Don't Assume Anything -- rrivera (at) acm.org - grafik (at) redshift-software.com -- 102708583 (at) icq From grafik666 at redshift-software.com Wed Jun 18 20:13:46 2003 From: grafik666 at redshift-software.com (Rene Rivera) Date: Wed, 18 Jun 2003 13:13:46 -0500 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: Message-ID: <20030618131347-r01010800-bfabf48c-0860-0108@12.100.89.43> [2003-06-18] David Abrahams wrote: >Rene Rivera writes: > >> [2003-06-18] David Abrahams wrote: >> >>> >>>Moving this to the C++-sig as it's a more appropriate forum... >>> >>>"dalwan01" writes: >>> >>>>> Daniel Wallin writes: >>>>> >>>>> > namespace_("foo") >>>>> > [ >>>>> > def(..), >>>>> > def(..) >>>>> > ]; >>>>> >>>>> I considered this syntax but I am not convinced it is an advantage. >>>>> It seems to have quite a few downsides and no upsides. Am I >>>>> missing something? >>>> >>>> For us it has several upsides: >>>> >>>> * We can easily nest namespaces >>> >>>IMO, it optimizes for the wrong case, since namespaces are typically flat >>>rather than deeply nested (see the Zen of Python), nor are they >>>represented explicitly in Python code, but inferred from file >>>boundaries. >> >> I must be atipical. I make heavy, nested, use of namespaces in my C++ code. >> So having an easy way to represent that would be nice. >> >>>> * We like the syntax :) >>> >>>It is nice for C++ programmers, but Python programmers at least are >>>very much more comfortable without the brackets. >>> >>>> * We can remove the lua_State* parameter from >>>> all calls to def()/class_() >>> >>>I'm not sure what that is. We handle global state in Boost.Python by >>>simply keeping track of the current module ("state") in a global >>>variable. Works a treat. >> >> It's not global state. Unlike Python Lua can handle multiple "instances" of >> an interpreter by keeping all the interpreter state in one object. > >Python can handle multiple interpreter instances too, but hardly >anyone does that. In any case, it still seems to me to be a handle to >global state. Perhaps because Python has a higher interpreter cost? The thing is it's the recomended way to do things in Lua. >> So having a single global var for that is not an option. > >Why not? I don't get it. Normally any module's initialization code >will be operating on a single interpreter, right? No. The LuaState is the complete interpreter state. So to do bindings, or anything else, you create the state for each context you are calling in. There's no limitation as to matching the state to anything else other than the calling context. For eaxmple I could create a set of states, say 20, and have a pool of, say 50, threads that all "share" those on an as needed basis. Something like this is in fact my current need for Lua. >> It needs to get passed around explicitly or implicitly. I imagine >> Lua is not the only interpreter that does this. So it's something to >> consider carefully as we'll run into it again (I fact if I remember >> correctly Java JNI does the same thing). > >As long as modules don't initialize concurrently, I don't see how >there could be a problem. Of course, if they *do* initialize >concurrently, everything I've said about the viability of globals is >wrong. For that case you'd need TLS if you wanted to effectively hide >the state :(. Ah, well, there's the rub ;-) They can initialize concurrently. And to make it more interesting the same state can be used by different threads (but not at the same time) from time to time. 
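[Editor's sketch (not part of the original message): the pool-of-states arrangement Rene describes, a fixed set of interpreter states shared by a larger pool of worker threads, might look roughly like this. It assumes the Lua 4 C API (lua_open/lua_close) and Boost.Threads; the class name is invented, and for brevity the destructor assumes every state has already been released.

    extern "C" {
    #include <lua.h>
    }
    #include <boost/thread/mutex.hpp>
    #include <vector>
    #include <cstddef>

    class lua_state_pool
    {
     public:
        explicit lua_state_pool(std::size_t n)
        {
            for (std::size_t i = 0; i < n; ++i)
                m_free.push_back(lua_open(0));  // Lua 4: 0 = default stack size
        }

        ~lua_state_pool()
        {
            for (std::size_t i = 0; i < m_free.size(); ++i)
                lua_close(m_free[i]);
        }

        // A worker thread grabs an idle interpreter state...
        lua_State* acquire()
        {
            boost::mutex::scoped_lock lock(m_guard);
            if (m_free.empty())
                return 0;                       // caller must wait or retry
            lua_State* s = m_free.back();
            m_free.pop_back();
            return s;
        }

        // ...runs its script against it, then hands it back for reuse.
        void release(lua_State* s)
        {
            boost::mutex::scoped_lock lock(m_guard);
            m_free.push_back(s);
        }

     private:
        boost::mutex m_guard;
        std::vector<lua_State*> m_free;
    };
]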
> I hate politics ;-) But yes measuring the performace is a requirement. The group in question tends to resort to looking at the ASM only when they've run out of other options to find out why their program is slow. >> so having a choice between compile time and runtime they, and I, >> would choose compile time. But perhaps the important thing about >> this is to consider how to support both models. > >I know it can be a hard sell to that group, but I'd want to see some >convincing numbers before deciding to support both models. >Boost.Python used to use static converter lookups, but the advantages >of doing it dynamically are so huge that I'm highly reluctant to >complicate the codebase by supporting both without scientific >justification. > >Oh, and BTW: I think people have vastly overestimated the amount of >avoidable dynamic lookup, and the amount of code actually executed in >the dynamic converter lookup. There is no map indexing or anything >like that, except at the time the module is loaded, when references to >converter registry for each type are initialized. The converter >registry for a given type generally contains only one converter (in >each direction), so there is no cost for searching for an appropriate >converter. When C++ classes are extracted from wrapped class objects, >a *static* procedure for finding the C++ class is tried before >consulting the registry. > >Finally, if you care about derived <==> base class conversions (and I >think you do), there will always be some dynamic type manipulation >and/or RTTI, leading to some dynamic dispatching, because that's the >only way to implement it. > >I am not trying to be difficult here. If there are significant >technical advantages to purely-static converter lookups, I will be the >first to support the idea. In all, however, I believe it's not an >accident that Boost.Python evolved from purely-static to a model which >supports dynamic conversions, not just because of usability concerns, >but also because of correctness and real efficiency. So, let's keep >the conversation open, and try to hammer on it until we reach >consensus. Being difficult is the point ;-) If there's no difficulty there's no discussion. OK, I'm bassically convinced with that argument. If the majority of the lookups are O(1) then the extra cycles at runtime is worth the convenience. I just worry about O(n) lookups at a junction point in a program. It tends to poroduce O(n2) algos ;-) -- grafik - Don't Assume Anything -- rrivera (at) acm.org - grafik (at) redshift-software.com -- 102708583 (at) icq From dave at boost-consulting.com Wed Jun 18 20:31:13 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 14:31:13 -0400 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: Dirk Gerrits writes: > David Abrahams wrote: >> b. The code generated by Boost.Python isn't equivalent to the above in >> any trivial sense. Almost everything in traditional 'C' extension >> building is geared towards initializing arrays of (function) >> pointers and constants which the Python core treats almost as a >> program, interpreting elements of the array and generating Python >> objects (e.g. callables) from them. Boost.Python doesn't/can't >> take advantage of these mechanisms, so it builds the Python objects >> directly. I guess what I'm saying is that the mapping is not very >> direct so I don't see a comparison with the 'C' API extension is >> going to help. 
> > Could just be me, but I think the above is exactly the kind of thing > we'd want in such a document. :) > > I'd be happy to pitch in with the internal documentation after my > exams btw. I'll have to dig deeper into Boost.Python to examine the > Py_Finalize bug anyway. I'll happily be argued down on this point by any number of volunteer contributors ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 18 20:30:09 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 14:30:09 -0400 Subject: [C++-sig] Re: Interest in luabind References: <20030618131347-r01010800-bfabf48c-0860-0108@12.100.89.43> Message-ID: Rene Rivera writes: > [2003-06-18] David Abrahams wrote: > >>>>I'm not sure what that is. We handle global state in Boost.Python by >>>>simply keeping track of the current module ("state") in a global >>>>variable. Works a treat. >>> >>> It's not global state. Unlike Python Lua can handle multiple "instances" of >>> an interpreter by keeping all the interpreter state in one object. >> >>Python can handle multiple interpreter instances too, but hardly >>anyone does that. In any case, it still seems to me to be a handle >>to global state. > > Perhaps because Python has a higher interpreter cost? I'm not really sure why. What's an "interpreter cost?" > The thing is it's the recomended way to do things in Lua. > >>> So having a single global var for that is not an option. >> >>Why not? I don't get it. Normally any module's initialization code >>will be operating on a single interpreter, right? > > No. The LuaState is the complete interpreter state. So to do bindings, or > anything else, you create the state for each context you are calling > in. "Context", possibly meaning "module?" If so, I still don't see a problem with using namespace-scope variables in an anonymous namespace (for example). > There's no limitation as to matching the state to anything else > other than the calling context. For eaxmple I could create a set of > states, say 20, and have a pool of, say 50, threads that all "share" > those on an as needed basis. Something like this is in fact my > current need for Lua. Wow, cool and weird! Why do you want 20 separate interpreters? >>> It needs to get passed around explicitly or implicitly. I imagine >>> Lua is not the only interpreter that does this. So it's something to >>> consider carefully as we'll run into it again (I fact if I remember >>> correctly Java JNI does the same thing). >> >>As long as modules don't initialize concurrently, I don't see how >>there could be a problem. Of course, if they *do* initialize >>concurrently, everything I've said about the viability of globals is >>wrong. For that case you'd need TLS if you wanted to effectively hide >>the state :(. > > Ah, well, there's the rub ;-) They can initialize concurrently. And > to make it more interesting the same state can be used by different > threads (but not at the same time) from time to time. If the same module can be initialized simultaneously by two separate interpreters, I can see that there might be a problem. Of course, one could put a mutex guard around the whole module initialization, but the people who read the ASM would probably be upset with that. >> > > I hate politics ;-) Me too; it wasn't meant to be a political statement. I was trying to put the technical issues in perspective. > But yes measuring the performace is a requirement. 
The group in > question tends to resort to looking at the ASM only when they've run > out of other options to find out why their program is slow. Maybe this is a different group from the one that invented EC++ because "namespaces and templates have negative performance impact." ;-) >>I am not trying to be difficult here. If there are significant >>technical advantages to purely-static converter lookups, I will be the >>first to support the idea. In all, however, I believe it's not an >>accident that Boost.Python evolved from purely-static to a model which >>supports dynamic conversions, not just because of usability concerns, >>but also because of correctness and real efficiency. So, let's keep >>the conversation open, and try to hammer on it until we reach >>consensus. > > Being difficult is the point ;-) If there's no difficulty there's no > discussion. > > OK, I'm bassically convinced with that argument. If the majority of the > lookups are O(1) then the extra cycles at runtime is worth the > convenience. They are. Furthermore, to-python conversions for specific known types can be fixed at compile-time. > I just worry about O(n) lookups at a junction point in a program. It tends > to poroduce O(n2) algos ;-) When n != 1 it's usually 0 or 2. As long as you're not nervous about everything that calls through a function pointer I think we're OK. Let's see what the luabind guys think. -- Dave Abrahams Boost Consulting www.boost-consulting.com From jgresula at seznam.cz Wed Jun 18 20:58:33 2003 From: jgresula at seznam.cz (Jarda Gresula) Date: Wed, 18 Jun 2003 20:58:33 +0200 Subject: [C++-sig] std::auto_ptr as a return value Message-ID: <000001c335cb$9e4ff7e0$ea720bd4@jardag> I don't know if it is actually possible, but if so then could someone give an example of how to expose a method returning std::auto_ptr? The only way I'm able to achieve it is to write a thin wrapper returning T*. Thanks, Jarda. From nectar-pycpp at celabo.org Wed Jun 18 20:58:37 2003 From: nectar-pycpp at celabo.org (Jacques A. Vidrine) Date: Wed, 18 Jun 2003 13:58:37 -0500 Subject: [C++-sig] returning auto_ptr, `No to_python converter' ? Message-ID: <20030618185837.GA58784@madman.celabo.org> Hi All! Does this ring any bells? #include #include #include // Boost 1.30.0 std::auto_ptr example() { return std::auto_ptr(new std::string("hello")); } using namespace boost::python; BOOST_PYTHON_MODULE(example) { def("example", example); } >>> import example >>> example.example() Traceback (most recent call last): File "", line 1, in ? TypeError: No to_python (by-value) converter found for C++ type: St8auto_ptrISsE Seems I'm missing something, but I can't quite put my finger on it :-) Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se From mike at bindkey.com Wed Jun 18 21:21:40 2003 From: mike at bindkey.com (Mike Rovner) Date: Wed, 18 Jun 2003 12:21:40 -0700 Subject: [C++-sig] Re: iter(std::map) References: Message-ID: Thanks a lot, David, this works like a charm. "David Abrahams" wrote in message news:u3ci75yh8.fsf at boost-consulting.com... > "Mike Rovner" writes: > > Well, to_python_value is the wrong thing there, unless you want the > stds::pair referenced by the iterator to be copied into a > new Python object. I don't think you've wrapped > std::pair have you? Nope. > http://www.boost.org/libs/python/doc/v2/ResultConverter.html > describes the requirements for apply::type. 
Its convertible() > function should return true and its function-call operator should > produce a PyObject* (owned reference). Something like > > PyObject* operator()(std::pair const& x) const > { > return python::incref( > python::object(ptr(x.first)).ptr() > ); > } > > ought to work. My code: namespace boost { namespace python { struct copy_map_key_ptr { template struct apply { typedef Map::value_type result_converter; struct type { bool convertible() const {return true;} PyObject* operator()(Map::value_type p) const { return incref(object(ptr(p.first)).ptr()); } typedef PyObject* result_type; }; }; }; }} Mike From dave at boost-consulting.com Wed Jun 18 21:28:25 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 15:28:25 -0400 Subject: [C++-sig] Re: std::auto_ptr as a return value References: <000001c335cb$9e4ff7e0$ea720bd4@jardag> Message-ID: "Jarda Gresula" writes: > I don't know if it is actually possible, but if so then could someone > give an example of how to expose a method returning std::auto_ptr? > The only way I'm able to achieve it is to write a thin wrapper > returning T*. a. Wrap T with auto_ptr as one of the template arguments: class_, ... >("T") ... b. or invoke register_ptr_to_python >(); (see http://article.gmane.org/gmane.comp.python.c++/2845 for the definition of register_ptr_to_python) HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 18 21:42:11 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 15:42:11 -0400 Subject: [C++-sig] Re: returning auto_ptr, `No to_python converter' ? References: <20030618185837.GA58784@madman.celabo.org> Message-ID: "Jacques A. Vidrine" writes: > Hi All! > Does this ring any bells? > > #include > #include > #include // Boost 1.30.0 > > std::auto_ptr example() { > return std::auto_ptr(new std::string("hello")); > } > > using namespace boost::python; > BOOST_PYTHON_MODULE(example) { > def("example", example); > } > > >>> import example > >>> example.example() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: No to_python (by-value) converter found for C++ type: St8auto_ptrISsE > > Seems I'm missing something, but I can't quite put my finger on it :-) > Cheers, See my reply to Jarda Gresula (just posted)... That will help you understand things, but it won't solve your problem. My recommendations for Jarda cause auto_ptr to be converted to a Python T object which refers to its C++ object through a copy of the auto_ptr object. std::string is converted to Python as a regular Python string, and there's no way to get a Python string to use an auto_ptr to hold its storage. I think you just want to register a custom to-python converter for std::auto_ptr. Something like: struct auto_ptr_string_to_python : to_python_converter { static PyObject* convert(std::auto_ptr const& x) { return PyString_FromStringAndSize(x->c_str(), x->size()); } }; Just constructing one of those in your module initialization function should handle it. 
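[Editor's sketch (not part of the original message): put together, "constructing one of those in your module initialization function" might look like the self-contained module below, combining Jacques's example() with the converter David shows above. The template arguments, which the archive stripped, are assumed to be std::auto_ptr<std::string> and to_python_converter<std::auto_ptr<std::string>, auto_ptr_string_to_python>.

    #include <boost/python.hpp>
    #include <memory>
    #include <string>
    using namespace boost::python;

    std::auto_ptr<std::string> example()
    {
        return std::auto_ptr<std::string>(new std::string("hello"));
    }

    struct auto_ptr_string_to_python
        : to_python_converter<std::auto_ptr<std::string>,
                              auto_ptr_string_to_python>
    {
        static PyObject* convert(std::auto_ptr<std::string> const& x)
        {
            return PyString_FromStringAndSize(x->c_str(), x->size());
        }
    };

    BOOST_PYTHON_MODULE(example)
    {
        auto_ptr_string_to_python();  // registration happens in the constructor
        def("example", example);
    }
]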
-- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 18 22:02:45 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 16:02:45 -0400 Subject: [C++-sig] Re: iter(std::map) References: Message-ID: "Mike Rovner" writes: > > My code: > > namespace boost { namespace python { > struct copy_map_key_ptr > { > template > struct apply > { > typedef Map::value_type result_converter; > struct type > { > bool convertible() const {return true;} > PyObject* operator()(Map::value_type p) const > { return incref(object(ptr(p.first)).ptr()); } > typedef PyObject* result_type; > }; > }; > }; > }} Does it work? -- Dave Abrahams Boost Consulting www.boost-consulting.com From nectar-pycpp at celabo.org Wed Jun 18 22:09:22 2003 From: nectar-pycpp at celabo.org (Jacques A. Vidrine) Date: Wed, 18 Jun 2003 15:09:22 -0500 Subject: [C++-sig] Re: returning auto_ptr, `No to_python converter' ? In-Reply-To: References: <20030618185837.GA58784@madman.celabo.org> Message-ID: <20030618200921.GA59015@madman.celabo.org> On Wed, Jun 18, 2003 at 03:42:11PM -0400, David Abrahams wrote: > See my reply to Jarda Gresula (just posted)... > > That will help you understand things, but it won't solve your problem. > My recommendations for Jarda cause auto_ptr to be converted to a > Python T object which refers to its C++ object through a copy of the > auto_ptr object. > > std::string is converted to Python as a regular Python string, and > there's no way to get a Python string to use an auto_ptr to hold its > storage. Actually, std::string was used just for the example case. In real-life, it is a non-copyable class T. I'll have a look at the solution you suggested for Jarda. I'm confused because it appeared to me that what I was trying to accomplish isn't so different from what is done in libs/python/test/auto_ptr.cpp (only looked at source -- did not actually try it). (The interface for the classes I am wrapping changed from returning T * to returning std::auto_ptr. I was previously using return_value_policy.) > I think you just want to register a custom to-python > converter for std::auto_ptr. Something like: > > struct auto_ptr_string_to_python > : to_python_converter > { > static PyObject* convert(std::auto_ptr const& x) > { > return PyString_FromStringAndSize(x->c_str(), x->size()); > } > }; > > Just constructing one of those in your module initialization function > should handle it. Thanks much! By the way, Boost.Python is really cool. Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se From dave at boost-consulting.com Wed Jun 18 22:59:52 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 16:59:52 -0400 Subject: [C++-sig] Re: returning auto_ptr, `No to_python converter' ? References: <20030618185837.GA58784@madman.celabo.org> <20030618200921.GA59015@madman.celabo.org> Message-ID: "Jacques A. Vidrine" writes: > On Wed, Jun 18, 2003 at 03:42:11PM -0400, David Abrahams wrote: >> See my reply to Jarda Gresula (just posted)... >> >> That will help you understand things, but it won't solve your problem. >> My recommendations for Jarda cause auto_ptr to be converted to a >> Python T object which refers to its C++ object through a copy of the >> auto_ptr object. >> >> std::string is converted to Python as a regular Python string, and >> there's no way to get a Python string to use an auto_ptr to hold its >> storage. 
> > Actually, std::string was used just for the example case. In > real-life, it is a non-copyable class T. I'll have a look at the > solution you suggested for Jarda. > > I'm confused because it appeared to me that what I was trying to > accomplish isn't so different from what is done in > libs/python/test/auto_ptr.cpp (only looked at source -- did not > actually try it). And why does that confuse you? That seems to show appropriate solutions to the question you posed. > (The interface for the classes I am wrapping changed from returning > T * to returning std::auto_ptr. I was previously using > return_value_policy.) A fine, fine idea. >> I think you just want to register a custom to-python >> converter for std::auto_ptr. Something like: >> >> struct auto_ptr_string_to_python >> : to_python_converter >> { >> static PyObject* convert(std::auto_ptr const& x) >> { >> return PyString_FromStringAndSize(x->c_str(), x->size()); >> } >> }; >> >> Just constructing one of those in your module initialization function >> should handle it. > > Thanks much! By the way, Boost.Python is really cool. Well, given that you're not actually dealing with std::string, the above advice is now wrong. -- Dave Abrahams Boost Consulting www.boost-consulting.com From mike at bindkey.com Wed Jun 18 23:03:19 2003 From: mike at bindkey.com (Mike Rovner) Date: Wed, 18 Jun 2003 14:03:19 -0700 Subject: [C++-sig] Re: iter(std::map) References: Message-ID: "David Abrahams" wrote in message news:ud6hbxifu.fsf at boost-consulting.com... > "Mike Rovner" writes: > > > > > My code: > > > > namespace boost { namespace python { > > struct copy_map_key_ptr > > { > > template > > struct apply > > { > > typedef Map::value_type result_converter; > > struct type > > { > > bool convertible() const {return true;} > > PyObject* operator()(Map::value_type p) const > > { return incref(object(ptr(p.first)).ptr()); } > > typedef PyObject* result_type; > > }; > > }; > > }; > > }} > > Does it work? Yep. I reference it in class_ declaration: class_("Map") .def("__iter__", iterator >()) ; And in Python: for k in result_map: print k, result_map[k] Thanks again, now that long awaited corner comes clear to me. Mike From dave at boost-consulting.com Wed Jun 18 23:00:07 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 18 Jun 2003 17:00:07 -0400 Subject: [C++-sig] Re: returning auto_ptr, `No to_python converter' ? References: <20030618185837.GA58784@madman.celabo.org> <20030618200921.GA59015@madman.celabo.org> Message-ID: "Jacques A. Vidrine" writes: > On Wed, Jun 18, 2003 at 03:42:11PM -0400, David Abrahams wrote: >> See my reply to Jarda Gresula (just posted)... >> >> That will help you understand things, but it won't solve your problem. >> My recommendations for Jarda cause auto_ptr to be converted to a >> Python T object which refers to its C++ object through a copy of the >> auto_ptr object. >> >> std::string is converted to Python as a regular Python string, and >> there's no way to get a Python string to use an auto_ptr to hold its >> storage. > > Actually, std::string was used just for the example case. In > real-life, it is a non-copyable class T. I'll have a look at the > solution you suggested for Jarda. > > I'm confused because it appeared to me that what I was trying to > accomplish isn't so different from what is done in > libs/python/test/auto_ptr.cpp (only looked at source -- did not > actually try it). And why does that confuse you? That seems to show appropriate solutions to the question you posed. 
> (The interface for the classes I am wrapping changed from returning > T * to returning std::auto_ptr. I was previously using > return_value_policy.) A fine, fine idea. >> I think you just want to register a custom to-python >> converter for std::auto_ptr. Something like: >> >> struct auto_ptr_string_to_python >> : to_python_converter >> { >> static PyObject* convert(std::auto_ptr const& x) >> { >> return PyString_FromStringAndSize(x->c_str(), x->size()); >> } >> }; >> >> Just constructing one of those in your module initialization function >> should handle it. > > Thanks much! Well, given that you're not actually dealing with std::string, the above advice is now wrong. > By the way, Boost.Python is really cool. Thank *you*! -- Dave Abrahams Boost Consulting www.boost-consulting.com From grafik666 at redshift-software.com Wed Jun 18 23:18:14 2003 From: grafik666 at redshift-software.com (Rene Rivera) Date: Wed, 18 Jun 2003 16:18:14 -0500 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: Message-ID: <20030618161815-r01010800-a5a04ccf-0860-0108@12.100.89.43> [2003-06-18] David Abrahams wrote: >Rene Rivera writes: > >> [2003-06-18] David Abrahams wrote: >> >>>Python can handle multiple interpreter instances too, but hardly >>>anyone does that. In any case, it still seems to me to be a handle >>>to global state. >> >> Perhaps because Python has a higher interpreter cost? > >I'm not really sure why. What's an "interpreter cost?" The time and space required to initialize/create a context sufficient for executing some scripted code independent of any other such context. >> The thing is it's the recomended way to do things in Lua. >> >>>> So having a single global var for that is not an option. >>> >>>Why not? I don't get it. Normally any module's initialization code >>>will be operating on a single interpreter, right? >> >> No. The LuaState is the complete interpreter state. So to do bindings, or >> anything else, you create the state for each context you are calling >> in. > >"Context", possibly meaning "module?" No I meant execution context in this case. As in that of a thread. Hence my use case below. >If so, I still don't see a problem with using namespace-scope >variables in an anonymous namespace (for example). > >> There's no limitation as to matching the state to anything else >> other than the calling context. For eaxmple I could create a set of >> states, say 20, and have a pool of, say 50, threads that all "share" >> those on an as needed basis. Something like this is in fact my >> current need for Lua. > >Wow, cool and weird! Why do you want 20 separate interpreters? Because they may be 20 separate Lua scripts running at a time. My current product has the ability for users to specificy script code that runs on the server. And the server can have some bounded set of execution threads to handle the actions. But only a subset of those will be executing script code. Here's a comparable use case. Imagine you want to make it possible to write scripting capabilities to a web server. It runs a varying number of threads to handle client requests any of which can possibly run some script code. So you write up a common interpreter state pool, just like you have a thread pool. If a thread needs to execute some script code it grabs one of the interpreter state objects and calls the interpreter code with the state and the script. >> Ah, well, there's the rub ;-) They can initialize concurrently. 
And >> to make it more interesting the same state can be used by different >> threads (but not at the same time) from time to time. > >If the same module can be initialized simultaneously by two separate >interpreters, I can see that there might be a problem. Of course, one >could put a mutex guard around the whole module initialization, but >the people who read the ASM would probably be upset with that. I'm talking Lua4 here so my info may be outdated. I know there's some new stuff to handle sharing of state. I'll look into the luabind code to see how it's doing things. If one needs for script code to execute concurrently then one has to initialize the modules into (clasic idea of intern) each interpreter state. So if requiring that all such initializations execute serialy solves the problem, that's fine IMO. Unless the people who read the ASM intend to do initializations all the time they can live with the mutex. After all in such cases you pool those initilizations ahead of time as I mentioned above. -- grafik - Don't Assume Anything -- rrivera (at) acm.org - grafik (at) redshift-software.com -- 102708583 (at) icq From nectar-pycpp at celabo.org Wed Jun 18 23:27:10 2003 From: nectar-pycpp at celabo.org (Jacques A. Vidrine) Date: Wed, 18 Jun 2003 16:27:10 -0500 Subject: [C++-sig] Re: returning auto_ptr, `No to_python converter' ? In-Reply-To: References: <20030618185837.GA58784@madman.celabo.org> <20030618200921.GA59015@madman.celabo.org> Message-ID: <20030618212710.GA59815@madman.celabo.org> On Wed, Jun 18, 2003 at 04:59:52PM -0400, David Abrahams wrote: > And why does that confuse you? That seems to show appropriate > solutions to the question you posed. excerpt from auto_ptr.cpp: 45 std::auto_ptr make() 46 { 47 return std::auto_ptr(new X(77)); 48 } ... 85 def("make", make); That is in essence what I was attempting to accomplish. I did not see any other `scaffolding' to make this work. A noteable difference is that my `X' is non-copyable, but I didn't believe that was what was tripping me up. > Well, given that you're not actually dealing with std::string, the > above advice is now wrong. Oh, I realized that -- it was useful for the insight nonetheless. Using register_ptr_to_python worked like a charm, but then I finally noticed the piece I was missing. Rather than, std::auto_ptr make(); //... class_("T", no_init) //... ; def("make", make); I needed std::auto_ptr make(); //... class_, boost::noncopyable>("T", no_init) // ^^^^^^^^^^^^^^^^ //... note new parameter ; def("make", make); I believe this is the `HeldType' parameter that I was missing. Now the behavior is as I expected. Boiling the issue down to a primitive type or a specially-handled type before posting here obscured the problem, I think. Cheers, -- Jacques Vidrine . NTT/Verio SME . FreeBSD UNIX . Heimdal nectar at celabo.org . jvidrine at verio.net . nectar at freebsd.org . nectar at kth.se From djowel at gmx.co.uk Thu Jun 19 00:41:23 2003 From: djowel at gmx.co.uk (Joel de Guzman) Date: Thu, 19 Jun 2003 06:41:23 +0800 Subject: [C++-sig] Re: [Boost-Users] Python and hello.World class example References: <04b501c3358e$61a4bc00$0100a8c0@godzilla> Message-ID: <004d01c335f0$b3de2850$0100a8c0@godzilla> David Brownell wrote: > Joel, thank you for your reply - I am now a member of the Boost Python > mailing list as well. I am actually quite embarrassed as to what my > problem was, and it is painful to admit it here in public :) I > wasn't creating a new instance of the object before I tried to access > the object's methods. 
Sometimes python's syntax is so easy, I expect > it to read my mind! Adding that "fix" makes the quite logical errors > go away. Most welcome. These things happen. Welcome to the BPL world :-) -- Joel de Guzman joel at boost-consulting.com http://www.boost-consulting.com http://spirit.sf.net From dalwan01 at student.umu.se Thu Jun 19 12:55:35 2003 From: dalwan01 at student.umu.se (dalwan01) Date: Thu, 19 Jun 2003 11:55:35 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef196a7.1e06.16838@student.umu.se> > > Moving this to the C++-sig as it's a more appropriate > forum... > > "dalwan01" writes: > > >> Daniel Wallin writes: > >> > >> > At 18:03 2003-06-17, you wrote: > >> > >>http://aspn.activestate.com/ASPN/Mail/Message/c++-sig/16 > 73338 is > >> >>more recent and also relevant to your question. > >> >> > >> >>In short, I'd love to see luabind in Boost, and I'd > hate to see > >> >>it happen without substantial code sharing with > Boost.Python. > >> > > >> > I agree. It seems like a lot of code could be shared. > For > >> > instance, the conversion system between > base<->derived should > >> > work exactly the same and we could probably plug in > BPL's system > >> > for this without much trouble. > >> > > >> > It would also be really nice if we could share most > of the > >> > front-end code (declaration of scopes, classes and > functions). > >> > > >> > Note however that there are quite a few differences > in design, > >> > for instance for our scope's we have been > experimenting with > >> > expressions ala phoenix: > >> > > >> > namespace_("foo") > >> > [ > >> > def(..), > >> > def(..) > >> > ]; > >> > >> I considered this syntax but I am not convinced it is > an advantage. > >> It seems to have quite a few downsides and no upsides. > Am I > >> missing something? > > > > For us it has several upsides: > > > > * We can easily nest namespaces > > IMO, it optimizes for the wrong case, since namespaces are > typically flat > rather than deeply nested (see the Zen of Python), nor are > they > represented explicitly in Python code, but inferred from > file > boundaries. > > > * We like the syntax :) > > It is nice for C++ programmers, but Python programmers at > least are > very much more comfortable without the brackets. > > > * We can remove the lua_State* parameter from > > all calls to def()/class_() > > I'm not sure what that is. We handle global state in > Boost.Python by > simply keeping track of the current module ("state") in a > global > variable. Works a treat. As pointed out lua can handle multiple states, so using global variabels doesn't strike me as a very good solution. > > > What do you consider the downsides to be? > > In addition to what I cited above, > > a. since methods and module-scope functions need to be > wrapped > differently, you need to build up a data structure > which stores the > arguments to def(...) out of the comma-separated items > with a > complex expression-template type and then interpret > that type using > a metaprogram when the operator[]s are applied. This > can only > increase compile times, which is already a problem. We don't build a complex expression-template, instead we build a list of objects with a virtual method to commit that object to the lua_State. This doesn't increase compile times. > > b. You don't get any order-of-evaluation guarantees. 
> Things like > staticmethod() need to operate on an existing function > object in the > class' dictionary > > [http://www.boost.org/libs/python/doc/v2/class.html#class_ > -spec-modifiers] > and if you can't guarantee that it gets executed after > a def() > call you need to further complicate your expression > template to > delay evaluation of staticmethod() As we don't build a expression template, I don't think this is an issue. > > I guess these two are essentially the same issue. > > >> > Also, we don't have a type-converter registry; we > make all > >> > choices on what converter to use at compile time. > >> > >> I used to do that, but it doesn't support > >> component-based-development has other serious problems. > Are you > >> sure your code is actually conformant? When converters > are > >> determined at compile-time, the only viable and > conformant way > >> AFAICT is with template specializations, and that means > clients > >> have to be highly conscious of ordering issues. > > > > I think it's conformant, but I wouldn't swear on it. > > We strip all qualifiers from the types and specialize on > > > > by_cref<..> > > by_ref<..> > > by_ptr<..> > > by_value<..> > > > > types. > > How do people define specialized converters for particular > types? This isn't finished, but currently we do: yes_t is_user_defined(by_cref); my_type convert_from_lua(lua_State*, by_cref); something like that.. > > > It works on all compilers we have tried it on (vc 6-7.1, > > codewarrior, gcc2.95.3+, intel). > > Codewarrior Pro8.x, explicitly using the '-iso-templates > on' option? > All the others support several common nonconformance bugs, > many of > which I was exploiting in Boost.Python v1. I haven't tried with -iso- option, I'll try it when i get home. We do not however use the bug you where exploiting i bpl.v1 (i assume you are referreing to friend templates?). > > > For us it doesn't seem like an option to dispatch the > converters at > > runtime, since performance is a really high priority for > our users. > > What we're doing in Boost.Python turns out to be very > efficient, well > below the threshold that anyone would notice IIUC. Eric > Jones did a > test comparing its speed to SWIG and to my great surprise, > Boost.Python won. Lua is used a lot in game developement, and game developers tend to care very much about every extra cycle. Even an extra function call via a function pointer could make difference for those users. We like the generated bindings to be almost equal in speed to one that is hand written. (sorry for my late reply, my home connection is down for two weeks..) -- Daniel Wallin From dalwan01 at student.umu.se Thu Jun 19 13:23:38 2003 From: dalwan01 at student.umu.se (dalwan01) Date: Thu, 19 Jun 2003 12:23:38 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef19d3a.4d4f.16838@student.umu.se> > Rene Rivera writes: > > > [2003-06-18] David Abrahams wrote: > > > >>>>I'm not sure what that is. We handle global state in > Boost.Python by > >>>>simply keeping track of the current module ("state") > in a global > >>>>variable. Works a treat. > >>> > >>> It's not global state. Unlike Python Lua can handle > multiple "instances" of > >>> an interpreter by keeping all the interpreter state in > one object. > >> > >>Python can handle multiple interpreter instances too, > but hardly > >>anyone does that. In any case, it still seems to me to > be a handle > >>to global state. > > > > Perhaps because Python has a higher interpreter cost? > > I'm not really sure why. 
What's an "interpreter cost?" > > > The thing is it's the recomended way to do things in > Lua. > > > >>> So having a single global var for that is not an > option. > >> > >>Why not? I don't get it. Normally any module's > initialization code > >>will be operating on a single interpreter, right? > > > > No. The LuaState is the complete interpreter state. So > to do bindings, or > > anything else, you create the state for each context you > are calling > > in. > > "Context", possibly meaning "module?" > > If so, I still don't see a problem with using > namespace-scope > variables in an anonymous namespace (for example). If the choice is between: namespace_(..) [ def(..), def(..) ]; and { namespace_ local_ns(..); def(state, ..); def(state, ..); } I would prefer the first as it is both more readable and less verbose. > >>I am not trying to be difficult here. If there are > significant > >>technical advantages to purely-static converter lookups, > I will be the > >>first to support the idea. In all, however, I believe > it's not an > >>accident that Boost.Python evolved from purely-static to > a model which > >>supports dynamic conversions, not just because of > usability concerns, > >>but also because of correctness and real efficiency. > So, let's keep > >>the conversation open, and try to hammer on it until we > reach > >>consensus. > > > > Being difficult is the point ;-) If there's no > difficulty there's no > > discussion. > > > > OK, I'm bassically convinced with that argument. If the > majority of the > > lookups are O(1) then the extra cycles at runtime is > worth the > > convenience. > > They are. Furthermore, to-python conversions for specific > known > types can be fixed at compile-time. How can they be fixed at compile-time? Doesn't this mean bypassing the conversion system and not allowing multiple converters / type? > > > I just worry about O(n) lookups at a junction point in a > program. It tends > > to poroduce O(n2) algos ;-) > > When n != 1 it's usually 0 or 2. As long as you're not > nervous about > everything that calls through a function pointer I think > we're OK. > Let's see what the luabind guys think. Some people would indeed get nervous by that. :) It isn't so much about the complexity as the additional cache misses and branch mispreditions introduced by lists of converters and calls through functions pointers. I realise this sounds silly, but when developing on a console these are important issues. I don't really have any numbers on this, and we would really need those. -- Daniel Wallin From brett.calcott at paradise.net.nz Thu Jun 19 13:53:34 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Thu, 19 Jun 2003 23:53:34 +1200 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: "David Abrahams" wrote : > > Please know that I take my own response with a grain of salt; if > potential contributors really think they want the presentation you > describe, who am I to argue? > Hmm. maybe finding the 'right way' might take time. You say you don't know the right questions to ask, but I don't think I do yet either. > > > > Deal. It be a bit slow at first as I am relocating to another > > country in the next 10 days (NZ -> Aussie) > > Interesting change. How come? > I'm giving up my paying job (programming) to go and do a PhD in Philosophy at the Australian National University. 
I'll be using boost.python to do some stuff for my thesis :) > > > I'd like to approach it as though we are trying to write a > > simplified version of boost.python, ignoring compiler workarounds, > > and not using the preproc. We'll show a coded python C extension, an > > equivalent in boost.python syntax, then go through the structures > > needed for the boost.python code to generate the equivalently > > operating C extension code that we initially showed. > > I think that sounds like it will cover a lot of details that nobody > needs to see, and in particular there are many things supported by > Boost.Python (derived<->base conversions, overloading, even simple > stuff like classes and static method support) that we don't know how > to do in 'C' without replicating large swaths of Boost.Python > functionality so the document would end up being as much about 'C' > extension writing as about Boost.Python internals. Right, but we can simply not do this stuff (at first anyway). > > I'm hoping that this document will be useful to other people who want > to be involved with the project (and *me* as well), and I worry that > by starting from the top level, what you're describing will fail to > illuminate the architecture of Boost.Python, which IMO is the real > obstacle to understanding the implementation. > > a. I think you need to understand much of the detail. > > b. The code generated by Boost.Python isn't equivalent to the above in > any trivial sense. Almost everything in traditional 'C' extension > building is geared towards initializing arrays of (function) > pointers and constants which the Python core treats almost as a > program, interpreting elements of the array and generating Python > objects (e.g. callables) from them. Boost.Python doesn't/can't > take advantage of these mechanisms, so it builds the Python objects > directly. I guess what I'm saying is that the mapping is not very > direct so I don't see a comparison with the 'C' API extension is > going to help. > Well ok. But looking at class.cpp I can see the standard python structs that you see in a C extension. So you must start with these, right? > If you are convinced this is the right way to go, I'm still happy to > help and answer questions. I would prefer to do a bottom-up > description, starting with the core and working outward... especially > because it would help the luabind people who are interested in > integration with Boost.Python. A top-down description will always be > describing new things in terms of their components which are concepts > nobody understands yet. I'm not going to argue too strongly with > anyone who's volunteering, though! > And the advantage to starting top-down is that it gives a motivation to what you are trying to achieve. The architecture of Boost.Python comes from the motivation to use a simple syntax to generate a complex interface. 'How' makes a lot more sense when you know 'why'. Perhaps we should just start in a corner and work our way around in the most interesting direction. Why not try the following: We start a conversation here directed at exposing the architecture (rather than getting my code compiling) - I copy abstracts from the conversation into Structured Text onto a Wiki page where they can be edited and added to. Using a Wiki means we can head off on tangents, leave terms or whole sections to be defined later on, and easily include active links to code, Python C extensions, and template tutorials. All of which will be needed I think. Where is a Wiki we can use? 
Where shall we start :) Cheers, Brett From dave at boost-consulting.com Thu Jun 19 14:04:23 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 19 Jun 2003 08:04:23 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef196a7.1e06.16838@student.umu.se> Message-ID: "dalwan01" writes: >> >> Moving this to the C++-sig as it's a more appropriate >> forum... >> >> "dalwan01" writes: >> >> >> Daniel Wallin writes: >> >> >> >> > Note however that there are quite a few differences in design, >> >> > for instance for our scope's we have been experimenting with >> >> > expressions ala phoenix: >> >> > >> >> > namespace_("foo") >> >> > [ >> >> > def(..), >> >> > def(..) >> >> > ]; >> >> >> >> I considered this syntax but I am not convinced it is an advantage. >> >> It seems to have quite a few downsides and no upsides. Am I >> >> missing something? >> > >> > For us it has several upsides: >> > >> > * We can easily nest namespaces >> >> IMO, it optimizes for the wrong case, since namespaces are >> typically flat rather than deeply nested (see the Zen of Python), >> nor are they represented explicitly in Python code, but inferred >> from file boundaries. >> >> > * We like the syntax :) >> >> It is nice for C++ programmers, but Python programmers at least are >> very much more comfortable without the brackets. > > >> >> > * We can remove the lua_State* parameter from >> > all calls to def()/class_() >> >> I'm not sure what that is. We handle global state in Boost.Python >> by simply keeping track of the current module ("state") in a global >> variable. Works a treat. > > As pointed out lua can handle multiple states, so using > global variabels doesn't strike me as a very good solution. I am not committed to the global variable approach nor am I opposed to the syntax. >> > What do you consider the downsides to be? >> >> In addition to what I cited above, >> >> a. since methods and module-scope functions need to be wrapped >> differently, you need to build up a data structure which stores the >> arguments to def(...) out of the comma-separated items with a >> complex expression-template type and then interpret that type using >> a metaprogram when the operator[]s are applied. This can only >> increase compile times, which is already a problem. > > We don't build a complex expression-template, instead we > build a list of objects with a virtual method to commit that > object to the lua_State. Very nice solution! My brain must have been trapped into thinking "compile-time". > This doesn't increase compile times. Good. Virtual functions come with bloat of their own, but that's an implementation detail which can be mitigated. >> b. You don't get any order-of-evaluation guarantees. >> Things like >> staticmethod() need to operate on an existing function >> object in the >> class' dictionary >> >> [http://www.boost.org/libs/python/doc/v2/class.html#class_ >> -spec-modifiers] >> and if you can't guarantee that it gets executed after >> a def() >> call you need to further complicate your expression >> template to >> delay evaluation of staticmethod() > > As we don't build a expression template, I don't think this > is an issue. Actually I think it's a non-issue because you *do* build a runtime-bound version of an expression template. >> I guess these two are essentially the same issue. >> >> >> > Also, we don't have a type-converter registry; we make all >> >> > choices on what converter to use at compile time. 
>> >> >> >> I used to do that, but it doesn't support >> >> component-based-development has other serious problems. Are you >> >> sure your code is actually conformant? When converters are >> >> determined at compile-time, the only viable and conformant way >> >> AFAICT is with template specializations, and that means clients >> >> have to be highly conscious of ordering issues. >> > >> > I think it's conformant, but I wouldn't swear on it. >> > We strip all qualifiers from the types and specialize on >> > >> > by_cref<..> >> > by_ref<..> >> > by_ptr<..> >> > by_value<..> >> > >> > types. I'm not really sure what the above means yet... I'm certainly interested in avoiding runtime dispatching if possible, so if this approach is viable for Boost.Python I'm all for it. >> >> How do people define specialized converters for particular >> types? > > This isn't finished, but currently we do: > > yes_t is_user_defined(by_cref); > my_type convert_from_lua(lua_State*, by_cref); > > something like that.. I assume that means the user must define those two functions? Where in the code must they be defined? How will this work when multiple extension modules need to manipulate the same types? How do *add* a way to convert from Python type A to C++ type B without masking the existing conversion from Python type Y to C++ type Z? >> > It works on all compilers we have tried it on (vc 6-7.1, >> > codewarrior, gcc2.95.3+, intel). >> >> Codewarrior Pro8.x, explicitly using the '-iso-templates on' >> option? All the others support several common nonconformance bugs, >> many of which I was exploiting in Boost.Python v1. > > I haven't tried with -iso- option, I'll try it when i get home. We > do not however use the bug you where exploiting i bpl.v1 (i assume > you are referreing to friend templates?). No, friend functions declared in templates being found without Koenig Lookup. >> > For us it doesn't seem like an option to dispatch the converters at >> > runtime, since performance is a really high priority for our users. >> >> What we're doing in Boost.Python turns out to be very efficient, >> well below the threshold that anyone would notice IIUC. Eric Jones >> did a test comparing its speed to SWIG and to my great surprise, >> Boost.Python won. > > Lua is used a lot in game developement, and game developers tend to > care very much about every extra cycle. Even an extra function call > via a function pointer could make difference for those users. I'm not convinced yet. Just adding a tiny bit of lua code next to any invocation of a wrapped function would typically consume much more than that. > We like the generated bindings to be almost equal in speed to one > that is hand written. Me too; I just have serious doubts that once you factor in everything else that you want going on (e.g. derived <==> base conversions), the ability to dynamically register conversions has a significant cost. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Thu Jun 19 14:12:18 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 19 Jun 2003 08:12:18 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef19d3a.4d4f.16838@student.umu.se> Message-ID: "dalwan01" writes: Abrahams: >> They are. Furthermore, to-python conversions for specific known >> types can be fixed at compile-time. > > How can they be fixed at compile-time? Doesn't this mean > bypassing the conversion system and not allowing multiple > converters / type? 
Given that when converting from C++ to Python, you only have a C++ type to work with and no Python type, there really can only be a single way to do the conversion anyhow. Given a to-python converter for a given C++ type, there's no criterion by which to say, "this converter doesn't match; try another"... unless of course you inspect the value of the C++ object, but I don't allow that. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Thu Jun 19 14:14:38 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 19 Jun 2003 08:14:38 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef19d3a.4d4f.16838@student.umu.se> Message-ID: "dalwan01" writes: >> When n != 1 it's usually 0 or 2. As long as you're not nervous >> about everything that calls through a function pointer I think >> we're OK. Let's see what the luabind guys think. > > Some people would indeed get nervous by that. :) > It isn't so much about the complexity as the additional cache misses > and branch mispreditions introduced by lists of converters and calls > through functions pointers. I know but I think they're superstitious, at least in the context of bindings to a dynamic language where *everything* goes through function pointers. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nbecker at hns.com Thu Jun 19 14:46:16 2003 From: nbecker at hns.com (Neal D. Becker) Date: Thu, 19 Jun 2003 08:46:16 -0400 Subject: [C++-sig] boost-python interface numarray to c++ Message-ID: I'm still exploring using boost-python. I learned that I can wrap std::vector and std::vector>. This allows me to create arrays in python and then pass them to c++ algorithms that use iterator interfaces. It would probably be useful also to create arrays in python. Is it difficult to interface a python Numeric and/or numarray array to c++ stl-style iterator interface? Incidentally, does anyone else work with numeric? It seems that Numeric-22.0 is the latest, and the web site points to numarray, but numarray seems to be progressing very slowly. Which one to use? From dalwan01 at student.umu.se Thu Jun 19 16:48:16 2003 From: dalwan01 at student.umu.se (dalwan01) Date: Thu, 19 Jun 2003 15:48:16 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef1cd30.2457.16838@student.umu.se> > "dalwan01" writes: > > >> > >> Moving this to the C++-sig as it's a more appropriate > >> forum... > >> > >> "dalwan01" writes: > >> > >> >> Daniel Wallin writes: > >> >> > >> >> > Note however that there are quite a few > differences in design, > >> >> > for instance for our scope's we have been > experimenting with > >> >> > expressions ala phoenix: > >> >> > > >> >> > namespace_("foo") > >> >> > [ > >> >> > def(..), > >> >> > def(..) > >> >> > ]; > >> >> > >> >> I considered this syntax but I am not convinced it > is an advantage. > >> >> It seems to have quite a few downsides and no > upsides. Am I > >> >> missing something? > >> > > >> > For us it has several upsides: > >> > > >> > * We can easily nest namespaces > >> > >> IMO, it optimizes for the wrong case, since namespaces > are > >> typically flat rather than deeply nested (see the Zen > of Python), > >> nor are they represented explicitly in Python code, but > inferred > >> from file boundaries. > >> > >> > * We like the syntax :) > >> > >> It is nice for C++ programmers, but Python programmers > at least are > >> very much more comfortable without the brackets. 
> > > > > >> > >> > * We can remove the lua_State* parameter from > >> > all calls to def()/class_() > >> > >> I'm not sure what that is. We handle global state in > Boost.Python > >> by simply keeping track of the current module ("state") > in a global > >> variable. Works a treat. > > > > As pointed out lua can handle multiple states, so using > > global variabels doesn't strike me as a very good > solution. > > I am not committed to the global variable approach nor am > I opposed > to the syntax. > > >> > What do you consider the downsides to be? > >> > >> In addition to what I cited above, > >> > >> a. since methods and module-scope functions need to be > wrapped > >> differently, you need to build up a data structure > which stores the > >> arguments to def(...) out of the comma-separated items > with a > >> complex expression-template type and then interpret > that type using > >> a metaprogram when the operator[]s are applied. This > can only > >> increase compile times, which is already a problem. > > > > We don't build a complex expression-template, instead we > > build a list of objects with a virtual method to commit > that > > object to the lua_State. > > Very nice solution! My brain must have been trapped into > thinking > "compile-time". > > > This doesn't increase compile times. > > Good. Virtual functions come with bloat of their own, but > that's an > implementation detail which can be mitigated. Right. The virtual functions isn't generated in the template, so there is very little code generated. > > >> I guess these two are essentially the same issue. > >> > >> >> > Also, we don't have a type-converter registry; we > make all > >> >> > choices on what converter to use at compile time. > >> >> > >> >> I used to do that, but it doesn't support > >> >> component-based-development has other serious > problems. Are you > >> >> sure your code is actually conformant? When > converters are > >> >> determined at compile-time, the only viable and > conformant way > >> >> AFAICT is with template specializations, and that > means clients > >> >> have to be highly conscious of ordering issues. > >> > > >> > I think it's conformant, but I wouldn't swear on it. > >> > We strip all qualifiers from the types and specialize > on > >> > > >> > by_cref<..> > >> > by_ref<..> > >> > by_ptr<..> > >> > by_value<..> > >> > > >> > types. > > I'm not really sure what the above means yet... I'm > certainly > interested in avoiding runtime dispatching if possible, so > if this > approach is viable for Boost.Python I'm all for it. I don't know if i fully understand the ordering issues you mentioned. When we first implemented this we had converter functions with this sig: T convert(type, ..) This of course introduces problems with some compilers when trying to overload for T& and T* and such. So we introduced a more complex type<..>.. T& -> by_ref, const T& -> by_cref etc. > > >> > >> How do people define specialized converters for > particular > >> types? > > > > This isn't finished, but currently we do: > > > > yes_t is_user_defined(by_cref); > > my_type convert_from_lua(lua_State*, by_cref); > > > > something like that.. > > I assume that means the user must define those two > functions? Where > in the code must they be defined? Right. The user declares the first function and defines the other before binding functions that use the types. > > How will this work when multiple extension modules need to > manipulate > the same types? I don't know. I haven't given that much thought. 
Do you see any obvious issues? > > How do *add* a way to convert from Python type A to C++ > type B > without masking the existing conversion from Python type Y > to C++ > type Z? I don't understand. How are B and Z related? Why would a conversion function for B mask conversions to Z? > > >> > It works on all compilers we have tried it on (vc > 6-7.1, > >> > codewarrior, gcc2.95.3+, intel). > >> > >> Codewarrior Pro8.x, explicitly using the > '-iso-templates on' > >> option? All the others support several common > nonconformance bugs, > >> many of which I was exploiting in Boost.Python v1. > > > > I haven't tried with -iso- option, I'll try it when i > get home. We > > do not however use the bug you where exploiting i bpl.v1 > (i assume > > you are referreing to friend templates?). > > No, friend functions declared in templates being found > without Koenig > Lookup. Right, that's what I meant. :) > > >> > For us it doesn't seem like an option to dispatch the > converters at > >> > runtime, since performance is a really high priority > for our users. > >> > >> What we're doing in Boost.Python turns out to be very > efficient, > >> well below the threshold that anyone would notice IIUC. > Eric Jones > >> did a test comparing its speed to SWIG and to my great > surprise, > >> Boost.Python won. > > > > Lua is used a lot in game developement, and game > developers tend to > > care very much about every extra cycle. Even an extra > function call > > via a function pointer could make difference for those > users. > > I'm not convinced yet. Just adding a tiny bit of lua code > next to any > invocation of a wrapped function would typically consume > much more > than that. > > > We like the generated bindings to be almost equal in > speed to one > > that is hand written. > > Me too; I just have serious doubts that once you factor in > everything > else that you want going on (e.g. derived <==> base > conversions), the > ability to dynamically register conversions has a > significant cost. You might be right. I'll investigate how runtime dispatch would affect luabind the next couple of days, in particular I will look at what this would do to our policy system. -- Daniel Wallin From dave at boost-consulting.com Thu Jun 19 17:45:52 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 19 Jun 2003 11:45:52 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef1cd30.2457.16838@student.umu.se> Message-ID: "dalwan01" writes: >> "dalwan01" writes: >> >> >> >> >> Moving this to the C++-sig as it's a more appropriate >> >> forum... >> >> >> >> "dalwan01" writes: >> >> >> >> >> Daniel Wallin writes: >> >> >> >> >> >> > Note however that there are quite a few differences in >> >> >> > design, for instance for our scope's we have been >> >> >> > experimenting with expressions ala phoenix: >> >> >> > >> >> >> > namespace_("foo") >> >> >> > [ >> >> >> > def(..), >> >> >> > def(..) >> >> >> > ]; >> >> >> >> >> >> I considered this syntax but I am not convinced it is an >> >> >> advantage. It seems to have quite a few downsides and no >> >> >> upsides. Am I missing something? >> >> > >> >> > For us it has several upsides: >> >> > >> >> > * We can easily nest namespaces >> >> >> >> IMO, it optimizes for the wrong case, since namespaces are >> >> typically flat rather than deeply nested (see the Zen of >> >> Python), nor are they represented explicitly in Python code, but >> >> inferred from file boundaries. 
>> >> >> >> > * We like the syntax :) >> >> >> >> It is nice for C++ programmers, but Python programmers at least >> >> are very much more comfortable without the brackets. >> > >> > >> >> >> >> > * We can remove the lua_State* parameter from >> >> > all calls to def()/class_() >> >> >> >> I'm not sure what that is. We handle global state in >> >> Boost.Python by simply keeping track of the current module >> >> ("state") in a global variable. Works a treat. >> > >> > As pointed out lua can handle multiple states, so using global >> > variabels doesn't strike me as a very good solution. >> >> I am not committed to the global variable approach nor am I opposed >> to the syntax. In fact, the more I look at the syntax of luabind, the more I like. Using addition for policy accumulation is cool. The naming of the policies is cool. >> > This doesn't increase compile times. >> >> Good. Virtual functions come with bloat of their own, but that's >> an implementation detail which can be mitigated. > > Right. The virtual functions isn't generated in the > template, so there is very little code generated. I don't see how that's possible, but I guess I'll learn. >> >> > I think it's conformant, but I wouldn't swear on it. >> >> > We strip all qualifiers from the types and specialize on >> >> > >> >> > by_cref<..> >> >> > by_ref<..> >> >> > by_ptr<..> >> >> > by_value<..> >> >> > >> >> > types. >> >> I'm not really sure what the above means yet... I'm certainly >> interested in avoiding runtime dispatching if possible, so if this >> approach is viable for Boost.Python I'm all for it. > > I don't know if i fully understand the ordering issues you > mentioned. When we first implemented this we had converter > functions with this sig: > > T convert(type, ..) > > This of course introduces problems with some compilers when > trying to overload for T& and T* and such. So we introduced > a more complex type<..>.. T& -> by_ref, const T& -> > by_cref etc. The ordering issues basically have to do with the requirement that classes be wrapped and converters defined before they are used, syntactically speaking. That caused all kinds of inconveniences in BPLv1 when interacting classes were wrapped. OTOH I bet it's possible to implicltly choose conversion methods for classes which you haven't seen a wrapper for, so maybe that's less of a problem than I'm making it out to be. >> >> How do people define specialized converters for particular >> >> types? >> > >> > This isn't finished, but currently we do: >> > >> > yes_t is_user_defined(by_cref); >> > my_type convert_from_lua(lua_State*, by_cref); >> > >> > something like that.. >> >> I assume that means the user must define those two functions? >> Where in the code must they be defined? > > Right. The user declares the first function and defines the > other before binding functions that use the types. Right. >> How will this work when multiple extension modules need to >> manipulate the same types? > > I don't know. I haven't given that much thought. Do you see > any obvious issues? Hmm, maybe I'm on drugs. The biggest problems in BPLv1 in this area were because the converters for a given class were generated, essentially, by its class_<...> instantiation. But I have already said that from-python conversion for a given wrapped class is normally done statically. User-defined converters still need to be exposed to all the extensions which use the types, somehow. It would be better not to replicate that code. 
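[To make the "exposed to all the extensions" point concrete: a user-defined from-python converter in Boost.Python v2 is a runtime registration, so loading the one module that performs it makes the conversion visible to every extension sharing the registry. A minimal sketch with a hypothetical Vec2 type built from a 2-tuple; error handling is omitted.]

    #include <boost/python.hpp>
    #include <new>

    struct Vec2 { double x, y; };   // hypothetical C++ type

    struct vec2_from_python
    {
        vec2_from_python()
        {
            // One runtime registration; every extension module loaded into
            // the process sees it through the shared converter registry.
            boost::python::converter::registry::push_back(
                &convertible, &construct, boost::python::type_id<Vec2>());
        }

        static void* convertible(PyObject* obj)
        {
            return (PyTuple_Check(obj) && PyTuple_GET_SIZE(obj) == 2) ? obj : 0;
        }

        static void construct(
            PyObject* obj,
            boost::python::converter::rvalue_from_python_stage1_data* data)
        {
            void* storage =
                ((boost::python::converter::rvalue_from_python_storage<Vec2>*)
                    data)->storage.bytes;
            Vec2* v = new (storage) Vec2();
            v->x = PyFloat_AsDouble(PyTuple_GET_ITEM(obj, 0));
            v->y = PyFloat_AsDouble(PyTuple_GET_ITEM(obj, 1));
            data->convertible = storage;
        }
    };

    BOOST_PYTHON_MODULE(vec2_converters)   // hypothetical module name
    {
        vec2_from_python();   // register once, at import time
    }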
Furthermore there are potential issues of creating objects in one DLL and destroying them elsewhere. These may be minor in comparison, though. Ralf, do you have any insight here? >> How do *add* a way to convert from Python type A to C++ type B >> without masking the existing conversion from Python type Y to C++ >> type Z? > > I don't understand. How are B and Z related? Why would a > conversion function for B mask conversions to Z? Sorry, B==Z ;-) >> > Lua is used a lot in game developement, and game developers tend to >> > care very much about every extra cycle. Even an extra function call >> > via a function pointer could make difference for those users. >> >> I'm not convinced yet. Just adding a tiny bit of lua code next to >> any invocation of a wrapped function would typically consume much >> more than that. >> >> > We like the generated bindings to be almost equal in speed to one >> > that is hand written. >> >> Me too; I just have serious doubts that once you factor in >> everything else that you want going on (e.g. derived <==> base >> conversions), the ability to dynamically register conversions has a >> significant cost. > > You might be right. I'll investigate how runtime dispatch > would affect luabind the next couple of days, in particular > I will look at what this would do to our policy system. OK. Incidentally, I find much of what you're doing very appealing, and I think that if we could find a way to share a lot of technology it would be fantastic. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Thu Jun 19 18:28:23 2003 From: dave at boost-consulting.com (David Abrahams) Date: Thu, 19 Jun 2003 12:28:23 -0400 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: "Brett Calcott" writes: > "David Abrahams" wrote : >> >> Please know that I take my own response with a grain of salt; if >> potential contributors really think they want the presentation you >> describe, who am I to argue? > > Hmm. maybe finding the 'right way' might take time. You say you don't > know the right questions to ask, but I don't think I do yet either. I'm willing. >> > >> > Deal. It be a bit slow at first as I am relocating to another >> > country in the next 10 days (NZ -> Aussie) >> >> Interesting change. How come? >> > > I'm giving up my paying job (programming) to go and do a PhD in > Philosophy at the Australian National University. I'll be using > boost.python to do some stuff for my thesis :) Nice! >> > I'd like to approach it as though we are trying to write a >> > simplified version of boost.python, ignoring compiler workarounds, >> > and not using the preproc. We'll show a coded python C extension, an >> > equivalent in boost.python syntax, then go through the structures >> > needed for the boost.python code to generate the equivalently >> > operating C extension code that we initially showed. >> >> I think that sounds like it will cover a lot of details that nobody >> needs to see, and in particular there are many things supported by >> Boost.Python (derived<->base conversions, overloading, even simple >> stuff like classes and static method support) that we don't know how >> to do in 'C' without replicating large swaths of Boost.Python >> functionality so the document would end up being as much about 'C' >> extension writing as about Boost.Python internals. > > Right, but we can simply not do this stuff (at first anyway). It's up to you, I guess. 
As long as I don't have to write working 'C' API code ;-) > Well ok. But looking at class.cpp I can see the standard python structs > that you see in a C extension. So you must start with these, right? Yep. That's a bottom-up view, though ;-) >> If you are convinced this is the right way to go, I'm still happy >> to help and answer questions. I would prefer to do a bottom-up >> description, starting with the core and working >> outward... especially because it would help the luabind people who >> are interested in integration with Boost.Python. A top-down >> description will always be describing new things in terms of their >> components which are concepts nobody understands yet. I'm not >> going to argue too strongly with anyone who's volunteering, though! >> > > And the advantage to starting top-down is that it gives a motivation > to what you are trying to achieve. The architecture of Boost.Python > comes from the motivation to use a simple syntax to generate a > complex interface. 'How' makes a lot more sense when you know 'why'. I think the tutorial shows 'why' already, but as I said, I'm willing to leave it up to you. > Perhaps we should just start in a corner and work our way around in > the most interesting direction. Why not try the following: We start > a conversation here directed at exposing the architecture (rather > than getting my code compiling) - I copy abstracts from the > conversation into Structured Text onto a Wiki page where they can be > edited and added to. Using a Wiki means we can head off on > tangents, leave terms or whole sections to be defined later on, and > easily include active links to code, Python C extensions, and > template tutorials. All of which will be needed I think. > > Where is a Wiki we can use? There is (http://www.python.org/cgi-bin/moinmoin/boost_2epython), but note that it doesn't do ReST and I much prefer CVS for this sort of thing. I hate editing in a webpage, and I like to be able to apply all the usual revision control tools. I'd happily give you Boost CVS access if you wanted to do it that way. If you have a strong preference for Wiki it's no problem, since you'll be doing most of the interaction anyway. > Where shall we start :) What are you interested in? -- Dave Abrahams Boost Consulting www.boost-consulting.com From paustin at eos.ubc.ca Thu Jun 19 22:22:44 2003 From: paustin at eos.ubc.ca (Philip Austin) Date: Thu, 19 Jun 2003 13:22:44 -0700 Subject: [C++-sig] boost-python interface numarray to c++ In-Reply-To: References: Message-ID: <16114.7060.901555.658380@gull.eos.ubc.ca> Neal D. Becker writes: > It would probably be useful also to create arrays in python. Is it > difficult to interface a python Numeric and/or numarray array to c++ > stl-style iterator interface? See http://www.boost.org/libs/python/doc/v2/numeric.html#array-spec > > Incidentally, does anyone else work with numeric? It seems that > Numeric-22.0 is the latest, and the web site points to numarray, but > numarray seems to be progressing very slowly. Which one to use? The lastest numarray release removed one of the major problems, which was slow perfomance on small arrays. We're still using Numeric, however -- numarray is a complete rewrite and is going to be early beta for a while longer. Note that numeric::array can switch between the two using set_module_and_type(). Back in April I said that: "... boost::python::numeric::array is a registered type that wraps a Numeric Python array. 
We have some example code and a set of utility functions that demonstrate this approach, and will put them up in the next day or so" (http://mail.python.org/pipermail/c++-sig/2003-April/003769.html). I've now put a copy of our boost numeric utilities up at: http://www.eos.ubc.ca/research/clouds/num_util.html Note that these functions call the C API directly, rather than using the member functions of numeric::array. We will switch over eventually, but for because of time pressures we're sticking with our Boost V1 approach for now. Regards, Phil From wenning_qiu at yahoo.com Fri Jun 20 00:05:10 2003 From: wenning_qiu at yahoo.com (Wenning Qiu) Date: Thu, 19 Jun 2003 15:05:10 -0700 (PDT) Subject: [C++-sig] Problem building Boost.Python Message-ID: <20030619220510.42402.qmail@web40513.mail.yahoo.com> I am trying to build boost_1_30_0 on Linux and have not succeeded so far. I am new to boost and Jam. I think I've followed the build/install instructions, but obviously there's something I am not doing right. I'd appreciate it if anybody can point out. Thanks. Wenning Qiu Here's the software versions: Red Hat Linux 8.0 gcc (GCC) 3.2 20020903 Python.2.2.3 built from source locally. boost-jam-3.1.4 built locally with command "build.sh gcc". I set the evrironment variables: export PYTHON_ROOT=/home/qiuw01/linux/Python export PYTHON_VERSION=2.2 And do the build: qiuw01$ cd boost_1_30_0/libs/python/build qiuw01$ bjam "-sTOOLS=gcc" I got the error message "unknown target type for libboost_python.so" right away. However, it builds when I comment out following lines from Jamfile: stage bin-stage : boost_python boost_python : "_debug" "_pydebug" : debug release ; Attempt to build hello.cpp also failed: qiuw01$ cd boost_1_30_0/libs/python/example/tutorial qiuw01$ bjam "-sTOOLS=gcc" don't know how to make ../src/numeric.cpp don't know how to make ../src/list.cpp don't know how to make ../src/long.cpp don't know how to make ../src/dict.cpp don't know how to make ../src/tuple.cpp don't know how to make ../src/str.cpp don't know how to make ../src/aix_init_module.cpp don't know how to make ../src/converter/from_python.cpp don't know how to make ../src/converter/registry.cpp don't know how to make ../src/converter/type_id.cpp don't know how to make ../src/object/enum.cpp don't know how to make ../src/object/class.cpp don't know how to make ../src/object/function.cpp don't know how to make ../src/object/inheritance.cpp don't know how to make ../src/object/life_support.cpp don't know how to make ../src/object/pickle_support.cpp don't know how to make ../src/errors.cpp don't know how to make ../src/module.cpp don't know how to make ../src/converter/builtin_converters.cpp don't know how to make ../src/converter/arg_to_python_base.cpp don't know how to make ../src/object/iterator.cpp don't know how to make ../src/object_protocol.cpp don't know how to make ../src/object_operators.cpp ...found 898 targets... ...can't find 23 targets... ...can't make 25 targets... ...skipped numeric.o for lack of ../src/numeric.cpp... ...skipped list.o for lack of ../src/list.cpp... ...skipped long.o for lack of ../src/long.cpp... ...skipped dict.o for lack of ../src/dict.cpp... ...skipped tuple.o for lack of ../src/tuple.cpp... ...skipped str.o for lack of ../src/str.cpp... ...skipped aix_init_module.o for lack of ../src/aix_init_module.cpp... ...skipped from_python.o for lack of ../src/converter/from_python.cpp... ...skipped registry.o for lack of ../src/converter/registry.cpp... 
...skipped type_id.o for lack of ../src/converter/type_id.cpp... ...skipped enum.o for lack of ../src/object/enum.cpp... ...skipped class.o for lack of ../src/object/class.cpp... ...skipped function.o for lack of ../src/object/function.cpp... ...skipped inheritance.o for lack of ../src/object/inheritance.cpp... ...skipped life_support.o for lack of ../src/object/life_support.cpp... ...skipped pickle_support.o for lack of ../src/object/pickle_support.cpp... ...skipped errors.o for lack of ../src/errors.cpp... ...skipped module.o for lack of ../src/module.cpp... ...skipped builtin_converters.o for lack of ../src/converter/builtin_converters.cpp... ...skipped arg_to_python_base.o for lack of ../src/converter/arg_to_python_base.cpp... ...skipped iterator.o for lack of ../src/object/iterator.cpp... ...skipped object_protocol.o for lack of ../src/object_protocol.cpp... ...skipped object_operators.o for lack of ../src/object_operators.cpp... ...skipped libboost_python.so for lack of numeric.o... ...skipped hello.so for lack of libboost_python.so... ...skipped 25 targets... __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From nikolai.kirsebom at siemens.no Fri Jun 20 12:11:05 2003 From: nikolai.kirsebom at siemens.no (Kirsebom Nikolai) Date: Fri, 20 Jun 2003 12:11:05 +0200 Subject: [C++-sig] Exception second time loaded Message-ID: <5B5544F322E5D211850F00A0C91DEF3B050DBE40@osll007a.siemens.no> Hi, I have some questions: How do I is it possible to search the archives? I have defined the PyExecutor class, CString<-->python string converter (see entry from Ralf W. Grosse-Kunstleve 27th May), code shown below. The function "RunPythonViaBoost" is called from the startup function of a DLL, loaded by the actual application. If I make use of the 'getit' function, the system runs into an exception the second time it is started. The statement in RunStmt method being executed when the exception occurs is the second statement (PyRun_String). The stack-frame in the debugger is: boost_python_debug.dll!boost::python::throw_error_alread_set() Line 58 DLEPRPythonDLd.dll!boost::python::expect_non_null(_object * x=0x00000000) Line 45 + 0x8 DLEPRPythonDLd.dll!boost::python::detail::manage_ptr(_object * p=0x00000000, ...) Line 57 + 0x9 DLEPRPythonDLd.dll!boost::python::handle<_object>::handle<_object>(_object * p=0x00000000) Line 80 + 0x30 DLEPRPythonDLd.dll!PyExecutor::RunStmt(DLEPRInterface * dlepr=0x008be388) Line 336 + 0x31 Does anyone see what I've done wrong ? When in the python-shell I write: import DocuLive v = DocuLive.getit() and then closes the shell, what happens to the actual C++ object (DLEPRInterface) when 'v' goes out of scope ? THANK YOU FOR ANY HELP. Nikolai HERE IS THE CODE // Wrapper class needed. If the DLEPRInterface class is used directly, compiler complains about: //...\boost_1_30_0\boost\python\object\value_holder.hpp(111): error C2558: class 'DLEPRInterface' : // no copy constructor available or copy constructor is declared 'explicit' // // Note that I do not need to construct instances of this class. The actual interface object will // always exist when the dll is loaded, and it provides all information the dll needs. 
// //The constructors / destructor of DLEPRInteface are: // // DLEPRInterface(); // DLEPRInterface(const MenuEntry& Menu, CItemView* pView, char TriggerKey, RootTemplate* MatrixTemplate); // DLEPRInterface(CDC* pDC, CItemView* pView, int CurItemIndex, int Col, int Row, RootTemplate* MatrixTemplate); // ~DLEPRInterface(); // class PyDLEPRInterface : public DLEPRInterface { public: PyDLEPRInterface(); PyDLEPRInterface(const PyDLEPRInterface& objectSrc); }; PyDLEPRInterface::PyDLEPRInterface(const PyDLEPRInterface& objectSrc) { } PyDLEPRInterface::PyDLEPRInterface() { } // //////////////////////////////////////////////////////////////////////////// ///////////// ///// EXPOSED INTERFACE ///////////////////////////////////////////////////////////////// using namespace boost::python; static PyDLEPRInterface* curr = NULL; PyDLEPRInterface* getit() { return curr; } BOOST_PYTHON_MODULE(DocuLive) { def("getit", getit, return_value_policy()) ; class_("DLEPRInterface") .def("GetDatabaseName", &PyDLEPRInterface::GetDatabaseName) .def("GetDefaultServerName", &PyDLEPRInterface::GetDefaultServerName) .def("IsRubber", &PyDLEPRInterface::IsRubber) .def_readonly("RecordId", &PyDLEPRInterface::m_RecordID) .def_readonly("Item", &PyDLEPRInterface::m_Item) .def_readonly("OriginalItem", &PyDLEPRInterface::m_OriginalItem) .def_readonly("Row", &PyDLEPRInterface::m_Row) .def_readonly("Col", &PyDLEPRInterface::m_Col) .def_readonly("UserID", &PyDLEPRInterface::m_UserID) .def_readonly("m_CurMenuEntry", &PyDLEPRInterface::m_CurMenuEntry) .add_property("DocumentCategory", make_getter(&PyDLEPRInterface::m_DocumentCategory, return_value_policy())) .add_property("LookupCategory", make_getter(&PyDLEPRInterface::m_LookupCategory, return_value_policy())) .add_property("RecordCategory", make_getter(&PyDLEPRInterface::m_RecordCategory, return_value_policy())) .add_property("IconPurpose", make_getter(&PyDLEPRInterface::m_IconPurpose, return_value_policy())) ; class_("MenuEntry") .add_property("ParamString1", make_getter(&MenuEntry::ParamString1, return_value_policy())) ; } namespace MFCString { namespace { struct CString_to_python_str { static PyObject* convert(CString const& s) { CString ss = s; std::string x = ss.GetBuffer(1000); return boost::python::incref(boost::python::object(x).ptr()); } }; struct CString_from_python_str { CString_from_python_str() { boost::python::converter::registry::push_back( &convertible, &construct, boost::python::type_id()); } static void* convertible(PyObject* obj_ptr) { if (!PyString_Check(obj_ptr)) return 0; return obj_ptr; } static void construct( PyObject* obj_ptr, boost::python::converter::rvalue_from_python_stage1_data* data) { const char* value = PyString_AsString(obj_ptr); if (value == 0) boost::python::throw_error_already_set(); void* storage = ( (boost::python::converter::rvalue_from_python_storage*)data)->stora ge.bytes; new (storage) CString(value); data->convertible = storage; } }; void init_module() { using namespace boost::python; boost::python::to_python_converter< CString, CString_to_python_str>(); CString_from_python_str(); } }} // namespace MFCString:: BOOST_PYTHON_MODULE(MFC) { class_("CRect") .def_readwrite("bottom", &CRect::bottom) .def_readwrite("top", &CRect::top) .def_readwrite("right", &CRect::right) .def_readwrite("left", &CRect::left) .def("Height", &CRect::Height) .def("Width", &CRect::Width) ; class_("CPoint") .def_readwrite("x", &CPoint::x) .def_readwrite("y", &CPoint::y) ; } BOOST_PYTHON_MODULE(CString) { MFCString::init_module(); } 
//////////////////////////////////////////////////////////////////////////// //////// /// PYTHON EXECUTOR CLASS ////////////////////////////////////////////////////////// class PyExecutor { public: PyExecutor(); ~PyExecutor(); CString RunStmt(DLEPRInterface * dl); PyObject * MainNamespace; }; PyExecutor::PyExecutor() { // Register the module with the interpreter if (PyImport_AppendInittab("DocuLive", initDocuLive) == -1) throw std::runtime_error("Failed to add DocuLive to the interpreter's builtin modules"); if (PyImport_AppendInittab("MFC", initMFC) == -1) throw std::runtime_error("Failed to add MFC to the interpreter's builtin modules"); if (PyImport_AppendInittab("CString", initCString) == -1) throw std::runtime_error("Failed to add CString to the interpreter's builtin modules"); Py_Initialize(); boost::python::handle<> main_module(borrowed( PyImport_AddModule("__main__"))); boost::python::handle<> main_namespace(borrowed( PyModule_GetDict(main_module.get()) )); MainNamespace = main_namespace.get(); } PyExecutor::~PyExecutor() { Py_Finalize(); } CString PyExecutor::RunStmt(DLEPRInterface * dlepr) { curr = (PyDLEPRInterface *)dlepr; //<<<--- Global variable, boost::python::handle<> result(PyRun_String( "import sys\n" "import wxPython.lib.PyCrust.PyShellApp\n" "wxPython.lib.PyCrust.PyShellApp.main()\n" "valx = 'done'\n", Py_file_input, MainNamespace, MainNamespace)); result.reset(); PyObject *p = PyRun_String("valx", Py_eval_input, MainNamespace, MainNamespace); result.release(); if (p != 0) { char *s; int i = PyArg_Parse(p, "s", &s); return _T(s); } else { return _T("NULL"); } } UINT ThreadFunc(LPVOID pParam) { static PyExecutor *x = NULL; if (x == NULL) { x = new PyExecutor(); } CString v = x->RunStmt((DLEPRInterface *)pParam); return 0; } void RunPythonViaBoost(DLEPRInterface *pInterface, int Modeless) { CWinThread * pThread = AfxBeginThread(ThreadFunc, pInterface); HANDLE hThread = pThread->m_hThread; if (Modeless == 0) ::WaitForSingleObject(hThread, INFINITE); } From nbecker at hns.com Fri Jun 20 13:39:38 2003 From: nbecker at hns.com (Neal D. Becker) Date: Fri, 20 Jun 2003 07:39:38 -0400 Subject: [C++-sig] boost-python interface numarray to c++ In-Reply-To: <16114.7060.901555.658380@gull.eos.ubc.ca> References: <16114.7060.901555.658380@gull.eos.ubc.ca> Message-ID: <200306200739.38779.nbecker@hns.com> On Thursday 19 June 2003 04:22 pm, Philip Austin wrote: > Neal D. Becker writes: > > It would probably be useful also to create arrays in python. Is it > > difficult to interface a python Numeric and/or numarray array to c++ > > stl-style iterator interface? > > See http://www.boost.org/libs/python/doc/v2/numeric.html#array-spec > Thanks for the pointer. Sorry for my slowness, but I'm afraid that it wasn't obvious to me what to do to get stl-style iterators. Could you suggest something or maybe a small example? Thanks. From dalwan01 at student.umu.se Fri Jun 20 13:47:59 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Fri, 20 Jun 2003 12:47:59 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef2f46f.2f80.16838@student.umu.se> > "dalwan01" writes: > > >> "dalwan01" writes: > >> > >> >> > >> >> Moving this to the C++-sig as it's a more > appropriate > >> >> forum... 
> >> >> > >> >> "dalwan01" writes: > >> >> > >> >> >> Daniel Wallin writes: > >> >> >> > >> >> >> > Note however that there are quite a few > differences in > >> >> >> > design, for instance for our scope's we have > been > >> >> >> > experimenting with expressions ala phoenix: > >> >> >> > > >> >> >> > namespace_("foo") > >> >> >> > [ > >> >> >> > def(..), > >> >> >> > def(..) > >> >> >> > ]; > >> >> >> > >> >> >> I considered this syntax but I am not convinced > it is an > >> >> >> advantage. It seems to have quite a few > downsides and no > >> >> >> upsides. Am I missing something? > >> >> > > >> >> > For us it has several upsides: > >> >> > > >> >> > * We can easily nest namespaces > >> >> > >> >> IMO, it optimizes for the wrong case, since > namespaces are > >> >> typically flat rather than deeply nested (see the > Zen of > >> >> Python), nor are they represented explicitly in > Python code, but > >> >> inferred from file boundaries. > >> >> > >> >> > * We like the syntax :) > >> >> > >> >> It is nice for C++ programmers, but Python > programmers at least > >> >> are very much more comfortable without the brackets. > >> > > >> > > >> >> > >> >> > * We can remove the lua_State* parameter from > >> >> > all calls to def()/class_() > >> >> > >> >> I'm not sure what that is. We handle global state > in > >> >> Boost.Python by simply keeping track of the current > module > >> >> ("state") in a global variable. Works a treat. > >> > > >> > As pointed out lua can handle multiple states, so > using global > >> > variabels doesn't strike me as a very good solution. > >> > >> I am not committed to the global variable approach nor > am I opposed > >> to the syntax. > > In fact, the more I look at the syntax of luabind, the > more I like. > Using addition for policy accumulation is cool. The > naming of the > policies is cool. It does increase compile times a bit though, but it shouldn't matter that much. > > >> > This doesn't increase compile times. > >> > >> Good. Virtual functions come with bloat of their own, > but that's > >> an implementation detail which can be mitigated. > > > > Right. The virtual functions isn't generated in the > > template, so there is very little code generated. > > I don't see how that's possible, but I guess I'll learn. We can generate the wrapper code in the template, and store function pointers in the object instead of generating a virtual function which generates the wrapper functions. I'm not sure we do it that way though, been a while since I looked at the code. > > >> >> > I think it's conformant, but I wouldn't swear on > it. > >> >> > We strip all qualifiers from the types and > specialize on > >> >> > > >> >> > by_cref<..> > >> >> > by_ref<..> > >> >> > by_ptr<..> > >> >> > by_value<..> > >> >> > > >> >> > types. > >> > >> I'm not really sure what the above means yet... I'm > certainly > >> interested in avoiding runtime dispatching if possible, > so if this > >> approach is viable for Boost.Python I'm all for it. > > > > I don't know if i fully understand the ordering issues > you > > mentioned. When we first implemented this we had > converter > > functions with this sig: > > > > T convert(type, ..) > > > > This of course introduces problems with some compilers > when > > trying to overload for T& and T* and such. So we > introduced > > a more complex type<..>.. T& -> by_ref, const T& -> > > by_cref etc. 
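[The by_value<..>/by_cref<..> tags described above are an instance of a general tag-dispatch trick: overload on an empty tag type instead of on T, T& and T const& directly, so weaker compilers never have to partially order reference overloads. The sketch below only illustrates that idea with made-up names and a char const* source in place of an interpreter state; it is not the luabind code.]

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // One empty tag template per parameter category (cf. by_value<..>,
    // by_cref<..> above); the tag, not T itself, drives overload resolution.
    template <class T> struct by_value {};
    template <class T> struct by_cref {};

    inline int convert(char const* source, by_value<int>)
    {
        return std::atoi(source);
    }

    inline std::string convert(char const* source, by_cref<std::string>)
    {
        return std::string(source);
    }

    int main()
    {
        std::cout << convert("42", by_value<int>()) << "\n"
                  << convert("hi", by_cref<std::string>()) << "\n";
    }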
> > The ordering issues basically have to do with the > requirement that > classes be wrapped and converters defined before they are > used, > syntactically speaking. That caused all kinds of > inconveniences in > BPLv1 when interacting classes were wrapped. OTOH I bet > it's possible > to implicltly choose conversion methods for classes which > you haven't > seen a wrapper for, so maybe that's less of a problem than > I'm making > it out to be. Ok. In BPLv1 you generated converter functions using friend functions in templates though, and this was the cause for these ordering issues? > > >> How will this work when multiple extension modules need > to > >> manipulate the same types? > > > > I don't know. I haven't given that much thought. Do you > see > > any obvious issues? > > > Hmm, maybe I'm on drugs. The biggest problems in BPLv1 in > this area > were because the converters for a given class were > generated, > essentially, by its class_<...> instantiation. But I have > already > said that from-python conversion for a given wrapped class > is normally > done statically. > > User-defined converters still need to be exposed to all > the extensions > which use the types, somehow. It would be better not to > replicate > that code. I haven't thought about that at all. But it's a good point, and it's impossible to not replicate the code with static dispatch. > >> How do *add* a way to convert from Python type A to C++ > type B > >> without masking the existing conversion from Python > type Y to C++ > >> type Z? > > > > I don't understand. How are B and Z related? Why would a > > conversion function for B mask conversions to Z? > > Sorry, B==Z ;-) Ah, ok. Well, this isn't finished either. We have a (unfinished) system which works like this: template<> struct implicit_conversion<0, B> : from {}; template<> struct implicit_conversion<1, B> : from {}; Of course, this has all the problems with static dispatch as well.. > > >> > Lua is used a lot in game developement, and game > developers tend to > >> > care very much about every extra cycle. Even an extra > function call > >> > via a function pointer could make difference for > those users. > >> > >> I'm not convinced yet. Just adding a tiny bit of lua > code next to > >> any invocation of a wrapped function would typically > consume much > >> more than that. > >> > >> > We like the generated bindings to be almost equal in > speed to one > >> > that is hand written. > >> > >> Me too; I just have serious doubts that once you factor > in > >> everything else that you want going on (e.g. derived > <==> base > >> conversions), the ability to dynamically register > conversions has a > >> significant cost. > > > > You might be right. I'll investigate how runtime > dispatch > > would affect luabind the next couple of days, in > particular > > I will look at what this would do to our policy system. > > OK. Incidentally, I find much of what you're doing very > appealing, > and I think that if we could find a way to share a lot of > technology > it would be fantastic. I think so too. I'm looking around in BPL's conversion system now trying to understand how I incorporate it in luabind. 
-- Daniel Wallin From dave at boost-consulting.com Fri Jun 20 13:54:09 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 20 Jun 2003 07:54:09 -0400 Subject: [C++-sig] Re: Exception second time loaded References: <5B5544F322E5D211850F00A0C91DEF3B050DBE40@osll007a.siemens.no> Message-ID: Kirsebom Nikolai writes: > Hi, > I have some questions: > How do I is it possible to search the archives? http://www.boost.org/more/mailing_lists.htm#cplussig Shows many available archives. Each one has its own search method. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 20 13:57:26 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 20 Jun 2003 07:57:26 -0400 Subject: [C++-sig] Re: Exception second time loaded References: <5B5544F322E5D211850F00A0C91DEF3B050DBE40@osll007a.siemens.no> Message-ID: Kirsebom Nikolai writes: > I have defined the PyExecutor class, CString<-->python string converter (see > entry from Ralf W. Grosse-Kunstleve 27th May), > code shown below. > The function "RunPythonViaBoost" is called from the startup function of a > DLL, loaded by the actual application. > > If I make use of the 'getit' function, the system runs into an exception the > second time it is started. The statement in RunStmt method > being executed when the exception occurs is the second statement > (PyRun_String). > > The stack-frame in the debugger is: > > boost_python_debug.dll!boost::python::throw_error_alread_set() Line 58 > DLEPRPythonDLd.dll!boost::python::expect_non_null(_object * x=0x00000000) > Line 45 + 0x8 > DLEPRPythonDLd.dll!boost::python::detail::manage_ptr(_object * p=0x00000000, > ...) Line 57 + 0x9 > DLEPRPythonDLd.dll!boost::python::handle<_object>::handle<_object>(_object * > p=0x00000000) Line 80 + 0x30 > DLEPRPythonDLd.dll!PyExecutor::RunStmt(DLEPRInterface * dlepr=0x008be388) > Line 336 + 0x31 PyRunString returned 0, meaning there was an exception raised in Python code somewhere. You should allow the exception to propagate so that Python can report the error; that's where all the information is. See the main() function in libs/python/test/embedding.cpp for how this can be handled. -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Fri Jun 20 14:06:39 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Fri, 20 Jun 2003 05:06:39 -0700 (PDT) Subject: [C++-sig] Exception second time loaded In-Reply-To: <5B5544F322E5D211850F00A0C91DEF3B050DBE40@osll007a.siemens.no> Message-ID: <20030620120639.24982.qmail@web20204.mail.yahoo.com> --- Kirsebom Nikolai wrote: > PyExecutor::~PyExecutor() > { > Py_Finalize(); > } Recently someone was mentioning a "Py_Finalize bug". Maybe you want to look for that in the archives. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Fri Jun 20 14:13:35 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 20 Jun 2003 08:13:35 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef2f46f.2f80.16838@student.umu.se> Message-ID: I just love how GNUs is able to straighten out that nasty Outlook (Express) wrapping. Unfortunately it looks like you're using webmail, or you could use OE-QuoteFix :( "Daniel Wallin" writes: >> In fact, the more I look at the syntax of luabind, the more I like. >> Using addition for policy accumulation is cool. The naming of the >> policies is cool. 
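[Going back to the "Re: Exception second time loaded" reply above: the usual way to let the Python error propagate, and still get the traceback reported, is to catch error_already_set around the handle<> that wraps PyRun_String. A minimal sketch; the statement being run and the namespace argument are placeholders.]

    #include <boost/python.hpp>

    void run_statement(PyObject* main_namespace)
    {
        using namespace boost::python;
        try
        {
            // handle<> throws error_already_set when PyRun_String returns 0.
            handle<> result(PyRun_String(
                "print 'hello'\n",   // placeholder statement
                Py_file_input, main_namespace, main_namespace));
        }
        catch (error_already_set const&)
        {
            PyErr_Print();   // print the Python traceback, then continue
        }
    }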
> > It does increase compile times a bit though What, overloading '+'? I don't think it's significant. > but it shouldn't matter that much. Agreed. >> >> >> > This doesn't increase compile times. >> >> >> >> Good. Virtual functions come with bloat of their own, but that's >> >> an implementation detail which can be mitigated. >> > >> > Right. The virtual functions isn't generated in the >> > template, so there is very little code generated. >> >> I don't see how that's possible, but I guess I'll learn. > > We can generate the wrapper code in the template, and store > function pointers in the object instead of generating a > virtual function which generates the wrapper functions. Well, IIUC, that means you have to treat def() the same when it appears inside a class [...] as when it's inside a module [ ... ], since there's no delayed evaluation. ah, wait: you don't use [ ... ] for class, which gets you off the hook. but what about nested classes? Consistency would dictate the use of [ ... ]. > I'm not sure we do it that way though, been a while since I looked > at the code. OK. >> The ordering issues basically have to do with the requirement that >> classes be wrapped and converters defined before they are used, >> syntactically speaking. That caused all kinds of inconveniences in >> BPLv1 when interacting classes were wrapped. OTOH I bet it's >> possible to implicltly choose conversion methods for classes which >> you haven't seen a wrapper for, so maybe that's less of a problem >> than I'm making it out to be. > > Ok. In BPLv1 you generated converter functions using friend > functions in templates though, and this was the cause for > these ordering issues? That was one factor. The other factor of course was that each class which needed to be converted from Python used its own conversion function, where a generalized procedure for converting classes will do perfectly well. There is still an issue of to-python conversions for wrapped classes; different ones get generated depending on how the class is "held". I'm not convinced that dynamically generating the smart pointer conversions is needed, but conversions for virtual function dispatching subclass may be. >> >> How will this work when multiple extension modules need to >> >> manipulate the same types? >> > >> > I don't know. I haven't given that much thought. Do you see >> > any obvious issues? >> >> >> Hmm, maybe I'm on drugs. The biggest problems in BPLv1 in this >> area were because the converters for a given class were generated, >> essentially, by its class_<...> instantiation. But I have already >> said that from-python conversion for a given wrapped class is >> normally done statically. >> >> User-defined converters still need to be exposed to all the >> extensions which use the types, somehow. It would be better not to >> replicate that code. > > I haven't thought about that at all. But it's a good point, and it's > impossible to not replicate the code with static dispatch. Right. BPLv1 used a nonuniform system of explicit importing converters from other modules (thanks, Ralf!) but we went to uniform dynamic dispatching for v2. >> >> How do *add* a way to convert from Python type A to C++ type B >> >> without masking the existing conversion from Python type Y to C++ >> >> type Z? >> > >> > I don't understand. How are B and Z related? Why would a >> > conversion function for B mask conversions to Z? >> >> Sorry, B==Z ;-) > > Ah, ok. Well, this isn't finished either. 
We have a > (unfinished) system which works like this: > > template<> > struct implicit_conversion<0, B> : from {}; > template<> > struct implicit_conversion<1, B> : from {}; > > Of course, this has all the problems with static dispatch as > well.. And with multiple implicit conversions being contributed by multiple people. Also note that in many environments there's no guarantee that different extension modules won't share a link namespace, so you have to watch out for ODR problems. >> >> Me too; I just have serious doubts that once you factor in >> >> everything else that you want going on (e.g. derived <==> base >> >> conversions), the ability to dynamically register conversions has a >> >> significant cost. >> > >> > You might be right. I'll investigate how runtime dispatch >> > would affect luabind the next couple of days, in particular >> > I will look at what this would do to our policy system. >> >> OK. Incidentally, I find much of what you're doing very appealing, >> and I think that if we could find a way to share a lot of >> technology it would be fantastic. > > I think so too. I'm looking around in BPL's conversion system now > trying to understand how I incorporate it in luabind. I am not convinced I got it 100% right. You've forced me to think about the issues again in a new way. It may be that the best answer blends our two approaches. -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Fri Jun 20 14:16:29 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Fri, 20 Jun 2003 05:16:29 -0700 (PDT) Subject: [C++-sig] Problem building Boost.Python In-Reply-To: <20030619220510.42402.qmail@web40513.mail.yahoo.com> Message-ID: <20030620121629.53760.qmail@web20205.mail.yahoo.com> --- Wenning Qiu wrote: > Red Hat Linux 8.0 > gcc (GCC) 3.2 20020903 I am using the same platform all the time. > Python.2.2.3 built from source locally. But only with Python 2.2.1 and 2.2.2. I have not tried 2.2.3. > qiuw01$ cd boost_1_30_0/libs/python/build > > qiuw01$ cd boost_1_30_0/libs/python/example/tutorial I don't use bjam very much, but cd boost_1_30_0/libs/python/test bjam ... definitely works here. FWIW, here is the command that I use (without messing around with environment variables): bjam -sPYTHON_ROOT=/usr/local_cci/Python-2.2.1 -sTOOLS=gcc -sPYTHON_VERSION=2.2 -sBUILD=debug -sRUN_ALL_TESTS=1 Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From paustin at eos.ubc.ca Fri Jun 20 19:03:46 2003 From: paustin at eos.ubc.ca (Philip Austin) Date: Fri, 20 Jun 2003 10:03:46 -0700 Subject: [C++-sig] boost-python interface numarray to c++ In-Reply-To: <200306200739.38779.nbecker@hns.com> References: <16114.7060.901555.658380@gull.eos.ubc.ca> <200306200739.38779.nbecker@hns.com> Message-ID: <16115.15986.934133.809142@gull.eos.ubc.ca> Neal D. Becker writes: > On Thursday 19 June 2003 04:22 pm, Philip Austin wrote: > > Neal D. Becker writes: > > > It would probably be useful also to create arrays in python. Is it > > > difficult to interface a python Numeric and/or numarray array to c++ > > > stl-style iterator interface? > > > > See http://www.boost.org/libs/python/doc/v2/numeric.html#array-spec > > > > Thanks for the pointer. Sorry for my slowness, but I'm afraid that it wasn't > obvious to me what to do to get stl-style iterators. Could you suggest > something or maybe a small example? Thanks. 
Well, I'm not exactly sure what you mean by "get stl-style iterators" but if you want to pass a Numpy array to C++ and construct an array that supports iterators (like boost::ublas or mtl) using its data pointer, then our approach looks something like this: a C++ histogram module, on the python side: import hist outDict=hist.hist(theArray,numbins,themin,themax) on the C++ side: using namespace boost::python; namespace nbpl = num_util; dict histo::hist(numeric::array x, int numbins, float mindata, float maxdata) { nbpl::check_rank(x, 1); nbpl::check_type(x, PyArray_FLOAT); float* dataPtr = (float*) nbpl::data(x); .... both mtl and ublas have constructors that will make an array from dataPtr, and let Python manage the memory. Regards, Phil From janders at users.sourceforge.net Fri Jun 20 20:40:34 2003 From: janders at users.sourceforge.net (Jon Anderson) Date: Fri, 20 Jun 2003 13:40:34 -0500 Subject: [C++-sig] shared_ptr and inheritance Message-ID: <200306201340.34691.janders@users.sf.net> I'm trying to pass a shared_ptr of a derived class into a method that is expecting a shared_ptr of the base class, and am getting type errors in python. Specifically, my classes are defined as: ///////////////////////////// class Base; class Derived; class Container; typedef boost::shared_ptr BasePtr; typedef boost::shared_ptr DerivedPtr; typedef boost::shared_ptr ContainerPtr; class Base { public: static BasePtr create() { BasePtr b( new Base() ); return b; }; Base() {}; }; class Derived : public Base { public: static DerivedPtr create() { DerivedPtr b( new Derived() ); return b; }; Derived() {}; }; class Container { public: Container() {}; void add( BasePtr b ) { }; }; ///////////////////////////// I'm declaring the bindings by: /////// class_ b( "Base", no_init ); b.def( "create", &Base::create ); b.staticmethod( "create" ); class_() > d( "Derived", no_init ); d.def( "create", &Derived::create ); d.staticmethod( "create" ); class_ c( "Container"); c.def( "add", &Container::add ); /////// But the following python fails: b = Base.create() d = Derived.create() c = Container() c.add(b) c.add(d) The tutorial has a similar example, but it doesn't use smart_ptrs. I just assumed they would work as well. Am I missing something? Thanks, Jon From dave at boost-consulting.com Fri Jun 20 20:50:41 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 20 Jun 2003 14:50:41 -0400 Subject: [C++-sig] Re: shared_ptr and inheritance References: <200306201340.34691.janders@users.sf.net> Message-ID: Jon Anderson writes: > I'm trying to pass a shared_ptr of a derived class into a method that is > expecting a shared_ptr of the base class, and am getting type errors in > python. Are you using the latest CVS state? -- Dave Abrahams Boost Consulting www.boost-consulting.com From brett.calcott at paradise.net.nz Sat Jun 21 00:28:50 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Sat, 21 Jun 2003 10:28:50 +1200 Subject: [C++-sig] Re: shared_ptr and inheritance References: <200306201340.34691.janders@users.sf.net> Message-ID: > > > I'm trying to pass a shared_ptr of a derived class into a method that is > > expecting a shared_ptr of the base class, and am getting type errors in > > python. > > Are you using the latest CVS state? > Should this magically work now? I thought you had to use : implicitly_convertible(); in your module def. 
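[Spelling out that suggestion with template arguments (the list archive strips anything in angle brackets): a sketch against the Base/Derived/Container classes and the BasePtr/DerivedPtr shared_ptr typedefs from the post above, assuming those declarations are visible; whether the explicit registration is still needed with current CVS is what the follow-up addresses.]

    #include <boost/python.hpp>
    #include <boost/shared_ptr.hpp>

    // Assumes Base, Derived, Container and the BasePtr/DerivedPtr typedefs
    // from the post above are declared in this translation unit.
    BOOST_PYTHON_MODULE(example)   // hypothetical module name
    {
        using namespace boost::python;

        class_<Base, BasePtr>("Base", no_init)
            .def("create", &Base::create)
            .staticmethod("create");

        class_<Derived, DerivedPtr, bases<Base> >("Derived", no_init)
            .def("create", &Derived::create)
            .staticmethod("create");

        class_<Container>("Container")
            .def("add", &Container::add);

        // The registration Brett refers to: lets a shared_ptr<Derived>
        // convert to the shared_ptr<Base> that Container::add expects.
        implicitly_convertible<DerivedPtr, BasePtr>();
    }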
Brett From dave at boost-consulting.com Sat Jun 21 01:14:08 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 20 Jun 2003 19:14:08 -0400 Subject: [C++-sig] Re: shared_ptr and inheritance References: <200306201340.34691.janders@users.sf.net> Message-ID: "Brett Calcott" writes: >> >> > I'm trying to pass a shared_ptr of a derived class into a method that is >> > expecting a shared_ptr of the base class, and am getting type errors in >> > python. >> >> Are you using the latest CVS state? >> > > Should this magically work now? I thought you had to use : > > implicitly_convertible(); > > in your module def. Works for me; I just checked in some tests which prove it. -- Dave Abrahams Boost Consulting www.boost-consulting.com From brett.calcott at paradise.net.nz Sun Jun 22 07:39:20 2003 From: brett.calcott at paradise.net.nz (Brett Calcott) Date: Sun, 22 Jun 2003 17:39:20 +1200 Subject: [C++-sig] Re: Refactoring, Compilation Speed, Pyste, Lua/Ruby/JavaScript... References: Message-ID: > > There is (http://www.python.org/cgi-bin/moinmoin/boost_2epython), but > note that it doesn't do ReST and I much prefer CVS for this sort of > thing. I hate editing in a webpage, and I like to be able to apply > all the usual revision control tools. I'd happily give you Boost CVS > access if you wanted to do it that way. If you have a strong > preference for Wiki it's no problem, since you'll be doing most of the > interaction anyway. You're right. Editing in a webpage has all the power of windows notepad. I'll cut and paste into a document, and we can sort out CVS access. > > > Where shall we start :) > > What are you interested in? Let me have a think, and I'll start a new thread. I'm in the middle of packing, so it might be a few days away. I look forward to you filling my brain up :) Cheers, Brett From dalwan01 at student.umu.se Sun Jun 22 13:36:04 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Sun, 22 Jun 2003 12:36:04 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef594a4.543f.16838@student.umu.se> > > I just love how GNUs is able to straighten out that nasty > Outlook > (Express) wrapping. Unfortunately it looks like you're > using > webmail, or you could use OE-QuoteFix :( I am using webmail, my home connection is down so I either use this or configure mutt. :) > > "Daniel Wallin" writes: > > >> In fact, the more I look at the syntax of luabind, the > more I like. > >> Using addition for policy accumulation is cool. The > naming of the > >> policies is cool. > > > > It does increase compile times a bit though > > What, overloading '+'? I don't think it's significant. I meant composing typelist's with '+' opposed to composing the typelist manually like in BPL. > >> > >> >> > This doesn't increase compile times. > >> >> > >> >> Good. Virtual functions come with bloat of their > own, but that's > >> >> an implementation detail which can be mitigated. > >> > > >> > Right. The virtual functions isn't generated in the > >> > template, so there is very little code generated. > >> > >> I don't see how that's possible, but I guess I'll > learn. > > > > We can generate the wrapper code in the template, and > store > > function pointers in the object instead of generating a > > virtual function which generates the wrapper functions. > > Well, IIUC, that means you have to treat def() the same > when it > appears inside a class [...] as when it's inside a module > [ ... ], > since there's no delayed evaluation. > > ah, wait: you don't use [ ... 
] for class, which gets > you off > the hook. > > but what about nested classes? Consistency would > dictate the > use of [ ... ]. Right, we don't have nested classes. We have thought about a few solutions: class_("A") .def(..) [ class_("inner") .def(..) ] .def(..) ; Or reusing namespace_: class_("A"), namespace_("A") [ class_(..) ] We thought that nested classes is less common than nested namespaces. > >> The ordering issues basically have to do with the > requirement that > >> classes be wrapped and converters defined before they > are used, > >> syntactically speaking. That caused all kinds of > inconveniences in > >> BPLv1 when interacting classes were wrapped. OTOH I > bet it's > >> possible to implicltly choose conversion methods for > classes which > >> you haven't seen a wrapper for, so maybe that's less of > a problem > >> than I'm making it out to be. > > > > Ok. In BPLv1 you generated converter functions using > friend > > functions in templates though, and this was the cause > for > > these ordering issues? > > That was one factor. The other factor of course was that > each class > which needed to be converted from Python used its own > conversion > function, where a generalized procedure for converting > classes will do > perfectly well. Right. We have a general conversion function for all user-defined types. More on this later. > > There is still an issue of to-python conversions for > wrapped classes; > different ones get generated depending on how the class is > "held". > I'm not convinced that dynamically generating the smart > pointer > conversions is needed, but conversions for virtual > function > dispatching subclass may be. I don't understand how this has anything to do with ordering. Unless you mean that you need to register the types before executing python/lua code that uses them, which seems pretty obvious. :) > > >> >> How will this work when multiple extension modules > need to > >> >> manipulate the same types? > >> > > >> > I don't know. I haven't given that much thought. Do > you see > >> > any obvious issues? > >> > >> > >> Hmm, maybe I'm on drugs. The biggest problems in BPLv1 > in this > >> area were because the converters for a given class were > generated, > >> essentially, by its class_<...> instantiation. But I > have already > >> said that from-python conversion for a given wrapped > class is > >> normally done statically. > >> > >> User-defined converters still need to be exposed to all > the > >> extensions which use the types, somehow. It would be > better not to > >> replicate that code. > > > > I haven't thought about that at all. But it's a good > point, and it's > > impossible to not replicate the code with static > dispatch. > > Right. BPLv1 used a nonuniform system of explicit > importing > converters from other modules (thanks, Ralf!) but we went > to uniform > dynamic dispatching for v2. > > >> >> How do *add* a way to convert from Python type A to > C++ type B > >> >> without masking the existing conversion from Python > type Y to C++ > >> >> type Z? > >> > > >> > I don't understand. How are B and Z related? Why > would a > >> > conversion function for B mask conversions to Z? > >> > >> Sorry, B==Z ;-) > > > > Ah, ok. Well, this isn't finished either. We have a > > (unfinished) system which works like this: > > > > template<> > > struct implicit_conversion<0, B> : from {}; > > template<> > > struct implicit_conversion<1, B> : from {}; > > > > Of course, this has all the problems with static > dispatch as > > well.. 
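[For comparison, the dynamic counterpart of the implicit_conversion<0, B> : from<A> specializations quoted above is a call made at module-initialization time in Boost.Python, so independently written modules can each add conversions without specializing a common template. A sketch with hypothetical types A and B:]

    #include <boost/python.hpp>

    struct A {};
    struct B
    {
        B() {}
        B(A const&) {}   // the implicit conversion being exposed
    };

    BOOST_PYTHON_MODULE(example)   // hypothetical module name
    {
        using namespace boost::python;

        class_<A>("A");
        class_<B>("B")
            .def(init<A const&>());

        // Registered in the runtime converter registry rather than chosen
        // by numbering template specializations at compile time.
        implicitly_convertible<A, B>();
    }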
> > And with multiple implicit conversions being contributed > by multiple > people. Also note that in many environments there's no > guarantee that > different extension modules won't share a link namespace, > so you have > to watch out for ODR problems. Right. We didn't really intend for luabind to be used in this way, but rather for binding closed modules. It seems to me like this can't be very common thing to do though, at least not with lua. I have very little insight in how python is used. > > >> >> Me too; I just have serious doubts that once you > factor in > >> >> everything else that you want going on (e.g. derived > <==> base > >> >> conversions), the ability to dynamically register > conversions has a > >> >> significant cost. > >> > > >> > You might be right. I'll investigate how runtime > dispatch > >> > would affect luabind the next couple of days, in > particular > >> > I will look at what this would do to our policy > system. > >> > >> OK. Incidentally, I find much of what you're doing > very appealing, > >> and I think that if we could find a way to share a lot > of > >> technology it would be fantastic. > > > > I think so too. I'm looking around in BPL's conversion > system now > > trying to understand how I incorporate it in luabind. > > I am not convinced I got it 100% right. You've forced me > to think > about the issues again in a new way. It may be that the > best answer > blends our two approaches. Your converter implementation with static ref's to the registry entry is really clever. Instead of doing this we have general converters which is used to convert all user-defined types. To do this we need a map<..> lookup to find the appropriate converter and this really sucks. As mentioned before, lua can have multiple states, so it would be cool if the converters would be bound to the state somehow. This would probably mean we would need to store a hash table in the registry entries and hash the lua state pointer (or something associated with the state) though, and I don't know if there is sufficient need for the feature to introduce this overhead. I don't know if I understand the issues with multiple extension modules. You register the converters in a map with the typeinfo as key, but I don't understand how this could ever work between dll's. Do you compare the typenames? If so, this could never work between modules compiled with different compilers. So it seems to me like this feature can't be that useful, what am I missing? Anyway, I find your converter system more appealing than ours. There are some issues which need to be taken care of; We choose best match, not first match, when trying different overloads. This means we need to keep the storage for the converter on the stack of a function that is unaware of the converter size (at compile time). So we need to either have a fixed size buffer on the stack, and hope it works, or allocate the storage at runtime. For clarification: void dispatcher(..) { *storage here* try all overloads call best overload } -- Daniel Wallin From rwgk at yahoo.com Sun Jun 22 15:44:03 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Sun, 22 Jun 2003 06:44:03 -0700 (PDT) Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <3ef594a4.543f.16838@student.umu.se> Message-ID: <20030622134403.8809.qmail@web20206.mail.yahoo.com> --- Daniel Wallin wrote: > Right. We didn't really intend for luabind to be used in > this way, but rather for binding closed modules. 
> It seems to me like this can't be very common thing to do though, at
> least not with lua. I have very little insight in how python is used.

Boost.Python's "cross-module" feature is absolutely essential for us.
Unfortunately my cross-module web page seems to have fallen through the
cracks in the V1->V2 transition, but here it is, resurrected:

http://cci.lbl.gov/~rwgk/boost_1_28_0/libs/python/doc/cross_module.html

Adding to this: imagine you had to link all extensions statically into
Python. This is analogous to not having cross-module support. Maybe it is
not important if you don't expect others to extend your system, but such
a barrier against natural growth is unacceptable for us.

Anecdotal comment: If you go way back in the Boost mailing list (4th
quarter of 2000) you can see that David wasn't very fond of the
cross-module idea at all :-)

Regarding the "static vs. dynamic dispatch" discussion: It seems to me
(without having thought it through) that static dispatch is associated
with explicitly importing and exporting converters a la Boost.Python V1
(see cross_module.html referenced above). This made building extensions
quite cumbersome as our system got bigger. In practice it was a *big*
relief when we upgraded to Boost.Python V2. The dynamic dispatch allowed
me to be very generous with introducing a large number of "convenience
converters" which would have been impractical in V1. To get an idea look
at this fragment of our system for wrapping multi-dimensional C++ arrays:

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/cctbx/scitbx/include/scitbx/array_family/boost_python/flex_fwd.h?rev=1.4&content-type=text/vnd.viewcvs-markup

Each of the 20 C++ types in the signatures of the friend functions in the
flew_fwd<> struct (which is just a workaround for MIPSpro but nice to show
all the types in one place) needs custom from_python converters (plural!)
*for each* T, of which we have 14 right now. Due to the dynamic dispatch I
can define as many converters as I need *in one place* and use them "just
like that" in any other extension module. In contrast, with Boost.Python
V1 I had to bring all the right converters into each C++ translation unit.

Regarding efficiency considerations: Boost.Python's (V2) conversions are
amazingly fast, on the order of 100000 per second on a recent Intel/Linux
system (I say "amazingly" because I know, to a certain degree, how
involved the C++ code is that makes this happen). But anyway, I believe if
you have to cross the language boundary 100000 times you are making a big
mistake. If you have to do something 100000 times it can only be in a loop
of some form. Simplified:

  for i in xrange(100000):
      call_wrapped_function(some_argument[i])

Our approach is to take full advantage of Boost.Python's ease of use in
wrapping additional functions. E.g. in C++:

  void vectorized_function(array& a)
  {
    for(std::size_t i=0;i<a.size();i++) { /* ... process a[i] ... */ }
  }

To summary my practical experience: Maybe (?) static dispatch is more
efficient if most of your loops are in the interpreted layer, but it is
vastly more efficient if you push the rate-limiting loops down into the
compiled layer. This requires wrapping arrays of user-defined types which
is much easier handled in a system based on dynamic dispatch. So overall
dynamic dispatch wins out by a large margin.

Ralf

From dalwan01 at student.umu.se Sun Jun 22 20:39:51 2003
From: dalwan01 at student.umu.se (Daniel Wallin)
Date: Sun, 22 Jun 2003 19:39:51 +0100
Subject: [C++-sig] Re: Interest in luabind
Message-ID: <3ef5f7f7.2e08.16838@student.umu.se>

> --- Daniel Wallin wrote:
> > Right. We didn't really intend for luabind to be used in
> > this way, but rather for binding closed modules. It seems to
> > me like this can't be very common thing to do though, at
> > least not with lua. I have very little insight in how python
> > is used.
>
> Boost.Python's "cross-module" feature is absolutely essential for us.
>
>
> To summary my practical experience: Maybe (?) static dispatch is more
> efficient if most of your loops are in the interpreted layer, but it is
> vastly more efficient if you push the rate-limiting loops
> down into the compiled layer.
This requires wrapping arrays of > user-defined types > which is much easier handled in a system based on dynamic > dispatch. So > overall dynamic dispatch wins out by a large margin. I mostly agree with everything you say. However, it may still be of interest to be able to bypass the dynamic dispatch system and use converters with static dispatch. I fail to see how wrapping arrays of user-defined types is easier with dynamic dispatch though. How do you import the converters from one module to another? And how does type_info objects compare between dll boundries? -- Daniel Wallin From dave at boost-consulting.com Sun Jun 22 22:11:48 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 22 Jun 2003 16:11:48 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef5f7f7.2e08.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> --- Daniel Wallin wrote: >> >> > Right. We didn't really intend for luabind to be used in this >> > way, but rather for binding closed modules. It seems to me like >> > this can't be very common thing to do though, at least not with >> > lua. I have very little insight in how python is used. >> >> Boost.Python's "cross-module" feature is absolutely essential for >> us. I want to lean a little bit in luabind's direction here. One thing we've been discussing on-and-off is how we can provide some "scoping" for conversions (especially to-python conversions, of which you get only one per type), to prevent different modules from colliding in unpleasant ways. While sharing conversions and types across modules is important for some applications, it's clear that in many situations it's undesirable. For example, two independent modules may be compiled with different compilers, or different alignment options. You just don't want those stepping on each others' toes. Furthermore, on many systems, when two extension modules link to the same shared library, their link symbol spaces are automatically shared, so the symbol insulation one normally gets by being in a separate shared object accessed via dlopen is lost. It seems to me that for groups interested in sharing conversions it might be reasonable to have them to build a shared Boost.Python library for their project, and have every module in the project link to it. That would provide some degree of isolation. Is it important for an extension module author to want to work with types from two packages that have been wrapped in that way? That would imply linking to both of their BPL libraries, which is impossible, unless we find a way to import converters from each without actually using them. I am envisioning a flexible system with at least one dynamic and probably two static library configurations that can be combined to achieve the desired sharing/isolation. >> To summary my practical experience: Maybe (?) static dispatch is >> more efficient if most of your loops are in the interpreted layer, >> but it is vastly more efficient if you push the rate-limiting loops >> down into the compiled layer. This requires wrapping arrays of >> user-defined types which is much easier handled in a system based >> on dynamic dispatch. So overall dynamic dispatch wins out by a >> large margin. > > I mostly agree with everything you say. However, it may > still be of interest to be able to bypass the dynamic > dispatch system and use converters with static dispatch. I > fail to see how wrapping arrays of user-defined types is > easier with dynamic dispatch though. Me too. Comments, Ralf? 
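The "push the rate-limiting loops down into the compiled layer" advice quoted above can be made concrete with a small, self-contained Boost.Python sketch. The names process_one, process_many and the module name are invented for illustration, and a real application like Ralf's would loop over a wrapped C++ array type rather than a Python list.

    #include <boost/python.hpp>

    double process_one(double x) { return x * x; }

    // Accept and return ordinary Python lists so no extra converters are
    // needed; the loop over the elements runs entirely in C++.
    boost::python::list process_many(boost::python::list const& items)
    {
        namespace py = boost::python;
        py::list result;
        long n = py::len(items);
        for (long i = 0; i < n; ++i)
            result.append(process_one(py::extract<double>(items[i])));
        return result;
    }

    BOOST_PYTHON_MODULE(vectorized)
    {
        boost::python::def("process_many", process_many);
    }

The Python/C++ boundary is then crossed once per call instead of once per element, which is the point being made about where the rate-limiting loop should live.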
> How do you import the converters from one module to another? The system does that; the demand for a converter for a given type causes the converter chain in the global converter registry to be bound to a reference at static initialization time. Since all modules that work with the same type are referring to the same registry entry, it "just works" > And how does type_info objects compare between dll boundries? On most platforms, just fine because we've normalized them using boost/python/type_id.hpp. A few platforms (e.g. SGI) still have problems, though. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sun Jun 22 22:27:13 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 22 Jun 2003 16:27:13 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef594a4.543f.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: I wrote: >> >> In fact, the more I look at the syntax of luabind, the more I like. >> >> Using addition for policy accumulation is cool. The naming of the >> >> policies is cool. >> > >> > It does increase compile times a bit though >> >> What, overloading '+'? I don't think it's significant. > > I meant composing typelist's with '+' opposed to composing > the typelist manually like in BPL. I think we agree that's probably minor. >> >> >> > This doesn't increase compile times. >> >> >> >> >> >> Good. Virtual functions come with bloat of their own, but that's >> >> >> an implementation detail which can be mitigated. >> >> > >> >> > Right. The virtual functions isn't generated in the >> >> > template, so there is very little code generated. >> >> >> >> I don't see how that's possible, but I guess I'll learn. >> > >> > We can generate the wrapper code in the template, and store >> > function pointers in the object instead of generating a >> > virtual function which generates the wrapper functions. >> >> Well, IIUC, that means you have to treat def() the same when it >> appears inside a class [...] as when it's inside a module [ ... ], >> since there's no delayed evaluation. >> >> ah, wait: you don't use [ ... ] for class, which gets you off >> the hook. >> >> but what about nested classes? Consistency would dictate the >> use of [ ... ]. > > Right, we don't have nested classes. We have thought about a > few solutions: > > class_("A") > .def(..) > [ > class_("inner") > .def(..) > ] > .def(..) > ; Looks pretty! > Or reusing namespace_: > > class_("A"), > namespace_("A") > [ class_(..) ] > > We thought that nested classes is less common than nested > namespaces. Either one works; I like the former, but I think you ought to be able to do both. >> >> The ordering issues basically have to do with the requirement that >> >> classes be wrapped and converters defined before they are used, >> >> syntactically speaking. That caused all kinds of inconveniences in >> >> BPLv1 when interacting classes were wrapped. OTOH I bet it's >> >> possible to implicltly choose conversion methods for classes which >> >> you haven't seen a wrapper for, so maybe that's less of a problem >> >> than I'm making it out to be. >> > >> > Ok. In BPLv1 you generated converter functions using friend >> > functions in templates though, and this was the cause for >> > these ordering issues? >> >> That was one factor.
The other factor of course was that each >> class which needed to be converted from Python used its own >> conversion function, where a generalized procedure for converting >> classes will do perfectly well. > > Right. We have a general conversion function for all > user-defined types. We actually have something similar, plus dynamic lookup **as a fallback in case the usual method doesn't work** > More on this later. OK >> There is still an issue of to-python conversions for wrapped >> classes; different ones get generated depending on how the class is >> "held". I'm not convinced that dynamically generating the smart >> pointer conversions is needed, but conversions for virtual function >> dispatching subclass may be. > > I don't understand how this has anything to do with ordering. Unless > you mean that you need to register the types before executing > python/lua code that uses them, which seems pretty obvious. :) It has nothing to do with ordering; I'm just thinking out loud about how much dynamic lookup is actually buying in Boost.Python. >> >> >> How do *add* a way to convert from Python type A to C++ type B >> >> >> without masking the existing conversion from Python type Y to C++ >> >> >> type Z? >> >> > >> >> > I don't understand. How are B and Z related? Why would a >> >> > conversion function for B mask conversions to Z? >> >> >> >> Sorry, B==Z ;-) >> > >> > Ah, ok. Well, this isn't finished either. We have a >> > (unfinished) system which works like this: >> > >> > template<> >> > struct implicit_conversion<0, B> : from {}; >> > template<> >> > struct implicit_conversion<1, B> : from {}; >> > >> > Of course, this has all the problems with static dispatch as >> > well.. >> >> And with multiple implicit conversions being contributed by >> multiple people. Also note that in many environments there's no >> guarantee that different extension modules won't share a link >> namespace, so you have to watch out for ODR problems. > > Right. We didn't really intend for luabind to be used in this way, > but rather for binding closed modules. I think I'm saying that on some systems (not many), there's no such thing as a "closed module". If they're loaded in the same process, they share a link namespace :( >> > I think so too. I'm looking around in BPL's conversion system now >> > trying to understand how I incorporate it in luabind. >> >> I am not convinced I got it 100% right. You've forced me to think >> about the issues again in a new way. It may be that the best >> answer blends our two approaches. > > Your converter implementation with static ref's to the > registry entry is really clever. Thanks! > Instead of doing this we have general converters which is used to > convert all user-defined types. I have the same thing for most from_python conversions; the registry is only used as a fallback in that case. > To do this we need a map<..> lookup to find the appropriate > converter and this really sucks. I can't understand why you'd need that, but maybe I'm missing something. The general mechanism in Boost.Python is that instance_holder::holds(type_info) will give you the address of the contained instance if it's there. > As mentioned before, lua can have multiple states, so it would be > cool if the converters would be bound to the state somehow. Why? It doesn't seem like it would be very useful to have different states doing different conversions. 
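The "static ref's to the registry entry" idea praised above can be sketched independently of either library. The names registry_entry, lookup and registered below are invented; Boost.Python's real machinery lives in boost/python/converter/registry.hpp and differs in detail, so treat this only as an illustration of the technique.

    #include <map>
    #include <string>
    #include <typeinfo>

    struct registry_entry
    {
        void* (*from_script)(void* raw);      // filled in by whichever module wraps the type
        void* (*to_script)(void const* obj);
    };

    inline registry_entry& lookup(std::type_info const& t)
    {
        // One shared table, keyed by a (normalized) type name.
        static std::map<std::string, registry_entry> entries;
        return entries[t.name()];
    }

    template <class T>
    struct registered
    {
        // Bound once, at static-initialization time, so converters added
        // later by another module are still found through this reference.
        static registry_entry& entry;
    };

    template <class T>
    registry_entry& registered<T>::entry = lookup(typeid(T));

The module that wraps T fills in its entry once; any other module that merely uses T reaches the same entry through the shared lookup, which is what lets a conversion registered in one extension be found from another.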
> This would probably mean we would need to store a hash table in the > registry entries and hash the lua state pointer (or something > associated with the state) though, and I don't know if there is > sufficient need for the feature to introduce this overhead. > > I don't know if I understand the issues with multiple extension > modules. You register the converters in a map with the typeinfo as > key, but I don't understand how this could ever work between > dll's. Do you compare the typenames? Depends on the platform. See my other message and boost/python/type_id.hpp. > If so, this could never work between modules compiled with different > compilers. If they don't have compatible ABIs you don't want them to match anyway, but this is currently an area of weakness in the system. > So it seems to me like this feature can't be that useful, > what am I missing? Well, it's terribly useful for teams who are developing large systems. Each individual can produce wrappers just for just her part of it, and they all interact correctly. > Anyway, I find your converter system more appealing than > ours. There are some issues which need to be taken care of; > We choose best match, not first match, when trying different > overloads. This means we need to keep the storage for the > converter on the stack of a function that is unaware of the > converter size (at compile time). So we need to either have > a fixed size buffer on the stack, and hope it works, or > allocate the storage at runtime. I would love to have best match conversion. I was going to do it at one point, but realized eventually that users can sort the overloads so that they always work so I never bothered to code it. > For clarification: > > void dispatcher(..) > { > *storage here* > try all overloads > call best overload > } I've already figured out how to solve this problem; if we can figure out how to share best-conversion technology I'll happily code it up ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com From rwgk at yahoo.com Sun Jun 22 22:51:49 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Sun, 22 Jun 2003 13:51:49 -0700 (PDT) Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <3ef5f7f7.2e08.16838@student.umu.se> Message-ID: <20030622205149.52661.qmail@web20207.mail.yahoo.com> --- Daniel Wallin wrote: > I mostly agree with everything you say. However, it may > still be of interest to be able to bypass the dynamic > dispatch system and use converters with static dispatch. I > fail to see how wrapping arrays of user-defined types is > easier with dynamic dispatch though. Maybe I wasn't precise enough here. It's not the wrapping that is easier***, but that fact that I do not have to explicitly export and import the converters. > How do you import the converters from one module to another? Hm, not having done the implementation I cannot tell for sure. I am guessing that the registry, which resides in boost_python.dll, holds essentially a few pointers to functions for the convertible() test and the construct() stage. The machine code for these functions is in the extension with the wrappers (i.e. the translation unit with the class_<> instantiation). The other extensions get these function pointers from the registry and then use the machine code from the first extension. David, is this a reasonably accurate view? > And how does type_info objects compare between dll > boundries? Again I can only offer a second-hand view. 
IIUC, on some platforms it is possible to compare type_info objects across dll boundaries as if they are in the same static link unit. I.e. there is nothing special. On some platforms this is not possible, and type_id::name is used instead. There is only one platform where relying on type_id::name caused a bit of a hick-up, namely IRIX/MIPSpro. See the comment near the top of the flew_fwd.h file (link in my previous message). Anecdotal: when I first suggested the cross-module feature I had not only no idea what to call it, but also no idea of all the difficulties that we would run into. There were moments when I thought there is just no way to get around a particular problem. The MIPSpro type_id::name was one, but not the worst. That was issues with catching exceptions thrown in one extension with the corresponding catch statement in another. In retrospect it almost seems like a miracle that we got it to work on all platforms eventually, but it does! Until quite recently I wasn't sure about Mac OS 10 but even that we got to work now (at least without optimization). I've also had success on an Itanium2 based system, so there is not much out there that we have not tried. Ralf *** Footnote: For several reasons wrapping with Boost.Python V2 is a lot easier compared to wrapping with V1, but maybe static vs. dynamic dispatch is not the crucial difference. __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Mon Jun 23 03:22:09 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 22 Jun 2003 21:22:09 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef5f7f7.2e08.16838@student.umu.se> <20030622205149.52661.qmail@web20207.mail.yahoo.com> Message-ID: "Ralf W. Grosse-Kunstleve" writes: > Hm, not having done the implementation I cannot tell for sure. I am > guessing that the registry, which resides in boost_python.dll, holds > essentially a few pointers to functions for the convertible() test and > the construct() stage. The machine code for these functions is in the > extension with the wrappers (i.e. the translation unit with the class_<> > instantiation). The other extensions get these function pointers from > the registry and then use the machine code from the first extension. > David, is this a reasonably accurate view? Yep. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dalwan01 at student.umu.se Mon Jun 23 14:21:03 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Mon, 23 Jun 2003 13:21:03 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef6f0af.702f.16838@student.umu.se> > "Daniel Wallin" writes: > > I wrote: > > >> >> In fact, the more I look at the syntax of luabind, > the more I like. > >> >> Using addition for policy accumulation is cool. The > naming of the > >> >> policies is cool. > >> > > >> > It does increase compile times a bit though > >> > >> What, overloading '+'? I don't think it's significant. > > > > I meant composing typelist's with '+' opposed to > composing > > the typelist manually like in BPL. > > I think we agree that's probably minor. Right. > > >> >> >> > This doesn't increase compile times. > >> >> >> > >> >> >> Good. Virtual functions come with bloat of their > own, but that's > >> >> >> an implementation detail which can be mitigated. > >> >> > > >> >> > Right. The virtual functions isn't generated in > the > >> >> > template, so there is very little code generated. 
> >> >> > >> >> I don't see how that's possible, but I guess I'll > learn. > >> > > >> > We can generate the wrapper code in the template, and > store > >> > function pointers in the object instead of generating > a > >> > virtual function which generates the wrapper > functions. > >> > >> Well, IIUC, that means you have to treat def() the same > when it > >> appears inside a class [...] as when it's inside a > module [ ... ], > >> since there's no delayed evaluation. > >> > >> ah, wait: you don't use [ ... ] for class, which > gets you off > >> the hook. > >> > >> but what about nested classes? Consistency would > dictate the > >> use of [ ... ]. > > > > Right, we don't have nested classes. We have thought > about a > > few solutions: > > > > class_("A") > > .def(..) > > [ > > class_("inner") > > .def(..) > > ] > > .def(..) > > ; > > Looks pretty! :) > > > Or reusing namespace_: > > > > class_("A"), > > namespace_("A") > > [ class_(..) ] > > > > We thought that nested classes is less common than > nested > > namespaces. > > Either one works; I like the former, but I think you ought > to be able > to do both. Yeah, both need really minor adjustments to the system. > > > >> There is still an issue of to-python conversions for > wrapped > >> classes; different ones get generated depending on how > the class is > >> "held". I'm not convinced that dynamically generating > the smart > >> pointer conversions is needed, but conversions for > virtual function > >> dispatching subclass may be. > > > > I don't understand how this has anything to do with > ordering. Unless > > you mean that you need to register the types before > executing > > python/lua code that uses them, which seems pretty > obvious. :) > > It has nothing to do with ordering; I'm just thinking out > loud about > how much dynamic lookup is actually buying in > Boost.Python. Ah, ok. :) > > >> >> >> How do *add* a way to convert from Python type A > to C++ type B > >> >> >> without masking the existing conversion from > Python type Y to C++ > >> >> >> type Z? > >> >> > > >> >> > I don't understand. How are B and Z related? Why > would a > >> >> > conversion function for B mask conversions to Z? > >> >> > >> >> Sorry, B==Z ;-) > >> > > >> > Ah, ok. Well, this isn't finished either. We have a > >> > (unfinished) system which works like this: > >> > > >> > template<> > >> > struct implicit_conversion<0, B> : from {}; > >> > template<> > >> > struct implicit_conversion<1, B> : from {}; > >> > > >> > Of course, this has all the problems with static > dispatch as > >> > well.. > >> > >> And with multiple implicit conversions being > contributed by > >> multiple people. Also note that in many environments > there's no > >> guarantee that different extension modules won't share > a link > >> namespace, so you have to watch out for ODR problems. > > > > Right. We didn't really intend for luabind to be used in > this way, > > but rather for binding closed modules. > > I think I'm saying that on some systems (not many), > there's no such > thing as a "closed module". If they're loaded in the same > process, > they share a link namespace :( > > >> > I think so too. I'm looking around in BPL's > conversion system now > >> > trying to understand how I incorporate it in luabind. > >> > >> I am not convinced I got it 100% right. You've forced > me to think > >> about the issues again in a new way. It may be that > the best > >> answer blends our two approaches. 
> > > > Your converter implementation with static ref's to the > > registry entry is really clever. > > Thanks! > > > Instead of doing this we have general converters which > is used to > > convert all user-defined types. > > I have the same thing for most from_python conversions; > the registry > is only used as a fallback in that case. Hm, doesn't the conversion of UDT's pass through the normal conversion system? > > > To do this we need a map<..> lookup to find the > appropriate > > converter and this really sucks. > > I can't understand why you'd need that, but maybe I'm > missing > something. The general mechanism in Boost.Python is that > instance_holder::holds(type_info) will give you the > address of the > contained instance if it's there. Right, we have a map when performing c++ -> lua conversions. You just need to do registered::conversions.to_python(..); Correct? > > > As mentioned before, lua can have multiple states, so it > would be > > cool if the converters would be bound to the state > somehow. > > Why? It doesn't seem like it would be very useful to have > different > states doing different conversions. It can be useful to be able to register different types in different states. Otherwise class_() would register global types and def() would register local functions. Or am I wrong in assuming that class_() instantiates registered and add's a few converters? > > > If so, this could never work between modules compiled > with different > > compilers. > > If they don't have compatible ABIs you don't want them to > match > anyway, but this is currently an area of weakness in the > system. Right. > > > So it seems to me like this feature can't be that > useful, > > what am I missing? > > Well, it's terribly useful for teams who are developing > large > systems. Each individual can produce wrappers just for > just her part > of it, and they all interact correctly. Right. > > > Anyway, I find your converter system more appealing than > > ours. There are some issues which need to be taken care > of; > > We choose best match, not first match, when trying > different > > overloads. This means we need to keep the storage for > the > > converter on the stack of a function that is unaware of > the > > converter size (at compile time). So we need to either > have > > a fixed size buffer on the stack, and hope it works, or > > allocate the storage at runtime. > > I would love to have best match conversion. I was going > to do it at > one point, but realized eventually that users can sort the > overloads > so that they always work so I never bothered to code it. Do you still think best match is worth adding, or is sorting an acceptable solution? > > > For clarification: > > > > void dispatcher(..) > > { > > *storage here* > > try all overloads > > call best overload > > } > > I've already figured out how to solve this problem; if we > can figure > out how to share best-conversion technology I'll happily > code it up > ;-) :) How would you do it? I guess you could have static storage in the match-function and store a pointer to that in the converter data, but that wouldn't be thread safe. It seems to me that sharing the conversion code out of the box is going to be hard. Perhaps we should consider parameterizing header files? 
namespace luabind { #define BOOST_LANG_CONVERSION_PARAMS \ (2, (lua_State*, int)) #include } -- Daniel Wallin From dalwan01 at student.umu.se Mon Jun 23 15:02:52 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Mon, 23 Jun 2003 14:02:52 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef6fa7c.3a0d.16838@student.umu.se> > > How do you import the converters from one module to > another? > > The system does that; the demand for a converter for a > given type > causes the converter chain in the global converter > registry to be > bound to a reference at static initialization time. Since > all > modules that work with the same type are referring to the > same > registry entry, it "just works" Ah, right. The registry resides in the boost_python dll.. Clever. :) > > > And how does type_info objects compare between dll > boundries? > > On most platforms, just fine because we've normalized them > using > boost/python/type_id.hpp. A few platforms (e.g. SGI) > still have > problems, though. Ok. -- Daniel Wallin From dalwan01 at student.umu.se Mon Jun 23 15:12:47 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Mon, 23 Jun 2003 14:12:47 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef6fccf.504e.16838@student.umu.se> > --- Daniel Wallin wrote: > > I mostly agree with everything you say. However, it may > > still be of interest to be able to bypass the dynamic > > dispatch system and use converters with static dispatch. > I > > fail to see how wrapping arrays of user-defined types is > > easier with dynamic dispatch though. > > Maybe I wasn't precise enough here. It's not the wrapping > that is > easier***, but that fact that I do not have to explicitly > export and > import the converters. Ok. > > > How do you import the converters from one module to > another? > > Hm, not having done the implementation I cannot tell for > sure. I am > guessing that the registry, which resides in > boost_python.dll, holds > essentially a few pointers to functions for the > convertible() test and > the construct() stage. The machine code for these > functions is in the > extension with the wrappers (i.e. the translation unit > with the class_<> > instantiation). The other extensions get these function > pointers from > the registry and then use the machine code from the first > extension. Ok. > > And how does type_info objects compare between dll > > boundries? > > Again I can only offer a second-hand view. IIUC, on some > platforms it is > possible to compare type_info objects across dll > boundaries as if they > are in the same static link unit. I.e. there is nothing > special. On some > platforms this is not possible, and type_id::name is used > instead. > There is only one platform where relying on type_id::name > caused a bit > of a hick-up, namely IRIX/MIPSpro. See the comment near > the top of the > flew_fwd.h file (link in my previous message). Ok. I guess you could unmangle the names to a standardized format if there are problems with this? -- Daniel Wallin From dave at boost-consulting.com Mon Jun 23 15:30:49 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 23 Jun 2003 09:30:49 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef6f0af.702f.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> > Instead of doing this we have general converters which is used to >> > convert all user-defined types. >> >> I have the same thing for most from_python conversions; >> the registry >> is only used as a fallback in that case. 
> > Hm, doesn't the conversion of UDT's pass through the normal > conversion system? See get_lvalue_from_python in libs/python/src/converter/from_python.cpp. First it calls find_instance_impl, which will always find a pointer to the right type inside a regular wrapped class if such a pointer is findable. The only thing that gets used from the registration in that case is a type_info object, which has been stored in the registration, as opposed to being passed as a separate parameter, just to minimize the amount of object code generated in extension modules which are invoking the conversion. That's for from-python conversions, of course. For to-python conversions, yes, we nearly always end up consulting the registration for the type. But that's cheap, after all - there's just a single method for converting any type to python so it just pulls the function pointer out of the registration and invokes it. Compared to the cost of constructing a Python object, an indirect call gets lost in the noise. The fact that there can only be one way to do that also means that we can introduce some specializations for conversion to python, which bypasses indirection for known types such as int or std::string. Note however that this bypassing actually has a usability cost because the implicit conversion mechanism actually consults the registration records directly, and I'm not currently filling in the to-python registrations for these types with specializations, so some implicit conversion sequences don't work. It may have been premature optimization to use specializations here. >> > To do this we need a map<..> lookup to find the appropriate >> > converter and this really sucks. >> >> I can't understand why you'd need that, but maybe I'm missing >> something. The general mechanism in Boost.Python is that >> instance_holder::holds(type_info) will give you the address of the >> contained instance if it's there. > > Right, we have a map when performing > c++ -> lua conversions. You just need to do > registered::conversions.to_python(..); Correct? Roughly speaking, yes. But what I'm confused about is, if you're using full compile-time dispatching for from-lua conversions, why you don't do the same for to-lua conversions. AFAICT, it's the former where compile-time dispatch is most useful. What's the 2nd argument to the map? >> > As mentioned before, lua can have multiple states, so it would be >> > cool if the converters would be bound to the state somehow. >> >> Why? It doesn't seem like it would be very useful to have >> different states doing different conversions. > > It can be useful to be able to register different types in different > states. Why? > Otherwise class_() would register global types and def() > would register local functions. Or am I wrong in assuming that > class_() instantiates registered and add's a few converters? No, you're correct. However, it also creates a Python type object in the extension module's dictionary, just as def() creates callable python objects in the module's dictionary. I see the converter registry as a separate data structure which exists in parallel with the module's dictionary. I don't see any reason to have a given module register different sets of type conversions in different states, even if it is going to contain different types/functions (though I can't see why you'd want that either). >> > Anyway, I find your converter system more appealing than >> > ours. 
There are some issues which need to be taken care of; >> > We choose best match, not first match, when trying different >> > overloads. This means we need to keep the storage for the >> > converter on the stack of a function that is unaware of the >> > converter size (at compile time). So we need to either have >> > a fixed size buffer on the stack, and hope it works, or >> > allocate the storage at runtime. >> >> I would love to have best match conversion. I was going to do it >> at one point, but realized eventually that users can sort the >> overloads so that they always work so I never bothered to code it. > > Do you still think best match is worth adding, or is sorting an > acceptable solution? I think in many cases, it's more understandable for users to be able to simply control the order in which converters are tried. It certainly is *more efficient* than trying all converters, if you're going to be truly compulsive about cycles, though I don't really care about that. We do have one guy, though, who's got a massively confusable overload set and I think he's having trouble resolving it because of the easy conversions between C++ (int, long long) and Python (int, LONG). http://aspn.activestate.com/ASPN/Mail/Message/1652647 In general, I'd prefer to have more things "just work" automatically, so yeah I think it's worth adding to Boost.Python. >> > For clarification: >> > >> > void dispatcher(..) >> > { >> > *storage here* >> > try all overloads >> > call best overload >> > } >> >> I've already figured out how to solve this problem; if we can >> figure out how to share best-conversion technology I'll happily >> code it up ;-) > > :) How would you do it? I'll give you a hint, if you agree to cooperate on best-conversion: Your cycle-counters would probably go apoplectic. > I guess you could have static storage in the match-function and > store a pointer to that in the converter data, but that wouldn't be > thread safe. OK, here it is, I'll tell you: you use recursion. > It seems to me that sharing the conversion code out of the box is > going to be hard. You mean without any modification to existing conversion source? I never expected to achieve that. Remember, I'm considering a refactoring of the codebase anyway. > Perhaps we should consider parameterizing header > files? > > namespace luabind > { > #define BOOST_LANG_CONVERSION_PARAMS \ > (2, (lua_State*, int)) > #include > } Hmm, I'm not sure what you're trying to achieve here, but that kind of parameterization seems unneccessary to me. we probably ought to do it with templates if there's any chance at all that these systems would have to be compiled together in some context... though I guess with inclusion into separate namespaces you could get around that. Well OK, let's look at the requirements more carefully before we jump into implementation details. I may be willing to accept additional state. -- Dave Abrahams Boost Consulting www.boost-consulting.com From jon.anderson at prsnet.com Mon Jun 23 15:51:11 2003 From: jon.anderson at prsnet.com (Jon Anderson) Date: Mon, 23 Jun 2003 08:51:11 -0500 Subject: [C++-sig] Re: shared_ptr and inheritance In-Reply-To: References: <200306201340.34691.janders@users.sf.net> Message-ID: <200306230851.11926.jon.anderson@prsnet.com> On Friday 20 June 2003 6:14 pm, David Abrahams wrote: > >> Are you using the latest CVS state? > > > > Should this magically work now? I thought you had to use : > > > > implicitly_convertible(); > > > > in your module def. Thanks. 
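The template arguments of the implicitly_convertible() call quoted just above were lost in the archived text. For readers following the shared_ptr thread, this is roughly what that registration looks like, with hypothetical Base/Derived classes and module name; as discussed earlier in the thread, the then-current CVS made the implicitly_convertible line unnecessary.

    #include <boost/python.hpp>
    #include <boost/shared_ptr.hpp>
    using namespace boost::python;

    struct Base { virtual ~Base() {} };
    struct Derived : Base {};

    void takes_base(boost::shared_ptr<Base>) {}

    BOOST_PYTHON_MODULE(example)
    {
        class_<Base, boost::shared_ptr<Base> >("Base");
        class_<Derived, bases<Base>, boost::shared_ptr<Derived> >("Derived");

        // The registration under discussion: with older releases it was
        // needed so a shared_ptr<Derived> argument converts to shared_ptr<Base>.
        implicitly_convertible<boost::shared_ptr<Derived>, boost::shared_ptr<Base> >();

        def("takes_base", takes_base);
    }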
I was using CVS as of the middle of May or so. It's working for me now.

Jon

From dave at boost-consulting.com Mon Jun 23 15:57:11 2003
From: dave at boost-consulting.com (David Abrahams)
Date: Mon, 23 Jun 2003 09:57:11 -0400
Subject: [C++-sig] Re: Interest in luabind
References: <3ef6fccf.504e.16838@student.umu.se>
Message-ID:

"Daniel Wallin" writes:

>> Again I can only offer a second-hand view. IIUC, on some platforms
>> it is possible to compare type_info objects across dll boundaries as
>> if they are in the same static link unit. I.e. there is nothing
>> special. On some platforms this is not possible, and type_id::name
>> is used instead. There is only one platform where relying on
>> type_id::name caused a bit of a hick-up, namely IRIX/MIPSpro. See
>> the comment near the top of the flew_fwd.h file (link in my previous
>> message).
>
> Ok. I guess you could unmangle the names to a standardized format if
> there are problems with this?

No :(

Sadly, the problem is that typedefs don't always get fully-unwound in
type_info names, e.g.:

  // Module 1, prints ``X<int>::type``
  #include <iostream>
  #include <typeinfo>
  template <class T> struct X { typedef int type; };
  int main() { std::cout << typeid(X<int>::type).name() << std::endl; }

  // Module 2, prints ``int``
  #include <iostream>
  #include <typeinfo>
  int main() { std::cout << typeid(int).name() << std::endl; }

There's no issue in different translation units within a module; the
linker just picks a generated name() string from one translation unit.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com

From rwgk at yahoo.com Mon Jun 23 17:16:10 2003
From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve)
Date: Mon, 23 Jun 2003 08:16:10 -0700 (PDT)
Subject: [C++-sig] Re: Interest in luabind
In-Reply-To: <3ef6fccf.504e.16838@student.umu.se>
Message-ID: <20030623151610.21751.qmail@web20202.mail.yahoo.com>

--- Daniel Wallin wrote:
> > Again I can only offer a second-hand view. IIUC, on some
> > platforms it is possible to compare type_info objects across dll
> > boundaries as if they are in the same static link unit. I.e. there
> > is nothing special. On some platforms this is not possible, and
> > type_id::name is used instead. There is only one platform where
> > relying on type_id::name caused a bit of a hick-up, namely
> > IRIX/MIPSpro. See the comment near the top of the flew_fwd.h file
> > (link in my previous message).
>
> Ok. I guess you could unmangle the names to a standardized
> format if there are problems with this?

Yes, as a last resort we could, e.g., manually specialize
boost::python::type_info (in boost/python/type_id.h) for all types
using the cross-module feature. Fortunately that has not been
necessary so far. And I don't see why it should ever be necessary
in hypothetical new platforms since there is TTBOMK no technical reason
for enforcing inconsistent type_id::name results. I.e. if we just tell
the developers of new platforms what we are using type_id::name for
it will most likely be easy for them to give us what we need.

Ralf

__________________________________
Do you Yahoo!?
SBC Yahoo! DSL - Now only $29.95 per month!
http://sbc.yahoo.com

From dalwan01 at student.umu.se Mon Jun 23 22:29:34 2003
From: dalwan01 at student.umu.se (Daniel Wallin)
Date: Mon, 23 Jun 2003 21:29:34 +0100
Subject: [C++-sig] Re: Interest in luabind
Message-ID: <3ef7632e.71ac.16838@student.umu.se>

> "Daniel Wallin" writes:
>
> >> > Instead of doing this we have general converters which is used to
> >> > convert all user-defined types.
> >> > >> I have the same thing for most from_python conversions; > >> the registry > >> is only used as a fallback in that case. > > > > Hm, doesn't the conversion of UDT's pass through the > normal > > conversion system? > > See get_lvalue_from_python in > libs/python/src/converter/from_python.cpp. First it calls > find_instance_impl, which will always find a pointer to > the right type > inside a regular wrapped class if such a pointer is > findable. The > only thing that gets used from the registration in that > case is a > type_info object, which has been stored in the > registration, as > opposed to being passed as a separate parameter, just to > minimize the > amount of object code generated in extension modules which > are > invoking the conversion. Ok. This is roughly what we do too; the pointer is stored in the lua object, together with a pointer to the class_rep* associated with the pointee. The class_rep holds the inheritance tree, so we just compare type_info's and traverse the tree to perform the needed cast. > > That's for from-python conversions, of course. For > to-python > conversions, yes, we nearly always end up consulting the > registration > for the type. But that's cheap, after all - there's just > a single > method for converting any type to python so it just pulls > the function > pointer out of the registration and invokes it. Compared > to the cost > of constructing a Python object, an indirect call gets > lost in the > noise. The fact that there can only be one way to do that > also means > that we can introduce some specializations for conversion > to python, > which bypasses indirection for known types such as int or > std::string. > Note however that this bypassing actually has a usability > cost because > the implicit conversion mechanism actually consults the > registration > records directly, and I'm not currently filling in the > to-python > registrations for these types with specializations, so > some implicit > conversion sequences don't work. It may have been > premature > optimization to use specializations here. Ok, how do you handle conversions of lvalues from c++ -> python? The to_python converter associated with a UDT does rvalue conversion and creates a new object, correct? > > >> > To do this we need a map<..> lookup to find the > appropriate > >> > converter and this really sucks. > >> > >> I can't understand why you'd need that, but maybe I'm > missing > >> something. The general mechanism in Boost.Python is > that > >> instance_holder::holds(type_info) will give you the > address of the > >> contained instance if it's there. > > > > Right, we have a map when > performing > > c++ -> lua conversions. You just need to do > > registered::conversions.to_python(..); Correct? > > Roughly speaking, yes. But what I'm confused about is, if > you're > using full compile-time dispatching for from-lua > conversions, why you > don't do the same for to-lua conversions. AFAICT, it's > the former > where compile-time dispatch is most useful. What's the > 2nd argument > to the map? The second argument is a class_rep*, which holds information about the exposed type. We need this to create the holding object in lua. > > >> > As mentioned before, lua can have multiple states, so > it would be > >> > cool if the converters would be bound to the state > somehow. > >> > >> Why? It doesn't seem like it would be very useful to > have > >> different states doing different conversions. 
> > > > It can be useful to be able to register different types > in different > > states. > > Why? Because different states might handle completely different tasks. > > > Otherwise class_() would register global types and def() > > would register local functions. Or am I wrong in > assuming that > > class_() instantiates registered and add's a few > converters? > > No, you're correct. However, it also creates a Python > type object in > the extension module's dictionary, just as def() creates > callable > python objects in the module's dictionary. I see the > converter > registry as a separate data structure which exists in > parallel with > the module's dictionary. I don't see any reason to have a > given > module register different sets of type conversions in > different > states, even if it is going to contain different > types/functions > (though I can't see why you'd want that either). As I said earlier, the states can handle different tasks. A common usage is object scripting in games, but you might in the same app use lua for parsing configuration files or scripting the GUI. It's clear that you don't want all these systems to have access to _everything_. I guess you could always register types in the global registry, and only expose them to the states where they are needed though, if there's not enough reason to use different converters for the same type in different states. > > >> > Anyway, I find your converter system more appealing > than > >> > ours. There are some issues which need to be taken > care of; > >> > We choose best match, not first match, when trying > different > >> > overloads. This means we need to keep the storage for > the > >> > converter on the stack of a function that is unaware > of the > >> > converter size (at compile time). So we need to > either have > >> > a fixed size buffer on the stack, and hope it works, > or > >> > allocate the storage at runtime. > >> > >> I would love to have best match conversion. I was > going to do it > >> at one point, but realized eventually that users can > sort the > >> overloads so that they always work so I never bothered > to code it. > > > > Do you still think best match is worth adding, or is > sorting an > > acceptable solution? > > I think in many cases, it's more understandable for users > to be able > to simply control the order in which converters are tried. > It > certainly is *more efficient* than trying all converters, > if you're > going to be truly compulsive about cycles, though I don't > really care > about that. We do have one guy, though, who's got a > massively > confusable overload set and I think he's having trouble > resolving it > because of the easy conversions between C++ (int, long > long) and > Python (int, LONG). > > > http://aspn.activestate.com/ASPN/Mail/Message/1652647 > > In general, I'd prefer to have more things "just work" > automatically, > so yeah I think it's worth adding to Boost.Python. Ok great. > > >> > For clarification: > >> > > >> > void dispatcher(..) > >> > { > >> > *storage here* > >> > try all overloads > >> > call best overload > >> > } > >> > >> I've already figured out how to solve this problem; if > we can > >> figure out how to share best-conversion technology I'll > happily > >> code it up ;-) > > > > :) How would you do it? > > I'll give you a hint, if you agree to cooperate on > best-conversion: Agreed. > > > I guess you could have static storage in the > match-function and > > store a pointer to that in the converter data, but that > wouldn't be > > thread safe. 
> > OK, here it is, I'll tell you: you use recursion. Ah, I have considered that too. But at the time it seemed a bit complex. You would let the 'matcher' functions call the next matcher, pass the current best-match value along and return some information that tells you if there's been a match further down in the recursion, and just let the matcher call the function when there's no better match before it on the stack. Something like that? It doesn't seem that expensive to me, the recursion won't be very deep anyway. > > > Perhaps we should consider parameterizing header > > files? > > > > namespace luabind > > { > > #define BOOST_LANG_CONVERSION_PARAMS \ > > (2, (lua_State*, int)) > > #include > > } > > Hmm, I'm not sure what you're trying to achieve here, but > that kind of > parameterization seems unneccessary to me. we probably > ought to do it > with templates if there's any chance at all that these > systems would > have to be compiled together in some context... though I > guess with > inclusion into separate namespaces you could get around > that. Well > OK, let's look at the requirements more carefully before > we jump into > implementation details. I may be willing to accept > additional state. Right. The requirement I was aiming to resolve was that we need a different set of parameters when doing our conversions. You have your PyObject*, we have our (lua_State*, int). I thought that parameterizing the implementation and including in different namespaces would solve all issues of that type nicely, though there might be far better solutions. Here are some notes for the conversion requirements: * We need different sets of additional parameters passed through the conversion system. And thus we need different types of function pointers stored in the registry. * We need to have separate registries, so that both systems can be used at the same time. -- Daniel Wallin From dalwan01 at student.umu.se Mon Jun 23 22:36:30 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Mon, 23 Jun 2003 21:36:30 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef764ce.5bc.16838@student.umu.se> > "Daniel Wallin" writes: > > >> Again I can only offer a second-hand view. IIUC, on > some platforms > >> it is possible to compare type_info objects across dll > boundaries as > >> if they are in the same static link unit. I.e. there is > nothing > >> special. On some platforms this is not possible, and > type_id::name > >> is used instead. There is only one platform where > relying on > >> type_id::name caused a bit of a hick-up, namely > IRIX/MIPSpro. See > >> the comment near the top of the flew_fwd.h file (link > in my previous > >> message). > > > > Ok. I guess you could unmangle the names to a > standardized format if > > there are problems with this? > > No :( > > Sadly, the problem is that typedefs don't always get > fully-unwound in > type_info names, e.g.: Oh, ok. So you would need to unwound the types manually with specialization. -- Daniel Wallin From dalwan01 at student.umu.se Mon Jun 23 22:37:51 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Mon, 23 Jun 2003 21:37:51 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef7651f.89a.16838@student.umu.se> > --- Daniel Wallin wrote: > > > Again I can only offer a second-hand view. IIUC, on > some > > > platforms it is > > > possible to compare type_info objects across dll > > > boundaries as if they > > > are in the same static link unit. I.e. there is > nothing > > > special. 
On some > > > platforms this is not possible, and type_id::name is > used > > > instead. > > > There is only one platform where relying on > type_id::name > > > caused a bit > > > of a hick-up, namely IRIX/MIPSpro. See the comment > near > > > the top of the > > > flew_fwd.h file (link in my previous message). > > > > Ok. I guess you could unmangle the names to a > standardized > > format if there are problems with this? > > Yes, as a last resort we could, e.g., manually specialize > boost::python::type_info (in boost/python/type_id.h) for > all types > using the cross-module feature. Fortunately that has not > been > necessary so far. And I don't see why it should ever be > necessary > in hypothetical new platforms since there is TTBOMK no > technical reason > for enforcing inconsistent type_id::name results. I.e. if > we just tell > the developers of new platforms what we are using > type_id::name for > it will most be likely easy for them to give us what we > need. Right. It seems the problems with this are far smaller than I thought.. -- Daniel Wallin From dave at boost-consulting.com Mon Jun 23 23:28:03 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 23 Jun 2003 17:28:03 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef7632e.71ac.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> "Daniel Wallin" writes: >> >> >> > Instead of doing this we have general converters which is used >> >> > to convert all user-defined types. >> >> I have the same thing for most from_python conversions; the >> >> registry is only used as a fallback in that case. >> > Hm, doesn't the conversion of UDT's pass through the normal >> > conversion system? >> >> See get_lvalue_from_python in >> libs/python/src/converter/from_python.cpp. First it calls >> find_instance_impl, which will always find a pointer to the right >> type inside a regular wrapped class if such a pointer is findable. >> The only thing that gets used from the registration in that case is >> a type_info object, which has been stored in the registration, as >> opposed to being passed as a separate parameter, just to minimize >> the amount of object code generated in extension modules which are >> invoking the conversion. > > Ok. This is roughly what we do too; the pointer is stored in the lua > object The pointer to... what? The wrapped object? > together with a pointer to the class_rep* associated with the > pointee. The class_rep holds the inheritance tree, so we just > compare type_info's and traverse the tree to perform the needed > cast. I think that problem is a little more complicated than you're making it out to be, and that your method ends up being slower than it should be in inheritance graphs of any size. First of all, inheritance may be a DAG and you have to prevent infinite loops if you're actually going to support cross-casting. Secondly Boost.Python caches cast sequences so that given the most-derived type of an object, you only have to search once for a conversion to any other type, and after that you can do a simple address calculation. See libs/python/src/object/inheritance.cpp. This probably should be better commented; the algorithms were hard to figure out and I didn't write down rationale for them :( On the other hand, maybe being fast isn't important in this part of the code, and the cacheing should be eliminated ;-) >> That's for from-python conversions, of course. For to-python >> conversions, yes, we nearly always end up consulting the >> registration for the type. 
But that's cheap, after all - there's >> just a single method for converting any type to python so it just >> pulls the function pointer out of the registration and invokes it. >> Compared to the cost of constructing a Python object, an indirect >> call gets lost in the noise. The fact that there can only be one >> way to do that also means that we can introduce some specializations >> for conversion to python, which bypasses indirection for known types >> such as int or std::string. Note however that this bypassing >> actually has a usability cost because the implicit conversion >> mechanism actually consults the registration records directly, and >> I'm not currently filling in the to-python registrations for these >> types with specializations, so some implicit conversion sequences >> don't work. It may have been premature optimization to use >> specializations here. > > Ok, how do you handle conversions of lvalues from c++ -> python? The > to_python converter associated with a UDT does rvalue conversion and > creates a new object, correct? Yeah. Implicit conversion of lvalues by itself with no ownership management is dangerous so you have to tell Boost.Python to do it. I'm sure you know this, though, since luabind supports "parameter policies." Bad name, though: a primary reason for these is to manage return values (which are not parameters). So I wonder what you're really asking? >> >> > To do this we need a map<..> lookup to find the appropriate >> >> > converter and this really sucks. >> >> >> I can't understand why you'd need that, but maybe I'm missing >> >> something. The general mechanism in Boost.Python is that >> >> instance_holder::holds(type_info) will give you the address of >> >> the contained instance if it's there. >> >> > Right, we have a map when performing c++ -> >> > lua conversions. You just need to do >> > registered::conversions.to_python(..); Correct? >> >> Roughly speaking, yes. But what I'm confused about is, if you're >> using full compile-time dispatching for from-lua conversions, why >> you don't do the same for to-lua conversions. AFAICT, it's the >> former where compile-time dispatch is most useful. What's the 2nd >> argument to the map? > > The second argument is a class_rep*, which holds information about the > exposed type. We need this to create the holding object in lua. Oh, sure. I don't have such a limited view of to-python conversions as that. It's perfectly possible (and often desirable) to register converters which cause std::vector to be converted to a Python built-in list of X objects. It's the converter function itself which may access the corresponding PyTypeObject (equivalent of class_rep*), which it will always get through the static initialization trick. >> >> > As mentioned before, lua can have multiple states, so it would >> >> > be cool if the converters would be bound to the state somehow. >> >> >> Why? It doesn't seem like it would be very useful to have >> >> different states doing different conversions. >> >> > It can be useful to be able to register different types in >> > different states. >> >> Why? > > Because different states might handle completely different tasks. Sure, but then aren't they going to handle different C++ types and/or be running different extension modules? Do you really want the same C++ type converted differently *by the same extension module* in two states? Sounds like premature generalization to me, but I could be wrong. 
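For illustration, the std::vector-to-list conversion mentioned a few lines up can be spelled as an ordinary to-python converter. The names X and vector_of_X_to_list below are invented for the example; the rest is the library's documented to_python_converter interface.

#include <boost/python.hpp>
#include <vector>

struct X { int value; };

struct vector_of_X_to_list
{
    static PyObject* convert(std::vector<X> const& v)
    {
        boost::python::list result;
        for (std::size_t i = 0; i < v.size(); ++i)
            result.append(v[i]);   // relies on the to-python converter that class_<X> registers
        return boost::python::incref(result.ptr());
    }
};

BOOST_PYTHON_MODULE(vector_example)
{
    using namespace boost::python;
    class_<X>("X").def_readwrite("value", &X::value);
    to_python_converter<std::vector<X>, vector_of_X_to_list>();
}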
>> > Otherwise class_() would register global types and def() would >> > register local functions. Or am I wrong in assuming that >> > class_() instantiates registered and add's a few >> > converters? >> >> No, you're correct. However, it also creates a Python type object >> in the extension module's dictionary, just as def() creates >> callable python objects in the module's dictionary. I see the >> converter registry as a separate data structure which exists in >> parallel with the module's dictionary. I don't see any reason to >> have a given module register different sets of type conversions in >> different states, even if it is going to contain different >> types/functions (though I can't see why you'd want that either). > > As I said earlier, the states can handle different tasks. A common > usage is object scripting in games, but you might in the same app use > lua for parsing configuration files or scripting the GUI. It's clear > that you don't want all these systems to have access to > _everything_. The question of registry isolation is a separate thing, I think. I am trying to suggest that a single extension module doesn't need to work with different conversion registries in different states. > I guess you could always register types in the global registry, and > only expose them to the states where they are needed though, if > there's not enough reason to use different converters for the same > type in different states. I'm not committed to the idea of a single registry. In fact we've been discussing a hierarchical registry system where converters are searched starting with a local module registry and proceeding upward to the package level and finally ending with a global registry as a fallback. >> >> > Anyway, I find your converter system more appealing than >> >> > ours. There are some issues which need to be taken care of; We >> >> > choose best match, not first match, when trying different >> >> > overloads. This means we need to keep the storage for the >> >> > converter on the stack of a function that is unaware of the >> >> > converter size (at compile time). So we need to either have a >> >> > fixed size buffer on the stack, and hope it works, or allocate >> >> > the storage at runtime. >> >> >> I would love to have best match conversion. I was going to do it >> >> at one point, but realized eventually that users can sort the >> >> overloads so that they always work so I never bothered to code >> >> it. >> >> > Do you still think best match is worth adding, or is sorting an >> > acceptable solution? >> >> I think in many cases, it's more understandable for users to be >> able to simply control the order in which converters are tried. It >> certainly is *more efficient* than trying all converters, if you're >> going to be truly compulsive about cycles, though I don't really >> care about that. We do have one guy, though, who's got a massively >> confusable overload set and I think he's having trouble resolving >> it because of the easy conversions between C++ (int, long long) and >> Python (int, LONG). >> >> http://aspn.activestate.com/ASPN/Mail/Message/1652647 >> In general, I'd prefer to have more things "just work" >> automatically, so yeah I think it's worth adding to Boost.Python. > > Ok great. >> >> >> > For clarification: >> >> > void dispatcher(..) 
{ *storage here* try all overloads call best >> >> > overload } >> >> I've already figured out how to solve this problem; if we can >> >> figure out how to share best-conversion technology I'll happily >> >> code it up ;-) >> > :) How would you do it? >> I'll give you a hint, if you agree to cooperate on best-conversion: > > Agreed. OK. >> >> > I guess you could have static storage in the match-function and >> > store a pointer to that in the converter data, but that wouldn't >> > be thread safe. >> OK, here it is, I'll tell you: you use recursion. > > Ah, I have considered that too. Great minds think alike ;-) > But at the time it seemed a bit complex. You would let the 'matcher' > functions call the next matcher, pass the current best-match value > along and return some information that tells you if there's been a > match further down in the recursion, and just let the matcher call > the function when there's no better match before it on the stack. > Something like that? Yes, something like that. I don't really think it's too complicated. My big problem was trying to figure out a scheme for assigning match quality. C++ uses a kind of "substitutaiblity" rule for resolving partial ordering which seemed like a good way to handle things. How do you do it? > It doesn't seem that expensive to me, the recursion won't be very > deep anyway. I agree; it's rare to have huge overload sets.p >> > Perhaps we should consider parameterizing header files? >> > namespace luabind { #define BOOST_LANG_CONVERSION_PARAMS \ (2, >> > (lua_State*, int)) #include } >> >> Hmm, I'm not sure what you're trying to achieve here, but that kind >> of parameterization seems unneccessary to me. we probably ought to >> do it with templates if there's any chance at all that these >> systems would have to be compiled together in some >> context... though I guess with inclusion into separate namespaces >> you could get around that. Well OK, let's look at the requirements >> more carefully before we jump into implementation details. I may >> be willing to accept additional state. > > Right. The requirement I was aiming to resolve was that we need a > different set of parameters when doing our conversions. I consider that an implementation detail ;-) > You have your PyObject*, we have our (lua_State*, int). What's the int? > I thought that parameterizing the implementation and including in > different namespaces would solve all issues of that type nicely, > though there might be far better solutions. Maybe; I think there are lots of strategies available and which is best probably depends on the other issues we need to address. > Here are some notes for the conversion requirements: > > * We need different sets of additional parameters passed through the > conversion system. And thus we need different types of function > pointers stored in the registry. Sure. > * We need to have separate registries, so that both systems can be > used at the same time. We need separate registries within Boost.Python too; we just don't have them, yet. There's also a potential issue with thread safety if you have modules using the same registry initializing concurrently. With a single Python interpreter state, it's not an issue, since extension module code is always entered on the main thread until a mutex is explicitly released. Anyway, I want to discuss the whole issue of registry isolation in the larger context of what's desirable for both systems and their evolution into the future. 
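To make the recursion hint concrete, here is a rough, library-neutral sketch of the dispatcher being described; every name in it is invented for the example. Each overload's matcher runs with its converter storage in its own stack frame, the best score seen so far is passed down the chain, and an overload is invoked on the way back out only if nothing above or below it matched better (in this sketch, lower scores mean better matches and -1 means no match).

#include <climits>

struct call_context;                       // stands in for PyObject* args or (lua_State*, int)

struct overload
{
    int  (*match)(call_context&);          // score for this overload, -1 if it cannot be called
    void (*invoke)(call_context&);         // convert the arguments (storage is local) and call
};

// Returns the best score found in overloads[i..n); exactly one frame,
// the earliest one holding the winning score, calls invoke().
int dispatch(overload const* overloads, int i, int n,
             call_context& ctx, int best_above)
{
    if (i == n)
        return INT_MAX;                    // sentinel: nothing matched further down

    int score = overloads[i].match(ctx);   // converter data stays in this frame
    int passed_down = (score >= 0 && score < best_above) ? score : best_above;

    int best_below = dispatch(overloads, i + 1, n, ctx, passed_down);

    if (score >= 0 && score < best_above && score <= best_below)
        overloads[i].invoke(ctx);          // no strictly better match anywhere else

    return (score >= 0 && score < best_below) ? score : best_below;
}

// A caller would invoke dispatch(table, 0, count, ctx, INT_MAX) and
// treat a return value of INT_MAX as "no overload matched".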
-- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Mon Jun 23 23:30:18 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 23 Jun 2003 17:30:18 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef764ce.5bc.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> "Daniel Wallin" writes: >> >> >> Again I can only offer a second-hand view. IIUC, on >> some platforms >> >> it is possible to compare type_info objects across dll >> boundaries as >> >> if they are in the same static link unit. I.e. there is >> nothing >> >> special. On some platforms this is not possible, and >> type_id::name >> >> is used instead. There is only one platform where >> relying on >> >> type_id::name caused a bit of a hick-up, namely >> IRIX/MIPSpro. See >> >> the comment near the top of the flew_fwd.h file (link >> in my previous >> >> message). >> > >> > Ok. I guess you could unmangle the names to a >> standardized format if >> > there are problems with this? >> >> No :( >> >> Sadly, the problem is that typedefs don't always get >> fully-unwound in >> type_info names, e.g.: > > Oh, ok. So you would need to unwound the types manually with > specialization. That doesn't even work the way you expect; they don't neccessarily unwind completely. It's just a matter of exposing one way to get the type at the beginning of every translation unit. But it works, eventually, though it's ugly as sin. Fortunately EDG knows about this and has fixed the problem in their compiler. Unfortunately it's unclear when SGI will release anything that works based on a modern EDG. :( -- Dave Abrahams Boost Consulting www.boost-consulting.com From nickm at sitius.com Tue Jun 24 06:32:28 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Tue, 24 Jun 2003 00:32:28 -0400 Subject: [C++-sig] return_self_policy Message-ID: <3EF7D45C.7D819158@sitius.com> Posting return_self policy implementation Nikolay -------------- next part -------------- ''' >>> from return_self import * >>> l1=Label() >>> l1 is l1.label("bar") 1 >>> l1 is l1.label("bar").sensitive(0) 1 >>> l1.label("foo").sensitive(0) is l1.sensitive(1).label("bar") 1 ''' def run(args = None): import sys import doctest if args is not None: sys.argv = args return doctest.testmod(sys.modules.get(__name__)) if __name__ == '__main__': print "running..." import sys sys.exit(run()[0]) -------------- next part -------------- A non-text attachment was scrubbed... Name: return_self.cpp Type: application/x-unknown-content-type-cppfile Size: 1233 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: return_self_policy.hpp Type: application/x-unknown-content-type-hppfile Size: 1285 bytes Desc: not available URL: From nikolai.kirsebom at siemens.no Tue Jun 24 11:53:17 2003 From: nikolai.kirsebom at siemens.no (Kirsebom Nikolai) Date: Tue, 24 Jun 2003 11:53:17 +0200 Subject: [C++-sig] Using .add_property with make_getter. 
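Since the return_self.cpp and return_self_policy.hpp attachments were scrubbed from the archive, here is a hedged sketch of what such a policy could look like, not Nikolay's actual code. It assumes the CallPolicies interface of this era, where postcall receives the raw Python argument tuple; the Label class and every other name is invented, but the behaviour matches the doctest above: each wrapped call hands back "self", so calls chain.

#include <boost/python.hpp>
#include <string>

struct return_self_policy : boost::python::default_call_policies
{
    // Drop the wrapped function's converted result and return the first
    // argument of the Python call, i.e. "self", instead.
    static PyObject* postcall(PyObject* args, PyObject* result)
    {
        if (result == 0)
            return 0;                      // propagate exceptions unchanged
        Py_DECREF(result);
        return boost::python::incref(PyTuple_GetItem(args, 0));
    }
};

struct Label
{
    Label() : enabled(true) {}
    void label(char const* s) { text = s; }
    void sensitive(bool b)    { enabled = b; }
    std::string text;
    bool enabled;
};

BOOST_PYTHON_MODULE(return_self)
{
    using namespace boost::python;
    class_<Label>("Label")
        .def("label", &Label::label, return_self_policy())
        .def("sensitive", &Label::sensitive, return_self_policy());
}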
Message-ID: <5B5544F322E5D211850F00A0C91DEF3B050DBF57@osll007a.siemens.no> I have the following c++ class: class PyDLEPRInterface : public DLEPRInterface { public: PyDLEPRInterface(); PyDLEPRInterface(const PyDLEPRInterface& objectSrc); }; PyDLEPRInterface::PyDLEPRInterface(const PyDLEPRInterface& objectSrc) { } PyDLEPRInterface::PyDLEPRInterface() { } The class DLEPRInterface defines some public attributes, example CString m_DocumentCategory; int m_UserID; I have a converter for converting CString <--> python string. See code in thread "Exception second time loaded" 20th of June. The two attributes are exposed with the statements: .def_readonly("UserID", &PyDLEPRInterface::m_UserID) and .add_property("DocumentCategory", make_getter(&PyDLEPRInterface::m_DocumentCategory, return_value_policy())) When running in Python, the UserID is available however reading the DocumentCategory attribute produces the following traceback: import DocuLive #<", line 1, in ? TypeError: bad argument type for built-in operation I'm running the python statements in a PyCrust (wxPython/wxWindows) shell application. In my posting 20th of June I asked for help relating to exception when staring the second time. It appears that the starting of the mainloop in the PyCrust application produces the exception if other applications (in Windows) has been activated in between. Have not found the reason for this behaviour. All help is appreciated. Nikolai Kirsebom From dalwan01 at student.umu.se Tue Jun 24 12:35:56 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Tue, 24 Jun 2003 11:35:56 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef8298c.50e4.16838@student.umu.se> > "Daniel Wallin" writes: > > >> "Daniel Wallin" writes: > >> > >> >> > Instead of doing this we have general converters > which is used > >> >> > to convert all user-defined types. > >> >> I have the same thing for most from_python > conversions; the > >> >> registry is only used as a fallback in that case. > >> > Hm, doesn't the conversion of UDT's pass through the > normal > >> > conversion system? > >> > >> See get_lvalue_from_python in > >> libs/python/src/converter/from_python.cpp. First it > calls > >> find_instance_impl, which will always find a pointer to > the right > >> type inside a regular wrapped class if such a pointer > is findable. > >> The only thing that gets used from the registration in > that case is > >> a type_info object, which has been stored in the > registration, as > >> opposed to being passed as a separate parameter, just > to minimize > >> the amount of object code generated in extension > modules which are > >> invoking the conversion. > > > > Ok. This is roughly what we do too; the pointer is > stored in the lua > > object > > The pointer to... what? The wrapped object? Right. > > > together with a pointer to the class_rep* associated > with the > > pointee. The class_rep holds the inheritance tree, so we > just > > compare type_info's and traverse the tree to perform the > needed > > cast. > > I think that problem is a little more complicated than > you're making > it out to be, and that your method ends up being slower > than it should > be in inheritance graphs of any size. First of all, > inheritance may > be a DAG and you have to prevent infinite loops if you're > actually > going to support cross-casting. 
Secondly Boost.Python > caches cast > sequences so that given the most-derived type of an > object, you only > have to search once for a conversion to any other type, > and after that > you can do a simple address calculation. See > libs/python/src/object/inheritance.cpp. This probably > should be > better commented; the algorithms were hard to figure out > and I didn't > write down rationale for them :( On the other hand, maybe > being fast > isn't important in this part of the code, and the cacheing > should be > eliminated ;-) We only support upcasting, so our method isn't that slow. Generally it's just a linked list traversal. We don't cache though, and caching is a good thing. :) > > >> That's for from-python conversions, of course. For > to-python > >> conversions, yes, we nearly always end up consulting > the > >> registration for the type. But that's cheap, after all > - there's > >> just a single method for converting any type to python > so it just > >> pulls the function pointer out of the registration and > invokes it. > >> Compared to the cost of constructing a Python object, > an indirect > >> call gets lost in the noise. The fact that there can > only be one > >> way to do that also means that we can introduce some > specializations > >> for conversion to python, which bypasses indirection > for known types > >> such as int or std::string. Note however that this > bypassing > >> actually has a usability cost because the implicit > conversion > >> mechanism actually consults the registration records > directly, and > >> I'm not currently filling in the to-python > registrations for these > >> types with specializations, so some implicit conversion > sequences > >> don't work. It may have been premature optimization to > use > >> specializations here. > > > > Ok, how do you handle conversions of lvalues from c++ -> > python? The > > to_python converter associated with a UDT does rvalue > conversion and > > creates a new object, correct? > > Yeah. Implicit conversion of lvalues by itself with no > ownership > management is dangerous so you have to tell Boost.Python > to do it. > I'm sure you know this, though, since luabind supports > "parameter > policies." Bad name, though: a primary reason for these > is to manage > return values (which are not parameters). So I wonder > what you're > really asking? We convert lvalues to lua with no management by default. I don't think this is more dangerous than copying the objects, it's just seg faults instead of silent errors. Both ways are equaly easy to make mistakes with. Our policies primary reason is not to handle return values, but to handle conversion in both directions. For example, adopt() can be used to steal objects that are owned by the interpreter. void f(A*); def("f", &f, adopt(_1)) But yes, the name should indicate both directions.. ConversionPolicy perhaps. > > >> >> > To do this we need a map<..> lookup to find the > appropriate > >> >> > converter and this really sucks. > >> > >> >> I can't understand why you'd need that, but maybe > I'm missing > >> >> something. The general mechanism in Boost.Python is > that > >> >> instance_holder::holds(type_info) will give you the > address of > >> >> the contained instance if it's there. > >> > >> > Right, we have a map when > performing c++ -> > >> > lua conversions. You just need to do > >> > registered::conversions.to_python(..); Correct? > >> > >> Roughly speaking, yes. 
But what I'm confused about is, > if you're > >> using full compile-time dispatching for from-lua > conversions, why > >> you don't do the same for to-lua conversions. AFAICT, > it's the > >> former where compile-time dispatch is most useful. > What's the 2nd > >> argument to the map? > > > > The second argument is a class_rep*, which holds > information about the > > exposed type. We need this to create the holding object > in lua. > > Oh, sure. I don't have such a limited view of to-python > conversions > as that. It's perfectly possible (and often desirable) to > register > converters which cause std::vector to be converted to a > Python > built-in list of X objects. It's the converter function > itself which > may access the corresponding PyTypeObject (equivalent of > class_rep*), > which it will always get through the static initialization > trick. Right, in our case it's also the converter that may access the class_rep*, and it is also possible to create converters which maps std::vector <-> lua table. So we seem to be doing pretty much the same thing here. Except we access it with a map<..> and you do it alot faster. :) > > >> >> > As mentioned before, lua can have multiple states, > so it would > >> >> > be cool if the converters would be bound to the > state somehow. > >> > >> >> Why? It doesn't seem like it would be very useful > to have > >> >> different states doing different conversions. > >> > >> > It can be useful to be able to register different > types in > >> > different states. > >> > >> Why? > > > > Because different states might handle completely > different tasks. > > Sure, but then aren't they going to handle different C++ > types and/or > be running different extension modules? Do you really > want the same > C++ type converted differently *by the same extension > module* in two > states? > > Sounds like premature generalization to me, but I could be > wrong. I don't know.. It does seem reasonable to not allow different conversions for the same type. > > >> > Otherwise class_() would register global types and > def() would > >> > register local functions. Or am I wrong in assuming > that > >> > class_() instantiates registered and add's a > few > >> > converters? > >> > >> No, you're correct. However, it also creates a Python > type object > >> in the extension module's dictionary, just as def() > creates > >> callable python objects in the module's dictionary. I > see the > >> converter registry as a separate data structure which > exists in > >> parallel with the module's dictionary. I don't see any > reason to > >> have a given module register different sets of type > conversions in > >> different states, even if it is going to contain > different > >> types/functions (though I can't see why you'd want that > either). > > > > As I said earlier, the states can handle different > tasks. A common > > usage is object scripting in games, but you might in the > same app use > > lua for parsing configuration files or scripting the > GUI. It's clear > > that you don't want all these systems to have access to > > _everything_. > > The question of registry isolation is a separate thing, I > think. I am > trying to suggest that a single extension module doesn't > need to work > with different conversion registries in different states. 
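As an aside, the std::vector <-> lua table mapping mentioned above is easy to picture in terms of the plain Lua C API; the fragment below is only illustrative and does not use luabind's actual converter interface, whose signatures aren't shown in this thread.

#include <vector>
extern "C" {
#include <lua.h>
}

// Push a std::vector<int> onto the Lua stack as a freshly created table.
void push_vector_as_table(lua_State* L, std::vector<int> const& v)
{
    lua_newtable(L);
    for (int i = 0; i < (int)v.size(); ++i)
    {
        lua_pushnumber(L, v[i]);
        lua_rawseti(L, -2, i + 1);         // Lua tables index from 1
    }
}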
> > > I guess you could always register types in the global > registry, and > > only expose them to the states where they are needed > though, if > > there's not enough reason to use different converters > for the same > > type in different states. > > I'm not committed to the idea of a single registry. In > fact we've > been discussing a hierarchical registry system where > converters are > searched starting with a local module registry and > proceeding upward > to the package level and finally ending with a global > registry as a > fallback. Right, that seems reasonable. > > >> >> > Anyway, I find your converter system more > appealing than > >> >> > ours. There are some issues which need to be taken > care of; We > >> >> > choose best match, not first match, when trying > different > >> >> > overloads. This means we need to keep the storage > for the > >> >> > converter on the stack of a function that is > unaware of the > >> >> > converter size (at compile time). So we need to > either have a > >> >> > fixed size buffer on the stack, and hope it works, > or allocate > >> >> > the storage at runtime. > >> > >> >> I would love to have best match conversion. I was > going to do it > >> >> at one point, but realized eventually that users can > sort the > >> >> overloads so that they always work so I never > bothered to code > >> >> it. > >> > >> > Do you still think best match is worth adding, or is > sorting an > >> > acceptable solution? > >> > >> I think in many cases, it's more understandable for > users to be > >> able to simply control the order in which converters > are tried. It > >> certainly is *more efficient* than trying all > converters, if you're > >> going to be truly compulsive about cycles, though I > don't really > >> care about that. We do have one guy, though, who's got > a massively > >> confusable overload set and I think he's having trouble > resolving > >> it because of the easy conversions between C++ (int, > long long) and > >> Python (int, LONG). > >> > >> http://aspn.activestate.com/ASPN/Mail/Message/1652647 > >> In general, I'd prefer to have more things "just work" > >> automatically, so yeah I think it's worth adding to > Boost.Python. > > > > Ok great. > >> > >> >> > For clarification: > >> >> > void dispatcher(..) { *storage here* try all > overloads call best > >> >> > overload } > >> >> I've already figured out how to solve this problem; > if we can > >> >> figure out how to share best-conversion technology > I'll happily > >> >> code it up ;-) > >> > :) How would you do it? > >> I'll give you a hint, if you agree to cooperate on > best-conversion: > > > > Agreed. > > OK. > > >> > >> > I guess you could have static storage in the > match-function and > >> > store a pointer to that in the converter data, but > that wouldn't > >> > be thread safe. > >> OK, here it is, I'll tell you: you use recursion. > > > > Ah, I have considered that too. > > Great minds think alike ;-) > > > But at the time it seemed a bit complex. You would let > the 'matcher' > > functions call the next matcher, pass the current > best-match value > > along and return some information that tells you if > there's been a > > match further down in the recursion, and just let the > matcher call > > the function when there's no better match before it on > the stack. > > Something like that? > > Yes, something like that. I don't really think it's too > complicated. My big problem was trying to figure out a > scheme for > assigning match quality. 
C++ uses a kind of > "substitutaiblity" rule > for resolving partial ordering which seemed like a good > way to handle > things. How do you do it? We just let every converter return a value indicating how good the match was, where 0 is perfect match and -1 is no match. When performing implicit conversions, every step in the conversions inreases the match value. Maybe I'm naive, but is there need for anything more complicated? > > > It doesn't seem that expensive to me, the recursion > won't be very > > deep anyway. > > I agree; it's rare to have huge overload sets.p > > >> > Perhaps we should consider parameterizing header > files? > >> > namespace luabind { #define > BOOST_LANG_CONVERSION_PARAMS \ (2, > >> > (lua_State*, int)) #include > } > >> > >> Hmm, I'm not sure what you're trying to achieve here, > but that kind > >> of parameterization seems unneccessary to me. we > probably ought to > >> do it with templates if there's any chance at all that > these > >> systems would have to be compiled together in some > >> context... though I guess with inclusion into separate > namespaces > >> you could get around that. Well OK, let's look at the > requirements > >> more carefully before we jump into implementation > details. I may > >> be willing to accept additional state. > > > > Right. The requirement I was aiming to resolve was that > we need a > > different set of parameters when doing our conversions. > > I consider that an implementation detail ;-) > > > You have your PyObject*, we have our (lua_State*, int). > > What's the int? An index to the object being converted on the lua stack. > > > I thought that parameterizing the implementation and > including in > > different namespaces would solve all issues of that type > nicely, > > though there might be far better solutions. > > Maybe; I think there are lots of strategies available and > which is > best probably depends on the other issues we need to > address. > > > Here are some notes for the conversion requirements: > > > > * We need different sets of additional parameters > passed through the > > conversion system. And thus we need different types of > function > > pointers stored in the registry. > > Sure. > > > * We need to have separate registries, so that both > systems can be > > used at the same time. > > We need separate registries within Boost.Python too; we > just don't > have them, yet. There's also a potential issue with > thread safety if > you have modules using the same registry initializing > concurrently. > With a single Python interpreter state, it's not an issue, > since > extension module code is always entered on the main thread > until a > mutex is explicitly released. Anyway, I want to discuss > the whole > issue of registry isolation in the larger context of > what's desirable > for both systems and their evolution into the future. Right. For luabind it seems reasonable to accept a single registry for every module, and perhaps global registry used by interacting modules as well. It doesn't seem that interesting to register different conversions for different states anymore. (at least not to me, but I could be wrong..). But if we where to increase the isolation of the registries, each state could just as well get their own registry. 
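Putting the two details from this message together, the (lua_State*, int) conversion parameters and the scoring convention (0 = perfect match, larger values = more implicit-conversion steps, -1 = no match), a from-lua matcher for a plain int might look roughly like this. It is illustrative only and not luabind's actual converter signature; the particular scores 1 and 2 are made up.

extern "C" {
#include <lua.h>
}

// Score how well the value at 'index' on the stack converts to int.
int match_int(lua_State* L, int index)
{
    if (lua_isnumber(L, index))
    {
        lua_Number n = lua_tonumber(L, index);
        return n == (lua_Number)(int)n ? 0 : 1;   // exact integral value, or truncating match
    }
    if (lua_isboolean(L, index))
        return 2;                                  // bool -> int costs one more conversion step
    return -1;                                     // no match
}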
-- Daniel Wallin From dave at boost-consulting.com Tue Jun 24 13:11:47 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 07:11:47 -0400 Subject: [C++-sig] Re: return_self_policy References: <3EF7D45C.7D819158@sitius.com> Message-ID: Nikolay Mladenov writes: > Posting return_self policy implementation > > Nikolay''' Nikolay, This is wonderful! Now, I hate to do this, but I just realized that this should really be generalized to something which takes an argument number as its parameter and returns that argument: return_identity<0> // error return_identity<>, return_identity<1> // same as return_self_policy return_identity<2> // return the 2nd argument return_identity<3> // return the 3rd argument ... etc. Don't you think that makes more sense? Would you mind making this modification? Thoughts, objections, screaming...? -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Tue Jun 24 13:22:39 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 07:22:39 -0400 Subject: [C++-sig] Re: Using .add_property with make_getter. References: <5B5544F322E5D211850F00A0C91DEF3B050DBF57@osll007a.siemens.no> Message-ID: Kirsebom Nikolai writes: > I have the following c++ class: > class PyDLEPRInterface : public DLEPRInterface > { > public: > PyDLEPRInterface(); > PyDLEPRInterface(const PyDLEPRInterface& objectSrc); > }; > > PyDLEPRInterface::PyDLEPRInterface(const PyDLEPRInterface& objectSrc) > { > } > > PyDLEPRInterface::PyDLEPRInterface() > { > } > > The class DLEPRInterface defines some public attributes, example > CString m_DocumentCategory; > int m_UserID; > > > I have a converter for converting CString <--> python string. > See code in thread "Exception second time loaded" 20th of June. > > The two attributes are exposed with the statements: > .def_readonly("UserID", &PyDLEPRInterface::m_UserID) > and > .add_property("DocumentCategory", > make_getter(&PyDLEPRInterface::m_DocumentCategory, > return_value_policy())) > > When running in Python, the UserID is available What does "available" mean? > however reading the DocumentCategory attribute produces the > following traceback: > > import DocuLive #< PyDELEPRInterface object > import CString #< v = DocuLive.getit() #< v.UserID #< 44 > v.DocumentCategory > Traceback (most recent call last): > File "", line 1, in ? > TypeError: bad argument type for built-in operation Can you post a complete, minimal test case that we can use to reproduce the problem? > I'm running the python statements in a PyCrust (wxPython/wxWindows) > shell application. In my posting 20th of June I asked for help > relating to exception when staring the second time. It appears that > the starting of the mainloop in the PyCrust application produces the > exception if other applications (in Windows) has been activated in > between. I don't know what that means, but I can tell you that if you want to initialize the same Boost.Python extension modules a 2nd time, the Boost.Python DLL must be unloaded first. I don't know what it takes to do that, but I'm guessing we have a problem because it gets referenced by each of the BPL extension modules which is loaded, and they in turn are being kept alive because no PyFinalize() is being called, because we don't support PyFinalize() yet. Dirk Gerrits has been working on a solution, but has been waylaid. It's an important feature to get implemented, but Boost Consulting has to focus on projects which have been funded. 
-- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Tue Jun 24 13:54:53 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 07:54:53 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef8298c.50e4.16838@student.umu.se> Message-ID: Daniel, [Please try to cut out irrelevant quoting; thanks] "Daniel Wallin" writes: >> "Daniel Wallin" writes: >> >> > together with a pointer to the class_rep* associated >> > with the pointee. The class_rep holds the inheritance >> > tree, so we just compare type_info's and traverse the >> > tree to perform the needed cast. >> >> I think that problem is a little more complicated than >> you're making it out to be, and that your method ends up >> being slower than it should be in inheritance graphs of >> any size. First of all, inheritance may be a DAG and you >> have to prevent infinite loops if you're actually going >> to support cross-casting. Secondly Boost.Python caches >> cast sequences so that given the most-derived type of an >> object, you only have to search once for a conversion to >> any other type, and after that you can do a simple >> address calculation. See >> libs/python/src/object/inheritance.cpp. This probably >> should be better commented; the algorithms were hard to >> figure out and I didn't write down rationale for them :( >> On the other hand, maybe being fast isn't important in >> this part of the code, and the cacheing should be >> eliminated ;-) > > We only support upcasting, so our method isn't that slow. Surely you want to be able to go in both directions, though? Surely not everyone using lua is interested in just speed and not usability? > Generally it's just a linked list traversal. We don't > cache though, and caching is a good thing. :) Yeah, probably. It would be good to share all of that. >> > Ok, how do you handle conversions of lvalues from c++ >> > -> python? The to_python converter associated with a >> > UDT does rvalue conversion and creates a new object, >> > correct? >> >> Yeah. Implicit conversion of lvalues by itself with no >> ownership management is dangerous so you have to tell >> Boost.Python to do it. I'm sure you know this, though, >> since luabind supports "parameter policies." Bad name, >> though: a primary reason for these is to manage return >> values (which are not parameters). So I wonder what >> you're really asking? > > We convert lvalues to lua with no management by default. I > don't think this is more dangerous than copying the > objects, it's just seg faults instead of silent > errors. Your way, mistakes by the user of the *interpreter* can easily crash the system. My way, only the guy/gal doing the wrapping has to be careful: >>> x = X() >>> z = x.y >>> del x >>> z.foo() # crash The users of these interpreted environments have an expectation that their interpreter won't *crash* just because of the way they've used it. > Both ways are equaly easy to make mistakes with. Totally disagree. Done my way, we force the guy/gal to consider whether he really wants to do something unsafe before he does it. You probably think I copy objects by default, but I don't. That was BPLv1. In BPLv2 I issue an error unless the user supplies a call policy. Finally, let me point out that although we currently use Python weak references to accomplish this I realized last night that there's a *much* easier and more-efficient way to do it using a special kind of smart pointer to refer to the referenced object. 
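For readers following the thread, the crash sequence above is exactly what the explicit-policy requirement guards against. A hedged example of the safe spelling, reusing the X/Y names from the snippet (everything else is illustrative): exposing X::y through return_internal_reference<> ties the owning X's lifetime to the returned Y, so the "del x" sequence should no longer crash.

#include <boost/python.hpp>

struct Y { void foo() {} };
struct X { Y y; };

BOOST_PYTHON_MODULE(lifetime_example)
{
    using namespace boost::python;
    class_<Y>("Y").def("foo", &Y::foo);
    class_<X>("X")
        .add_property("y",
            make_getter(&X::y, return_internal_reference<>()));
}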
> Our policies primary reason is not to handle return > values, but to handle conversion in both directions. For > example, adopt() can be used to steal objects that are > owned by the interpreter. > > void f(A*); def("f", &f, adopt(_1)) What, you just leak a reference here? Or is it something else? I had a major client who was sure he was going to need to leak references, but he eventually discovered that the provided call policies could always be made to do something more-intelligent, so I never put the reference-leaker in the library. I haven't had a single request for it since, either. > But yes, the name should indicate both > directions.. ConversionPolicy perhaps. Hmm, this is really very specific to calls, because it *does* manage arguments and return values. I really think CallPolicy is better. In any case I think we should converge on this, one way or another; there will be more languages, and you do want to be able to steal my users, right? . That'll be a lot easier if they see familiar terminology ;-) >> Oh, sure. I don't have such a limited view of to-python >> conversions as that. It's perfectly possible (and often >> desirable) to register converters which cause >> std::vector to be converted to a Python built-in list >> of X objects. It's the converter function itself which >> may access the corresponding PyTypeObject (equivalent of >> class_rep*), which it will always get through the static >> initialization trick. > > Right, in our case it's also the converter that may access > the class_rep*, and it is also possible to create > converters which maps std::vector <-> lua table. So we > seem to be doing pretty much the same thing here. Except > we access it with a map<..> and you do it alot faster. :) OK. >> >> >> > As mentioned before, lua can have multiple >> >> >> > states, so it would be cool if the converters >> >> >> > would be bound to the state somehow. >> >> >> >> >> Why? It doesn't seem like it would be very useful >> >> >> to have different states doing different >> >> >> conversions. >> >> >> >> > It can be useful to be able to register different >> >> > types in different states. >> >> >> Why? >> >> > Because different states might handle completely >> > different tasks. >> >> Sure, but then aren't they going to handle different C++ >> types and/or be running different extension modules? Do >> you really want the same C++ type converted differently >> *by the same extension module* in two states? Sounds >> like premature generalization to me, but I could be >> wrong. > > I don't know.. It does seem reasonable to not allow > different conversions for the same type. Phew! ;-) >> I'm not committed to the idea of a single registry. In >> fact we've been discussing a hierarchical registry system >> where converters are searched starting with a local >> module registry and proceeding upward to the package >> level and finally ending with a global registry as a >> fallback. > > Right, that seems reasonable. Cool. And let me also point out that if the module doesn't have to collaborate with other modules, you don't even need a registry lookup at static initialization time. A static data member of a class template is enough to create an area of storage associated with a C++ type. There's no central registry at all in that case. I have grave doubts about whether it's worth special-casing the code for this, but it might make threading easier to cope with. >> My big problem was trying to figure out a scheme for >> assigning match quality. 
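The "static data member of a class template" trick reads roughly as follows; the record's contents are invented for the sketch (real registrations hold more), but the point is that instantiating the template gives the linker one storage slot per wrapped type, reachable at compile time with no registry lookup.

#include <Python.h>

struct conversion_record
{
    PyObject* (*to_python)(void const*);   // illustrative slots only
    void*     (*from_python)(PyObject*);
};

template <class T>
struct registered
{
    static conversion_record converters;   // one instance of this per T, per module
};

template <class T>
conversion_record registered<T>::converters;

// Wrapping code fills in registered<SomeType>::converters during static
// initialization; converting code simply names it, e.g.
//     registered<SomeType>::converters.to_python(&x);
// A shared registry keyed on type_info is only needed when independently
// compiled modules have to collaborate.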
C++ uses a kind of >> "substitutaiblity" rule for resolving partial ordering >> which seemed like a good way to handle things. How do >> you do it? > > We just let every converter return a value indicating how > good the match was, where 0 is perfect match and -1 is no > match. When performing implicit conversions, every step in > the conversions inreases the match value. > > Maybe I'm naive, but is there need for anything more complicated? Well, it's the "multimethod problem": consider base and derived class formal arguments which both match an actual argument, or int <--> float conversions. How much do you increase the match value by for a particular match? >> > Right. The requirement I was aiming to resolve was that we need a >> > different set of parameters when doing our conversions. >> >> I consider that an implementation detail ;-) >> >> > You have your PyObject*, we have our (lua_State*, int). >> >> What's the int? > > An index to the object being converted on the lua stack. Oh, I guess lua hasn't handed you a pointer to an object at that point? Well, OK. >> We need separate registries within Boost.Python too; we >> just don't have them, yet. There's also a potential >> issue with thread safety if you have modules using the >> same registry initializing concurrently. With a single >> Python interpreter state, it's not an issue, since >> extension module code is always entered on the main >> thread until a mutex is explicitly released. Anyway, I >> want to discuss the whole issue of registry isolation in >> the larger context of what's desirable for both systems >> and their evolution into the future. > > Right. For luabind it seems reasonable to accept a single > registry for every module, and perhaps global registry > used by interacting modules as well. > > It doesn't seem that interesting to register different > conversions for different states anymore. (at least not to > me, but I could be wrong..). But if we where to increase > the isolation of the registries, each state could just as > well get their own registry. Let's continue poking at the issues until we get clarity. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nikolai.kirsebom at siemens.no Tue Jun 24 14:37:37 2003 From: nikolai.kirsebom at siemens.no (Kirsebom Nikolai) Date: Tue, 24 Jun 2003 14:37:37 +0200 Subject: [C++-sig] Re: Using .add_property with make_getter. Message-ID: <5B5544F322E5D211850F00A0C91DEF3B050DBF81@osll007a.siemens.no> > -----Original Message----- > From: David Abrahams [mailto:dave at boost-consulting.com] > Sent: 24. juni 2003 13:23 > To: c++-sig at python.org > Subject: ?C++-sig? Re: Using .add_property with make_getter. > > > Kirsebom Nikolai writes: > > > I have the following c++ class: > > class PyDLEPRInterface : public DLEPRInterface > > { > > public: > > PyDLEPRInterface(); > > PyDLEPRInterface(const PyDLEPRInterface& objectSrc); > > }; > > > > PyDLEPRInterface::PyDLEPRInterface(const PyDLEPRInterface& > objectSrc) > > { > > } > > > > PyDLEPRInterface::PyDLEPRInterface() > > { > > } > > > > The class DLEPRInterface defines some public attributes, example > > CString m_DocumentCategory; > > int m_UserID; > > > > > > I have a converter for converting CString <--> python string. > > See code in thread "Exception second time loaded" 20th of June. 
> > > > The two attributes are exposed with the statements: > > .def_readonly("UserID", &PyDLEPRInterface::m_UserID) > > and > > .add_property("DocumentCategory", > > make_getter(&PyDLEPRInterface::m_DocumentCategory, > > return_value_policy())) > > > > When running in Python, the UserID is available > > What does "available" mean? Only that I'm able to read it's value from Python. > > > however reading the DocumentCategory attribute produces the > > following traceback: > > > > import DocuLive #< > PyDELEPRInterface object > > import CString #< > v = DocuLive.getit() #< > v.UserID #< > 44 > > v.DocumentCategory > > Traceback (most recent call last): > > File "", line 1, in ? > > TypeError: bad argument type for built-in operation > > Can you post a complete, minimal test case that we can use to > reproduce the problem? > I'll try to make a test-case. Problem with the current system is that it includes a lot of propriatory code that I cannot provide. > > I'm running the python statements in a PyCrust (wxPython/wxWindows) > > shell application. In my posting 20th of June I asked for help > > relating to exception when staring the second time. It appears that > > the starting of the mainloop in the PyCrust application produces the > > exception if other applications (in Windows) has been activated in > > between. > > I don't know what that means, but I can tell you that if you want to > initialize the same Boost.Python extension modules a 2nd time, the > Boost.Python DLL must be unloaded first. I don't know what it takes > to do that, but I'm guessing we have a problem because it gets > referenced by each of the BPL extension modules which is loaded, and > they in turn are being kept alive because no PyFinalize() is being > called, because we don't support PyFinalize() yet. Dirk Gerrits has > been working on a solution, but has been waylaid. It's an important > feature to get implemented, but Boost Consulting has to focus on > projects which have been funded. > > What code actually initiates the Boost.Python initialization ? My extension DLL is not unloaded, at least when running in the debugger (VS 7.0) the Modules windows lists the DLLs (Python22.dll, boost_python_debug.dll and DLEPRPythonDld.dll (mine)). I enclose my 'module' file. Maybe the solution is obvious to someone. I'll be away for some time and I'll look into making a 'workable' test-case when back. Untill then, thanks for your quick reply and help. PS: Could the problem (getting access to the property) be related to the fact that I wrap the actual instance object in a new class PyDLEPRInterface inheriting from DLEPRInterface, something I had to do because of compiler error (see my posting on the 20th of June (Exception second time loaded). PPS: The CPythonDocuLiveDlg dialog is executed with the DoModal(). PPPS: I've not been able to retreive the exception information when the statement "wxPython.lib.PyCrust.PyShellApp.main()" fails. So I've made a counter to indicate where the it failed, an 'i' comes out with the value 2. 
Nikolai Kirsebom //////////////////////////////////////////////////////////////////////////// //////////////////////////////// // // #include "stdafx.h" #include "resource.h" #include #include #include "PyInterface.h" //#include "Python.h" #include #include #include #include #include #include #include //////////////////////////////////////////////////////////////////////////// ////////////// // class PyDLEPRInterface : public DLEPRInterface { public: PyDLEPRInterface(); PyDLEPRInterface(const PyDLEPRInterface& objectSrc); }; PyDLEPRInterface::PyDLEPRInterface(const PyDLEPRInterface& objectSrc) { } PyDLEPRInterface::PyDLEPRInterface() { } //////////////////////////////////////////////////////////////////////////// ///////////// ///// EXPOSED INTERFACE ///////////////////////////////////////////////////////////////// using namespace boost::python; static PyDLEPRInterface* curr=NULL; static bool ThreadRunning = FALSE; class PythonException { std::string m_exception_type; std::string m_error_message; public: PythonException():m_exception_type(""),m_error_message(""){}; void setExceptionType(std::string msg) {m_exception_type = msg;} void setErrorMessage(std::string msg) {m_error_message = msg;} std::string getExceptionType() {return m_exception_type;} std::string getErrorMessage(void) {return m_error_message;} }; void getExceptionDetail(PythonException& exc) { PyObject* exc_type; PyObject* exc_value; PyObject* exc_traceback; PyObject* pystring; PyErr_Fetch(&exc_type, &exc_value, &exc_traceback); if( exc_type==0 && exc_value==0 && exc_traceback==0) { exc.setExceptionType("Strange: No Python exception occured"); exc.setErrorMessage("Strange: Nothing to report"); } else { pystring = NULL; if (exc_type != NULL && (pystring = PyObject_Str(exc_type)) != NULL && /* str(object) */ (PyString_Check(pystring)) ) exc.setExceptionType(PyString_AsString(pystring)); else exc.setExceptionType(""); Py_XDECREF(pystring); pystring = NULL; if (exc_value != NULL && (pystring = PyObject_Str(exc_value)) != NULL && /* str(object) */ (PyString_Check(pystring)) ) exc.setErrorMessage(PyString_AsString(pystring)); else exc.setErrorMessage(""); Py_XDECREF(pystring); Py_XDECREF(exc_type); Py_XDECREF(exc_value); /* caller owns all 3 */ Py_XDECREF(exc_traceback); /* already NULL'd out */ } } PyDLEPRInterface* getit() { return curr; } BOOST_PYTHON_MODULE(DocuLive) { def("getit", getit, return_value_policy()) ; class_("DLEPRInterface") .def("GetDatabaseName", &PyDLEPRInterface::GetDatabaseName) .def("GetDefaultServerName", &PyDLEPRInterface::GetDefaultServerName) .def("IsRubber", &PyDLEPRInterface::IsRubber) .def_readonly("RecordId", &PyDLEPRInterface::m_RecordID) .def_readonly("Item", &PyDLEPRInterface::m_Item) .def_readonly("OriginalItem", &PyDLEPRInterface::m_OriginalItem) .def_readonly("Row", &PyDLEPRInterface::m_Row) .def_readonly("Col", &PyDLEPRInterface::m_Col) .def_readonly("UserID", &PyDLEPRInterface::m_UserID) .def_readonly("m_CurMenuEntry", &PyDLEPRInterface::m_CurMenuEntry) .add_property("DocumentCategory", make_getter(&PyDLEPRInterface::m_DocumentCategory, return_value_policy())) .add_property("LookupCategory", make_getter(&PyDLEPRInterface::m_LookupCategory, return_value_policy())) .add_property("RecordCategory", make_getter(&PyDLEPRInterface::m_RecordCategory, return_value_policy())) .add_property("IconPurpose", make_getter(&PyDLEPRInterface::m_IconPurpose, return_value_policy())) ; class_("MenuEntry") .add_property("ParamString1", make_getter(&MenuEntry::ParamString1, return_value_policy())) ; } namespace 
MFCString { namespace { struct CString_to_python_str { static PyObject* convert(CString const& s) { CString ss = s; std::string x = ss.GetBuffer(1000); return boost::python::incref(boost::python::object(x).ptr()); } }; struct CString_from_python_str { CString_from_python_str() { boost::python::converter::registry::push_back( &convertible, &construct, boost::python::type_id()); } static void* convertible(PyObject* obj_ptr) { if (!PyString_Check(obj_ptr)) return 0; return obj_ptr; } static void construct( PyObject* obj_ptr, boost::python::converter::rvalue_from_python_stage1_data* data) { const char* value = PyString_AsString(obj_ptr); if (value == 0) boost::python::throw_error_already_set(); void* storage = ((boost::python::converter::rvalue_from_python_storage*)data)->stor age.bytes; new (storage) CString(value); data->convertible = storage; } }; void init_module() { using namespace boost::python; boost::python::to_python_converter< CString, CString_to_python_str>(); CString_from_python_str(); } }} // namespace MFCString:: BOOST_PYTHON_MODULE(CString) { MFCString::init_module(); } BOOST_PYTHON_MODULE(MFC) { class_("CRect") .def_readwrite("bottom", &CRect::bottom) .def_readwrite("top", &CRect::top) .def_readwrite("right", &CRect::right) .def_readwrite("left", &CRect::left) .def("Height", &CRect::Height) .def("Width", &CRect::Width) ; class_("CPoint") .def_readwrite("x", &CPoint::x) .def_readwrite("y", &CPoint::y) ; } //////////////////////////////////////////////////////////////////////////// //////// /// PYTHON EXECUTOR CLASS ////////////////////////////////////////////////////////// class PyExecutor { public: PyExecutor(); ~PyExecutor(); CString RunStmt(DLEPRInterface * dl); PyObject * MainNamespace; }; PyExecutor::PyExecutor() { // Register the module with the interpreter if (PyImport_AppendInittab("DocuLive", initDocuLive) == -1) throw std::runtime_error("Failed to add DocuLive to the interpreter's builtin modules"); if (PyImport_AppendInittab("CString", initCString) == -1) throw std::runtime_error("Failed to add CString to the interpreter's builtin modules"); if (PyImport_AppendInittab("MFC", initMFC) == -1) throw std::runtime_error("Failed to add MFC to the interpreter's builtin modules"); Py_Initialize(); boost::python::handle<> main_module(borrowed( PyImport_AddModule("__main__"))); boost::python::handle<> main_namespace(borrowed( PyModule_GetDict(main_module.get()) )); MainNamespace = main_namespace.get(); } PyExecutor::~PyExecutor() { Py_Finalize(); } CString PyExecutor::RunStmt(DLEPRInterface * dlepr) { PyObject *p = NULL; curr = (PyDLEPRInterface *)dlepr; PythonException p_exc; // Create empty exception object on stack try { boost::python::handle<> result(PyRun_String( "try:\n" " i = 1\n" " import wxPython.lib.PyCrust.PyShellApp\n" " i += 1\n" " wxPython.lib.PyCrust.PyShellApp.main()\n" " i += 1\n" " del wxPython\n" " i += 1\n" " valx = 'ok'\n" " i += 1\n" "except:\n" " valx = 'error'\n" " raise str(i)\n" "#except:\n" "#valx = 'error'\n", Py_file_input, MainNamespace, MainNamespace)); result.reset(); p = PyRun_String("valx", Py_eval_input, MainNamespace, MainNamespace); result.release(); } catch(error_already_set) // What should we catch here?? 
{ getExceptionDetail(p_exc); std::string s1 = p_exc.getExceptionType(); std::string s2 = p_exc.getErrorMessage(); CString s; //s.Format("Exception: %s %s", s1, s2); AfxMessageBox(s); throw(p_exc); } if (p != NULL) { /**/ char *s; int i = PyArg_Parse(p, "s", &s); return _T(s); /**/ } else { return _T("NULL"); } } //////////////////////////////////////////////////////////////////////////// /// //// THREAD SUPPORT FUNCTIONS ///////////////////////////////////////////////// struct IfStruct { IfStruct(DLEPRInterface * i, CPythonDocuLiveDlg *dlg) { m_If = i; m_Dlg = dlg; }; DLEPRInterface * m_If; CPythonDocuLiveDlg * m_Dlg; }; UINT ThreadFunc(LPVOID pParam) { static PyExecutor *x = NULL; if (x == NULL) { x = new PyExecutor(); } IfStruct * p = (IfStruct *) pParam; CString v = x->RunStmt(p->m_If); //Send close message to window p->m_Dlg->SendMessage(WM_CLOSE); return 0; } //////////////////////////////////////////////////////////////////////////// /// // CPythonDocuLiveDlg dialog IMPLEMENT_DYNAMIC(CPythonDocuLiveDlg, CDialog) CPythonDocuLiveDlg::CPythonDocuLiveDlg(DLEPRInterface* pInterface, CWnd* pParent /*=NULL*/) : CDialog(CPythonDocuLiveDlg::IDD, pParent),m_Interface(*pInterface) { } CPythonDocuLiveDlg::~CPythonDocuLiveDlg() { } void CPythonDocuLiveDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); } BOOL CPythonDocuLiveDlg::OnInitDialog() { this->MoveWindow(0,0,100,100); //Should be outside the screen (-100,-100,100,100) IfStruct * s = new IfStruct(&m_Interface, this); m_Thread = AfxBeginThread(ThreadFunc, s); return TRUE; // return TRUE unless you set the focus to a control // EXCEPTION: OCX Property Pages should return FALSE } void CPythonDocuLiveDlg::OnOK() { CDialog::OnOK(); } void CPythonDocuLiveDlg::OnCancel() { CDialog::OnCancel(); } BEGIN_MESSAGE_MAP(CPythonDocuLiveDlg, CDialog) END_MESSAGE_MAP() // CPythonDocuLiveDlg message handlers From rwgk at yahoo.com Tue Jun 24 14:47:55 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Tue, 24 Jun 2003 05:47:55 -0700 (PDT) Subject: [C++-sig] Re: Interest in luabind In-Reply-To: Message-ID: <20030624124755.76529.qmail@web20209.mail.yahoo.com> --- David Abrahams wrote: > > Your way, mistakes by the user of the *interpreter* can > easily crash the system. My way, only the guy/gal doing the > wrapping has to be careful: > > >>> x = X() > >>> z = x.y > >>> del x > >>> z.foo() # crash > > The users of these interpreted environments have an > expectation that their interpreter won't *crash* just > because of the way they've used it. > As a user who (finally!) writes mainly new Python code I really value the safe-but-certain approach. If the interpreter crashes somewhere deep down in the application without printing a backtrace it is often very frustrating and time-consuming to isolate the problem by adding print statements. In a large-scale application it is crucial that all components are rock-solid. Ralf __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Tue Jun 24 15:03:53 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 09:03:53 -0400 Subject: [C++-sig] Re: Using .add_property with make_getter. References: <5B5544F322E5D211850F00A0C91DEF3B050DBF81@osll007a.siemens.no> Message-ID: Kirsebom Nikolai writes: >> From: David Abrahams [mailto:dave at boost-consulting.com] >> >> Kirsebom Nikolai writes: >> >> > >> > I have a converter for converting CString <--> python string. 
>> > See code in thread "Exception second time loaded" 20th of June.
>> >
>> > The two attributes are exposed with the statements:
>> > .def_readonly("UserID", &PyDLEPRInterface::m_UserID)
>> > and
>> > .add_property("DocumentCategory",
>> > make_getter(&PyDLEPRInterface::m_DocumentCategory,
>> > return_value_policy()))
>> >
>> > When running in Python, the UserID is available
>>
>> What does "available" mean?
>
> Only that I'm able to read its value from Python.

Near as I can tell, your problem is that there is no CString -> Python
string conversion registered.  Have you wrapped functions which return
CString objects?

>> > however reading the DocumentCategory attribute produces the
>> > following traceback:
>> >
>> > import DocuLive #<> > PyDLEPRInterface object
>> > import CString #<> > v = DocuLive.getit() #<> > v.UserID #<> > 44
>> > v.DocumentCategory
>> > Traceback (most recent call last):
>> > File "", line 1, in ?
>> > TypeError: bad argument type for built-in operation
>>
>> Can you post a complete, minimal test case that we can use to
>> reproduce the problem?
>>
>
> I'll try to make a test-case.

Thanks.

> Problem with the current system is that it
> includes a lot of proprietary code that I cannot provide.
>
>> > I'm running the python statements in a PyCrust (wxPython/wxWindows)
>> > shell application. In my posting 20th of June I asked for help
>> > relating to an exception when starting the second time. It appears that
>> > the starting of the mainloop in the PyCrust application produces the
>> > exception if other applications (in Windows) have been activated in
>> > between.
>>
>> I don't know what that means, but I can tell you that if you want to
>> initialize the same Boost.Python extension modules a 2nd time, the
>> Boost.Python DLL must be unloaded first. I don't know what it takes
>> to do that, but I'm guessing we have a problem because it gets
>> referenced by each of the BPL extension modules which is loaded, and
>> they in turn are being kept alive because no PyFinalize() is being
>> called, because we don't support PyFinalize() yet. Dirk Gerrits has
>> been working on a solution, but has been waylaid. It's an important
>> feature to get implemented, but Boost Consulting has to focus on
>> projects which have been funded.
>
> What code actually initiates the Boost.Python initialization?

Nothing explicit; it's initialized on most systems as soon as the first
Boost.Python extension module is loaded.

> My extension DLL is not unloaded; at least when running in the
> debugger (VS 7.0), the Modules window lists the DLLs (Python22.dll,
> boost_python_debug.dll and DLEPRPythonDld.dll (mine)).
>
> I enclose my 'module' file. Maybe the solution is obvious to someone. I'll
> be away for some time and I'll look into making a 'workable' test-case when
> back. Until then, thanks for your quick reply and help.

My pleasure.

> PS: Could the problem (getting access to the property) be related to the
> fact that I wrap the actual instance object in a new class PyDLEPRInterface
> inheriting from DLEPRInterface, something I had to do because of a compiler
> error (see my posting on the 20th of June, "Exception second time loaded")?

Yep.  The make_getter call doesn't know anything about the relationship
between PyDLEPRInterface and DLEPRInterface, so it can't know that it's
legit for the first argument to be a PyDLEPRInterface object.  Try:

    Whatever PyDLEPRInterface::*doc_category
        = &PyDLEPRInterface::m_DocumentCategory;
    ...
.add_property( "DocumentCategory", make_getter( doc_category, return_value_policy() ) ) Instead. > PPS: The CPythonDocuLiveDlg dialog is executed with the DoModal(). > > PPPS: I've not been able to retreive the exception information when the > statement "wxPython.lib.PyCrust.PyShellApp.main()" fails. So I've made a > counter to indicate where the it failed, an 'i' comes out with the value 2. > -- Dave Abrahams Boost Consulting www.boost-consulting.com From nickm at sitius.com Tue Jun 24 15:41:23 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Tue, 24 Jun 2003 09:41:23 -0400 Subject: [C++-sig] Re: return_self_policy References: <3EF7D45C.7D819158@sitius.com> Message-ID: <3EF85503.38A43BC6@sitius.com> David Abrahams wrote: > > Nikolay Mladenov ?nickm at sitius.com? writes: > > ? Posting return_self policy implementation > ? > ? Nikolay''' > > Nikolay, > > This is wonderful! Now, I hate to do this, but I just realized that > this should really be generalized to something which takes an > argument number as its parameter and returns that argument: I have already thought about it (I expected it from you ;-) ) and it is already there in some form: the definition of return_self_policy is template struct return_self_policy : detail::return_arg<0, Base> {} > > return_identity?0? // error > return_identity??, return_identity?1? // same as return_self_policy > return_identity?2? // return the 2nd argument > return_identity?3? // return the 3rd argument > ... > > etc. So return_arg is as your return_identity, although return_arg<0> is not an error but return_self. > > Don't you think that makes more sense? Would you mind making this > modification? I agree that it makes more sense, but I am not sure how much the "more" is. Generally I don't mind. > > Thoughts, objections, screaming...? My question is: why start counting from 1? This will make the code more complicated and difficult to read. > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com Regards, Nikolay From rwgk at yahoo.com Tue Jun 24 16:23:48 2003 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Tue, 24 Jun 2003 07:23:48 -0700 (PDT) Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <20030624124755.76529.qmail@web20209.mail.yahoo.com> Message-ID: <20030624142348.8383.qmail@web20206.mail.yahoo.com> --- "Ralf W. Grosse-Kunstleve" wrote: > the safe-but-certain approach. If the interpreter crashes somewhere deep ^^^^^^^^^^^^^^^^ Oops, what was I thinking?!? __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Tue Jun 24 17:20:59 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 11:20:59 -0400 Subject: [C++-sig] Re: return_self_policy References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> Message-ID: Nikolay Mladenov writes: > David Abrahams wrote: >> >> Nikolay Mladenov ?nickm at sitius.com? writes: >> >> ? Posting return_self policy implementation >> ? >> ? Nikolay''' >> >> Nikolay, >> >> This is wonderful! Now, I hate to do this, but I just realized that >> this should really be generalized to something which takes an >> argument number as its parameter and returns that argument: > > I have already thought about it (I expected it from you ;-) ) > and it is already there in some form: > the definition of return_self_policy is > > template > struct return_self_policy : > detail::return_arg<0, Base> {} > Oh, wonderful! Very shrewd of you to anticipate me. 
>> >> return_identity<0> // error
>> return_identity<>, return_identity<1> // same as return_self_policy
>> return_identity<2> // return the 2nd argument
>> return_identity<3> // return the 3rd argument
>> ...
>>
>> etc.
>
> So return_arg is as your return_identity, although return_arg<0> is not
> an error but return_self.

Well, y'see, by convention in call policies, zero is used to refer to
the return value and 1 refers to the first argument.  See
with_custodian_and_ward.

>> Don't you think that makes more sense? Would you mind making this
>> modification?
>
> I agree that it makes more sense, but I am not sure how much the
> "more" is.

:-)

> Generally I don't mind.
>
>>
>> Thoughts, objections, screaming...?
>
> My question is: why start counting from 1? This will make the code more
> complicated and difficult to read.

Consistency.  It's a convention in CallPolicies.

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com

From dave at boost-consulting.com  Tue Jun 24 17:21:47 2003
From: dave at boost-consulting.com (David Abrahams)
Date: Tue, 24 Jun 2003 11:21:47 -0400
Subject: [C++-sig] Re: Interest in luabind
References: <20030624124755.76529.qmail@web20209.mail.yahoo.com> <20030624142348.8383.qmail@web20206.mail.yahoo.com>
Message-ID:

"Ralf W. Grosse-Kunstleve" writes:

> --- "Ralf W. Grosse-Kunstleve" wrote:
>> the safe-but-certain approach. If the interpreter crashes somewhere deep
> ^^^^^^^^^^^^^^^^
>
> Oops, what was I thinking?!?

Not sure.  What's the matter with what you wrote?

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com

From dalwan01 at student.umu.se  Tue Jun 24 20:05:39 2003
From: dalwan01 at student.umu.se (Daniel Wallin)
Date: Tue, 24 Jun 2003 19:05:39 +0100
Subject: [C++-sig] Re: Interest in luabind
Message-ID: <3ef892f3.cae.16838@student.umu.se>

> >> I think that problem is a little more complicated than
> >> you're making it out to be, and that your method ends
> up
> >> being slower than it should be in inheritance graphs of
> >> any size. First of all, inheritance may be a DAG and
> you
> >> have to prevent infinite loops if you're actually going
> >> to support cross-casting. Secondly Boost.Python caches
> >> cast sequences so that given the most-derived type of
> an
> >> object, you only have to search once for a conversion
> to
> >> any other type, and after that you can do a simple
> >> address calculation. See
> >> libs/python/src/object/inheritance.cpp. This probably
> >> should be better commented; the algorithms were hard to
> >> figure out and I didn't write down rationale for them
> :(
> >> On the other hand, maybe being fast isn't important in
> >> this part of the code, and the cacheing should be
> >> eliminated ;-)
> >
> > We only support upcasting, so our method isn't that
> slow.
>
> Surely you want to be able to go in both directions,
> though?
> Surely not everyone using lua is interested in just speed
> and not usability?

We probably would like to be able to go in both directions. We also
don't want to force the user to compile with RTTI turned on, so we
currently supply a LUABIND_TYPEID macro to overload the typeid calls
for a unique id for the type. This of course causes some problems if we
want to downcast, so we would need to be able to turn downcasting off,
or let the user supply their own RTTI-system somehow.

>
> > Generally it's just a linked list traversal. We don't
> > cache though, and caching is a good thing. :)
>
> Yeah, probably. It would be good to share all of that.

Sure.
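A minimal sketch (not luabind's actual implementation) of the kind of
RTTI-free type identification mentioned a few paragraphs above: each type
is handed a unique integer the first time it is seen, and a user-side hook
in the spirit of LUABIND_TYPEID can forward to it.  All names below are
illustrative.

    // Illustrative only -- one fresh integer id per C++ type, no typeid().
    inline int allocate_type_id()
    {
        static int next_id = 0;
        return next_id++;
    }

    template <class T>
    struct static_type_id
    {
        static int value()
        {
            static const int id = allocate_type_id();  // assigned on first call
            return id;
        }
    };

    // A configuration hook analogous in spirit to LUABIND_TYPEID(type):
    #define MY_TYPEID(type) static_type_id<type>::value()

As the rest of the thread notes, ids produced this way are only unique
within a single module, which is exactly the cross-module concern raised
below.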
> > >> > Ok, how do you handle conversions of lvalues from c++ > >> > -> python? The to_python converter associated with a > >> > UDT does rvalue conversion and creates a new object, > >> > correct? > >> > >> Yeah. Implicit conversion of lvalues by itself with no > >> ownership management is dangerous so you have to tell > >> Boost.Python to do it. I'm sure you know this, though, > >> since luabind supports "parameter policies." Bad name, > >> though: a primary reason for these is to manage return > >> values (which are not parameters). So I wonder what > >> you're really asking? > > > > We convert lvalues to lua with no management by default. > I > > don't think this is more dangerous than copying the > > objects, it's just seg faults instead of silent > > errors. > > > Your way, mistakes by the user of the *interpreter* can > easily crash the system. My way, only the guy/gal doing > the > wrapping has to be careful: > > >>> x = X() > >>> z = x.y > >>> del x > >>> z.foo() # crash > > The users of these interpreted environments have an > expectation that their interpreter won't *crash* just > because of the way they've used it. > Right, I thought you always copied the object. My mistake. > > > Both ways are equaly easy to make mistakes with. > > Totally disagree. Done my way, we force the guy/gal to > consider whether he really wants to do something unsafe > before he does it. You probably think I copy objects by > default, but I don't. That was BPLv1. In BPLv2 I issue an > error unless the user supplies a call policy. > > Finally, let me point out that although we currently use > Python weak references to accomplish this I realized last > night that there's a *much* easier and more-efficient way > to do it using a special kind of smart pointer to refer to > the referenced object. Ah ok, function which returns lvalues causes compile time errors. When we decided to do it our way we thought returning unmanaged lvalue's would be the most common usage. We only considered copying the object as an alternative, perhaps it's better to give compile time errors. > > > Our policies primary reason is not to handle return > > values, but to handle conversion in both directions. For > > example, adopt() can be used to steal objects that are > > owned by the interpreter. > > > > void f(A*); def("f", &f, adopt(_1)) > > What, you just leak a reference here? Or is it something > else? I had a major client who was sure he was going to > need to leak references, but he eventually discovered that > the provided call policies could always be made to do > something more-intelligent, so I never put the > reference-leaker in the library. I haven't had a single > request for it since, either. The above is (almost) the equivalent of: void f(auto_ptr*); It is very useful when wrapping interfaces which expects the user to create objects and give up ownership. > > > But yes, the name should indicate both > > directions.. ConversionPolicy perhaps. > > Hmm, this is really very specific to calls, because it > *does* manage arguments and return values. I really think > CallPolicy is better. In any case I think we should > converge on this, one way or another; there will be more > languages, and you do want to be able to steal my users, > right? . That'll be a lot easier if they see > familiar terminology ;-) I don't think it's specific to calls, but to all conversion of types between the languages. 
We can use policies when fetching values from lua, or when calling lua functions from c++: A* stolen_obj = object_cast(get_globals(L)["obj"], adopt(result)); call_function(L, "f", stolen_obj) [ adopt(_1) ]; And yeah, of course stealing your users is our goal. :) > >> I'm not committed to the idea of a single registry. In > >> fact we've been discussing a hierarchical registry > system > >> where converters are searched starting with a local > >> module registry and proceeding upward to the package > >> level and finally ending with a global registry as a > >> fallback. > > > > Right, that seems reasonable. > > Cool. And let me also point out that if the module > doesn't > have to collaborate with other modules, you don't even > need > a registry lookup at static initialization time. A static > data member of a class template is enough to create an > area > of storage associated with a C++ type. There's no central > registry at all in that case. I have grave doubts about > whether it's worth special-casing the code for this, but > it > might make threading easier to cope with. I can't see why it would be worth it. If the module doesn't interact with other modules threading wouldn't be an issue? So how could it make it easier? > > >> My big problem was trying to figure out a scheme for > >> assigning match quality. C++ uses a kind of > >> "substitutaiblity" rule for resolving partial ordering > >> which seemed like a good way to handle things. How do > >> you do it? > > > > We just let every converter return a value indicating > how > > good the match was, where 0 is perfect match and -1 is > no > > match. When performing implicit conversions, every step > in > > the conversions inreases the match value. > > > > Maybe I'm naive, but is there need for anything more > complicated? > > Well, it's the "multimethod problem": consider base and > derived class formal arguments which both match an actual > argument, or int <--> float conversions. How much do you > increase the match value by for a particular match? I don't know if I get this. We just increase the match value by one for every casting step that is needed for converting the types. > > >> > Right. The requirement I was aiming to resolve was > that we need a > >> > different set of parameters when doing our > conversions. > >> > >> I consider that an implementation detail ;-) > >> > >> > You have your PyObject*, we have our (lua_State*, > int). > >> > >> What's the int? > > > > An index to the object being converted on the lua stack. > > Oh, I guess lua hasn't handed you a pointer to an object > at > that point? Well, OK. Right. Also, you can't get a pointer to all objects in lua, only "userdata" objects. If the object being converted is of primitive type, you can only access it directly from the stack with lua_toXXX() calls. > > >> We need separate registries within Boost.Python too; we > >> just don't have them, yet. There's also a potential > >> issue with thread safety if you have modules using the > >> same registry initializing concurrently. With a single > >> Python interpreter state, it's not an issue, since > >> extension module code is always entered on the main > >> thread until a mutex is explicitly released. Anyway, I > >> want to discuss the whole issue of registry isolation > in > >> the larger context of what's desirable for both systems > >> and their evolution into the future. > > > > Right. 
For luabind it seems reasonable to accept a > single > > registry for every module, and perhaps global registry > > used by interacting modules as well. > > > > It doesn't seem that interesting to register different > > conversions for different states anymore. (at least not > to > > me, but I could be wrong..). But if we where to increase > > the isolation of the registries, each state could just > as > > well get their own registry. > > Let's continue poking at the issues until we get clarity. > Yeah, I'll have to think about this for a bit, the whole registry thing is quite new to me. -- Daniel Wallin From jochen at neverEngine.com Tue Jun 24 20:58:22 2003 From: jochen at neverEngine.com (jochen) Date: Tue, 24 Jun 2003 20:58:22 +0200 Subject: [C++-sig] Using a phython function as callback Message-ID: Hello everybody. I have a class class Selectable { ... }; which I exported to be part of my python module. Now I wrote a Python function: def UpdateSelectedObject( i_Selectable ): # do something with the Selectable instance. return I got access to the function via the dict of my module. Now I wan't to use PyObject_CallObject() to call the function. But how do I pass a instance of type Selectable to the function. Passing a integer or string is easy, because there a functions available from the PythonAPI. But how do I wrap my instance of type Selectable into a PyObject ? Can I use part of the boost api do this, I mean this is exactly what boost does, isn't it. Can anybody help a bloody beginner ? From nickm at sitius.com Tue Jun 24 21:29:16 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Tue, 24 Jun 2003 15:29:16 -0400 Subject: [C++-sig] Re: Using a phython function as callback References: Message-ID: <3EF8A68C.195494BD@sitius.com> See this http://boost.org/libs/python/doc/v2/call.html jochen wrote: > > Hello everybody. > > I have a class > > class Selectable > { > ... > }; > > which I exported to be part of my python module. > > Now I wrote a Python function: > > def UpdateSelectedObject( i_Selectable ): > > # do something with the Selectable instance. > return > > I got access to the function via the dict of my module. > Now I wan't to use PyObject_CallObject() to call the function. > But how do I pass a instance of type Selectable to the function. > > Passing a integer or string is easy, because there a functions available > from the PythonAPI. > But how do I wrap my instance of type Selectable into a PyObject ? > > Can I use part of the boost api do this, I mean this is exactly what boost > does, isn't it. > > Can anybody help a bloody beginner ? From nicodemus at globalite.com.br Tue Jun 24 21:28:22 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 24 Jun 2003 16:28:22 -0300 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <20030624124755.76529.qmail@web20209.mail.yahoo.com> References: <20030624124755.76529.qmail@web20209.mail.yahoo.com> Message-ID: <3EF8A656.5070702@globalite.com.br> Ralf W. Grosse-Kunstleve wrote: >--- David Abrahams wrote: > > >> >>Your way, mistakes by the user of the *interpreter* can >>easily crash the system. My way, only the guy/gal doing the >>wrapping has to be careful: >> >> >>> x = X() >> >>> z = x.y >> >>> del x >> >>> z.foo() # crash >> >>The users of these interpreted environments have an >>expectation that their interpreter won't *crash* just >>because of the way they've used it. >> >> >> > >As a user who (finally!) writes mainly new Python code I really value >the safe-but-certain approach. 
If the interpreter crashes somewhere deep >down in the application without printing a backtrace it is often very >frustrating and time-consuming to isolate the problem by adding print >statements. In a large-scale application it is crucial that all >components are rock-solid. >Ralf > We use Boost.Python at our company developing large applications, and I agree with Ralf. Plus, I believe it is against Python philosophy to get a core dump, since Python itself goes great lenghts to prevent explicit memory management by the user. BTW, this discussion about LuaBind is really interesting. 8) From jochen at neverEngine.com Tue Jun 24 22:12:20 2003 From: jochen at neverEngine.com (jochen) Date: Tue, 24 Jun 2003 22:12:20 +0200 Subject: [C++-sig] Using a phython function as callback Message-ID: Thanks for the hint, but I can't figure out how to use it correctly: What I do is: SelectableObject* pSelectable; ... PyObject* pCB = pSelectable->GetOnUpdateSelectionCallBack(); if ( PyCallable_Check( pCB ) ) { boost::python::call< void >( pCB, pSelectable ); } Py_DECREF( pCB ); But the call raises a exection: boost::python::error_already_set How do I correctly convert the instance of type SelectableObject to a valid PyObject ? From dirk at gerrits.homeip.net Tue Jun 24 22:39:41 2003 From: dirk at gerrits.homeip.net (Dirk Gerrits) Date: Tue, 24 Jun 2003 22:39:41 +0200 Subject: [C++-sig] Re: Interest in luabind In-Reply-To: <3EF8A656.5070702@globalite.com.br> References: <20030624124755.76529.qmail@web20209.mail.yahoo.com> <3EF8A656.5070702@globalite.com.br> Message-ID: Nicodemus wrote: > Ralf W. Grosse-Kunstleve wrote: > >> As a user who (finally!) writes mainly new Python code I really value >> the safe-but-certain approach. If the interpreter crashes somewhere deep >> down in the application without printing a backtrace it is often very >> frustrating and time-consuming to isolate the problem by adding print >> statements. In a large-scale application it is crucial that all >> components are rock-solid. >> Ralf >> > > We use Boost.Python at our company developing large applications, and I > agree with Ralf. > Plus, I believe it is against Python philosophy to get a core dump, > since Python itself goes great lenghts to prevent explicit memory > management by the user. I concur. Anything that has gone wrong with my Python programs so far has always resulted in a nice and clear exception with a traceback. If only that would have been the case in C++... :P Great libraries like the STL and Boost alleviate the pain though. > BTW, this discussion about LuaBind is really interesting. 8) I agree! And perhaps some of it can serve as a fine basis for that 'Boost.Python core documentation'? Regards, Dirk Gerrits From dave at boost-consulting.com Tue Jun 24 22:42:04 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 16:42:04 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef892f3.cae.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> >> I think that problem is a little more complicated than >> >> you're making it out to be, and that your method ends >> >> up being slower than it should be in inheritance >> >> graphs of any size. First of all, inheritance may be >> >> a DAG and you have to prevent infinite loops if you're >> >> actually going to support cross-casting. 
Secondly >> >> Boost.Python caches cast sequences so that given the >> >> most-derived type of an object, you only have to >> >> search once for a conversion to any other type, and >> >> after that you can do a simple address calculation. >> >> See libs/python/src/object/inheritance.cpp. This >> >> probably should be better commented; the algorithms >> >> were hard to figure out and I didn't write down >> >> rationale for them :( On the other hand, maybe being >> >> fast isn't important in this part of the code, and the >> >> cacheing should be eliminated ;-) >> > >> > We only support upcasting, so our method isn't that >> > slow. >> >> Surely you want to be able to go in both directions, >> though? Surely not everyone using lua is interested in >> just speed and not usability? > > We probably would like to be able to go in both > directions. We also don't want to force the user to > compile with RTTI turned on I'm sure that's a capability some Boost.Python users would appreciate, too. > so we currently supply a LUABIND_TYPEID macro to overload > the typeid calls for a unique id for the type. This functions something like a specialization? > This of course causes some problems if we want to > downcast And if you want to support CBD (component based development, i.e. cross-module conversions), unless you can somehow get all authors to agree on how to identify types. Well, I guess they could use strings and manually supply what std::type_info::name() does. > so we would need to be able to turn downcasting off, or > let the user supply their own RTTI-system somehow. That's pretty straightforward, fortunately. We need to be careful about what kinds of reconfigurability is available through macros. Extensions linked to the same shared library all share a link symbol space and thus are subject to ODR problems. >> >> > Ok, how do you handle conversions of lvalues from c++ >> >> > -> python? The to_python converter associated with a >> >> > UDT does rvalue conversion and creates a new object, >> >> > correct? >> >> >> >> Yeah. Implicit conversion of lvalues by itself with >> >> no ownership management is dangerous so you have to >> >> tell Boost.Python to do it. I'm sure you know this, >> >> though, since luabind supports "parameter policies." >> >> Bad name, though: a primary reason for these is to >> >> manage return values (which are not parameters). So I >> >> wonder what you're really asking? >> > >> > We convert lvalues to lua with no management by >> > default. I don't think this is more dangerous than >> > copying the objects, it's just seg faults instead of >> > silent errors. >> >> >> Your way, mistakes by the user of the *interpreter* can >> easily crash the system. My way, only the guy/gal doing >> the wrapping has to be careful: >> >> >>> x = X() >> >>> z = x.y >> >>> del x >> >>> z.foo() # crash >> >> The users of these interpreted environments have an >> expectation that their interpreter won't *crash* just >> because of the way they've used it. >> > > Right, I thought you always copied the object. My mistake. OK, sorry for the heat. >> > Both ways are equaly easy to make mistakes with. >> >> Totally disagree. Done my way, we force the guy/gal to >> consider whether he really wants to do something unsafe >> before he does it. You probably think I copy objects by >> default, but I don't. That was BPLv1. In BPLv2 I issue an >> error unless the user supplies a call policy. 
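A small sketch of the behaviour just described, i.e. Boost.Python refusing
to wrap a function that returns an lvalue until a call policy is supplied.
X, get_ref and the module name are invented for illustration.

    #include <boost/python.hpp>
    using namespace boost::python;

    struct X { int value; };
    X global_x;

    X& get_ref() { return global_x; }   // returns an lvalue

    BOOST_PYTHON_MODULE(policy_demo)
    {
        class_<X>("X")
            .def_readwrite("value", &X::value);

        // def("get_ref", get_ref);      // rejected at compile time:
        //                               // no policy says who owns the X&

        def("get_ref", get_ref,
            return_value_policy<reference_existing_object>());  // explicit opt-in
    }

The commented-out line is the "blind" wrapping; the version with an
explicit return_value_policy is the deliberate, documented-as-unsafe choice.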
>> >> Finally, let me point out that although we currently use >> Python weak references to accomplish this I realized last >> night that there's a *much* easier and more-efficient way >> to do it using a special kind of smart pointer to refer to >> the referenced object. > > Ah ok, function which returns lvalues causes compile time > errors. Relatively pretty ones, too. See boost/python/default_call_policies.hpp > When we decided to do it our way we thought returning > unmanaged lvalue's would be the most common usage. We > only considered copying the object as an alternative, > perhaps it's better to give compile time errors. It's *miles* better. Otherwise users will just blindly wrap these things unsafely without considering the consequences. Remember that the segfault may not occur in their tests, due to (un)lucky usage patterns. >> > Our policies primary reason is not to handle return >> > values, but to handle conversion in both >> > directions. For example, adopt() can be used to steal >> > objects that are owned by the interpreter. >> > >> > void f(A*); def("f", &f, adopt(_1)) >> >> What, you just leak a reference here? Or is it something >> else? I had a major client who was sure he was going to >> need to leak references, but he eventually discovered >> that the provided call policies could always be made to >> do something more-intelligent, so I never put the >> reference-leaker in the library. I haven't had a single >> request for it since, either. > > The above is (almost) the equivalent of: > void f(auto_ptr*); What's the significance of a pointer-to-auto_ptr? I'd understand what you meant if you wrote: void f(auto_ptr); instead. I'm going to assume that's what you meant. > It is very useful when wrapping interfaces which expects the > user to create objects and give up ownership. Sure, great. It's a function-call-oriented thing. Before you object, read on. >> Hmm, this is really very specific to calls, because it >> *does* manage arguments and return values. I really think >> CallPolicy is better. In any case I think we should >> converge on this, one way or another; there will be more >> languages, and you do want to be able to steal my users, >> right? . That'll be a lot easier if they see >> familiar terminology ;-) > > I don't think it's specific to calls, but to all conversion > of types between the languages. We can use policies when > fetching values from lua, or when calling lua functions from > c++: > > A* stolen_obj = object_cast(get_globals(L)["obj"], > adopt(result)); What is result? A placeholder? could be spelled: std::auto_ptr stolen = extract >(get_globals(L)["obj"]); in Boost.Python. [I assume this means that all of your C++ objects are held within their lua wrappers by pointer. I went to considerable lengths to allow them to be held by-value, though I'm not sure the efficiency gain is worth the cost in flexibility.] > call_function(L, "f", stolen_obj) [ adopt(_1) ]; That's spelled: call_function(L, "f", std::auto_ptr(stolen)); I can begin to see the syntactic convenience of your way, but I worry about the parameterizability. In the first case "result" is the only possible appropriate arg and in the 2nd case it's "_1". > And yeah, of course stealing your users is our goal. :) I certainly hope so! Likewise, I'm sure! >> >> I'm not committed to the idea of a single registry. 
>> >> In fact we've been discussing a hierarchical registry >> >> system where converters are searched starting with a >> >> local module registry and proceeding upward to the >> >> package level and finally ending with a global >> >> registry as a fallback. >> > >> > Right, that seems reasonable. >> >> Cool. And let me also point out that if the module >> doesn't have to collaborate with other modules, you don't >> even need a registry lookup at static initialization >> time. A static data member of a class template is enough >> to create an area of storage associated with a C++ type. >> There's no central registry at all in that case. I have >> grave doubts about whether it's worth special-casing the >> code for this, but it might make threading easier to cope >> with. > > I can't see why it would be worth it. I like your attitude. > If the module doesn't interact with other modules > threading wouldn't be an issue? So how could it make it > easier? I guess only in the case that the modules are loadedq multiple times by different threads *and* the compiler provides threadsafe static initializers, you wouldn't have to worry about the central registry being modified while someone was reading it. A minor issue, really. Mutexes handle everything. >> >> My big problem was trying to figure out a scheme for >> >> assigning match quality. C++ uses a kind of >> >> "substitutaiblity" rule for resolving partial ordering >> >> which seemed like a good way to handle things. How do >> >> you do it? >> > >> > We just let every converter return a value indicating >> > how good the match was, where 0 is perfect match and -1 >> > is no match. When performing implicit conversions, >> > every step in the conversions inreases the match value. >> > >> > Maybe I'm naive, but is there need for anything more >> > complicated? >> >> Well, it's the "multimethod problem": consider base and >> derived class formal arguments which both match an actual >> argument, or int <--> float conversions. How much do you >> increase the match value by for a particular match? > > I don't know if I get this. Are you familiar with the problems of multimethod dispatching? Google can help. > We just increase the match value by one for every casting > step that is needed for converting the types. That seems to work for all the trivial cases, but the problem is always phrased in more-complicated terms, I presume for a reason. See http://tinyurl.com/f5t6 One example of a place where it might not work is: struct B {}; struct D : B {}; void f(B*, python::list) void f(D*, std::vector) >>> f(D(), [1, 2, 3]) I want this to choose the first overload, since it requires only lvalue conversions. Other links of interest: http://std.dkuug.dk/jtc1/sc22/wg21/docs/papers/2003/n1463.html http://tinyurl.com/f5vi None of these uses such a trivial algorithm for rating matches. Coincidence? > Right. Also, you can't get a pointer to all objects in lua, > only "userdata" objects. If the object being converted is of > primitive type, you can only access it directly from the > stack with lua_toXXX() calls. OK >> > It doesn't seem that interesting to register different >> > conversions for different states anymore. (at least not >> > to me, but I could be wrong..). But if we where to >> > increase the isolation of the registries, each state >> > could just as well get their own registry. >> >> Let's continue poking at the issues until we get clarity. > > Yeah, I'll have to think about this for a bit, the whole > registry thing is quite new to me. OK. 
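To make the "static data member of a class template" idea above concrete,
here is a sketch of per-type converter storage with no central registry.
The names converter_slot and to_python_fn are hypothetical, not
Boost.Python API.

    // Illustrative only: each distinct T gets its own slot, with no
    // registry lookup needed at static initialization time.
    typedef void* (*to_python_fn)(void const*);

    template <class T>
    struct converter_slot
    {
        static to_python_fn to_python;   // one pointer per C++ type
    };

    template <class T>
    to_python_fn converter_slot<T>::to_python = 0;

    // A wrapping layer would fill the slot during static initialization,
    // e.g. converter_slot<MyClass>::to_python = &convert_my_class;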
FYI I'm going on vacation 6/26-7/6. I find this conversation really interesting, though, so I'll try to keep an eye on it. -- Dave Abrahams Boost Consulting www.boost-consulting.com From patrick at vrac.iastate.edu Tue Jun 24 22:49:53 2003 From: patrick at vrac.iastate.edu (Patrick Hartling) Date: Tue, 24 Jun 2003 15:49:53 -0500 Subject: [C++-sig] PyErr_Print() causing segmentation fault Message-ID: <3EF8B971.4050400@vrac.iastate.edu> I am trying to make the bridge between my Python and C++ code more robust, and I have run into a strange problem with PyErr_Print(). In my Boost.Python code, I have several wrapper classes for dealing with virtual functions. To prevent exceptions from the Python code from propogating too far up the call stack, I am putting try/catch blocks around all calls to boost::python::call(). For example: class MyClass { virtual void f() = 0; }; struct MyClass_Wrapper : MyClass { virtual void f() { try { call_method(self, "f"); } catch(error_already_set) { PyErr_Print(); } } PyObject* self; }; Exposing this class through Boost.Python is working fine, but when an exception is thrown by the Python implementation of the f() method, PyErr_Print() causes a segmentation fault. The partial stack trace I get is this: #0 0x080af299 in PyErr_Occurred () #1 0x080bbd84 in PyErr_PrintEx () #2 0x080bbac0 in PyErr_Print () Frame #3 is my module's call to PyErr_Print(), and everything beyond that is specific to the software I am exposing to Python. I realize this isn't terribly helpful in this form, and I can dig into the Python interpreter code to get more specific information if it would be useful. This code is executing in a multi-threaded environment, so things are a little more complicated than I describe them here. In particular, I am acquiring the Python GIL any time a thread other than the one that started the Python interpreter calls from C++ into Python. I have tested calling PyErr_Print() with the GIL acquired and not acquired, but it doesn't make any difference. Based on that, I am thinking that the multi-threading aspect isn't a factor, but there is still much I don't know about managing threads between Python and C++. Is there anything specific that prevents me from calling PyErr_Print() from within my module? Is there something better I can use, possibly from the boost::python set of functions? Obviously I can just print "Exception caught" or something, but that makes debugging the exceptions thrown by the Python code more difficult. -Patrick -- Patrick L. Hartling | Research Assistant, VRAC patrick at vrac.iastate.edu | 2624 Howe Hall: 1.515.294.4916 http://www.137.org/patrick/ | http://www.vrac.iastate.edu/ From seefeld at sympatico.ca Tue Jun 24 23:09:34 2003 From: seefeld at sympatico.ca (Stefan Seefeld) Date: Tue, 24 Jun 2003 17:09:34 -0400 Subject: [C++-sig] Using a phython function as callback In-Reply-To: References: Message-ID: <3EF8BE0E.4080001@sympatico.ca> hi Jochen, in order to be able to call into the python function passing your C++ object you have to define a python wrapper for it (using 'class_' or similar). With that, a call to your python callable object will automatically look up the conversion, so the 'UpdateSelectedObject' function will access the selectable through a temporarily created wrapper of type 'class_'. 
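A minimal sketch of the pattern being described here; the module and
function names are invented for illustration.  Once the C++ type is wrapped
with class_, boost::python::call<> converts the instance automatically when
invoking the Python callable.

    #include <boost/python.hpp>
    using namespace boost::python;

    class Selectable { /* ... */ };

    BOOST_PYTHON_MODULE(selection)
    {
        // Registers the to-python conversion for Selectable.
        class_<Selectable>("Selectable");
    }

    void fire_update_callback(PyObject* callback, Selectable const& s)
    {
        // A wrapped copy of s is created and handed to the Python function;
        // throws error_already_set if the callback raises an exception.
        call<void>(callback, s);
    }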
Regards, Stefan From dave at boost-consulting.com Tue Jun 24 23:28:08 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 17:28:08 -0400 Subject: [C++-sig] Re: Using a phython function as callback References: Message-ID: "jochen" writes: > Thanks for the hint, but I can't figure out how to use it correctly: > > What I do is: > > SelectableObject* pSelectable; > ... > PyObject* pCB = pSelectable->GetOnUpdateSelectionCallBack(); > > if ( PyCallable_Check( pCB ) ) > { > boost::python::call< void >( pCB, pSelectable ); > } > > Py_DECREF( pCB ); > > > But the call raises a exection: > boost::python::error_already_set That means the call produced a python exception. If you're embedding Python, see the main() function in libs/python/test/embedding.cpp for a way to see what that was. If you're writing extension modules, just let it propagate back to Python and examine the backtrace. > How do I correctly convert the instance of type SelectableObject to > a valid PyObject ? python::object x(*pSelectable); -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Tue Jun 24 23:59:37 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 24 Jun 2003 18:59:37 -0300 Subject: [C++-sig] register_ptr_to_python docs and test Message-ID: <3EF8C9C9.3080502@globalite.com.br> Hi all, Finally I have done the documentation and tests for the auxiliar function register_ptr_to_python, discussed in this thread: http://aspn.activestate.com/ASPN/Mail/Message/1604484 I am not sure if I did grasp everything, so a review of the doc would be great. Regards, Nicodemus. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: register_ptr.cpp URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: register_ptr_test.py URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: register_ptr_to_python.hpp URL: From dave at boost-consulting.com Wed Jun 25 03:12:01 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 21:12:01 -0400 Subject: [C++-sig] Re: register_ptr_to_python docs and test References: <3EF8C9C9.3080502@globalite.com.br> Message-ID: Nicodemus writes: >

Introduction

> > ^ ^ < > You have this problem throughout. Did you validate your html? > defines a function that allows the user to register smart pointers for > classes with virtual member functions that will be overwritten in > Python. You mean "overridden" and not "overwritten". But this has nothing at all to do with virtual member functions AFAICT. <boost/python/converter/register_ptr_to_python.hpp> supplies register_ptr_to_python, a function template which registers a conversion from a smart pointers to Python. The resulting Python object holds a copy of the converted smart pointer, but behaves as though it were a wrapped copy of the pointee. If the pointee type has virtual functions and the class representing its dynamic (most-derived) type has been wrapped, the Python object will be an instance of the wrapper for the most-derived type. > Classes that have virtual methods (and the user has supplied a > wrapper, X_Wrapper, allowing Python to callback into > C++) live in Python not as X objects, but as > instances of the wrapper class. This is a problem because > conversions to-python of smart_ptr objects won't > work since the objects that live in Python are held actually by > smart_ptr instances. OK, this is one reason you might want to do use register_ptr_to_python, but it's not the only reason. > Using the function in this header to register the smart pointer allows > correct conversion to-python of smart_ptr instances, > but has one drawback: a X object created in Python will not be > able to be passed as a smart_ptr& argument. > But it is still possible to pass a X object created in Python?as > smart_ptr, smart_ptr, > smart_ptr const&, or > smart_ptr const& as an argument. That's because, unless X is wrapped with smart_ptr as a template argument to its class_<...>, X objects created in Python are held directly in the containing Python object. > >

Functions

> > Example(s) > >

C++ Wrapper Code

> > Here is an example of a module that contains a class A with > virtual methods and some functions that work with ^^^^^^^ functions. > boost::shared_ptr
. > > > struct A > { > virtual int f() { return 0; } > }; > > shared_ptr New() { return shared_ptr( new A() ); } > > int Ok( const shared_ptr& a ) { return a->f(); } > > int Fail( shared_ptr& a ) { return a->f(); } > > struct A_Wrapper: A > { > A_Wrapper(PyObject* self_): self(self_) {} > int f() { return call_method(self, "f"); } > int default_f() { return A::f(); } > PyObject* self; > }; > > BOOST_PYTHON_MODULE(register_ptr) > { > class_("A") > .def("f", &A::f, &A_Wrapper::default_f) > ; > > def("New", &New); > def("Ok", &Call); > def("Fail", &Fail); > > register_ptr_to_python< shared_ptr >(); > } > > >

Python Code

> > >>>> from register_ptr import * >>>> a = A() >>>> Ok(a) # ok, passed as shared_ptr
> 0 >>>> Fail(a) # passed as shared_ptr&, and was created in Python! > Traceback (most recent call last): > File "", line 1, in ? > TypeError: bad argument type for built-in operation >>>> >>>> na = New() # now "na" is actually a shared_ptr >>>> Ok(a) > 0 >>>> Fail(a) > 0 >>>> > > > If shared_ptr is registered as follows: > > > class_ >("A") > .def("f", &A::f, &A_Wrapper::default_f) > ; > > > There will be an error when trying to convert shared_ptr to > shared_ptr: > > >>>> a = New() > Traceback (most recent call last): > File "", line 1, in ? > TypeError: No to_python (by-value) converter found for C++ type: class boost::shared_ptr >>>> > > > Revised > > 24 Jun, 2003 > > > © Copyright ../../../../people/dave_abrahams.htm > 2002. All Rights Reserved. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 25 03:16:11 2003 From: dave at boost-consulting.com (David Abrahams) Date: Tue, 24 Jun 2003 21:16:11 -0400 Subject: [C++-sig] On vacation 6/26-7/6 Message-ID: The title says it all. I will be watching certain interesting conversations out of the corner of my very limited bandwidth, and maybe even replying, but don't expect much responsiveness from me for the next two weeks. Thanks! Regards, -- Dave Abrahams Boost Consulting www.boost-consulting.com From nicodemus at globalite.com.br Wed Jun 25 04:00:19 2003 From: nicodemus at globalite.com.br (Nicodemus) Date: Tue, 24 Jun 2003 23:00:19 -0300 Subject: [C++-sig] Re: register_ptr_to_python docs and test In-Reply-To: References: <3EF8C9C9.3080502@globalite.com.br> Message-ID: <3EF90233.1040406@globalite.com.br> David Abrahams wrote: >Nicodemus writes: > > > >>

Introduction

>> >> >> >> > ^ ^ > < > > >You have this problem throughout. Did you validate your html? > > No, I didn't, sorry. But I tested it in mozilla and internet explorer, and both display it fine. But even checking the text for this kind of error, I couldn't find any, even this one you're pointing. I guess the html got mixed up in the sending? It happened before in another doc patch... 8/ >> defines a function that allows the user to register smart pointers for >> classes with virtual member functions that will be overwritten in >> Python. >> >> > >You mean "overridden" and not "overwritten". > Sure! Sorry. > But this has nothing at all to do with virtual member functions AFAICT. > Yeah, it is because of how the C++ object is held in Python, because of the wrapper class; but you are right, this paragraph does not explain it well. > <boost/python/converter/register_ptr_to_python.hpp> > supplies register_ptr_to_python, a function template > which registers a conversion from a smart pointers to Python. The > resulting Python object holds a copy of the converted smart pointer, > but behaves as though it were a wrapped copy of the pointee. If > the pointee type has virtual functions and the class representing > its dynamic (most-derived) type has been wrapped, the Python object > will be an instance of the wrapper for the most-derived type. > > WAY better! >> Classes that have virtual methods (and the user has supplied a >> wrapper, X_Wrapper, allowing Python to callback into >> C++) live in Python not as X objects, but as >> instances of the wrapper class. This is a problem because >> conversions to-python of smart_ptr objects won't >> work since the objects that live in Python are held actually by >> smart_ptr instances. >> >> > >OK, this is one reason you might want to do use >register_ptr_to_python, but it's not the only reason. > > Sorry, that was the only reason I knew... what's the other? >> Using the function in this header to register the smart pointer allows >> correct conversion to-python of smart_ptr instances, >> but has one drawback: a X object created in Python will not be >> able to be passed as a smart_ptr& argument. >> But it is still possible to pass a X object created in Python as >> smart_ptr, smart_ptr, >> smart_ptr const&, or >> smart_ptr const& as an argument. >> >> > >That's because, unless X is wrapped with smart_ptr as a template >argument to its class_<...>, X objects created in Python are held >directly in the containing Python object. > > Using register_ptr_to_python to register the smart pointer allows correct to-python conversions of smart_ptr instances, but has one drawback: a X object created in Python will not be able to be passed as a smart_ptr & argument because, unless X is wrapped with smart_ptr as a template argument to its class_<...>, X objects created in Python are held directly in the containing Python object. But it is still possible to pass a X object created in Python as smart_ptr, smart_ptr, smart_ptr const&, or smart_ptr const& as an argument to a function. ? >>Here is an example of a module that contains a class A with >>virtual methods and some functions that work with >> >> > ^^^^^^^ >functions. > Fixed. Dave, we can discuss changes here and then I can commit myself, if you want. Regards, Nicodemus. 
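A compact, self-contained variant of the example under discussion,
contrasting the two ways the smart pointer can be introduced.  X and the
module name are illustrative; the include path shown is the one the header
ultimately uses, while the draft doc above places it under
boost/python/converter/.

    #include <boost/python.hpp>
    #include <boost/python/register_ptr_to_python.hpp>
    #include <boost/shared_ptr.hpp>
    using namespace boost::python;
    using boost::shared_ptr;

    struct X { virtual ~X() {} };

    shared_ptr<X> make_x() { return shared_ptr<X>(new X); }

    BOOST_PYTHON_MODULE(holder_demo)
    {
        // (1) X instances made in Python are held directly inside the
        //     Python object; register_ptr_to_python adds a to-python
        //     conversion for shared_ptr<X>, but such instances still
        //     cannot bind to a non-const shared_ptr<X>& parameter.
        class_<X>("X");
        register_ptr_to_python< shared_ptr<X> >();

        def("make_x", make_x);

        // (2) The alternative is to hold X by shared_ptr from the start:
        //     class_<X, shared_ptr<X> >("X");
    }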
From dalwan01 at student.umu.se Wed Jun 25 12:00:22 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Wed, 25 Jun 2003 11:00:22 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef972b6.43af.16838@student.umu.se> > "Daniel Wallin" writes: > >> Surely you want to be able to go in both directions, > >> though? Surely not everyone using lua is interested in > >> just speed and not usability? > > > > We probably would like to be able to go in both > > directions. We also don't want to force the user to > > compile with RTTI turned on > > I'm sure that's a capability some Boost.Python users would > appreciate, too. > > > so we currently supply a LUABIND_TYPEID macro to > overload > > the typeid calls for a unique id for the type. > > This functions something like a specialization? Right. You could do something like: #define LUABIND_TYPE_INFO int #define LUABIND_TYPEID(type) my_type_id::value > > > This of course causes some problems if we want to > > downcast > > And if you want to support CBD (component based > development, > i.e. cross-module conversions), unless you can somehow get > all authors to agree on how to identify types. Well, I > guess they could use strings and manually supply what > std::type_info::name() does. It doesn't seem wrong that the authors need to use the same typeid. If you aren't developing a closed system, don't change typeid system. > > > so we would need to be able to turn downcasting off, or > > let the user supply their own RTTI-system somehow. > > That's pretty straightforward, fortunately. > > We need to be careful about what kinds of > reconfigurability > is available through macros. Extensions linked to the > same > shared library all share a link symbol space and thus are > subject to ODR problems. Right. We currently have quite a few configuration macros. LUABIND_MAX_ARITY LUABIND_MAX_BASES LUABIND_NO_ERROR_CHECKING LUABIND_DONT_COPY_STRINGS LUABIND_NO_EXCEPTIONS . and the typeid macros. Most are results of user requests. Massive configuration is quite important to our users, since lua is used alot on embedded systems. > >> > Both ways are equaly easy to make mistakes with. > >> > >> Totally disagree. Done my way, we force the guy/gal to > >> consider whether he really wants to do something unsafe > >> before he does it. You probably think I copy objects > by > >> default, but I don't. That was BPLv1. In BPLv2 I issue > an > >> error unless the user supplies a call policy. > >> > >> Finally, let me point out that although we currently > use > >> Python weak references to accomplish this I realized > last > >> night that there's a *much* easier and more-efficient > way > >> to do it using a special kind of smart pointer to refer > to > >> the referenced object. > > > > Ah ok, function which returns lvalues causes compile > time > > errors. > > Relatively pretty ones, too. See > boost/python/default_call_policies.hpp Looks good. We have similar ones in some policies (for example adopt can only convert pointers). > > > When we decided to do it our way we thought returning > > unmanaged lvalue's would be the most common usage. We > > only considered copying the object as an alternative, > > perhaps it's better to give compile time errors. > > It's *miles* better. Otherwise users will just blindly > wrap > these things unsafely without considering the > consequences. > Remember that the segfault may not occur in their tests, > due > to (un)lucky usage patterns. You're probably right. 
We saw the convenience of not having to type a policy name as important. > > >> > Our policies primary reason is not to handle return > >> > values, but to handle conversion in both > >> > directions. For example, adopt() can be used to steal > >> > objects that are owned by the interpreter. > >> > > >> > void f(A*); def("f", &f, adopt(_1)) > >> > >> What, you just leak a reference here? Or is it > something > >> else? I had a major client who was sure he was going > to > >> need to leak references, but he eventually discovered > >> that the provided call policies could always be made to > >> do something more-intelligent, so I never put the > >> reference-leaker in the library. I haven't had a > single > >> request for it since, either. > > > > The above is (almost) the equivalent of: > > void f(auto_ptr
*); > > What's the significance of a pointer-to-auto_ptr? I'd > understand what you meant if you wrote: > > void f(auto_ptr); > > instead. I'm going to assume that's what you meant. Yeah, that's what I meant. I'm lazy and copy-pasted and forgot to remove the *. :) > > > It is very useful when wrapping interfaces which expects > the > > user to create objects and give up ownership. > > Sure, great. It's a function-call-oriented thing. Before > you object, read on. > > >> Hmm, this is really very specific to calls, because it > >> *does* manage arguments and return values. I really > think > >> CallPolicy is better. In any case I think we should > >> converge on this, one way or another; there will be > more > >> languages, and you do want to be able to steal my > users, > >> right? . That'll be a lot easier if they see > >> familiar terminology ;-) > > > > I don't think it's specific to calls, but to all > conversion > > of types between the languages. We can use policies when > > fetching values from lua, or when calling lua functions > from > > c++: > > > > A* stolen_obj = object_cast(get_globals(L)["obj"], > > adopt(result)); > > What is result? A placeholder? Exactly. boost::arg<0> result; > > could be spelled: > > std::auto_ptr stolen > = extract > >(get_globals(L)["obj"]); > > in Boost.Python. > > [I assume this means that all of your C++ objects are held > within their lua wrappers by pointer. I went to > considerable lengths to allow them to be held by-value, > though I'm not sure the efficiency gain is worth the cost > in flexibility.] Correct. We hold all C++ objects by pointer. I can see why it could be interesting to be able to hold objects by value though, especially with small objects. I'm not sure it would be worth it though, since the user can just provide a custom pool allocator if allocation is an issue. > > > call_function(L, "f", stolen_obj) [ adopt(_1) ]; > > That's spelled: > > call_function(L, "f", std::auto_ptr(stolen)); Wouldn't both the examples you provided require A to be held by auto_ptr in python? > > I can begin to see the syntactic convenience of your way, > but I worry about the parameterizability. In the first > case > "result" is the only possible appropriate arg and in the > 2nd > case it's "_1". What exactly do you worry about? That someone would use the wrong placeholder? I think the intended use is quite clear. It's trivial to STATIC_ASSERT if someone uses the wrong placeholder though. > >> >> My big problem was trying to figure out a scheme for > >> >> assigning match quality. C++ uses a kind of > >> >> "substitutaiblity" rule for resolving partial > ordering > >> >> which seemed like a good way to handle things. How > do > >> >> you do it? > >> > > >> > We just let every converter return a value indicating > >> > how good the match was, where 0 is perfect match and > -1 > >> > is no match. When performing implicit conversions, > >> > every step in the conversions inreases the match > value. > >> > > >> > Maybe I'm naive, but is there need for anything more > >> > complicated? > >> > >> Well, it's the "multimethod problem": consider base and > >> derived class formal arguments which both match an > actual > >> argument, or int <--> float conversions. How much do > you > >> increase the match value by for a particular match? > > > > I don't know if I get this. > > Are you familiar with the problems of multimethod > dispatching? Google can help. I wasn't no, and google didn't seem to like me. 
:) > > > We just increase the match value by one for every > casting > > step that is needed for converting the types. > > That seems to work for all the trivial cases, but the > problem is always phrased in more-complicated terms, I > presume for a reason. See http://tinyurl.com/f5t6 > > One example of a place where it might not work is: > > struct B {}; struct D : B {}; > > void f(B*, python::list) > void f(D*, std::vector) > > >>> f(D(), [1, 2, 3]) > > I want this to choose the first overload, since it > requires > only lvalue conversions. rvalue conversions could always give much larger errors than lvalue conversions. This would solve some cases, but it isn't clear to me what you want. Do you always want to choose the first overload, no matter how many lvalue conversions it involves? void f(B*, B*, B*, python::list) void f(D*, D*, D*, std::vector) f(D(), D(), D(), [1, 2, 3]) To me it's sufficient to have a really simple overload system, and if that doesn't work the user can register the overloads with different names. > >> > It doesn't seem that interesting to register > different > >> > conversions for different states anymore. (at least > not > >> > to me, but I could be wrong..). But if we where to > >> > increase the isolation of the registries, each state > >> > could just as well get their own registry. > >> > >> Let's continue poking at the issues until we get > clarity. > > > > Yeah, I'll have to think about this for a bit, the whole > > registry thing is quite new to me. > > OK. FYI I'm going on vacation 6/26-7/6. I find this > conversation really interesting, though, so I'll try to > keep an eye on it. Ok great, have fun. -- Daniel Wallin From dalwan01 at student.umu.se Wed Jun 25 12:25:24 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Wed, 25 Jun 2003 11:25:24 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3ef97894.68d6.16838@student.umu.se> > Nicodemus wrote: > > Ralf W. Grosse-Kunstleve wrote: > > > >> As a user who (finally!) writes mainly new Python code > I really value > >> the safe-but-certain approach. If the interpreter > crashes somewhere deep > >> down in the application without printing a backtrace it > is often very > >> frustrating and time-consuming to isolate the problem > by adding print > >> statements. In a large-scale application it is crucial > that all > >> components are rock-solid. > >> Ralf > >> > > > > We use Boost.Python at our company developing large > applications, and I > > agree with Ralf. > > Plus, I believe it is against Python philosophy to get a > core dump, > > since Python itself goes great lenghts to prevent > explicit memory > > management by the user. > > I concur. Anything that has gone wrong with my Python > programs so far > has always resulted in a nice and clear exception with a > traceback. If > only that would have been the case in C++... :P Great > libraries like the > STL and Boost alleviate the pain though. I agree, I said core dump is just as good as silent errors because I was under the impression that BPL copied objects by default. -- Daniel Wallin From dave at boost-consulting.com Wed Jun 25 13:22:02 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 25 Jun 2003 07:22:02 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3ef972b6.43af.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> "Daniel Wallin" writes: >> >> Surely you want to be able to go in both directions, >> >> though? 
Surely not everyone using lua is interested in >> >> just speed and not usability? >> > >> > We probably would like to be able to go in both >> > directions. We also don't want to force the user to >> > compile with RTTI turned on >> >> I'm sure that's a capability some Boost.Python users would >> appreciate, too. >> >> > so we currently supply a LUABIND_TYPEID macro to >> > overload the typeid calls for a unique id for the type. >> >> This functions something like a specialization? > > Right. You could do something like: > > #define LUABIND_TYPE_INFO int > #define LUABIND_TYPEID(type) my_type_id::value I don't see any reason to get macros involved here. Users could just specialize or overload boost::luapython::type_id, which, in the no-RTTI case, has no no default definition. >> > This of course causes some problems if we want to >> > downcast >> >> And if you want to support CBD (component based >> development, i.e. cross-module conversions), unless you >> can somehow get all authors to agree on how to identify >> types. Well, I guess they could use strings and manually >> supply what std::type_info::name() does. > > It doesn't seem wrong that the authors need to use the > same typeid. If you aren't developing a closed system, > don't change typeid system. I was thinking more of id collisions among extensions which don't intend to share types. It becomes less of an issue if people use string ids. >> > so we would need to be able to turn downcasting off, or >> > let the user supply their own RTTI-system somehow. >> >> That's pretty straightforward, fortunately. >> >> We need to be careful about what kinds of >> reconfigurability is available through macros. >> Extensions linked to the same shared library all share a >> link symbol space and thus are subject to ODR problems. > > Right. We currently have quite a few configuration macros. > > LUABIND_MAX_ARITY > LUABIND_MAX_BASES It's pretty easy to avoid these causing any real-world ODR problems. > LUABIND_NO_ERROR_CHECKING What kind of error checking gets turned off? > LUABIND_DONT_COPY_STRINGS What kind of string copying gets disabled? > LUABIND_NO_EXCEPTIONS This one, at least, is clear. > . and the typeid macros. > > Most are results of user requests. Massive configuration > is quite important to our users, since lua is used alot on > embedded systems. Have your users come back after getting these features and given you any feedback about how much performance they've gained or space they've saved? >> Relatively pretty ones, too. See >> boost/python/default_call_policies.hpp > > Looks good. We have similar ones in some policies (for > example adopt can only convert pointers). Good. Another place we could create common infrastructure. >> > When we decided to do it our way we thought returning >> > unmanaged lvalue's would be the most common usage. We >> > only considered copying the object as an alternative, >> > perhaps it's better to give compile time errors. >> >> It's *miles* better. Otherwise users will just blindly >> wrap these things unsafely without considering the >> consequences. Remember that the segfault may not occur >> in their tests, due to (un)lucky usage patterns. > > You're probably right. We saw the convenience of not > having to type a policy name as important. It is good to be the king. >> What's the significance of a pointer-to-auto_ptr? I'd >> understand what you meant if you wrote: >> >> void f(auto_ptr); >> >> instead. I'm going to assume that's what you meant. > > Yeah, that's what I meant. 
I'm lazy and copy-pasted and > forgot to remove the *. :) > >> >> > It is very useful when wrapping interfaces which expects the >> > user to create objects and give up ownership. >> >> Sure, great. It's a function-call-oriented thing. Before >> you object, read on. >> >> >> Hmm, this is really very specific to calls, because it >> >> *does* manage arguments and return values. I really >> >> think CallPolicy is better. In any case I think we >> >> should converge on this, one way or another; there >> >> will be more languages, and you do want to be able to >> >> steal my users, right? . That'll be a lot >> >> easier if they see familiar terminology ;-) >> > >> > I don't think it's specific to calls, but to all >> > conversion of types between the languages. We can use >> > policies when fetching values from lua, or when calling >> > lua functions from c++: >> > >> > A* stolen_obj = object_cast(get_globals(L)["obj"], >> > adopt(result)); >> >> What is result? A placeholder? > > Exactly. boost::arg<0> result; > >> >> could be spelled: >> >> std::auto_ptr stolen >> = extract >> >(get_globals(L)["obj"]); >> >> in Boost.Python. >> >> [I assume this means that all of your C++ objects are held >> within their lua wrappers by pointer. I went to >> considerable lengths to allow them to be held by-value, >> though I'm not sure the efficiency gain is worth the cost >> in flexibility.] > > Correct. We hold all C++ objects by pointer. I can see why > it could be interesting to be able to hold objects by value > though, especially with small objects. I'm not sure it would > be worth it though, since the user can just provide a custom > pool allocator if allocation is an issue. Still costs an extra 4 bytes (gasp!) for the pointer. Yeah, it was in one of my contract specs, so I had to implement it. Besides, it seemed like fun, but I bet nobody notices and I'd be very happy to rip it out and follow your lead on this. >> > call_function(L, "f", stolen_obj) [ adopt(_1) ]; >> >> That's spelled: >> >> call_function(L, "f", std::auto_ptr(stolen)); > > Wouldn't both the examples you provided require A to be held > by auto_ptr in python? Only the A created by call_function for this callback. You can have As held any number of ways in the same program. >> I can begin to see the syntactic convenience of your way, >> but I worry about the parameterizability. In the first >> case "result" is the only possible appropriate arg and in >> the 2nd case it's "_1". > > What exactly do you worry about? That someone would use > the wrong placeholder? I think the intended use is quite > clear. I guess I'm just a worrywart. Do the other policies have similar broad applicability? If so, I'm inclined to accept that you have the right idea. > It's trivial to STATIC_ASSERT if someone uses the wrong > placeholder though. Sure. >> Are you familiar with the problems of multimethod >> dispatching? Google can help. > > I wasn't no, and google didn't seem to like me. :) Did my links help? >> > We just increase the match value by one for every >> > casting step that is needed for converting the types. >> >> That seems to work for all the trivial cases, but the >> problem is always phrased in more-complicated terms, I >> presume for a reason. See http://tinyurl.com/f5t6 >> >> One example of a place where it might not work is: >> >> struct B {}; struct D : B {}; >> >> void f(B*, python::list) >> void f(D*, std::vector) >> >> >>> f(D(), [1, 2, 3]) >> >> I want this to choose the first overload, since it >> requires only lvalue conversions. 
> > rvalue conversions could always give much larger errors > than lvalue conversions. This would solve some cases, but > it isn't clear to me what you want. Do you always want to > choose the first overload, no matter how many lvalue > conversions it involves? > > void f(B*, B*, B*, python::list) > void f(D*, D*, D*, std::vector) > > f(D(), D(), D(), [1, 2, 3]) Yeah, I guess that looks right, but I don't know. At some point things are just ambiguous. > To me it's sufficient to have a really simple overload > system, and if that doesn't work the user can register the > overloads with different names. Hmm. I like your philosophy. As a last resort, let me ask someone I know who's done a lot of multimethod dispatching stuff if there's any reason to do something more complicated. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Wed Jun 25 13:35:47 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 25 Jun 2003 07:35:47 -0400 Subject: [C++-sig] Re: register_ptr_to_python docs and test References: <3EF8C9C9.3080502@globalite.com.br> <3EF90233.1040406@globalite.com.br> Message-ID: Nicodemus writes: > David Abrahams wrote: > >>Nicodemus writes: >> >> >>>

Introduction

>>> >>> >>> >> ^ ^ >> < > >> >>You have this problem throughout. Did you validate your html? >> > > No, I didn't, sorry. But I tested it in mozilla and internet explorer, > and both display it fine. But even checking the text for this kind of > error, I couldn't find any, even this one you're pointing. I guess the > html got mixed up in the sending? It happened before in another doc > patch... 8/ Actually it was just GNUs helpfully "Washing" the embedded HTML in display. The article came through just fine, sorry. >>> defines a function that allows the user to register smart >>> pointers for classes with virtual member functions that will be >>> overwritten in Python. >>> >> >>You mean "overridden" and not "overwritten". > > Sure! Sorry. > >> But this has nothing at all to do with virtual member functions >> AFAICT. >> > > Yeah, it is because of how the C++ object is held in Python, because > of the wrapper class; but you are right, this paragraph does not > explain it well. > >> <boost/python/converter/register_ptr_to_python.hpp> >> supplies register_ptr_to_python, a function template >> which registers a conversion from a smart pointers to Python. The >> resulting Python object holds a copy of the converted smart pointer, >> but behaves as though it were a wrapped copy of the pointee. If >> the pointee type has virtual functions and the class representing >> its dynamic (most-derived) type has been wrapped, the Python object >> will be an instance of the wrapper for the most-derived type. >> > > WAY better! Thanks ;-) >>> Classes that have virtual methods (and the user has supplied a >>> wrapper, X_Wrapper, allowing Python to callback into >>> C++) live in Python not as X objects, but as >>> instances of the wrapper class. This is a problem because >>> conversions to-python of smart_ptr objects won't >>> work since the objects that live in Python are held actually by >>> smart_ptr instances. >>> >> >>OK, this is one reason you might want to do use >>register_ptr_to_python, but it's not the only reason. > > Sorry, that was the only reason I knew... what's the other? If you want to enable any kind of smart pointer to be converted to Python, this is how you register the converter. You can only cause one kind of smart pointer to be registered as a consequence of using it in the class_<...> parameters. This lets you register as many as you want. Using a variety of smart pointers you can create all kinds of smart proxies. For example, Joel is just finishing work on a system for wrapping indexed containers which uses a kind of smart pointer which keeps the container alive and contains the index to use, and does range checking on dereference. >>> Using the function in this header to register the smart pointer >>> allows correct conversion to-python of smart_ptr >>> instances, but has one drawback: a X object created >>> in Python will not be able to be passed as a >>> smart_ptr& argument. But it is still possible >>> to pass a X object created in Python as smart_ptr, >>> smart_ptr, smart_ptr >>> const&, or smart_ptr const& >>> as an argument. >>> >> >>That's because, unless X is wrapped with smart_ptr as a template >>argument to its class_<...>, X objects created in Python are held >>directly in the containing Python object. 
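Here is a minimal sketch of how the function being documented might be used once it is in place; the class X, the factory make_x, and the choice of boost::shared_ptr are illustrative assumptions rather than anything taken from the patch itself:

#include <boost/python.hpp>
#include <boost/shared_ptr.hpp>
// ...plus the register_ptr_to_python header under discussion.

using namespace boost::python;

struct X
{
    virtual ~X() {}
    virtual int f() { return 1; }
};

// A factory handing out a smart pointer; without a registered
// converter, returning this to Python fails at runtime.
boost::shared_ptr<X> make_x()
{
    return boost::shared_ptr<X>(new X);
}

BOOST_PYTHON_MODULE(example)
{
    class_<X>("X")
        .def("f", &X::f);

    // The resulting Python object holds a copy of the shared_ptr but
    // behaves like a wrapped X, as described above.
    register_ptr_to_python< boost::shared_ptr<X> >();

    def("make_x", make_x);
}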
>> > > Using register_ptr_to_python to register the smart pointer > allows correct to-python conversions of smart_ptr > instances, but has one drawback: a X object created in Python > will not be able to be passed as a smart_ptr & > argument because, unless X is wrapped with smart_ptr as > a template argument to its class_<...>, X objects created in Python are > held directly in the containing Python object. > But it is still possible to pass a X object created in Python as > smart_ptr, smart_ptr, smart_ptr const&, or > smart_ptr const& as an argument to a function. I'd rather not phrase this as a drawback of using this function, because it's really not. It *is* useful to note that in order to convert a Python X object to a smart_ptr& (non-const reference other than shared_ptr&), the embedded C++ object must be held by smart_ptr, and that when wrapped objects are created by calling the constructor from Python, how they are held is determined by the HeldType parameter to class_<...> > ? > >>>Here is an example of a module that contains a class A with >>>virtual methods and some functions that work with >>> >> ^^^^^^^ >>functions. >> > > Fixed. > > Dave, we can discuss changes here and then I can commit myself, if you want. That sounds great. I think we're almost there. Thanks for your work! -- Dave Abrahams Boost Consulting www.boost-consulting.com From gathmann at cenix-bioscience.com Wed Jun 25 18:16:52 2003 From: gathmann at cenix-bioscience.com (F. Oliver Gathmann) Date: Wed, 25 Jun 2003 18:16:52 +0200 Subject: [C++-sig] passing a dynamic PyObject * from C++ to Python Message-ID: <3EF9CAF4.80208@cenix-bioscience.com> I have a function that reads a file and returns (depending on the content of the file) either a Foo or a Bar wrapped in a PyObject* (both Foo and Bar objects are expensive to copy and have to be passed via a smart pointer). With Boost.Python v1 I was able to do PyObject * readFile(char const * fileName) { PyObject * obj; FileInfo info(fileName); if (info.isFoo()) { std::auto_ptr foo(new Foo(info.data())); obj = to_python(foo); } else { std::auto_ptr bar(new Bar(info.data())); obj = to_python(bar); } return ref(obj); } and wrap this like so: BOOST_PYTHON_MODULE_INIT(vigracore) { module_builder this_module("reader"); this_module.def(&readFile, "readFile"); } With Boost.Python v2, I have not been able to find an equivalent solution; I tried various approaches gathered from the Boost.Python FAQ and documentation, e.g. template T identity(T x) { return x; } template object get_pointer_reference(T x) { object f = make_function(&identity, return_value_policy() ); return f(x); } object readFile(char const * fileName) { FileInfo info(fileName); if (info.isFoo()) { Foo * fooPtr = new Foo(info.data()); return get_pointer_reference(fooPtr); } else { Bar * barPtr = new Bar(info.data()); return get_pointer_reference(barPtr); } } BOOST_PYTHON_MODULE(reader) { def("readFile", &readFile); } which compiles fine but crashes at runtime (using gcc 3.2.2 on Linux with a recent CVS version of Boost). Any hints that might help me solving this mystery would be much appreciated! Oliver -------------------------------------------------------------------- F. Oliver Gathmann, Ph.D. 
Director IT Unit Cenix BioScience GmbH Pfotenhauer Strasse 108 phone: +49 (351) 210-2735 D-01307 Dresden, Germany fax: +49 (351) 210-1309 fingerprint: 8E0E 9A64 A07E 0D1A D302 34C2 421A AE9F 4E13 A009 public key: http://www.cenix-bioscience.com/public_keys/gathmann.gpg -------------------------------------------------------------------- From leo4ever22 at yahoo.com Wed Jun 25 19:36:41 2003 From: leo4ever22 at yahoo.com (Leonard Dick) Date: Wed, 25 Jun 2003 10:36:41 -0700 (PDT) Subject: [C++-sig] Re: register_ptr_to_python docs and test In-Reply-To: Message-ID: <20030625173641.63627.qmail@web21302.mail.yahoo.com> Hi, Consultants, I'll did like to have a html code for internet exploer for windows xp Lectures on basic programming language and c++ thanks for that Leo __________________________________ Do you Yahoo!? SBC Yahoo! DSL - Now only $29.95 per month! http://sbc.yahoo.com From dave at boost-consulting.com Wed Jun 25 20:33:57 2003 From: dave at boost-consulting.com (David Abrahams) Date: Wed, 25 Jun 2003 14:33:57 -0400 Subject: [C++-sig] Re: passing a dynamic PyObject * from C++ to Python References: <3EF9CAF4.80208@cenix-bioscience.com> Message-ID: "F. Oliver Gathmann" writes: > With Boost.Python v2, I have not been able to find an equivalent > solution; I tried various approaches gathered from the Boost.Python > FAQ and documentation, e.g. Nicodemus has nearly finished building/documenting a function which allows you to register conversions from any smart pointer to Python. See recent posts to this list. You could use that. > template T identity(T x) { return x; } template T> object get_pointer_reference(T x) { object f = > make_function(&identity, > return_value_policy() > ); return f(x); } Hmm, *nasty* formatting! template T identity(T x) { return x; } template object get_pointer_reference(T x) { object f = make_function( &identity, return_value_policy()); return f(x); } Hmm, looks OK to me. > which compiles fine but crashes at runtime (using gcc 3.2.2 on Linux > with a recent CVS version of Boost). Any hints that might help me > solving this mystery would be much appreciated! Any hints about the condition of the program at the time of crash would be a big help in helping you. A reproducible test case would be even better. -- Dave Abrahams Boost Consulting www.boost-consulting.com From Sengan.Baring-Gould at nsc.com Thu Jun 26 01:35:06 2003 From: Sengan.Baring-Gould at nsc.com (Sengan.Baring-Gould at nsc.com) Date: Wed, 25 Jun 2003 17:35:06 -0600 (MDT) Subject: [C++-sig] [newbie] Pyste overloaded member function problems Message-ID: <200306252335.RAA16836@ia.nsc.com> Hi, I am using pyste and boost 1.30.0 to wrap a C++ class which has the following overloaded member functions: class Parser { ... typedef unsigned char CFormatFlags; const string& Get( string, string, const string&, const CFormatFlags in_flags = DEFAULT_FORMATTING ) const; long Get( string, string, const long ) const; int Get( string, string, const int ) const; const string& Get( string, const string&, const CFormatFlags in_flags = DEFAULT_FORMATTING ) const; long Get( string, const long ) const; int Get( string, const int ) const; ... }; In order to prevent Python from converting Get( string, string, const long ) into const string& Get( string, const string&, CFormatFlags ) I have to hack the generated file to remove all but the first 2 def() statements corresponding to the first 2 Gets. 1/ Is there a more elegant method to solve this problem? 
2/ Is there a way to tell pyste to exclude particular overloaded member functions? 3/ Is there a way to specify a policy in pyste dependent on the overloaded member function: I have set_policy(CSettingsParser.Get, return_value_policy(copy_const_reference)) but then I have to remove that for the long Get(string, string. long) case from the generated file. Secondly, this class has member functions: class Parser { ... typedef list CNameList; CNameList List( void ) const; CNameList List( const string section_name ) const; ... }; In order to get a python list from this I wrote 2 wrapper functions. However, I can't figure out how to get pyste to use the set_wrapper() functions to use the wrappers. So again I hacked the generated file. 4/ Is there some other way to do this without hacking the generated file? I would like to congratulate the authors of pyste and boost python. Having dipped my foot in the water by writing an extension module for std::list/std::list, I can see this package really reduces the amount of code one needs to write. Thanks, Sengan From gathmann at cenix-bioscience.com Thu Jun 26 14:20:55 2003 From: gathmann at cenix-bioscience.com (F. Oliver Gathmann) Date: Thu, 26 Jun 2003 14:20:55 +0200 Subject: [C++-sig] Re: passing a dynamic PyObject * from C++ to Python In-Reply-To: References: <3EF9CAF4.80208@cenix-bioscience.com> Message-ID: <3EFAE527.3040301@cenix-bioscience.com> David Abrahams wrote: >"F. Oliver Gathmann" writes: > > > >> template T identity(T x) { return x; } template > T> object get_pointer_reference(T x) { object f = >> make_function(&identity, >> return_value_policy() >> ); return f(x); } >> >> > >Hmm, *nasty* formatting! > > Strange - the formatting looks fine in the c++-SIG archives... > template > T identity(T x) { > return x; > } > > template > object get_pointer_reference(T x) > { > object f = make_function( > &identity, return_value_policy()); > > return f(x); > } > >Hmm, looks OK to me. > > > >>which compiles fine but crashes at runtime (using gcc 3.2.2 on Linux >>with a recent CVS version of Boost). Any hints that might help me >>solving this mystery would be much appreciated! >> >> > >Any hints about the condition of the program at the time of crash >would be a big help in helping you. A reproducible test case would >be even better. > Sorry; I thought my error was so obvious that pseudocode would be enough to find out what's wrong. Here is a complete test case: #include #include using namespace boost::python; template T identity(T x) { return x; } template object get_pointer_reference(T x) { object f = make_function(&identity, return_value_policy() ); return f(x); } struct Foo {}; struct Bar {}; object readFooOrBar(bool fooOrBar) { if (fooOrBar) { Foo * fooPtr = new Foo(); return get_pointer_reference(fooPtr); } else { Bar * barPtr = new Bar(); return get_pointer_reference(barPtr); } } BOOST_PYTHON_MODULE(_problem) { class_ >("Foo", no_init) ; class_ >("Bar", no_init) ; def("read", &readFooOrBar); } And in Python: Python 2.2.2 (#1, Apr 10 2003, 23:02:08) [GCC 3.3 20030226 (prerelease) (SuSE Linux)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from _problem import * >>> foo = read(1) >>> bar = read(0) >>> del foo >>> del bar Segmentation fault When using Nicodemus' register_ptr_to_python template like so BOOST_PYTHON_MODULE(_problem) { class_("Foo", no_init) ; register_ptr_to_python >(); class_("Bar", no_init) ; register_ptr_to_python >(); def("read", &readFooOrBar); } the situation is essentially unchanged. 
BTW, I had to insert a typename directive for gcc to compile the register_ptr_to_python template: namespace boost { namespace python { template <class P> void register_ptr_to_python(P* = 0) { typedef typename boost::python::pointee<P>
::type X; ^------^ objects::class_value_wrapper< P , objects::make_ptr_instance< X , objects::pointer_holder > >(); } }} // namespace boost::python I guess I am still somewhat at a loss here... - but I am certainly learning a lot ;-) Oliver -------------------------------------------------------------------- F. Oliver Gathmann, Ph.D. Director IT Unit Cenix BioScience GmbH Pfotenhauer Strasse 108 phone: +49 (351) 210-2735 D-01307 Dresden, Germany fax: +49 (351) 210-1309 fingerprint: 8E0E 9A64 A07E 0D1A D302 34C2 421A AE9F 4E13 A009 public key: http://www.cenix-bioscience.com/public_keys/gathmann.gpg -------------------------------------------------------------------- From dalwan01 at student.umu.se Thu Jun 26 14:29:15 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Thu, 26 Jun 2003 13:29:15 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3efae71b.33f6.16838@student.umu.se> > "Daniel Wallin" writes: > > >> "Daniel Wallin" writes: > >> >> Surely you want to be able to go in both directions, > >> >> though? Surely not everyone using lua is interested > in > >> >> just speed and not usability? > >> > > >> > We probably would like to be able to go in both > >> > directions. We also don't want to force the user to > >> > compile with RTTI turned on > >> > >> I'm sure that's a capability some Boost.Python users > would > >> appreciate, too. > >> > >> > so we currently supply a LUABIND_TYPEID macro to > >> > overload the typeid calls for a unique id for the > type. > >> > >> This functions something like a specialization? > > > > Right. You could do something like: > > > > #define LUABIND_TYPE_INFO int > > #define LUABIND_TYPEID(type) my_type_id::value > > I don't see any reason to get macros involved here. Users > could just specialize or overload > boost::luapython::type_id, > which, in the no-RTTI case, has no no default definition. Note that we need to store the type_info object, and that's why we have macros for this. So that the user can change the type of that object. This wouldn't really work with specialization. > >> > This of course causes some problems if we want to > >> > downcast > >> > >> And if you want to support CBD (component based > >> development, i.e. cross-module conversions), unless you > >> can somehow get all authors to agree on how to identify > >> types. Well, I guess they could use strings and > manually > >> supply what std::type_info::name() does. > > > > It doesn't seem wrong that the authors need to use the > > same typeid. If you aren't developing a closed system, > > don't change typeid system. > > I was thinking more of id collisions among extensions > which > don't intend to share types. It becomes less of an issue > if people use string ids. But like I said, if you intend to use your module with other modules; don't change the typeid system. I don't like forcing the id type, if you want to use int's I think you should be allowed to. > > >> > so we would need to be able to turn downcasting off, > or > >> > let the user supply their own RTTI-system somehow. > >> > >> That's pretty straightforward, fortunately. > >> > >> We need to be careful about what kinds of > >> reconfigurability is available through macros. > >> Extensions linked to the same shared library all share > a > >> link symbol space and thus are subject to ODR problems. > > > > Right. We currently have quite a few configuration > macros. > > > > LUABIND_MAX_ARITY > > LUABIND_MAX_BASES > > It's pretty easy to avoid these causing any real-world ODR > problems. Right. 
They are only used to control the number of template parameters to a few metaprogramming struct's. > > > LUABIND_NO_ERROR_CHECKING > > What kind of error checking gets turned off? Error checking when performing overload matching. In particular it turns off the pretty error messages. > > > LUABIND_DONT_COPY_STRINGS > > What kind of string copying gets disabled? It causes names to be held by const char* instead of std::string. (class names, function names etc). > > . and the typeid macros. > > > > Most are results of user requests. Massive configuration > > is quite important to our users, since lua is used alot > on > > embedded systems. > > Have your users come back after getting these features and > given you any feedback about how much performance they've > gained or space they've saved? No. But some of them is a must-have for alot of developers. In particular the ability to turn off exception handling. > >> What's the significance of a pointer-to-auto_ptr? I'd > >> understand what you meant if you wrote: > >> > >> void f(auto_ptr); > >> > >> instead. I'm going to assume that's what you meant. > > > > Yeah, that's what I meant. I'm lazy and copy-pasted and > > forgot to remove the *. :) > > > >> > >> > It is very useful when wrapping interfaces which > expects the > >> > user to create objects and give up ownership. > >> > >> Sure, great. It's a function-call-oriented thing. > Before > >> you object, read on. > >> > >> >> Hmm, this is really very specific to calls, because > it > >> >> *does* manage arguments and return values. I really > >> >> think CallPolicy is better. In any case I think we > >> >> should converge on this, one way or another; there > >> >> will be more languages, and you do want to be able > to > >> >> steal my users, right? . That'll be a lot > >> >> easier if they see familiar terminology ;-) > >> > > >> > I don't think it's specific to calls, but to all > >> > conversion of types between the languages. We can use > >> > policies when fetching values from lua, or when > calling > >> > lua functions from c++: > >> > > >> > A* stolen_obj = > object_cast(get_globals(L)["obj"], > >> > adopt(result)); > >> > >> What is result? A placeholder? > > > > Exactly. boost::arg<0> result; > > > >> > >> could be spelled: > >> > >> std::auto_ptr stolen > >> = extract > >> >(get_globals(L)["obj"]); > >> > >> in Boost.Python. > >> > >> [I assume this means that all of your C++ objects are > held > >> within their lua wrappers by pointer. I went to > >> considerable lengths to allow them to be held by-value, > >> though I'm not sure the efficiency gain is worth the > cost > >> in flexibility.] > > > > Correct. We hold all C++ objects by pointer. I can see > why > > it could be interesting to be able to hold objects by > value > > though, especially with small objects. I'm not sure it > would > > be worth it though, since the user can just provide a > custom > > pool allocator if allocation is an issue. > > Still costs an extra 4 bytes (gasp!) for the pointer. > Yeah, > it was in one of my contract specs, so I had to implement > it. Besides, it seemed like fun, but I bet nobody notices > and I'd be very happy to rip it out and follow your lead > on > this. Need to think about this some more. It's no special case in BPL though, it's just another type of instance_holder, correct? 
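Since the by-value/by-pointer distinction discussed here corresponds to the HeldType parameter of class_<...> on the Boost.Python side, a rough sketch of the two holding strategies may help (Small and Big are made-up illustration classes):

#include <boost/python.hpp>
#include <memory>

using namespace boost::python;

struct Small { int n; };
struct Big   { /* imagine something expensive to copy */ };

BOOST_PYTHON_MODULE(holders)
{
    // Default HeldType: a Small created from Python is constructed
    // directly in the Python object's storage, i.e. held by value.
    class_<Small>("Small")
        .def_readwrite("n", &Small::n);

    // Smart-pointer HeldType: the Big lives on the heap and the
    // Python object stores the auto_ptr that owns it.
    class_<Big, std::auto_ptr<Big> >("Big");
}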
> > >> > call_function(L, "f", stolen_obj) [ adopt(_1) > ]; > >> > >> That's spelled: > >> > >> call_function(L, "f", > std::auto_ptr(stolen)); > > > > Wouldn't both the examples you provided require A to be > held > > by auto_ptr in python? > > Only the A created by call_function for this callback. > You > can have As held any number of ways in the same program. Ah, right. You just instantiate different instance_holder's. Does this mean there's special handling of auto_ptr's? Otherwise, how can you tell which type is being held by the pointer? (In the call_function example). > > >> I can begin to see the syntactic convenience of your > way, > >> but I worry about the parameterizability. In the first > >> case "result" is the only possible appropriate arg and > in > >> the 2nd case it's "_1". > > > > What exactly do you worry about? That someone would use > > the wrong placeholder? I think the intended use is quite > > clear. > > I guess I'm just a worrywart. Do the other policies have > similar broad applicability? If so, I'm inclined to > accept > that you have the right idea. Most conversion policies have double direction, and even the ones that doesn't can still be used in different contexts. ('result' when doing C++ -> lua, and '_N' when doing lua -> C++). > >> Are you familiar with the problems of multimethod > >> dispatching? Google can help. > > > > I wasn't no, and google didn't seem to like me. :) > > Did my links help? A little. But I don't feel even remotly comfortable on the subject. :) > > >> > We just increase the match value by one for every > >> > casting step that is needed for converting the types. > >> > >> That seems to work for all the trivial cases, but the > >> problem is always phrased in more-complicated terms, I > >> presume for a reason. See http://tinyurl.com/f5t6 > >> > >> One example of a place where it might not work is: > >> > >> struct B {}; struct D : B {}; > >> > >> void f(B*, python::list) > >> void f(D*, std::vector) > >> > >> >>> f(D(), [1, 2, 3]) > >> > >> I want this to choose the first overload, since it > >> requires only lvalue conversions. > > > > rvalue conversions could always give much larger errors > > than lvalue conversions. This would solve some cases, > but > > it isn't clear to me what you want. Do you always want > to > > choose the first overload, no matter how many lvalue > > conversions it involves? > > > > void f(B*, B*, B*, python::list) > > void f(D*, D*, D*, std::vector) > > > > f(D(), D(), D(), [1, 2, 3]) > > Yeah, I guess that looks right, but I don't know. At some > point things are just ambiguous. Exactly. Perhaps introducing weights on conversions might increase the ambiguity for the user, since it's hard to tell at which point the overload system will turn around and choose another overload. Does this make any sense? > > > To me it's sufficient to have a really simple overload > > system, and if that doesn't work the user can register > the > > overloads with different names. > > Hmm. I like your philosophy. > > As a last resort, let me ask someone I know who's done a > lot > of multimethod dispatching stuff if there's any reason to > do > something more complicated. Great, do that. -- Daniel Wallin From janders at users.sourceforge.net Thu Jun 26 17:23:52 2003 From: janders at users.sourceforge.net (Jon Anderson) Date: Thu, 26 Jun 2003 10:23:52 -0500 Subject: [C++-sig] cross-module shared_ptr downcast Message-ID: <200306261023.52958.janders@users.sf.net> I have a Container that holds shared_ptr's to a Base class. 
I can successfully add and retrieve shared_ptrs to a Derived class, properly downcasted on retrieval, as long as the bindings for both the Base and Derived class are defined in the same module. However, when I have the Base class bindings defined in module A, and the Container and Derived class bindings defined in module B, the down cast no longer happens. Each module is a separate .so. Should this work, or am I limited to having them all in the same module in order to make the downcasts work? Thanks, Jon From Jaleco at gameone.com.tw Fri Jun 27 04:31:35 2003 From: Jaleco at gameone.com.tw (Jaleco) Date: Fri, 27 Jun 2003 10:31:35 +0800 Subject: [C++-sig] a pointer question Message-ID: <000801c33c54$3aae5990$3bf4383d@jalecoxp> Dear all I am trying to use boost.python to export the freetype library to python. But I encounter a pointer problem: boost doesn't know the pointer. How do I export a class pointer to python?

// Includes ====================================================================
#include <boost/python.hpp>

class B
{
public :
    int d ;
};

class A
{
public :
    A(int d) { pb = new(B); pb->d = d ; } ;
    int getd(void) { return pb->d; } ;
    B *pb ;
};

// Using =======================================================================
using namespace boost::python;

// Module ======================================================================
BOOST_PYTHON_MODULE(ft2)
{
    class_<A>("A", init<int>())
        .add_property("d", &A::getd)
        .def_readonly("pb", &A::pb);
    class_<B>("B")
        .def_readwrite("d", &B::d);
}

C:\source\boost_1_30_0\libs\python\ft2>python
Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ft2
>>> a = ft2.A(1)
>>> pb = a.pb
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: No to_python (by-value) converter found for C++ type: class B *
>>>

Another question is, if I try to add BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(B) ; to my source code, this problem is gone. But the answer is wrong .................

// Includes ====================================================================
#include <boost/python.hpp>

class B
{
public :
    int d ;
};

class A
{
public :
    A(int d) { pb = new(B); pb->d = d ; } ;
    int getd(void) { return pb->d; } ;
    B *pb ;
};
BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(B) ;

// Using =======================================================================
using namespace boost::python;

// Module ======================================================================
BOOST_PYTHON_MODULE(ft2)
{
    class_<A>("A", init<int>())
        .add_property("d", &A::getd)
        .def_readonly("pb", &A::pb);
    class_<B>("B")
        .def_readwrite("d", &B::d);
}

Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import ft2
>>> a = ft2.A(1)
>>> pb=a.pb
>>> pb
<ft2.B object at 0x009082C8>
>>> pb.d
10445536
>>> a.d
1
>>>

From christophe.grimault at novagrid.com Fri Jun 27 12:01:19 2003 From: christophe.grimault at novagrid.com (christophe grimault) Date: Fri, 27 Jun 2003 10:01:19 +0000 Subject: [C++-sig] Simply calling a routine ... References: <3EF9CAF4.80208@cenix-bioscience.com> <3EFAE527.3040301@cenix-bioscience.com> Message-ID: <3EFC15EF.2070304@novagrid.com> Hi all, I have written several routines in C++ using blitz++. These routines are called successively by the 'main' routine. I want to rewrite the main in python in order to gain development speed and flexibility.
The routines use blitz++ matrix template library but the interfaces to the routine are very common C++ calls with basic C++ types (int, float*, short*, etc...) : int foo( short* a, float b, float* c, int N ){ Array A(N); A.data() = a; // Do some work return ( ERROR_CODE ); } Is it straightforward to declare these routines with pyste (or the classic boost method). Does the use of blitz in these routines (which belong to another file than the main, beginning with include of blitz.hh) complicates the process. I'm a bit lost because of the use of "templates in blitz++" versus "integration with boost". Also, as you can see in the example routine above, I started with a creation of a blitz array 'A' from the array 'a'. Because I'm afraid of the work to get back and forth from Numeric array in Python to Blitz++ templated Array in C++ is a complicated and big amount of work. Thanks for any response on these to points, and for any suggestion or criticism on my way of handling the problem. Christophe Grimault -- ------------------------------------------------------ | Christophe Grimault Tel: 02 23 23 52 59 | | NovaGrid SA http://www.novagrid.com | | fax: 02 23 23 62 32 | | mail: christophe.grimault at novagrid.com | | 2, Bd Sebastopol | | 35000 RENNES | | France | |______________________________________________________| From gathmann at cenix-bioscience.com Fri Jun 27 10:13:09 2003 From: gathmann at cenix-bioscience.com (F. Oliver Gathmann) Date: Fri, 27 Jun 2003 10:13:09 +0200 Subject: [C++-sig] Re: register_ptr_to_python docs and test In-Reply-To: <3EAB04F3.20909@globalite.com.br> References: <3EF8C9C9.3080502@globalite.com.br> <3EF90233.1040406@globalite.com.br> <3EAB04F3.20909@globalite.com.br> Message-ID: <3EFBFC95.1000202@cenix-bioscience.com> > David Abrahams wrote: > >> That sounds great. I think we're almost there. Thanks for your work! > > My pleasure! Here is a new version, with the latest changes. I think > that it is now good enough to commit. Regards, Nicodemus. gcc 3.2.2 on Linux only stopped complaining after I inserted a "typename" directive as shown: >// Copyright David Abrahams 2002. Permission to copy, use, >// modify, sell and distribute this software is granted provided this >// copyright notice appears in all copies. This software is provided >// "as is" without express or implied warranty, and with no claim as >// to its suitability for any purpose. >#ifndef REGISTER_PTR_TO_PYTHON_HPP >#define REGISTER_PTR_TO_PYTHON_HPP > >#include >#include > >namespace boost { namespace python { > >template >void register_ptr_to_python(P* = 0) >{ > typedef typename boost::python::pointee

::type X; ^^^^^^^^ > objects::class_value_wrapper< > P > , objects::make_ptr_instance< > X > , objects::pointer_holder > > > >(); >} > >}} // namespace boost::python > >#endif // REGISTER_PTR_TO_PYTHON_HPP > Regards, Oliver From nickm at sitius.com Fri Jun 27 16:13:23 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Fri, 27 Jun 2003 10:13:23 -0400 Subject: [C++-sig] Re: a pointer question References: <000801c33c54$3aae5990$3bf4383d@jalecoxp> Message-ID: <3EFC5103.6947595F@sitius.com> You have to write class_ ("A",init()) .add_property("d",&A::getd) .add_property("pb",make_getter(&A::pb, return_internal_reference<1>())); check the docs for make_getter. Jaleco wrote: > > Dear all > > I am trying use boost.python to export freetype library to python. > But I encounter pointer problem.boost don't know pointer. > How do I export class pointer to python ? > > / Includes > ==================================================================== > #include ?boost/python.hpp? > > class B > { > public : > int d ; > > }; > class A > { > public : > A(int d) { pb = new(B);pb-?d = d ;} ; > int getd(void) { return pb-?d;} ; > B *pb ; > }; > > // Using > ======================================================================= > using namespace boost::python; > > // Module > ====================================================================== > BOOST_PYTHON_MODULE(ft2) > { > class_?A? ("A",init?int?()) > .add_property("d",?A::getd) > .def_readonly("pb",?A::pb); > class_?B? ("B") > .def_readwrite("d",?B::d) ; > > } > > C:\source\boost_1_30_0\libs\python\ft2?python > Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > ??? import ft2 > ??? a = ft2.A(1) > ??? pb = a.pb > Traceback (most recent call last): > File "?stdin?", line 1, in ? > TypeError: No to_python (by-value) converter found for C++ type: class B * > ??? > > another question is , I try to add > BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(B) ; to > my source source code, This problem is gone. > But the answer is wrong ................. > > // Includes > ==================================================================== > #include ?boost/python.hpp? > > class B > { > public : > int d ; > > }; > class A > { > public : > A(int d) { pb = new(B);pb-?d = d ;} ; > int getd(void) { return pb-?d;} ; > B *pb ; > }; > BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(B) ; > // Using > ======================================================================= > using namespace boost::python; > > // Module > ====================================================================== > BOOST_PYTHON_MODULE(ft2) > { > class_?A? ("A",init?int?()) > .add_property("d",?A::getd) > .def_readonly("pb",?A::pb); > class_?B? ("B") > .def_readwrite("d",?B::d) ; > > } > > Python 2.2.3 (#42, May 30 2003, 18:12:08) [MSC 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > ??? import ft2 > ??? a = ft2.A(1) > ??? pb=a.pb > ??? pb > ?ft2.B object at 0x009082C8? > ??? pb.d > 10445536 > ??? a.d > 1 > ??? From Wenning_Qiu at csgsystems.com Fri Jun 27 18:10:28 2003 From: Wenning_Qiu at csgsystems.com (Qiu, Wenning) Date: Fri, 27 Jun 2003 11:10:28 -0500 Subject: [C++-sig] Embedding Python, threading and scalability Message-ID: <9532424DC30C934689726CEDBE15F41703CC8859@omasrv07.csgsystems.com> I am researching issues related to emdedding Python in C++ for a project. My project will be running on an SMP box and requires scalability. 
However, my test shows that Python threading has very poor performance in terms of scaling. In fact it doesn't scale at all. I wrote a simple test program to complete given number of iterations of a simple loop. The total number of iterations can be divided evenly among a number of threads. My test shows that as the number of threads grows, the CPU usage grows and the response time gets longer. For example, to complete the same amount of work, one thread takes 10 seconds, 2 threads take 20 seconds and 3 threads take 30 seconds. The fundamental reason for lacking scalability is that Python uses a global interpreter lock for thread safety. That global lock must be held by a thread before it can safely access Python objects. I thought I might be able to make embedded Python scalable by embedding multiple interpreters and have them run independently in different threads. However "Python/C API Reference Manual" chapter 8 says that "The global interpreter lock is also shared by all threads, regardless of to which interpreter they belong". Therefore with current implementation, even multiple interpreters do not provide scalability. Has anyone on this list run into the same problem that I have, or does anyone know of any plan of totally insulating multiple embedded Python interpreters? I've attached my test script to this message. I am using Python 2.2.3. <> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mytest.py Type: application/octet-stream Size: 1581 bytes Desc: mytest.py URL: From nickm at sitius.com Fri Jun 27 22:46:19 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Fri, 27 Jun 2003 16:46:19 -0400 Subject: [C++-sig] Re: return_self_policy / return_arg References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> Message-ID: <3EFCAD1B.BFA2AA5B@sitius.com> Posting again -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ''' >>> from return_self import * >>> l1=Label() >>> l1 is l1.label("bar") 1 >>> l1 is l1.label("bar").sensitive(0) 1 >>> l1.label("foo").sensitive(0) is l1.sensitive(1).label("bar") 1 >>> return_arg is return_arg(return_arg) 1 ''' def run(args = None): import sys import doctest if args is not None: sys.argv = args return doctest.testmod(sys.modules.get(__name__)) if __name__ == '__main__': print "running..." import sys sys.exit(run()[0]) -------------- next part -------------- A non-text attachment was scrubbed... Name: return_self.cpp Type: application/x-unknown-content-type-cppfile Size: 1400 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: return_self_policy.hpp Type: application/x-unknown-content-type-hppfile Size: 2538 bytes Desc: not available URL: From dave at boost-consulting.com Fri Jun 27 22:48:11 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 16:48:11 -0400 Subject: [C++-sig] Re: register_ptr_to_python docs and test References: <3EF8C9C9.3080502@globalite.com.br> <3EF90233.1040406@globalite.com.br> <3EAB04F3.20909@globalite.com.br> <3EFBFC95.1000202@cenix-bioscience.com> <3EAC22E5.9080501@globalite.com.br> Message-ID: Nicodemus writes: > Hi Oliver, > > F. Oliver Gathmann wrote: > >> gcc 3.2.2 on Linux only stopped complaining after I inserted a >> "typename" directive as shown: > > > Thanks for the remainder! > > I just commited to CVS. Thanks, Nico! 
-- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 27 22:47:01 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 16:47:01 -0400 Subject: [C++-sig] Re: cross-module shared_ptr downcast References: <200306261023.52958.janders@users.sf.net> Message-ID: Jon Anderson writes: > I have a Container that holds shared_ptr's to a Base class. I can > successfully add and retrieve shared_ptrs to a Derived class, properly > downcasted on retrieval, as along as the bindings for both the Base and > Derived class are defined in the same module. However, when I have the Base > class bindings defined in module A, and the Container and Derived class > bindings defined in module B, the down cast no longer happens. Each module > is a separate .so. > > Should this work, or am I limited to having them all in the same module in > order to make the downcasts work? Whether or not this will work is heavily dependent on your compiler's ABI. In general, if you want to get it to work, you have to get the .sos to link to one-another, or to a common shared library which contains the base class's RTTI info. How to generate the RTTI info is again dependent on your compiler's ABI, but usually it's enough to define one of the base class' member functions in the common shared library. HTH, -- Dave Abrahams Boost Consulting www.boost-consulting.com From cleung at eos.ubc.ca Fri Jun 27 23:02:06 2003 From: cleung at eos.ubc.ca (Charles Leung) Date: Fri, 27 Jun 2003 14:02:06 -0700 (PDT) Subject: [C++-sig] Simply calling a routine ... In-Reply-To: <3EFC15EF.2070304@novagrid.com> References: <3EF9CAF4.80208@cenix-bioscience.com> <3EFAE527.3040301@cenix-bioscience.com> <3EFC15EF.2070304@novagrid.com> Message-ID: <33259.137.82.23.131.1056747726.squirrel@webmail.eos.ubc.ca> Hi Christophe, We've previously implemented a Numeric array wrapper, num_util, available for download at: http://www.eos.ubc.ca/research/clouds/software.html Feel free to give it a try ;-) This wrapper generally works with Boost's numeric::array class. It allows users to access certain array properties and to instantiate Python numeric array in C++. There are some example usage in the num_test.cpp file. Your example, utilizing num_util, will probably look something like... ... namespace nbpl = num_util; namespace py = boost::python; py::long_ foo( py::numeric::array a, float b, py::numeric::array c, int N ){ Array A(N); A.data() = (short*) nbpl::data(a); // Do some work return ( py::long_(ERROR_CODE) ); } I hope this helps. Cheers, Charles From dave at boost-consulting.com Fri Jun 27 23:31:15 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 17:31:15 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3efae71b.33f6.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> "Daniel Wallin" writes: >> >> >> "Daniel Wallin" writes: >> >> >> Surely you want to be able to go in both >> >> >> directions, though? Surely not everyone using lua >> >> >> is interested in just speed and not usability? >> >> > >> >> > We probably would like to be able to go in both >> >> > directions. We also don't want to force the user to >> >> > compile with RTTI turned on >> >> >> >> I'm sure that's a capability some Boost.Python users >> >> would appreciate, too. >> >> >> >> > so we currently supply a LUABIND_TYPEID macro to >> >> > overload the typeid calls for a unique id for the >> >> > type. >> >> >> >> This functions something like a specialization? 
>> > >> > Right. You could do something like: >> > >> > #define LUABIND_TYPE_INFO int >> > #define LUABIND_TYPEID(type) my_type_id::value >> >> I don't see any reason to get macros involved here. >> Users could just specialize or overload >> boost::luapython::type_id, which, in the no-RTTI case, >> has no no default definition. > > Note that we need to store the type_info object, and > that's why we have macros for this. So that the user can > change the type of that object. This wouldn't really work > with specialization. template struct type_info #ifndef NO_RTTI { typedef std::type_info type; static type get(); } #endif ; template struct int_type_info { typedef int type; static int get() { return n; } }; template <> struct type_info : int_type_info<42> {}; Of course I have no objection to using macros to generate specializations. >> >> > This of course causes some problems if we want to >> >> > downcast >> >> >> >> And if you want to support CBD (component based >> >> development, i.e. cross-module conversions), unless you >> >> can somehow get all authors to agree on how to identify >> >> types. Well, I guess they could use strings and manually >> >> supply what std::type_info::name() does. >> > >> > It doesn't seem wrong that the authors need to use the >> > same typeid. If you aren't developing a closed system, >> > don't change typeid system. >> >> I was thinking more of id collisions among extensions >> which don't intend to share types. It becomes less of an >> issue if people use string ids. > > But like I said, if you intend to use your module with other > modules; don't change the typeid system. I'm not talking about changing systems, just the need to ensure unique type IDs for different types across modules. > I don't like forcing the id type, if you want to use int's > I think you should be allowed to. Sure; I have no objection to that. >> > LUABIND_NO_ERROR_CHECKING >> >> What kind of error checking gets turned off? > > Error checking when performing overload matching. In > particular it turns off the pretty error messages. Another feature I want to steal from you. It's good that it's configurable. >> > LUABIND_DONT_COPY_STRINGS >> >> What kind of string copying gets disabled? > > It causes names to be held by const char* instead of > std::string. (class names, function names etc). Why not always do that? >> > . and the typeid macros. >> > >> > Most are results of user requests. Massive configuration >> > is quite important to our users, since lua is used alot on >> > embedded systems. >> >> Have your users come back after getting these features and >> given you any feedback about how much performance they've >> gained or space they've saved? > > No. But some of them is a must-have for alot of > developers. In particular the ability to turn off > exception handling. Oh sure, I believe that one, especially because some shops (advisedly or no) have a policy against EH and RTTI. You don't want to just leave them out. I am just leery of configurability in general and would tend to resist making any new macros part of the official release until users had told me it made a big difference to them in alpha/beta stages. In particular, "massive" configurability is not neccessarily desirable. It creates "massive" maintenance and testing headaches. >> >> [I assume this means that all of your C++ objects are held >> >> within their lua wrappers by pointer. 
I went to >> >> considerable lengths to allow them to be held by-value, >> >> though I'm not sure the efficiency gain is worth the cost >> >> in flexibility.] >> > >> > Correct. We hold all C++ objects by pointer. I can see why >> > it could be interesting to be able to hold objects by value >> > though, especially with small objects. I'm not sure it would >> > be worth it though, since the user can just provide a custom >> > pool allocator if allocation is an issue. >> >> Still costs an extra 4 bytes (gasp!) for the pointer. >> Yeah, it was in one of my contract specs, so I had to >> implement it. Besides, it seemed like fun, but I bet >> nobody notices and I'd be very happy to rip it out and >> follow your lead on this. > > Need to think about this some more. It's no special case > in BPL though, it's just another type of instance_holder, > correct? Not only that. It's an issue of where the instance holder gets constructed. In this case it is constructed directly in the storage for the Python object. >> > Wouldn't both the examples you provided require A to be held >> > by auto_ptr in python? >> >> Only the A created by call_function for this callback. >> You can have As held any number of ways in the same >> program. > > Ah, right. You just instantiate different instance_holder's. > Does this mean there's special handling of auto_ptr's? What do you mean by "special handling"? I think the answer is no, though I also think there should be special handling for convenience and efficiency. > Otherwise, how can you tell which type is being held by > the pointer? (In the call_function example). The to-python converter that gets registered for auto_ptr by using it in a class, ... > or by using register_pointer_to_python > knows what to do. >> >> I can begin to see the syntactic convenience of your way, >> >> but I worry about the parameterizability. In the first >> >> case "result" is the only possible appropriate arg and in >> >> the 2nd case it's "_1". >> > >> > What exactly do you worry about? That someone would use >> > the wrong placeholder? I think the intended use is quite >> > clear. >> >> I guess I'm just a worrywart. Do the other policies have >> similar broad applicability? If so, I'm inclined to >> accept >> that you have the right idea. > > Most conversion policies have double direction Cool, I accept your scheme. > and even the > ones that doesn't can still be used in different contexts. > ('result' when doing C++ -> lua, and '_N' when doing lua -> > C++). How is that *not* a case of bidirectionality? That rule makes me nervous, because the terms are backwards when calling lua/python from C++. Do people get confused? >> > void f(B*, B*, B*, python::list) >> > void f(D*, D*, D*, std::vector) >> > >> > f(D(), D(), D(), [1, 2, 3]) >> >> Yeah, I guess that looks right, but I don't know. At some >> point things are just ambiguous. > > Exactly. Perhaps introducing weights on conversions might > increase the ambiguity for the user, since it's hard to tell > at which point the overload system will turn around and > choose another overload. Does this make any sense? Yes. But as I meant to say to Andrei, none of the "classic" multimethod systems account for coercion, so if there's some reason they can't use your simple rule it has to be something you can understand in terms of simple inheritance hierarchies. >> > To me it's sufficient to have a really simple overload >> > system, and if that doesn't work the user can register the >> > overloads with different names. >> >> Hmm. 
I like your philosophy. >> >> As a last resort, let me ask someone I know who's done a >> lot of multimethod dispatching stuff if there's any >> reason to do something more complicated. > > Great, do that. Too bad it didn't pan out :( -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 27 23:40:11 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 17:40:11 -0400 Subject: [C++-sig] Re: passing a dynamic PyObject * from C++ to Python References: <3EF9CAF4.80208@cenix-bioscience.com> <3EFAE527.3040301@cenix-bioscience.com> Message-ID: "F. Oliver Gathmann" writes: > David Abrahams wrote: > >>"F. Oliver Gathmann" writes: >> >> >>> template T identity(T x) { return x; } template >> T> object get_pointer_reference(T x) { object f = >>> make_function(&identity, >>> return_value_policy() >>> ); return f(x); } >>> >> >>Hmm, *nasty* formatting! >> > Strange - the formatting looks fine in the c++-SIG archives... > >> template T identity(T x) { >> return x; >> } >> >> template object get_pointer_reference(T x) >> { object f = make_function( >> &identity, return_value_policy()); >> >> return f(x); >> } >> >>Hmm, looks OK to me. >> >> >>>which compiles fine but crashes at runtime (using gcc 3.2.2 on Linux >>>with a recent CVS version of Boost). Any hints that might help me >>>solving this mystery would be much appreciated! >>> >> >>Any hints about the condition of the program at the time of crash >>would be a big help in helping you. A reproducible test case would >>be even better. >> > Sorry; I thought my error was so obvious that pseudocode would be enough > to find out what's wrong. I'm not that smart. Sometimes I need to crash it myself and look at it with a debugger Also, as in this case, sometimes there's a much easier solution when you see it all in context... Here's the fix for this approach: template object get_pointer_reference(T x) { object f = make_function( &identity, return_value_policy() ); return f(ptr(x)); // ^^^^ ^ } The problem is that x is copied by default when calling back into Python, so the copy being deleted by the auto_ptr was already owned by another Python object. > BOOST_PYTHON_MODULE(_problem) > { > class_ >("Foo", no_init) > ; > class_ >("Bar", no_init) > ; > def("read", &readFooOrBar); > } > > And in Python: > > Python 2.2.2 (#1, Apr 10 2003, 23:02:08) > [GCC 3.3 20030226 (prerelease) (SuSE Linux)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> from _problem import * >>>> foo = read(1) >>>> bar = read(0) >>>> del foo >>>> del bar > Segmentation fault > > When using Nicodemus' register_ptr_to_python template like so > > > BOOST_PYTHON_MODULE(_problem) > { > class_("Foo", no_init) > ; > register_ptr_to_python >(); > class_("Bar", no_init) > ; > register_ptr_to_python >(); > def("read", &readFooOrBar); > } > > the situation is essentially unchanged. No surprise. Those converters are registered in exactly the same way when auto_ptr is a parameter to class_<...>. Since you have those converters registered, why not simply: object readFooOrBar(bool fooOrBar) { if (fooOrBar) { std::auto_ptr p(new Foo()); return object(p); } else { std::auto_ptr p(new Bar()); return object(p); } } ?? That seems much easier to me. 
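For completeness, here is a minimal, self-contained version of that pattern. Foo, Bar, readFooOrBar and the _problem module name are taken from the snippets quoted above; the empty class bodies are just an assumption to keep the sketch short and compilable:

    #include <boost/python.hpp>
    #include <memory>

    using namespace boost::python;

    struct Foo {};
    struct Bar {};

    // Returning object lets one factory hand back either type; the
    // auto_ptr is consumed by the registered to-python converter.
    object readFooOrBar(bool fooOrBar)
    {
        if (fooOrBar)
        {
            std::auto_ptr<Foo> p(new Foo());
            return object(p);
        }
        else
        {
            std::auto_ptr<Bar> p(new Bar());
            return object(p);
        }
    }

    BOOST_PYTHON_MODULE(_problem)
    {
        // Naming auto_ptr<T> as the holder is what registers the
        // converters that object(p) relies on.
        class_<Foo, std::auto_ptr<Foo> >("Foo", no_init);
        class_<Bar, std::auto_ptr<Bar> >("Bar", no_init);
        def("read", &readFooOrBar);
    }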
-- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Fri Jun 27 23:49:26 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 17:49:26 -0400 Subject: [C++-sig] Re: return_self_policy / return_arg References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> Message-ID: Very nice. Just a couple nits: Nikolay Mladenov writes: >>> from return_self import * >>> l1=Label() >>> l1 is l1.label("bar") 1 >>> l1 is l1.label("bar").sensitive(0) 1 >>> l1.label("foo").sensitive(0) is l1.sensitive(1).label("bar") 1 >>> return_arg is return_arg(return_arg) 1 These tests will break when we get bool in Python 2.3. I use assert instead. > namespace boost { namespace python > { > namespace detail > { > template > struct return_arg_policy_pos_argument_must_be_greater_than_zero > # if defined(__GNUC__) && __GNUC__ >= 3 || defined(__EDG__) > {error} ^^^^^ This will produce an error a little bit too early ;-) I think you'd better delete it. > # endif > ; > Once these fixes have been made, anyone with CVS write permission can commit it as far as I'm concerned, or you can remind me when I come back from vacation. -- Dave Abrahams Boost Consulting www.boost-consulting.com From eric at enthought.com Sat Jun 28 00:08:02 2003 From: eric at enthought.com (eric jones) Date: Fri, 27 Jun 2003 17:08:02 -0500 Subject: [C++-sig] [ANN] SciPy '03 -- The 2nd Annual Python for Scientific Computing Workshop Message-ID: <015701c33cf8$96be3670$8901a8c0@ERICDESKTOP> Hey folks, I've been postponing this announcement because the registration page isn't active yet. It's getting late though, and I thought I'd at least let you know SciPy '03 is happening. I'll repost when registration is open. Thanks, Eric ------------------------------------------------------- SciPy '03 The 2nd Annual Python for Scientific Computing Workshop ------------------------------------------------------- CalTech, Pasadena, CA September 11-12, 2003 http://www.scipy.org/site_content/scipy03 This workshop provides a unique opportunity to learn and affect what is happening in the realm of scientific computing with Python. Attendees will have the opportunity to review the available tools and how they apply to specific problems. By providing a forum for developers to share their Python expertise with the wider industrial, academic, and research communities, this workshop will foster collaboration and facilitate the sharing of software components, techniques and a vision for high level language use in scientific computing. The cost of the workshop is $100.00 and includes 2 breakfasts and 2 lunches on Sept. 11th and 12th, one dinner on Sept. 11th, and snacks during breaks. Online registration is not available yet, but will be soon. We would like to have a wide variety of presenters this year. If you have a paper you would like to present, please contact eric at enthought.com. Discussion about the conference may be directed to the SciPy-user mailing list: Mailing list page: http://www.scipy.org/MailList Mailinbg list address: scipy-user at scipy.org Please forward this announcement to anyone/list that might be interested. 
------------- Co-Hosted By: ------------- The National Biomedical Computation Resource (NBCR, SDSC, San Diego, CA) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://nbcr.sdsc.edu The mission of the National Biomedical Computation Resource at the San Diego Supercomputer Center is to conduct, catalyze, and enable biomedical research by harnessing advanced computational technology. The Center for Advanced Computing Research (CACR, CalTech, Pasadena, CA) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://nbcr.sdsc.edu CACR is dedicated to the pursuit of excellence in the field of high-performance computing, communication, and data engineering. Major activities include carrying out large-scale scientific and engineering applications on parallel supercomputers and coordinating collaborative research projects on high-speed network technologies, distributed computing and database methodologies, and related topics. Our goal is to help further the state of the art in scientific computing. Enthought, Inc. (Austin, TX) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ http://enthought.com Enthought, Inc. provides scientific and business computing solutions through software development, consulting and training. ---------------------------------------------- eric jones 515 Congress Ave www.enthought.com Suite 1614 512 536-1057 Austin, Tx 78701 From dave at boost-consulting.com Sat Jun 28 00:09:12 2003 From: dave at boost-consulting.com (David Abrahams) Date: Fri, 27 Jun 2003 18:09:12 -0400 Subject: [C++-sig] Re: Embedding Python, threading and scalability References: <9532424DC30C934689726CEDBE15F41703CC8859@omasrv07.csgsystems.com> Message-ID: "Qiu, Wenning" writes: > My project will be running on an SMP box and requires > scalability. However, my test shows that Python threading has very > poor performance in terms of scaling. In fact it doesn't scale at all. > > I wrote a simple test program to complete given number of iterations > of a simple loop. The total number of iterations can be divided evenly > among a number of threads. My test shows that as the number of threads > grows, the CPU usage grows and the response time gets longer. For > example, to complete the same amount of work, one thread takes 10 > seconds, 2 threads take 20 seconds and 3 threads take 30 seconds. > > The fundamental reason for lacking scalability is that Python uses a > global interpreter lock for thread safety. That global lock must be > held by a thread before it can safely access Python objects. > > I thought I might be able to make embedded Python scalable by > embedding multiple interpreters and have them run independently in > different threads. However "Python/C API Reference Manual" chapter 8 > says that "The global interpreter lock is also shared by all threads, > regardless of to which interpreter they belong". Therefore with > current implementation, even multiple interpreters do not provide > scalability. > > Has anyone on this list run into the same problem that I have, or does > anyone know of any plan of totally insulating multiple embedded Python This question has little to do with Boost.Python, so I doubt you'll find good answers here. I would try comp.lang.python; I think people will rush to help you there. 
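One note that does bear on the embedding side: if the expensive part of the work can be pushed down into C or C++, the extension can release the global interpreter lock around it with Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS, and those sections will then run in parallel on an SMP box. It will not help a pure-Python loop like the test described above, which still serializes on the lock. A minimal sketch -- compute_chunk is a hypothetical stand-in for the real C++ work, and it must not touch any Python API while the lock is released:

    #include <Python.h>

    // Plain C++ work; no Python API calls in here.
    static double compute_chunk(long iterations)
    {
        double x = 0;
        for (long i = 0; i < iterations; ++i)
            x += i * 0.5;
        return x;
    }

    static PyObject* run_chunk(PyObject*, PyObject* args)
    {
        long iterations;
        if (!PyArg_ParseTuple(args, "l", &iterations))
            return NULL;

        double result;
        Py_BEGIN_ALLOW_THREADS      // drop the GIL; other Python threads may run
        result = compute_chunk(iterations);
        Py_END_ALLOW_THREADS        // reacquire it before touching Python again

        return Py_BuildValue("d", result);
    }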
Good luck, -- Dave Abrahams Boost Consulting www.boost-consulting.com From dalwan01 at student.umu.se Sat Jun 28 01:02:07 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Sat, 28 Jun 2003 00:02:07 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3efcccef.725c.16838@student.umu.se> > "Daniel Wallin" writes: > >> Users could just specialize or overload > >> boost::luapython::type_id, which, in the no-RTTI case, > >> has no no default definition. > > > > Note that we need to store the type_info object, and > > that's why we have macros for this. So that the user can > > change the type of that object. This wouldn't really > work > > with specialization. > > template struct type_info > #ifndef NO_RTTI > { > typedef std::type_info type; > static type get(); > } > #endif > ; > > template > struct int_type_info > { > typedef int type; > static int get() { return n; } > }; > > template <> > struct type_info > : int_type_info<42> > {}; > > Of course I have no objection to using macros to generate > specializations. The type of the type-identifier needs to be known to the library though, so my point is that it can't be a template which the user just specializes. You also need to set the type for the compiled part of the library. Or am I missing something here? > > >> >> > This of course causes some problems if we want to > >> >> > downcast > >> >> > >> >> And if you want to support CBD (component based > >> >> development, i.e. cross-module conversions), unless > you > >> >> can somehow get all authors to agree on how to > identify > >> >> types. Well, I guess they could use strings and > manually > >> >> supply what std::type_info::name() does. > >> > > >> > It doesn't seem wrong that the authors need to use > the > >> > same typeid. If you aren't developing a closed > system, > >> > don't change typeid system. > >> > >> I was thinking more of id collisions among extensions > >> which don't intend to share types. It becomes less of > an > >> issue if people use string ids. > > > > But like I said, if you intend to use your module with > other > > modules; don't change the typeid system. > > I'm not talking about changing systems, just the need to > ensure unique type IDs for different types across modules. Yeah I know, what I'm saying is that you will only get problems like that if you change your type_info represantation to something like 'int'. In which case you have changed type id system. This isn't a problem as long as you stick to typeid() and type_info? > >> > LUABIND_DONT_COPY_STRINGS > >> > >> What kind of string copying gets disabled? > > > > It causes names to be held by const char* instead of > > std::string. (class names, function names etc). > > Why not always do that? Hold const char*? Because you can't control their lifetime? :) > > >> > . and the typeid macros. > >> > > >> > Most are results of user requests. Massive > configuration > >> > is quite important to our users, since lua is used > alot on > >> > embedded systems. > >> > >> Have your users come back after getting these features > and > >> given you any feedback about how much performance > they've > >> gained or space they've saved? > > > > No. But some of them is a must-have for alot of > > developers. In particular the ability to turn off > > exception handling. > > Oh sure, I believe that one, especially because some shops > (advisedly or no) have a policy against EH and RTTI. You > don't want to just leave them out. 
I am just leery of > configurability in general and would tend to resist making > any new macros part of the official release until users > had > told me it made a big difference to them in alpha/beta > stages. In particular, "massive" configurability is not > neccessarily desirable. It creates "massive" maintenance > and testing headaches. I agree. I can ask around on our mailing list later about what kind of configuration people think is interesting, and how much of what we have now they are using. > > >> >> [I assume this means that all of your C++ objects > are held > >> >> within their lua wrappers by pointer. I went to > >> >> considerable lengths to allow them to be held > by-value, > >> >> though I'm not sure the efficiency gain is worth the > cost > >> >> in flexibility.] > >> > > >> > Correct. We hold all C++ objects by pointer. I can > see why > >> > it could be interesting to be able to hold objects by > value > >> > though, especially with small objects. I'm not sure > it would > >> > be worth it though, since the user can just provide a > custom > >> > pool allocator if allocation is an issue. > >> > >> Still costs an extra 4 bytes (gasp!) for the pointer. > >> Yeah, it was in one of my contract specs, so I had to > >> implement it. Besides, it seemed like fun, but I bet > >> nobody notices and I'd be very happy to rip it out and > >> follow your lead on this. > > > > Need to think about this some more. It's no special case > > in BPL though, it's just another type of > instance_holder, > > correct? > > Not only that. It's an issue of where the instance holder > gets constructed. In this case it is constructed directly > in the storage for the Python object. Right, that's pretty cool. > > >> > Wouldn't both the examples you provided require A to > be held > >> > by auto_ptr in python? > >> > >> Only the A created by call_function for this callback. > >> You can have As held any number of ways in the same > >> program. > > > > Ah, right. You just instantiate different > instance_holder's. > > Does this mean there's special handling of auto_ptr's? > > What do you mean by "special handling"? I think the > answer > is no, though I also think there should be special > handling > for convenience and efficiency. I meant like an overload to extract the type held by the auto_ptr. But you already answered that. > > > Otherwise, how can you tell which type is being held by > > the pointer? (In the call_function example). > > The to-python converter that gets registered for > auto_ptr > by using it in a class, ... > or by > using register_pointer_to_python > knows > what to do. Ah, right. It's pretty nice to be able to hold the instances in different kind of holders. > > >> >> I can begin to see the syntactic convenience of your > way, > >> >> but I worry about the parameterizability. In the > first > >> >> case "result" is the only possible appropriate arg > and in > >> >> the 2nd case it's "_1". > >> > > >> > What exactly do you worry about? That someone would > use > >> > the wrong placeholder? I think the intended use is > quite > >> > clear. > >> > >> I guess I'm just a worrywart. Do the other policies > have > >> similar broad applicability? If so, I'm inclined to > >> accept > >> that you have the right idea. > > > > Most conversion policies have double direction > > Cool, I accept your scheme. Great. > > > and even the > > ones that doesn't can still be used in different > contexts. > > ('result' when doing C++ -> lua, and '_N' when doing lua > -> > > C++). 
> > How is that *not* a case of bidirectionality? :) It is, my fault. I meant it could be used in different contexts with the same direction, it should read something like: 'result' when calling a C++ function, '_N' when calling a lua function. > That rule makes me nervous, because the terms are > backwards > when calling lua/python from C++. Do people get confused? I don't think they do, I don't think anyone has reported any problems with it yet. It's backwards for a reason though, and it's pretty clear when to use which placeholder. (except in some special cases, where documentation helps). > > >> > void f(B*, B*, B*, python::list) > >> > void f(D*, D*, D*, std::vector) > >> > > >> > f(D(), D(), D(), [1, 2, 3]) > >> > >> Yeah, I guess that looks right, but I don't know. At > some > >> point things are just ambiguous. > > > > Exactly. Perhaps introducing weights on conversions > might > > increase the ambiguity for the user, since it's hard to > tell > > at which point the overload system will turn around and > > choose another overload. Does this make any sense? > > Yes. But as I meant to say to Andrei, none of the > "classic" > multimethod systems account for coercion, so if there's > some > reason they can't use your simple rule it has to be > something you can understand in terms of simple > inheritance > hierarchies. Right. I'll try to read up on this a bit more, I missed those Andrei mails you cc:ed yesterday, gonna read them today. -- Daniel Wallin From nickm at sitius.com Sat Jun 28 01:49:27 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Fri, 27 Jun 2003 19:49:27 -0400 Subject: [C++-sig] Re: return_self_policy / return_arg References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> Message-ID: <3EFCD807.F82FD757@sitius.com> And again ... PS. How can I get you to commit the pathes I posted some time ago about keywords? David Abrahams wrote: > > Very nice. Just a couple nits: > > Nikolay Mladenov writes: > > >>> from return_self import * > >>> l1=Label() > >>> l1 is l1.label("bar") > 1 > >>> l1 is l1.label("bar").sensitive(0) > 1 > >>> l1.label("foo").sensitive(0) is l1.sensitive(1).label("bar") > 1 > >>> return_arg is return_arg(return_arg) > 1 > > These tests will break when we get bool in Python 2.3. I use assert > instead. > > > namespace boost { namespace python > > { > > namespace detail > > { > > template > > struct return_arg_policy_pos_argument_must_be_greater_than_zero > > # if defined(__GNUC__) && __GNUC__ >= 3 || defined(__EDG__) > > {error} > ^^^^^ > This will produce an error a little bit too early ;-) > I think you'd better delete it. > > # endif > > ; > > > > Once these fixes have been made, anyone with CVS write permission can > commit it as far as I'm concerned, or you can remind me when I come > back from vacation. > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com -------------- next part -------------- ''' >>> from return_self import * >>> l1=Label() >>> assert l1 is l1.label("bar") >>> assert l1 is l1.label("bar").sensitive(0) >>> assert l1.label("foo").sensitive(0) is l1.sensitive(1).label("bar") >>> assert return_arg is return_arg(return_arg) ''' def run(args = None): import sys import doctest if args is not None: sys.argv = args return doctest.testmod(sys.modules.get(__name__)) if __name__ == '__main__': print "running..." import sys sys.exit(run()[0]) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: return_self.cpp Type: application/x-unknown-content-type-cppfile Size: 1400 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: return_self_policy.hpp Type: application/x-unknown-content-type-hppfile Size: 2533 bytes Desc: not available URL: From dave at boost-consulting.com Sat Jun 28 17:51:46 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 28 Jun 2003 11:51:46 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3efcccef.725c.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> "Daniel Wallin" writes: > > The type of the type-identifier needs to be known to the > library though, so my point is that it can't be a template > which the user just specializes. You also need to set the > type for the compiled part of the library. Or am I missing > something here? Probably not; I think you have a good point, at least, if we need runtime polymorphism among type id objects or if they need to be manipulated in the compiled part of the library. typedefs still work, though. >> >> >> > This of course causes some problems if we want to >> >> >> > downcast >> >> >> >> >> >> And if you want to support CBD (component based >> >> >> development, i.e. cross-module conversions), unless >> >> >> you can somehow get all authors to agree on how to >> >> >> identify types. Well, I guess they could use >> >> >> strings and manually supply what >> >> >> std::type_info::name() does. >> >> > >> >> > It doesn't seem wrong that the authors need to use the >> >> > same typeid. If you aren't developing a closed system, >> >> > don't change typeid system. >> >> >> >> I was thinking more of id collisions among extensions >> >> which don't intend to share types. It becomes less of an >> >> issue if people use string ids. >> > >> > But like I said, if you intend to use your module with other >> > modules; don't change the typeid system. >> >> I'm not talking about changing systems, just the need to >> ensure unique type IDs for different types across modules. > > Yeah I know, what I'm saying is that you will only get > problems like that if you change your type_info > represantation to something like 'int'. In which case you > have changed type id system. This isn't a problem as long > as you stick to typeid() and type_info? Assuming a single compiler and the use of distinct namespaces, no. >> >> > LUABIND_DONT_COPY_STRINGS >> >> >> >> What kind of string copying gets disabled? >> > >> > It causes names to be held by const char* instead of >> > std::string. (class names, function names etc). >> >> Why not always do that? > > Hold const char*? Because you can't control their > lifetime? :) You mean that some people want to compute these things dynamically instead of using string literals? Nobody has ever asked me for that. I bet this is one area of configurability that could be dropped. >> >> > . and the typeid macros. >> >> > >> >> > Most are results of user requests. Massive configuration >> >> > is quite important to our users, since lua is used alot on >> >> > embedded systems. >> >> >> >> Have your users come back after getting these features and >> >> given you any feedback about how much performance they've >> >> gained or space they've saved? >> > >> > No. But some of them is a must-have for alot of >> > developers. In particular the ability to turn off >> > exception handling. 
>> >> Oh sure, I believe that one, especially because some >> shops (advisedly or no) have a policy against EH and >> RTTI. You don't want to just leave them out. I am just >> leery of configurability in general and would tend to >> resist making any new macros part of the official release >> until users had told me it made a big difference to them >> in alpha/beta stages. In particular, "massive" >> configurability is not neccessarily desirable. It >> creates "massive" maintenance and testing headaches. > > I agree. I can ask around on our mailing list later about > what kind of configuration people think is interesting, > and how much of what we have now they are using. Cool. >> >> >> [I assume this means that all of your C++ objects are held >> >> >> within their lua wrappers by pointer. I went to >> >> >> considerable lengths to allow them to be held by-value, >> >> >> though I'm not sure the efficiency gain is worth the cost >> >> >> in flexibility.] >> >> > >> >> > Correct. We hold all C++ objects by pointer. I can see why >> >> > it could be interesting to be able to hold objects by value >> >> > though, especially with small objects. I'm not sure it would >> >> > be worth it though, since the user can just provide a custom >> >> > pool allocator if allocation is an issue. >> >> >> >> Still costs an extra 4 bytes (gasp!) for the pointer. >> >> Yeah, it was in one of my contract specs, so I had to >> >> implement it. Besides, it seemed like fun, but I bet >> >> nobody notices and I'd be very happy to rip it out and >> >> follow your lead on this. >> > >> > Need to think about this some more. It's no special >> > case in BPL though, it's just another type of >> > instance_holder, correct? >> >> Not only that. It's an issue of where the instance holder >> gets constructed. In this case it is constructed directly >> in the storage for the Python object. > > Right, that's pretty cool. Yeah, but I doubt it's saving much, and it leads to much complication (c.f. instance_new and instance_dealloc in libs/python/src/object/class.cpp). If objects are getting created and destroyed a *lot* it could reduce fragmentation and increase locality... but as I said I have big doubts. >> The to-python converter that gets registered for >> auto_ptr by using it in a class, >> ... > or by using >> register_pointer_to_python > knows what >> to do. > > Ah, right. It's pretty nice to be able to hold the instances > in different kind of holders. Yeah. Note also that shared_ptr is "magic", in that any wrapped T can be converted to shared_ptr regardless of holder, and the resulting shared_ptr can be converted back to the original Python object (not just a new one sharing the same C++ object). >> > and even the >> > ones that doesn't can still be used in different contexts. >> > ('result' when doing C++ -> lua, and '_N' when doing lua -> >> > C++). >> >> How is that *not* a case of bidirectionality? > > :) It is, my fault. I meant it could be used in different > contexts with the same direction, it should read something > like: 'result' when calling a C++ function, '_N' when > calling a lua function. > >> That rule makes me nervous, because the terms are >> backwards when calling lua/python from C++. Do people >> get confused? > > I don't think they do, I don't think anyone has reported any > problems with it yet. It's backwards for a reason though, > and it's pretty clear when to use which placeholder. (except > in some special cases, where documentation helps). OK. 
Maybe we could allow a default argument which would "just work" for these cases. Another option is "_", which is used in MPL for something similar. -- Dave Abrahams Boost Consulting www.boost-consulting.com From dave at boost-consulting.com Sat Jun 28 17:53:28 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 28 Jun 2003 11:53:28 -0400 Subject: [C++-sig] Re: return_self_policy / return_arg References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> <3EFCD807.F82FD757@sitius.com> Message-ID: Nikolay Mladenov writes: > PS. How can I get you to commit the pathes I posted some time ago about > keywords? Nikolay, If they got lost in the shuffle, I sincerely apologize. Please post a pointer to the post containing the patches. I'll try to look them over this week during my "vacation" ;-) -- Dave Abrahams Boost Consulting www.boost-consulting.com From dalwan01 at student.umu.se Sat Jun 28 22:41:56 2003 From: dalwan01 at student.umu.se (Daniel Wallin) Date: Sat, 28 Jun 2003 21:41:56 +0100 Subject: [C++-sig] Re: Interest in luabind Message-ID: <3efdfd94.975.16838@student.umu.se> > "Daniel Wallin" writes: > > >> "Daniel Wallin" writes: > > > > The type of the type-identifier needs to be known to the > > library though, so my point is that it can't be a > template > > which the user just specializes. You also need to set > the > > type for the compiled part of the library. Or am I > missing > > something here? > > Probably not; I think you have a good point, at least, if > we > need runtime polymorphism among type id objects or if they > need to be manipulated in the compiled part of the > library. > > typedefs still work, though. Yeah, a typedef would have been a much better solution. I think the reason we use macro's is because we used const type_info* directly, instead of wrapping it in a class. So we couldn't simply use < and == when comparing the types. > > >> >> >> > This of course causes some problems if we want > to > >> >> >> > downcast > >> >> >> > >> >> >> And if you want to support CBD (component based > >> >> >> development, i.e. cross-module conversions), > unless > >> >> >> you can somehow get all authors to agree on how > to > >> >> >> identify types. Well, I guess they could use > >> >> >> strings and manually supply what > >> >> >> std::type_info::name() does. > >> >> > > >> >> > It doesn't seem wrong that the authors need to use > the > >> >> > same typeid. If you aren't developing a closed > system, > >> >> > don't change typeid system. > >> >> > >> >> I was thinking more of id collisions among > extensions > >> >> which don't intend to share types. It becomes less > of an > >> >> issue if people use string ids. > >> > > >> > But like I said, if you intend to use your module > with other > >> > modules; don't change the typeid system. > >> > >> I'm not talking about changing systems, just the need > to > >> ensure unique type IDs for different types across > modules. > > > > Yeah I know, what I'm saying is that you will only get > > problems like that if you change your type_info > > represantation to something like 'int'. In which case > you > > have changed type id system. This isn't a problem as > long > > as you stick to typeid() and type_info? > > Assuming a single compiler and the use of distinct > namespaces, no. Right. > > >> >> > LUABIND_DONT_COPY_STRINGS > >> >> > >> >> What kind of string copying gets disabled? > >> > > >> > It causes names to be held by const char* instead of > >> > std::string. 
(class names, function names etc). > >> > >> Why not always do that? > > > > Hold const char*? Because you can't control their > > lifetime? :) > > You mean that some people want to compute these things > dynamically instead of using string literals? Nobody has > ever asked me for that. I bet this is one area of > configurability that could be dropped. We actually have one user that uses this feature, I don't know if he actually needs to though. So yeah, it could probably be dropped. > >> > Need to think about this some more. It's no special > >> > case in BPL though, it's just another type of > >> > instance_holder, correct? > >> > >> Not only that. It's an issue of where the instance > holder > >> gets constructed. In this case it is constructed > directly > >> in the storage for the Python object. > > > > Right, that's pretty cool. > > Yeah, but I doubt it's saving much, and it leads to much > complication (c.f. instance_new and instance_dealloc in > libs/python/src/object/class.cpp). If objects are getting > created and destroyed a *lot* it could reduce > fragmentation > and increase locality... but as I said I have big doubts. Yeah, fragmentation issues can always be solved with pool allocation. Is the complexity only due to value-holders, or just different kind of holders? Seems to me (without having looked at instance_new/instance_dealloc yet..) like you would need the same allocation routines for pointer-holders as well. > >> The to-python converter that gets registered for > >> auto_ptr by using it in a class, > >> ... > or by using > >> register_pointer_to_python > knows > what > >> to do. > > > > Ah, right. It's pretty nice to be able to hold the > instances > > in different kind of holders. > > Yeah. > > Note also that shared_ptr is "magic", in that any wrapped > T > can be converted to shared_ptr regardless of holder, > and > the resulting shared_ptr can be converted back to the > original Python object (not just a new one sharing the > same > C++ object). Ok. I noticed that while browsing the code. The custom deleter is a powerful tool. :) > >> > and even the > >> > ones that doesn't can still be used in different > contexts. > >> > ('result' when doing C++ -> lua, and '_N' when doing > lua -> > >> > C++). > >> > >> How is that *not* a case of bidirectionality? > > > > :) It is, my fault. I meant it could be used in > different > > contexts with the same direction, it should read > something > > like: 'result' when calling a C++ function, '_N' when > > calling a lua function. > > > >> That rule makes me nervous, because the terms are > >> backwards when calling lua/python from C++. Do people > >> get confused? > > > > I don't think they do, I don't think anyone has reported > any > > problems with it yet. It's backwards for a reason > though, > > and it's pretty clear when to use which placeholder. > (except > > in some special cases, where documentation helps). > > OK. Maybe we could allow a default argument which would > "just work" for these cases. Another option is "_", which > is used in MPL for something similar. Sure, that would be nice. You can also introduce additional placeholder aliases for the cases where '_N' and 'result' doesn't fit very well. 
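To make the custom-deleter remark above a little more concrete, here is a rough sketch of the general idea -- not the library's actual implementation, and py_reference_deleter / share_with_python are made-up names for illustration. The deleter owns a reference to the Python object, so the C++ shared_ptr pins that object alive, and destroying the last shared_ptr just drops the reference rather than deleting storage Python owns:

    #include <boost/python.hpp>
    #include <boost/shared_ptr.hpp>

    struct py_reference_deleter
    {
        explicit py_reference_deleter(boost::python::handle<> owner)
          : m_owner(owner) {}

        // Nothing to delete by hand: when the last shared_ptr dies this
        // deleter is destroyed and m_owner's destructor releases the
        // Python reference.
        void operator()(void const*) {}

        boost::python::handle<> m_owner;
    };

    template <class T>
    boost::shared_ptr<T> share_with_python(T* raw, boost::python::handle<> owner)
    {
        // 'raw' is assumed to point into the object wrapped by 'owner'.
        return boost::shared_ptr<T>(raw, py_reference_deleter(owner));
    }

Because the deleter travels with the shared_ptr, something like boost::get_deleter can later fish the stored handle back out, which is roughly what makes the round trip to the original Python object possible.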
-- Daniel Wallin From dave at boost-consulting.com Sun Jun 29 00:34:30 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sat, 28 Jun 2003 18:34:30 -0400 Subject: [C++-sig] Re: Interest in luabind References: <3efdfd94.975.16838@student.umu.se> Message-ID: "Daniel Wallin" writes: >> >> > Need to think about this some more. It's no special >> >> > case in BPL though, it's just another type of >> >> > instance_holder, correct? >> >> >> >> Not only that. It's an issue of where the instance >> >> holder gets constructed. In this case it is >> >> constructed directly in the storage for the Python >> >> object. >> > >> > Right, that's pretty cool. >> >> Yeah, but I doubt it's saving much, and it leads to much >> complication (c.f. instance_new and instance_dealloc in >> libs/python/src/object/class.cpp). If objects are >> getting created and destroyed a *lot* it could reduce >> fragmentation and increase locality... but as I said I >> have big doubts. > > Yeah, fragmentation issues can always be solved with pool > allocation. But locality can't, for what that's worth. Frankly I think it's not worth much. > Is the complexity only due to value-holders, or just > different kind of holders? Seems to me (without having > looked at instance_new/instance_dealloc yet..) like you > would need the same allocation routines for > pointer-holders as well. I'm trying to tell you it has nothing to do with the how the holders contain their data [though see below]; it's about how/where the holders themselves are constructed. The object model is something like this: every wrapped class instance contains an endogenous linked list of holders (to handle multiple inheritance from wrapped classes) and some raw storage to use for holder allocation. A class wrapper knows the size and alignment of its default holder, so usually the holder will be allocated right in the raw storage. If the holder turns out to be too big for that storage, e.g. the default holder contains an auto_ptr but for some reason a value_holder is used, or a value_holder is used where U is derived from T, then the holder will be dynamically allocated instead. Additional holders in MI situations can also end up being dynamically allocated. Arranging for the extra storage in the Python object means we have to fool Python into thinking the objects are variable-length (like a tuple), and incurs a few other complications. Well, OK, actually I suppose the argument forwarding problem (http://tinyurl.com/fist) causes a lot of additional complexity because references need to be passed through reference_wrapper arguments, and that would go away if there were no value_holders. I barely even notice that anymore, since it was part of BPLv1... but now that you mention it, value_holder is probably causing more complication in the code than in-place holder allocation. Eliminating both might be a huge simplification. Additionally, it might become possible to convert nearly any object to nearly any kind of smart pointer. value_holders impose some real limitations on usability. >> >> The to-python converter that gets registered for >> >> auto_ptr by using it in a class, >> >> ... > or by using >> >> register_pointer_to_python > knows what >> >> to do. >> > >> > Ah, right. It's pretty nice to be able to hold the instances >> > in different kind of holders. >> >> Yeah. 
>> >> Note also that shared_ptr is "magic", in that any wrapped >> T can be converted to shared_ptr regardless of holder, >> and the resulting shared_ptr can be converted back to >> the original Python object (not just a new one sharing >> the same C++ object). > > Ok. I noticed that while browsing the code. The custom > deleter is a powerful tool. :) Yeah, Peter's design really is beautiful. >> >> > and even the >> >> > ones that doesn't can still be used in different contexts. >> >> > ('result' when doing C++ -> lua, and '_N' when doing lua -> >> >> > C++). >> >> >> >> How is that *not* a case of bidirectionality? >> > >> > :) It is, my fault. I meant it could be used in different >> > contexts with the same direction, it should read something >> > like: 'result' when calling a C++ function, '_N' when >> > calling a lua function. >> > >> >> That rule makes me nervous, because the terms are >> >> backwards when calling lua/python from C++. Do people >> >> get confused? >> > >> > I don't think they do, I don't think anyone has reported any >> > problems with it yet. It's backwards for a reason though, >> > and it's pretty clear when to use which placeholder. (except >> > in some special cases, where documentation helps). >> >> OK. Maybe we could allow a default argument which would >> "just work" for these cases. Another option is "_", which >> is used in MPL for something similar. > > Sure, that would be nice. You can also introduce additional > placeholder aliases for the cases where '_N' and 'result' > doesn't fit very well. My point is that I'm trying to get away from a scenario where users have to think about which choice is right, when the context dictates that there can only be one right choice. Adding different names for the choices doesn't help with that problem. This is a minor nit. It sounds like we're converging quite rapidly. What other issues do we need to deal with? -- Dave Abrahams Boost Consulting www.boost-consulting.com From hugo at adept.co.za Sun Jun 29 15:14:43 2003 From: hugo at adept.co.za (Hugo van der Merwe) Date: Sun, 29 Jun 2003 15:14:43 +0200 Subject: [C++-sig] Cross-module support? Message-ID: <20030629131443.GB8552@vervet.localnet> I am trying to wrap a library that has classes derived from a library that has already been wrapped - is this currently possible? If not, I am considering using some C++ code to do the parts of the work that requires access to both libraries, in this case I believe each library will need some "global registry" of sorts, so that the C++ code can change data in each library with having to get/return data from/to python. Any other alternatives? Thanks, Hugo van der Merwe From paustin at eos.ubc.ca Sun Jun 29 16:49:25 2003 From: paustin at eos.ubc.ca (Philip Austin) Date: Sun, 29 Jun 2003 07:49:25 -0700 Subject: [C++-sig] Simply calling a routine ... In-Reply-To: <33259.137.82.23.131.1056747726.squirrel@webmail.eos.ubc.ca> References: <3EF9CAF4.80208@cenix-bioscience.com> <3EFAE527.3040301@cenix-bioscience.com> <3EFC15EF.2070304@novagrid.com> <33259.137.82.23.131.1056747726.squirrel@webmail.eos.ubc.ca> Message-ID: <16126.64629.354065.164195@gull.eos.ubc.ca> Charles Leung writes: > Your example, utilizing num_util, will probably look something like... > > ... 
> namespace nbpl = num_util;
> namespace py = boost::python;
>
> py::long_ foo( py::numeric::array a, float b,
>                py::numeric::array c, int N ){
!!>   Array<short,1> A(N);
!!>   A.data() = (short*) nbpl::data(a);
>
>   // Do some work

Note that Blitz has a flag to prevent it from invoking delete on a's data pointer when A goes out of scope -- the lines marked !! need to be replaced with:

    short* data = (short*) nbpl::data(a);
    Array<short,1> A(data, shape(N), neverDeleteData);

Regards, Phil

From dave at boost-consulting.com Sun Jun 29 17:15:00 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 29 Jun 2003 11:15:00 -0400 Subject: [C++-sig] Re: Cross-module support? References: <20030629131443.GB8552@vervet.localnet> Message-ID: Hugo van der Merwe writes: > I am trying to wrap a library that has classes derived from a library > that has already been wrapped - is this currently possible? Yes; it "just works", with some compiler-dependent caveats related to downcasting polymorphic classes. See http://aspn.activestate.com/ASPN/Mail/Message/1688156 for how to deal with those if you need automatic downcasting. -- Dave Abrahams Boost Consulting www.boost-consulting.com From nickm at sitius.com Sun Jun 29 22:56:16 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sun, 29 Jun 2003 16:56:16 -0400 Subject: [C++-sig] Re: (return_self_policy / return_arg): Keywors References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> <3EFCD807.F82FD757@sitius.com> Message-ID: <3EFF5270.788370C2@sitius.com> Dave, That is what I have as diffs now. The diffs are from somewhat old cvs state (1.29.0).
I'll try to look them > over this week during my "vacation" ;-) > > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com -------------- next part -------------- 44a45,51 > keywords operator , (const keywords<1> &k) const > { > python::detail::keywords res; > std::copy(elements, elements+size, res.elements); > res.elements[size] = k.elements[0]; > return res; > } 46a54,56 > > > 98c108,120 < # define BOOST_PYTHON_ASSIGN_NAME(z, n, _) result.elements[n].name = name##n; --- > struct arg : detail::keywords<1> > { > template > arg &operator = (T const &value) > { > elements[0].default_value = handle<>(python::borrowed(object(value).ptr())); > return *this; > } > arg (char const *name){elements[0].name = name;} > operator detail::keyword const &()const {return elements[0];} > }; > > # define BOOST_PYTHON_ASSIGN_NAME(z, n, _) result.elements[n] = kwd##n; 100c122 < inline detail::keywords args(BOOST_PP_ENUM_PARAMS_Z(1, n, char const* name)) \ --- > inline detail::keywords args(BOOST_PP_ENUM_PARAMS_Z(1, n, detail::keyword const& kwd)) \ -------------- next part -------------- 21a22 > keyword(char const* n=0):name(n){} -------------- next part -------------- 57a58 > unsigned m_nkeyword_values; -------------- next part -------------- 35a36 > , m_nkeyword_values(0) 42d42 < 48a49,50 > object tpl (handle<>(PyTuple_New(names_and_defaults[i].default_value?2:1))); > 50,51c52,53 < m_arg_names.ptr() < , i + keyword_offset --- > tpl.ptr() > , 0 55a58,72 > if(names_and_defaults[i].default_value){ > PyTuple_SET_ITEM( > tpl.ptr() > , 1 > , incref(names_and_defaults[i].default_value.get()) > ); > ++m_nkeyword_values; > } > > > PyTuple_SET_ITEM( > m_arg_names.ptr() > , i + keyword_offset > , incref(tpl.ptr()) > ); 82c99 < if (total_args >= f->m_min_arity && total_args <= f->m_max_arity) --- > if (total_args+f->m_nkeyword_values >= f->m_min_arity && total_args <= f->m_max_arity) 85c102 < if (nkeywords > 0) --- > if (nkeywords > 0 || total_args < f->m_min_arity) 94,95c111,112 < // build a new arg tuple < args2 = handle<>(PyTuple_New(total_args)); --- > // build a new arg tuple, will adjust its size later > args2 = handle<>(PyTuple_New(f->m_max_arity)); 102c119,120 < for (std::size_t j = nargs; j < total_args; ++j) --- > std::size_t j = nargs, k = nargs, size=PyTuple_GET_SIZE(f->m_arg_names.ptr()); > for (; j < f->m_max_arity && jm_arg_names.ptr(), j)); --- > PyObject* kwd=PyTuple_GET_ITEM(f->m_arg_names.ptr(), j); > > PyObject* value = nkeywords?PyDict_GetItem( > keywords, PyTuple_GET_ITEM(kwd, 0)) : 0; 108a129,132 > if(PyTuple_GET_SIZE(kwd)>1) > value = PyTuple_GET_ITEM(kwd, 1); > if (!value) > { 112a137 > }else ++k; 114a140,156 > > if(args2.get()){ > //check if we proccessed all the arguments > if(k < total_args) > args2 = handle<>(); > > //adjust the parameter tuple size > if(jm_max_arity){ > handle<> args3( PyTuple_New(j) ); > for(size_t l=0; l!=j; ++l) > { > PyTuple_SET_ITEM(args3.get(), l, PyTuple_GET_ITEM(args3.get(), l)); > PyTuple_SET_ITEM(args2.get(), l, 0); > } > args2 = args3; > } > } From dave at boost-consulting.com Mon Jun 30 01:24:21 2003 From: dave at boost-consulting.com (David Abrahams) Date: Sun, 29 Jun 2003 19:24:21 -0400 Subject: [C++-sig] Re: (return_self_policy / return_arg): Keywors References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> <3EFCD807.F82FD757@sitius.com> <3EFF5270.788370C2@sitius.com> Message-ID: Nikolay Mladenov writes: > Dave, > > That is what I have as diffs now. The diffs are from somewhat old cvs > state (1.29.0). 
> I've been using it for quite some time for init<> and it seems to be > working nicely, but I have no tests, nor docs. > If you approve this though, I will write docs and tests. I need to see some examples/description to know what it does, I think. IIRC it has something to do with defaulted keyword arguments? If it's what I remember, I'd be very enthusiastic about adding it to the library! Thanks, Dave Nit: the coding style should be made consistent with the rest of the library. These don't really fit: > std::size_t j = nargs, k = nargs, size=PyTuple_GET_SIZE(f->m_arg_names.ptr()); > }else ++k; -- Dave Abrahams Boost Consulting www.boost-consulting.com From nickm at sitius.com Mon Jun 30 03:07:21 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sun, 29 Jun 2003 21:07:21 -0400 Subject: [C++-sig] Re: (return_self_policy / return_arg): Keywors References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> <3EFCD807.F82FD757@sitius.com> <3EFF5270.788370C2@sitius.com> Message-ID: <3EFF8D49.9492CA00@sitius.com> This is how I use it: /////the.cpp//// python::class_ ("FileDialog", python::no_init) .def(python::init(python::args( python::arg("title") = (const char *)0 , python::arg("prompt") = (const char *)0 , python::arg("extension") = (const char *)0 , python::arg("directory")= (const char *)0 , python::arg("file")= (const char *)0 )) ) .def(python::init >() ) /////the.py//// fd1 = FileDialog() fd2 = FileDialog("Title") fd2 = FileDialog(title="Title") fd2 = FileDialog("Title", extension='.py', prompt='Select your script') David Abrahams wrote: > > Nikolay Mladenov writes: > > > Dave, > > > > That is what I have as diffs now. The diffs are from somewhat old cvs > > state (1.29.0). > > I've been using it for quite some time for init<> and it seems to be > > working nicely, but I have no tests, nor docs. > > If you approve this though, I will write docs and tests. > > I need to see some examples/description to know what it does, I think. > IIRC it has something to do with defaulted keyword arguments? If it's > what I remember, I'd be very enthusiastic about adding it to the > library! > > Thanks, > Dave > > Nit: the coding style should be made consistent with the rest of the > library. These don't really fit: > > > std::size_t j = nargs, k = nargs, size=PyTuple_GET_SIZE(f->m_arg_names.ptr()); > > > }else ++k; > I will look into this. If you mean that some new line are missing, consider it fixed. > -- > Dave Abrahams > Boost Consulting > www.boost-consulting.com From nickm at sitius.com Mon Jun 30 05:35:28 2003 From: nickm at sitius.com (Nikolay Mladenov) Date: Sun, 29 Jun 2003 23:35:28 -0400 Subject: [C++-sig] Virus Message-ID: <3EFFB000.331141C4@sitius.com> I got two e-mails from two members of this list containing virus This link http://www.ku.edu/acs/virus/viruses/sobigE.shtml has description of the virus. For me this means, that probably a third member of the list has infected machine. 
nikolay From dave at boost-consulting.com Mon Jun 30 06:25:02 2003 From: dave at boost-consulting.com (David Abrahams) Date: Mon, 30 Jun 2003 00:25:02 -0400 Subject: [C++-sig] Re: (return_self_policy / return_arg): Keywors References: <3EF7D45C.7D819158@sitius.com> <3EF85503.38A43BC6@sitius.com> <3EFCAD1B.BFA2AA5B@sitius.com> <3EFCD807.F82FD757@sitius.com> <3EFF5270.788370C2@sitius.com> <3EFF8D49.9492CA00@sitius.com> Message-ID: Nikolay Mladenov writes: > This is how I use it: Reformatting to avoid "endline layout" (google for it): /////the.cpp//// class_( "FileDialog", python::no_init ) .def( init< const char * , const char * , const char * , const char * , const char * >( args( arg("title") = (const char *)0 , arg("prompt") = (const char *)0 , arg("extension") = (const char *)0 , arg("directory") = (const char *)0 , arg("file") = (const char *)0 ) ) ) .def( init< optional< const char * , const char * , const char * , const char * , const char * > >() ) /////the.py//// fd1 = FileDialog() fd2 = FileDialog("Title") fd2 = FileDialog(title="Title") fd2 = FileDialog("Title", extension='.py', prompt='Select your script') OK, I don't understand this. Isn't the 2nd constructor redundant? Why are you using no_init? Doesn't this stuff work for regular functions and member functions, too? ...and shouldn't we get rid of the need to write the outer "args(...)"? I suggest you write the documentation which would explain all this, but posting informally is fine if you try to ensure that I don't have to ask you lots more questions in order to understand it ;-) >> Nit: the coding style should be made consistent with the rest of the >> library. These don't really fit: >> >> > std::size_t j = nargs, k = nargs, size=PyTuple_GET_SIZE(f->m_arg_names.ptr()); >> >> > }else ++k; >> > > I will look into this. If you mean that some new line are missing, > consider it fixed. Also that you used commas between variable declarations, K&R braces within functions, no spaces around '?', '<', '>', ... -- Dave Abrahams Boost Consulting www.boost-consulting.com From gathmann at cenix-bioscience.com Mon Jun 30 08:30:21 2003 From: gathmann at cenix-bioscience.com (F. Oliver Gathmann) Date: Mon, 30 Jun 2003 08:30:21 +0200 Subject: [C++-sig] Re: passing a dynamic PyObject * from C++ to Python In-Reply-To: References: <3EF9CAF4.80208@cenix-bioscience.com> <3EFAE527.3040301@cenix-bioscience.com> Message-ID: <3EFFD8FD.2090608@cenix-bioscience.com> David Abrahams wrote: >Since you have those converters registered, why not simply: > > object readFooOrBar(bool fooOrBar) > { if (fooOrBar) > { > std::auto_ptr p(new Foo()); > return object(p); > } > else > { > std::auto_ptr p(new Bar()); > return object(p); > } > } > >?? > >That seems much easier to me. > > > Doh! I thought I had tried this - works like a charme. Many thanks for pointing this out (particularly given the fact that you are on vacation this week...). Oliver -- -------------------------------------------------------------------- F. Oliver Gathmann, Ph.D. Director IT Unit Cenix BioScience GmbH Pfotenhauer Strasse 108 phone: +49 (351) 210-2735 D-01307 Dresden, Germany fax: +49 (351) 210-1309 fingerprint: 8E0E 9A64 A07E 0D1A D302 34C2 421A AE9F 4E13 A009 public key: http://www.cenix-bioscience.com/public_keys/gathmann.gpg --------------------------------------------------------------------