From Holger.Joukl at LBBW.de Wed Apr 3 10:08:57 2013
From: Holger.Joukl at LBBW.de (Holger Joukl)
Date: Wed, 3 Apr 2013 10:08:57 +0200
Subject: [C++-sig] question on boost.python exception mechanisms
In-Reply-To: <51540153.8030408@copanitalia.com>
References: <51540153.8030408@copanitalia.com>
Message-ID: 

Hi Giuseppe,

thanks for answering and sorry for the delayed response. Easter holidays :-)

Giuseppe Corbelli wrote on 28.03.2013 09:37:39:

> On 26/03/2013 18:51, Holger Joukl wrote:
> >
> > Hi,
> >
> > I'm wrapping a C++ library that's actually just a thin wrapper around a C lib.
> >
> > Through a dispatch() method of a Dispatcher class there's an invocation of
> > callbacks which get implemented on the Python side, by subclassing callback
> > classes exposed through Boost.Python.
> >
> > Now, for performing the actual dispatch the C++ library calls into the
> > underlying C library.
> >
> > This hasn't been a problem so far as we've been targeting Solaris using
> > Sun/Oracle Studio compilers.
> >
> > However, on Linux compiled with GCC, if an exception gets raised in the
> > Python callback the program segfaults. Obviously the C part of the vendor
> > library has not been compiled with (GCC-) exception support for Linux.
> >
> > Unfortunately this isn't under my control and the library provider seems
> > not to be willing to make the library robust against exceptions in user
> > callback code.
> >
> > So I need to keep the Boost C++ exception that "signals" the Python
> > exception from passing through the C parts.
>
> How does the library handle callbacks? An event loop in a separate thread? Do
> you have to explicitly call some blocking function to run the event loop?

Yes. There's a separate thread that schedules events to a queue that gets
dispatched in the main loop, in the main thread.
And yes, there's a blocking call to a dispatch method with the possibility of
a timeout, or alternatively a non-blocking poll call (we make sure to properly
release & reacquire the Python GIL during blocking).

> If it's a C lib likely the callbacks are just function pointers. Being called
> inside a C object I'd say your assumption is correct: no C++ exceptions to be
> raised inside callbacks (the lib is gcc-compiled, it has no knowledge of
> exceptions).

Correct, the callbacks are function pointers from the C lib's point of view.

> If you could recompile with g++ the exception would raise from the event loop,
> unless caught while calling the callback function pointer.

While I do have the C++ sources (to be able to recompile for different C++
compilers), I don't have the C sources, so unfortunately I can't recompile the
C-lib parts with exception support.
Sidenote: For Oracle/Sun Solaris Studio compilers this isn't a problem, as you
basically can't and don't need to switch the exception support of the compiler
for mixing C and C++ code
(http://www.oracle.com/technetwork/articles/servers-storage-dev/mixingcandcpluspluscode-305840.html)

> Sidenote: how does the C++ exception mechanism work under the hood? What
> happens if it's called inside a C compiled function?
Not sure if I understand your question correctly, but: - Solaris Studio compilers: An exception raised in a C++ compiled function that got called from a C compiled function called from a C++ main will properly propagate to the main caller - GCC: An exception raised in a C++ compiled function that got called from a C compiled function called from a C++ main will - segfault if the C compiled function hasn't been compiled with exception support (-fexceptions) - properly propagate to the main caller if the C compiled function has been compiled with exception support Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From giuseppe.corbelli at copanitalia.com Mon Apr 8 11:29:10 2013 From: giuseppe.corbelli at copanitalia.com (Giuseppe Corbelli) Date: Mon, 08 Apr 2013 11:29:10 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: <51540153.8030408@copanitalia.com> Message-ID: <51628DE6.9060105@copanitalia.com> On 03/04/2013 10:08, Holger Joukl wrote: > Hi Giuseppe, > > thanks for answering and sorry for the delayed response. Easter > holidays :-) To punish you here's another late reply. > Giuseppe Corbelli wrote on 28.03.2013 > 09:37:39: > >> On 26/03/2013 18:51, Holger Joukl wrote: >>> >>> Hi, >>> >>> I'm wrapping a C++ library that's actually just a thin wrapper around a >>> C lib. >>> >>> Through a dispatch() method of a Dispatcher class there's an invocation of >>> callbacks which get implemented on the Python side, by subclassing >>> callback classes exposed through Boost.Python. >>> >>> Now, for performing the actual dispatch the C++ library calls into the >>> underlying C library. >>> >>> This hasn't been a problem so far as we've been targetting Solaris using >>> Sun/Oracle studio >>> compilers. >>> >>> However, on Linux compiled with GCC, if an exception gets raised in the >>> Python callback the >>> program segfaults. Obviously the C part of the vendor library has not >>> been compiled with >>> (GCC-) exception support for Linux. >>> >>> Unfortunately this isn't under my control and the library provider >>> seems not to be willing >>> to make the library robust against exceptions in user callback code. >>> >>> So I need to keep the Boost C++ exception that "signals" the Python >>> exception from passing >>> through the C parts. >> >> How does the library handle callbacks? An event loop in a separate >> thread? Do you have to explicitly call some blocking function to run the >> event loop? > > Yes. There's a separate thread that schedules events to a queue being > dispatch in the main loop, in the main thread. > And yes, there's a blocking call to a dispatch method with the possibility > of a timeout or alternatively a non-blocking poll call (we make sure to properly > release& reacquire the Python GIL during blocking). > >> If it's a C lib likely the callbacks are just function pointers. Being >> called inside a C object I'd say you assumption is correct: no C++ >> exceptions to be >> raised inside callbacks (the lib is gcc-compiled, it has no knowledge of >> exceptions). > > Correct, the callbacks are function pointers from the C libs point of view. > >> If you could recompile with g++ the exception would raise from the >> event loop, >> unless catched while calling the callback function pointer. 
> > While I do have the C++ sources (to be able to recompile for different C++ > compilers) > I don't have the C sources so I unfortunately I can't recompile the C-lib > parts with > exceptions support. > Sidenote: For Oracle/Sun Solaris Studio compilers this isn't a problem as > you basically can't > and don't need to switch the exception support of the compiler for mixing C > and C++ code > (http://www.oracle.com/technetwork/articles/servers-storage-dev/mixingcandcpluspluscode-305840.html) I have found a couple of references. http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) http://gcc.gnu.org/wiki/Visibility The proprietary lib is shared, right? linked to? shared? static? >> Sidenote: how does the C++ exception mechanism work under the hood? What >> happens if it's called inside a C compiled function? > > Not sure if I understand your question correctly, but: > - Solaris Studio compilers: An exception raised in a C++ compiled function > that got called > from a C compiled function called from a C++ main will properly propagate > to the main caller > - GCC: An exception raised in a C++ compiled function that got called > from a C compiled function called from a C++ main will > - segfault if the C compiled function hasn't been compiled with exception > support (-fexceptions) > - properly propagate to the main caller if the C compiled function has > been compiled with exception support Well, I was thinking about ASM level :-) -- Giuseppe Corbelli WASP Software Engineer, Copan Italia S.p.A Phone: +390303666318 Fax: +390302659932 E-mail: giuseppe.corbelli at copanitalia.com From Holger.Joukl at LBBW.de Mon Apr 8 14:11:07 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Mon, 8 Apr 2013 14:11:07 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: <51628DE6.9060105@copanitalia.com> References: <51540153.8030408@copanitalia.com> <51628DE6.9060105@copanitalia.com> Message-ID: Hi, Giuseppe Corbelli wrote on 08.04.2013 11:29:10: > On 03/04/2013 10:08, Holger Joukl wrote: > > Hi Giuseppe, > > > > thanks for answering and sorry for the delayed response. Easter > > holidays :-) > > To punish you here's another late reply. ;-) > >>> However, on Linux compiled with GCC, if an exception gets raised in the > >>> Python callback the > >>> program segfaults. Obviously the C part of the vendor library has not > >>> been compiled with > >>> (GCC-) exception support for Linux. > >>> > >>> Unfortunately this isn't under my control and the library provider > >>> seems not to be willing > >>> to make the library robust against exceptions in user callback code. > >>> > >>> So I need to keep the Boost C++ exception that "signals" the Python > >>> exception from passing > >>> through the C parts. > >> > >> How does the library handle callbacks? An event loop in a separate > >> thread? Do you have to explicitly call some blocking function to run the > >> event loop? > > > > Yes. There's a separate thread that schedules events to a queue being > > dispatch in the main loop, in the main thread. > > And yes, there's a blocking call to a dispatch method with the possibility > > of a timeout or alternatively a non-blocking poll call (we make > sure to properly > > release& reacquire the Python GIL during blocking). > > > >> If it's a C lib likely the callbacks are just function pointers. 
Being > >> called inside a C object I'd say you assumption is correct: no C++ > >> exceptions to be > >> raised inside callbacks (the lib is gcc-compiled, it has no knowledge of > >> exceptions). > > > > Correct, the callbacks are function pointers from the C libs point of view. > > > >> If you could recompile with g++ the exception would raise from the > >> event loop, > >> unless catched while calling the callback function pointer. > > > > While I do have the C++ sources (to be able to recompile for different C++ > > compilers) > > I don't have the C sources so I unfortunately I can't recompile the C-lib > > parts with > > exceptions support. > > Sidenote: For Oracle/Sun Solaris Studio compilers this isn't a problem as > > you basically can't > > and don't need to switch the exception support of the compiler for mixing C > > and C++ code > > (http://www.oracle.com/technetwork/articles/servers-storage-dev/ > mixingcandcpluspluscode-305840.html) > > I have found a couple of references. > http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) > http://gcc.gnu.org/wiki/Visibility Thanks, I'll need to look into these. > The proprietary lib is shared, right? linked to? shared? static? Shared C lib, we compile a shared C++ lib linked to the C lib from the vendor C++ sources, which we shared-link the (shared) Boost.Python wrapper to. > >> Sidenote: how does the C++ exception mechanism work under the hood? What > >> happens if it's called inside a C compiled function? > > > > Not sure if I understand your question correctly, but: > > - Solaris Studio compilers: An exception raised in a C++ compiled function > > that got called > > from a C compiled function called from a C++ main will properly propagate > > to the main caller > > - GCC: An exception raised in a C++ compiled function that got called > > from a C compiled function called from a C++ main will > > - segfault if the C compiled function hasn't been compiled withexception > > support (-fexceptions) > > - properly propagate to the main caller if the C compiled function has > > been compiled with exception support > > Well, I was thinking about ASM level :-) No clue whatsoever. Regards Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From giuseppe.corbelli at copanitalia.com Tue Apr 9 09:09:14 2013 From: giuseppe.corbelli at copanitalia.com (Giuseppe Corbelli) Date: Tue, 09 Apr 2013 09:09:14 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: <51540153.8030408@copanitalia.com> <51628DE6.9060105@copanitalia.com> Message-ID: <5163BE9A.2070805@copanitalia.com> On 08/04/2013 14:11, Holger Joukl wrote: >> I have found a couple of references. >> http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) >> http://gcc.gnu.org/wiki/Visibility > > Thanks, I'll need to look into these. > >> The proprietary lib is shared, right? linked to? shared? static? > > Shared C lib, we compile a shared C++ lib linked to the C lib from the > vendor > C++ sources, which we shared-link the (shared) Boost.Python wrapper to. Quoting from the manual: There are several situations in which an application should use the shared libgcc instead of the static version. The most common of these is when the application wishes to throw and catch exceptions across different shared libraries. In that case, each of the libraries as well as the application itself should use the shared libgcc. 
However, if a library or main executable is supposed to throw or catch exceptions, you must link it using the G++ or GCJ driver, as appropriate for the languages used in the program, or using the option -shared-libgcc, such that it is linked with the shared libgcc. Likely the C_lib.so is not linked to libgcc_s.so. Don't know if it's possible to "extract" the objects from the shared .so. Maybe playing dirty with LD_PRELOAD? -- Giuseppe Corbelli WASP Software Engineer, Copan Italia S.p.A Phone: +390303666318 Fax: +390302659932 E-mail: giuseppe.corbelli at copanitalia.com From s_sourceforge at nedprod.com Tue Apr 9 03:04:33 2013 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Mon, 08 Apr 2013 21:04:33 -0400 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: , <51628DE6.9060105@copanitalia.com>, Message-ID: <51636921.21247.1411F5CF@s_sourceforge.nedprod.com> On 8 Apr 2013 at 14:11, Holger Joukl wrote: > > I have found a couple of references. > > http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) > > http://gcc.gnu.org/wiki/Visibility > > Thanks, I'll need to look into these. I wrote the second one. Really not sure how that helps you. Look into capturing the exception before it enters the C code using std::exception_ptr, and then rethrowing it on reentry to C++. If you don't have C++11 in your C++, Boost provides an okay partial implementation of C++11 exception support. Niall -- Any opinions or advice expressed here do NOT reflect those of my employer BlackBerry Inc. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ -------------- next part -------------- A non-text attachment was scrubbed... Name: SMime.p7s Type: application/x-pkcs7-signature Size: 6061 bytes Desc: not available URL: From Holger.Joukl at LBBW.de Wed Apr 10 13:48:14 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Wed, 10 Apr 2013 13:48:14 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: <51636921.21247.1411F5CF@s_sourceforge.nedprod.com> References: , <51628DE6.9060105@copanitalia.com>, <51636921.21247.1411F5CF@s_sourceforge.nedprod.com> Message-ID: Hi, "Cplusplus-sig" wrote on 09.04.2013 03:04:33: > From: "Niall Douglas" > On 8 Apr 2013 at 14:11, Holger Joukl wrote: > > > > I have found a couple of references. > > > http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) > > > http://gcc.gnu.org/wiki/Visibility > > > > Thanks, I'll need to look into these. > > I wrote the second one. Really not sure how that helps you. At the very least I'll definitely learn something :-) > Look into capturing the exception before it enters the C code using > std::exception_ptr, and then rethrowing it on reentry to C++. In a similar approach I'm trying out I currently catch bp::error_already_set before entering C Code and on reentry to C++ call bp::throw_error_already_set() iff PyErr_Occurred(). So I basically make use of Python's global-per-thread exception indicator to "suppress" the exception for the C code and detect the need to re-raise before returning from C++ to Python. I guess that will work just fine unless some non-Python related exception creeps in, either raised from within Boost.Python or in my custom code. > If you don't have C++11 in your C++, Boost provides an okay partial > implementation of C++11 exception support. I'll look into this. This would then mean instrumenting some object with a place to store the caught exception to re-raise upon reentry from C to C++. 
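Roughly along these lines, I suppose (a minimal sketch only: callback_guard and the callback signature are made up, Boost's header-only exception_ptr support is assumed so it also works pre-C++11, and the usual boost::current_exception() caveats about how the exception was originally thrown apply):

    #include <boost/exception_ptr.hpp>

    // Sketch of such an "instrumented" object (invented names). The guard is
    // what actually runs inside the C dispatcher, so no exception ever
    // unwinds through the C frames.
    struct callback_guard {
        boost::exception_ptr pending;

        template <class Callback, class Arg>
        void run(Callback& cb, Arg arg) {
            try {
                cb(arg);                                 // may call into Python
            } catch (...) {
                pending = boost::current_exception();    // park the exception
            }
        }

        // Called once control is back in C++, after the C dispatcher returned.
        void rethrow_if_pending() {
            if (!(pending == boost::exception_ptr())) {  // non-null?
                boost::exception_ptr e = pending;
                pending = boost::exception_ptr();        // clear before rethrowing
                boost::rethrow_exception(e);
            }
        }
    };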
I take it this would then do much the same as my approach sketched above but "safer and sounder". Thanks, Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From s_sourceforge at nedprod.com Sun Apr 14 00:37:04 2013 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sat, 13 Apr 2013 18:37:04 -0400 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: , <51636921.21247.1411F5CF@s_sourceforge.nedprod.com>, Message-ID: <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com> On 10 Apr 2013 at 13:48, Holger Joukl wrote: > > If you don't have C++11 in your C++, Boost provides an okay partial > > implementation of C++11 exception support. > > I'll look into this. This would then mean instrumenting some object with a > place > to store the caught exception to re-raise upon reentry from C to C++. > I take it this would then do much the same as my approach sketched above > but > "safer and sounder". There is always the trick of keeping a hash table keyed on thread id and call function depth. In fact it could even be made lock free and wait free with Boost v1.53's new lock and wait free queue implementations. Niall -- Any opinions or advice expressed here do NOT reflect those of my employer BlackBerry Inc. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ -------------- next part -------------- A non-text attachment was scrubbed... Name: SMime.p7s Type: application/x-pkcs7-signature Size: 6061 bytes Desc: not available URL: From Holger.Joukl at LBBW.de Wed Apr 17 10:29:08 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Wed, 17 Apr 2013 10:29:08 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: <5163BE9A.2070805@copanitalia.com> References: <51540153.8030408@copanitalia.com> <51628DE6.9060105@copanitalia.com> <5163BE9A.2070805@copanitalia.com> Message-ID: Hi, Giuseppe Corbelli wrote on 09.04.2013 09:09:14: > On 08/04/2013 14:11, Holger Joukl wrote: > >> I have found a couple of references. > >> http://gcc.gnu.org/onlinedocs/gcc/Link-Options.html (see static-libgcc) > >> http://gcc.gnu.org/wiki/Visibility > > > > Thanks, I'll need to look into these. > > > >> The proprietary lib is shared, right? linked to? shared? static? > > > > Shared C lib, we compile a shared C++ lib linked to the C lib from the > > vendor > > C++ sources, which we shared-link the (shared) Boost.Python wrapper to. > > Quoting from the manual: > There are several situations in which an application should use the shared > libgcc instead of the static version. The most common of these is when the > application wishes to throw and catch exceptions across different shared > libraries. In that case, each of the libraries as well as the application > itself should use the shared libgcc. Both the shared libboost_python.so and the extension module I'm building link against the shared libgcc_s.so. > However, if a library or main executable is supposed to throw or catch > exceptions, you must link it using the G++ or GCJ driver, as appropriate for > the languages used in the program, or using the option -shared-libgcc, such > that it is linked with the shared libgcc. > > Likely the C_lib.so is not linked to libgcc_s.so. Don't know if it'spossible > to "extract" the objects from the shared .so. Maybe playing dirty > with LD_PRELOAD? 
While the shared lib is indeed not linked to libgcc_s.so, I now don't think this
makes any difference for the problem at hand. Experimenting with this simple
test lib:

0 $ cat dispatch.h
// File: dispatch.h
#ifdef __cplusplus
extern "C" {
#endif
typedef char const *cb_arg_t;
typedef void (*callback_func_ptr_t)(cb_arg_t);
void invoke(callback_func_ptr_t cb, cb_arg_t arg);
#ifdef __cplusplus
}
#endif

0 $ cat dispatch.c
// File: dispatch.c
#include <stdio.h>
#include "dispatch.h"

void invoke(callback_func_ptr_t cb, cb_arg_t arg) {
    printf("--> invoke(%d, %s)\n", &(*cb), arg);
    (*cb)(arg);
    printf("<-- invoke(%d, %s)\n", &(*cb), arg);
}

$ cat dispatch_main.cpp
// File: dispatch_main.cpp
#include <cstdio>
#include <iostream>
#include <stdexcept>
#include "dispatch.h"

void callback(cb_arg_t arg) {
    printf("--> CPP callback(%s)\n", arg);
    printf("<-- CPP callback(%s)\n", arg);
}

void callback_with_exception(cb_arg_t arg) {
    printf("--> CPP exception-throwing callback(%s)\n", arg);
    throw std::runtime_error("throwing up");
    printf("<-- CPP exception-throwing callback(%s)\n", arg);
}

int main(void) {
    std::cout << "--> CPP main" << std::endl;
    cb_arg_t s = "CPP main callback argument";
    invoke(&callback, s);
    try {
        invoke(&callback_with_exception, s);
    } catch (const std::exception& exc) {
        std::cout << "caught callback exception: " << exc.what() << std::endl;
    }
    std::cout << "<-- CPP main" << std::endl;
    return 0;
}

I've found that:

(1) Linking the C lib dynamically or statically against libgcc does not make
any difference wrt exception segfaulting. I.e. a lib compiled with

- gcc -static-libgcc -o libdispatch.so -shared -fPIC dispatch.c:
  $ ldd libdispatch.so
        libc.so.1 => /lib/libc.so.1
        libm.so.2 => /lib/libm.so.2
        /platform/SUNW,Sun-Fire-V490/lib/libc_psr.so.1

- gcc -o libdispatch.so -shared -fPIC dispatch.c
  $ ldd libdispatch.so
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1
        libc.so.1 => /lib/libc.so.1
        libm.so.2 => /lib/libm.so.2
        /platform/SUNW,Sun-Fire-V490/lib/libc_psr.so.1

will not make any difference when running the main program:

0 $ g++ -I. -L. -R. -ldispatch dispatch_main.cpp
0 $ ./a.out
--> CPP main
--> invoke(68316, CPP main callback argument)
--> CPP callback(CPP main callback argument)
<-- CPP callback(CPP main callback argument)
<-- invoke(68316, CPP main callback argument)
--> invoke(68372, CPP main callback argument)
--> CPP exception-throwing callback(CPP main callback argument)
terminate called after throwing an instance of 'std::runtime_error'
  what():  throwing up
Abort (core dumped)

(2) Compiling the C lib with exception support, i.e.
-fexceptions will make the segfault disappear: 0 $ gcc -static-libgcc -fexceptions -o libdispatch.so -shared -fPIC dispatch.c 0 $ ./a.out --> CPP main --> invoke(68720, CPP main callback argument) --> CPP callback(CPP main callback argument) <-- CPP callback(CPP main callback argument) <-- invoke(68720, CPP main callback argument) --> invoke(68776, CPP main callback argument) --> CPP exception-throwing callback(CPP main callback argument) caught callback exception: throwing up <-- CPP main 0 $ ldd libdispatch.so libc.so.1 => /lib/libc.so.1 libm.so.2 => /lib/libm.so.2 /platform/SUNW,Sun-Fire-V490/lib/libc_psr.so.1 0 $ gcc -fexceptions -o libdispatch.so -shared -fPIC dispatch.c 0 $ ./a.out --> CPP main --> invoke(68720, CPP main callback argument) --> CPP callback(CPP main callback argument) <-- CPP callback(CPP main callback argument) <-- invoke(68720, CPP main callback argument) --> invoke(68776, CPP main callback argument) --> CPP exception-throwing callback(CPP main callback argument) caught callback exception: throwing up <-- CPP main 0 $ ldd libdispatch.so libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 libc.so.1 => /lib/libc.so.1 libm.so.2 => /lib/libm.so.2 /platform/SUNW,Sun-Fire-V490/lib/libc_psr.so.1 So, wrapping up, it looks like if the C lib isn't compiled with exception support there is no way that an exception can propagate through the C parts without segfaulting, regardless of linking dynamically or statically against libgcc. >From the cited gcc documentation I understand that linking dynamically or statically will however influence your capabilities on throwing in *one* shared lib and catching in *another* shared lib, all participating libraries *compiled with exception support*, though. Which means I'll need to do something along the lines that Niall sketched out. Thanks, Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From beamesleach at gmail.com Wed Apr 17 15:54:02 2013 From: beamesleach at gmail.com (Alex Leach) Date: Wed, 17 Apr 2013 14:54:02 +0100 Subject: [C++-sig] std::list wrapper [Was: Segfaults in object deallocation] In-Reply-To: References: <5149DA90.6030806@jaedyn.co> <5149E4F8.8010903@gmail.com> Message-ID: On Wed, 20 Mar 2013 16:53:55 -0000, Alex Leach wrote: > On Wed, 20 Mar 2013 16:34:00 -0000, Jim Bosch > wrote: > >> >> [...] I'm curious what you're using to wrap std::list, as you clearly >> have methods that return std::list, and the standard library can be >> tricky to translate to Python due to things like iterator invalidation. > > Heh, not yet anyway! Although I've laid out the foundations, I haven't > yet implemented my attempt at a solution, nor tested it, I've since got around to fixing the std::list wrapper I think I sent around before. Lots of errors in that version! Now, I've got it working, though; its sort and reverse functions use C++ STL algorithms instead of Python's, but that's probably a good thing; results appear to be consistent with those on a normal Python list. That said, I haven't profiled it for speed and compared performance against Python lists. I attach a complete example in the attached zip archive; just run 'python test_make_list.py' to compile an example extension, which exposes std::list and std::list. Unit-tests are then run to compare those against identical Python lists. 
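For anyone who just wants the general shape without downloading the archive, here is a deliberately minimal, purely illustrative sketch. This is not the attached make_list.hpp (which goes through an indexing-suite-style implementation with optional sort/reverse predicates); the module name and helper functions below are invented:

    #include <cstddef>
    #include <list>
    #include <boost/python.hpp>

    namespace bp = boost::python;
    typedef std::list<int> int_list_t;

    // Small free-function helpers avoid taking addresses of standard library
    // member functions directly.
    static void        int_list_append (int_list_t& l, int v) { l.push_back(v); }
    static std::size_t int_list_len    (int_list_t const& l)  { return l.size(); }
    static void        int_list_sort   (int_list_t& l)        { l.sort(); }
    static void        int_list_reverse(int_list_t& l)        { l.reverse(); }

    BOOST_PYTHON_MODULE(minimal_list)
    {
        bp::class_<int_list_t>("IntList")
            .def("append",   &int_list_append)
            .def("__len__",  &int_list_len)
            .def("__iter__", bp::iterator<int_list_t>())  // makes it iterable
            .def("sort",     &int_list_sort)
            .def("reverse",  &int_list_reverse)
            ;
    }

With that, something like l = minimal_list.IntList(); l.append(1); list(l) works from Python, though without the slicing and comparison support the real wrapper provides.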
make_list.zip contents:- ./test_make_list.py // Compiles example code and runs unit-tests ./test_make_list.cpp // example showing how to expose list and list ./boost_helpers/make_list.hpp // defines boost::python::list_indexing_suite ./boost_helpers/make_callback.hpp // needed for sort and reverse optional binary predicate ./util.py // helper functions for compiling with distutils A couple things to mention:- 1. make_callback.hpp isn't as tidy as make_list.hpp; it probably shouldn't even be necessary, but I needed something to help calling back into Python code, and this seemed to work. I wrote quite a nice template implementation using variadic templates, but the C++98 alternative isn't great. 2. I haven't (yet) exposed a good initialiser for the list classes, so it's not possible to do, e.g. >>> int_list = test_make_list.IntList([1, 2, 3]) Instead, I've been populating lists like this:- >>> int_list = test_make_list.IntList() >>> int_list[:] = [1, 2, 3] Obviously, that's not quite ideal. Any help or advice with making a templatised initialiser would be very welcome though! Kind regards, Alex -------------- next part -------------- A non-text attachment was scrubbed... Name: make_list.zip Type: application/zip Size: 7153 bytes Desc: not available URL: From Holger.Joukl at LBBW.de Wed Apr 17 17:13:51 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Wed, 17 Apr 2013 17:13:51 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com> References: , <51636921.21247.1411F5CF@s_sourceforge.nedprod.com>, <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com> Message-ID: Hi, > From: "Niall Douglas" > On 10 Apr 2013 at 13:48, Holger Joukl wrote: > > > > If you don't have C++11 in your C++, Boost provides an okay partial > > > implementation of C++11 exception support. I'm currently using gcc 4.6.1 which supports the necessary features using -std=c++0x but also Solaris Studio 12.2 which doesn't so I'd need the Boost implementation. > > I'll look into this. This would then mean instrumenting some object with a > > place > > to store the caught exception to re-raise upon reentry from C to C++. > > I take it this would then do much the same as my approach sketched above > > but > > "safer and sounder". > > There is always the trick of keeping a hash table keyed on thread id > and call function depth. In fact it could even be made lock free and > wait free with Boost v1.53's new lock and wait free queue > implementations. A bit of a simplified example version using Boost's partial C++11 exception support would then be s.th. 
like:

0 $ cat dispatch_main_exception_map.cpp
// File: dispatch_main_exception_map.cpp
#include <iostream>
#include <stdexcept>
#include <pthread.h>
// header-only:
#include <boost/exception_ptr.hpp>
// header-only:
#include <boost/unordered_map.hpp>
#include "dispatch.h"

// the global per-thread exception storage
boost::unordered_map<pthread_t, boost::exception_ptr> exception_map;

void callback(cb_arg_t arg) {
    std::cout << "--> CPP callback" << arg << std::endl;
    std::cout << "<-- CPP callback" << arg << std::endl;
}

// this callback raises an exception
void callback_with_exception(cb_arg_t arg) {
    std::cout << "--> exception-throwing CPP callback" << arg << std::endl;
    throw std::runtime_error("throwing up");
    std::cout << "<-- exception-throwing CPP callback" << arg << std::endl;
}

// this callback raises an exception, catches it and stores it in the
// global (thread, exception)-map
void guarded_callback_with_exception(cb_arg_t arg) {
    std::cout << "--> guarded exception-throwing CPP callback" << arg << std::endl;
    try {
        throw std::runtime_error("throwing up");
    } catch (...) {
        std::cout << "storing exception in exception map" << std::endl;
        pthread_t current_thread = pthread_self();
        exception_map[current_thread] = boost::current_exception();
        exception_map.erase(current_thread);
        //global_exception_holder = boost::current_exception();
    }
    std::cout << "<-- guarded exception-throwing CPP callback" << arg << std::endl;
}

int main(void) {
    std::cout << "--> CPP main" << std::endl;
    cb_arg_t s = "CPP main callback argument";
    std::cout << std::endl;

    invoke(&callback, s);
    std::cout << std::endl;

    invoke(&guarded_callback_with_exception, s);
    pthread_t current_thread = pthread_self();
    if (exception_map.find(current_thread) != exception_map.end()) {
        try {
            std::cout << "rethrowing exception from exception map" << std::endl;
            boost::rethrow_exception(exception_map[current_thread]);
        } catch (const std::exception& exc) {
            std::cout << "caught callback exception: " << exc.what() << std::endl;
        }
    }
    std::cout << std::endl;

    try {
        invoke(&callback_with_exception, s);
    } catch (const std::exception& exc) {
        std::cout << "caught callback exception: " << exc.what() << std::endl;
    }
    std::cout << std::endl;

    std::cout << "<-- CPP main" << std::endl;
    return 0;
}

This doesn't respect call function depth (how would I do that?) and doesn't
use a queue; I suppose you mean using the lockfree queue for threadsafe
access to the hash table.

I think I don't even need that for my use case, as I basically
- call dispatch on a Boost.Python wrapped object
- which invokes the C lib's dispatcher function
- which invokes the registered callbacks (these will usually be implemented
  in Python)

i.e. the same thread that produced the exception will need to handle it.
So what I have to make sure is that
- no exception can propagate back to the C lib
- I recognize an exception has happened, in the initial dispatch method in
  the same thread, to re-raise it and let Boost.Python translate it back to
  Python

It looks like getting at the thread ID is also not too portable. This is
something that the simple solution of using (abusing?) the Python per-thread
exception indicator would avoid.

Thanks for the valuable hints!
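For comparison, that "simple solution", parking the error in Python's per-thread error indicator rather than in a C++-side map, might look roughly like this. The trampoline and the module-level callback object are invented stand-ins; only cb_arg_t and invoke() are the ones from dispatch.h above:

    #include <boost/python.hpp>
    #include "dispatch.h"

    namespace bp = boost::python;

    static bp::object g_py_callback;   // set from Python; hypothetical

    // Handed to the C library as a plain function pointer: swallow the C++
    // exception so nothing unwinds through the C frames, but leave Python's
    // error indicator set for this thread.
    static void callback_trampoline(cb_arg_t arg)
    {
        try {
            g_py_callback(arg);                // calls back into Python
        } catch (bp::error_already_set const&) {
            // don't rethrow here; PyErr_* state remains set
        }
    }

    // The dispatch entry point exposed to Python (stand-in for the real one).
    static void dispatch_once(cb_arg_t arg)
    {
        invoke(&callback_trampoline, arg);     // C code runs the callback
        if (PyErr_Occurred())                  // did a callback fail earlier?
            bp::throw_error_already_set();     // let Boost.Python re-raise it
    }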
Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From s_sourceforge at nedprod.com Thu Apr 18 02:40:49 2013 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Wed, 17 Apr 2013 20:40:49 -0400 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: , <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com>, Message-ID: <516F4111.26650.4255D8A9@s_sourceforge.nedprod.com> On 17 Apr 2013 at 17:13, Holger Joukl wrote: > // the global per-thread exception storage > boost::unordered_map exception_map; You can't assume pthread_t is of integral type. You can a thread_t (from C11) I believe. You may not care on your supported platforms though. > throw std::runtime_error("throwing up"); If you're going to use Boost's exception_ptr implementation, you really ought to throw using Boost's exception throw macro. Otherwise stuff become unreliable. > void guarded_callback_with_exception(cb_arg_t arg) { > std::cout << "--> guarded exception-throwing CPP callback" << arg << > std::endl; > try { > throw std::runtime_error("throwing up"); > } catch (...) { > std::cout << "storing exception in exception map" << std::endl; > pthread_t current_thread = pthread_self(); > exception_map[current_thread] = boost::current_exception(); > exception_map.erase(current_thread); > //global_exception_holder = boost::current_exception(); > } > std::cout << "<-- guarded exception-throwing CPP callback" << arg << > std::endl; > } If you're just going to do this, I'd suggest you save yourself some hassle and look into packaged_task which is a std::function combined with a std::future. It takes care of the exception trapping and management for you. Do bear in mind there is absolutely no reason you can't use a future within a single thread to traverse some state over third party binary blob code, indeed I do this in my own code at times e.g. if Qt, which doesn't like exceptions, stands between my exception throwing code at the end of a Qt signal and my code which called Qt. > pthread_t current_thread = pthread_self(); > if (exception_map.find(current_thread) != exception_map.end()) { > try { > std::cout << "rethrowing exception from exception map" << > std::endl; > boost::rethrow_exception(exception_map[current_thread]); > } catch (const std::exception& exc) { > std::cout << "caught callback exception: " << exc.what() << > std::endl; > } > } Why not using thread local storage? > Which doesn't respect call function depth (how would I do that?) Keep a count of nesting levels, so if a callback calls a callback which calls a callback etc. > and doesn't use a queue; I suppose you mean using the lockfree queue for > threadsafe > access to the hash table. Maybe a hash table of lockfree queues. Depends on your needs. > I think I don't even need that for my use case as I basically > - call dispatch on a Boost.Python wrapped object > - which invokes the C libs dispatcher function > - which invokes the registered callbacks (these will usually be implemented > in Python) > > i.e. the same thread that produced the exception will need to handle it. Another possibly useful idea is to have the C libs dispatcher dispatch packaged_task's into a per-thread lock free queue which you then dispatch on control return when the C library isn't in the way anymore. Call it deferred callbacks :) > Thanks for the valuable hints! You're welcome. 
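A standalone illustration of that packaged_task trick (nothing library-specific here; it assumes a C++11 <future>, with boost::packaged_task and its future as the pre-C++11 equivalents):

    #include <future>
    #include <iostream>
    #include <stdexcept>

    // Stand-in for the exception-unaware C layer: it only ever sees a plain
    // function pointer and a void* context.
    typedef void (*c_callback_t)(void*);
    static void c_invoke(c_callback_t cb, void* ctx) { cb(ctx); }

    // The trampoline the C layer calls. packaged_task::operator() traps any
    // exception thrown by the wrapped callable and stores it in the shared
    // state instead of letting it unwind through the C frames.
    static void run_task(void* ctx)
    {
        std::packaged_task<void()>* task =
            static_cast<std::packaged_task<void()>*>(ctx);
        (*task)();
    }

    int main()
    {
        std::packaged_task<void()> task([] {
            throw std::runtime_error("raised in the callback");
        });
        std::future<void> result = task.get_future();

        c_invoke(&run_task, &task);   // no exception escapes into the "C" code

        try {
            result.get();             // rethrows the stored exception here
        } catch (std::exception const& e) {
            std::cout << "caught after the C layer: " << e.what() << "\n";
        }
        return 0;
    }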
Niall -- Any opinions or advice expressed here do NOT reflect those of my employer BlackBerry Inc. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ -------------- next part -------------- A non-text attachment was scrubbed... Name: SMime.p7s Type: application/x-pkcs7-signature Size: 6061 bytes Desc: not available URL: From Holger.Joukl at LBBW.de Thu Apr 18 11:13:38 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Thu, 18 Apr 2013 11:13:38 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: , <51636921.21247.1411F5CF@s_sourceforge.nedprod.com>, <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com> Message-ID: Before trying out Niall's wealth of suggestions, just to fix up my own erroneous example: > From: Holger Joukl > A bit of a simplified example version using Boost's partial C++11 exception > support > would then be s.th. like: > > 0 $ cat dispatch_main_exception_map.cpp > // File: dispatch_main_exception_map.cpp > > #include > [...] > // this callback raises an exception, catches it and stores it in the > // global (thread, exception)-map > void guarded_callback_with_exception(cb_arg_t arg) { > std::cout << "--> guarded exception-throwing CPP callback" << arg << > std::endl; > try { > throw std::runtime_error("throwing up"); > } catch (...) { > std::cout << "storing exception in exception map" << std::endl; > pthread_t current_thread = pthread_self(); > exception_map[current_thread] = boost::current_exception(); > exception_map.erase(current_thread); > [...] ^^^^^^^^^^^^^^^^^^^ | This defeats the purpose, of course, and rather needs to be put where the re-throw takes place. Shouldn't make late additions and then overlook the missing output. $ cat dispatch_main_exception_map.cpp // File: dispatch_main_exception_map.cpp #include #include #include #include // header-only: #include // header-only: #include #include "dispatch.h" // the global per-thread exception storage boost::unordered_map exception_map; void callback(cb_arg_t arg) { std::cout << "--> CPP callback" << arg << std::endl; std::cout << "<-- CPP callback" << arg << std::endl; } // this callback raises an exception void callback_with_exception(cb_arg_t arg) { std::cout << "--> exception-throwing CPP callback" << arg << std::endl; throw std::runtime_error("throwing up"); std::cout << "<-- exception-throwing CPP callback" << arg << std::endl; } // this callback raises an exception, catches it and stores it in the // global (thread, exception)-map void guarded_callback_with_exception(cb_arg_t arg) { std::cout << "--> guarded exception-throwing CPP callback" << arg << std::endl; try { throw std::runtime_error("throwing up"); } catch (...) 
{ std::cout << "storing exception in exception map" << std::endl; pthread_t current_thread = pthread_self(); exception_map[current_thread] = boost::current_exception(); } std::cout << "<-- guarded exception-throwing CPP callback" << arg << std::endl; } int main(void) { std::cout << "--> CPP main" << std::endl; cb_arg_t s = "CPP main callback argument"; std::cout << std::endl; invoke(&callback, s); std::cout << std::endl; invoke(&guarded_callback_with_exception, s); pthread_t current_thread = pthread_self(); if (exception_map.find(current_thread) != exception_map.end()) { try { std::cout << "rethrowing exception from exception map" << std::endl; boost::exception_ptr current_exception(exception_map [current_thread]); exception_map.erase(current_thread); boost::rethrow_exception(current_exception); } catch (const std::exception& exc) { std::cout << "caught callback exception: " << exc.what() << std::endl; } } std::cout << std::endl; try { invoke(&callback_with_exception, s); } catch (const std::exception& exc) { std::cout << "caught callback exception: " << exc.what() << std::endl; } std::cout << std::endl; std::cout << "<-- CPP main" << std::endl; return 0; } 0 $ ./a.out --> CPP main --> invoke(136888, CPP main callback argument) --> CPP callbackCPP main callback argument <-- CPP callbackCPP main callback argument <-- invoke(136888, CPP main callback argument) --> invoke(137608, CPP main callback argument) --> guarded exception-throwing CPP callbackCPP main callback argument storing exception in exception map <-- guarded exception-throwing CPP callbackCPP main callback argument <-- invoke(137608, CPP main callback argument) rethrowing exception from exception map caught callback exception: throwing up --> invoke(137112, CPP main callback argument) --> exception-throwing CPP callbackCPP main callback argument caught callback exception: throwing up <-- CPP main Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From beamesleach at gmail.com Thu Apr 18 15:09:32 2013 From: beamesleach at gmail.com (Alex Leach) Date: Thu, 18 Apr 2013 14:09:32 +0100 Subject: [C++-sig] Tips for exposing classes with own memory management model Message-ID: Dear list, Apologies if this has been asked before, but I'm struggling to find anything strictly related.. Background ---------- This library I'm trying to wrap uses its own memory management model, where almost every class derives from an object with loads of memory management-related member functions; it also has a couple of friend classes related to counting and locking. I don't intend to expose any of these memory-related functions or friend classes to Python, but I was thinking that performance could be quite badly affected if both Python and C++ code are performing separate memory management implementations. Optimal memory usage -------------------- I would suppose that memory usage on class instances would probably contain unnecessary bloat too, as I think each exposed class instantiation would allocate memory for a normal PyObject as well as unexposed C++ member functions. Right thing to do ----------------- I initially hoped to use a 'return_internal_reference' CallPolicy on the class_<..> init calls, but I doubt that is The Right Thing To Do. Would it be a better design to define a PyTypeObject for this C++ base class and its friends? If I did, could I still use functions in boost::python? 
I don't think PyTypeObject's are supposed to be derived, so I don't have a clue what extra I'd have to do to make it work with Boost::Python. How should one proceed with this? Links to archived emails or documentation would be great.. If I can conjure up something good enough for Boost, I'd be happy to contribute, if possible. Thanks for your time, and kind regards, Alex From talljimbo at gmail.com Thu Apr 18 16:24:09 2013 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 18 Apr 2013 10:24:09 -0400 Subject: [C++-sig] Tips for exposing classes with own memory management model In-Reply-To: References: Message-ID: On Apr 18, 2013 9:12 AM, "Alex Leach" wrote: > > Dear list, > > Apologies if this has been asked before, but I'm struggling to find anything strictly related.. > > Background > ---------- > > This library I'm trying to wrap uses its own memory management model, where almost every class derives from an object with loads of memory management-related member functions; it also has a couple of friend classes related to counting and locking. I don't intend to expose any of these memory-related functions or friend classes to Python, but I was thinking that performance could be quite badly affected if both Python and C++ code are performing separate memory management implementations. > > Optimal memory usage > -------------------- > > I would suppose that memory usage on class instances would probably contain unnecessary bloat too, as I think each exposed class instantiation would allocate memory for a normal PyObject as well as unexposed C++ member functions. > > Right thing to do > ----------------- > > I initially hoped to use a 'return_internal_reference' CallPolicy on the class_<..> init calls, but I doubt that is The Right Thing To Do. > > Would it be a better design to define a PyTypeObject for this C++ base class and its friends? If I did, could I still use functions in boost::python? I don't think PyTypeObject's are supposed to be derived, so I don't have a clue what extra I'd have to do to make it work with Boost::Python. > > > > How should one proceed with this? Links to archived emails or documentation would be great.. If I can conjure up something good enough for Boost, I'd be happy to contribute, if possible. > If you go with writing your own PyTypeObject, you will indeed have a lot more control, but it will greatly limit how much Boost.Python you can use (no class_, for instance, at least), and you'll need to dive deep into the Boost.Python implementation to learn how and when you can use it. I'd only consider recommending this approach if you wanted to wrap one or two simple classes this way and then use regular Boost.Python for everything else. I think the best solution would probably be to use shared_ptr with a custom deleter, as that gives you control over how your objects are allocated while giving Boost.Python an object it knows how to handle extremely well. One key ingredient of this is that instead of wrapping C++ constructors, you'll want to wrap factory functions that return shared_ptrs. You can even wrap such functions as Python constructors using make_constructor. All that said, my first recommendation would be to try to wrap it (or at least a subset of it) without trying to get the optimal memory performance first, and only fix it if it actually is a performance problem. You might be surprised at where the time ends up going. Jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From beamesleach at gmail.com Thu Apr 18 17:44:42 2013 From: beamesleach at gmail.com (Alex Leach) Date: Thu, 18 Apr 2013 16:44:42 +0100 Subject: [C++-sig] Tips for exposing classes with own memory management model In-Reply-To: References: Message-ID: Thank you for the quick response! On Thu, 18 Apr 2013 15:24:09 +0100, Jim Bosch wrote: > > > If you go with writing your own PyTypeObject, you will indeed have a lot > more control, but it will greatly limit >how much Boost.Python you can > use (no class_, for instance, at least), and you'll need to dive deep > into the >Boost.Python implementation to learn how and when you can use > it. I'd only consider recommending this approach if >you wanted to wrap > one or two simple classes this way and then use regular Boost.Python for > everything else. No class_<..> type would be a problem, as I've already exposed a lot of its derived classes this way.. There's got to be about 100 in total. :-\ > > I think the best solution would probably be to use shared_ptr with a > custom deleter, as that gives you control over >how your objects are > allocated while giving Boost.Python an object it knows how to handle > extremely well. One key >ingredient of this is that instead of wrapping > C++ constructors, you'll want to wrap factory functions that return > >shared_ptrs. You can even wrap such functions as Python constructors > using make_constructor. I've already had to do this once, so I've got some experience with the technique, although can't remember exactly why it was needed. Thank you for a viable option, though! > > All that said, my first recommendation would be to try to wrap it (or at > least a subset of it) without trying to >get the optimal memory > performance first, and only fix it if it actually is a performance > problem. You might be >surprised at where the time ends up going. :) Lol, yes, I can see how attempting to define a new PyTypeObject could become very time-consuming! If I were to go this route though (I probably will), is there a Boost Python registry or something where I can register these new types? I just thought to look into list.hpp, as I figure the boost::python::list would probably make use of the PyListType, which could be thought of as a similar PyTypeObject specialisation to one I might like to create. ...... Now I've done a little more digging, I think I have an idea of how to do it, which I'll detail now.. Any tips, pointers or advice in Boost::Python'ising it would be appreciated, as I have next to no experience with the Python-C API beyond Boost. Resources --------- Worth mentioning that the best (simplest) resources I've found to go by, are: [1] - For the C-Python side: http://docs.python.org/2/extending/newtypes.html [2] - For Boost-Python side: http://www.boost.org/doc/libs/1_53_0/boost/python/list.hpp Steps to expose:- ----------------- a. - Create mytypeobject.h, a raw C-Python extension type, as explained in [1]. a.i. - Create unit-test, by deriving such a type in Python, making a few instances, deleting some and leaving others to the garbage collector. b. Create mytypeobject.hpp, where a type similar to boost::python::list is declared. Register it, with for example:- // From list.hpp:- // namespace converter { template <> struct object_manager_traits : pytype_object_manager_traits<&PyList_Type,list> { }; } What else? 
---------- I very much doubt it will be this simple, as any exposed class_<> would probably still attach a typical PyObject, thereby using PyTypeObject's memory management functions, rather than that of any type object specialisation. Example ------- For example, let's suppose PyListObject has some functionality I need in an exposed class, and I need the garbage collector to use PyListType instead of PyTypeObject, when managing its memory. Is there a way to attach a PyListObject to a class_<> instance, instead of a PyObject? Perhaps an alternative to the class_<> template could be designed to use any arbitrary type, instead of the default PyTypeObject. Something which could be used like this would be cool:- boost::python::type_object_ ("alt_list", "PyListObject wrapped in Boost.", init<>()) .def("append", PyList_Append) /// ... ; How much farther do you think I would need to dig into Boost internals to implement such functionality? Worth it? Cheers, Alex > > Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbosch at astro.princeton.edu Thu Apr 18 18:14:05 2013 From: jbosch at astro.princeton.edu (Jim Bosch) Date: Thu, 18 Apr 2013 12:14:05 -0400 Subject: [C++-sig] Tips for exposing classes with own memory management model In-Reply-To: References: Message-ID: <51701BCD.3040706@astro.princeton.edu> On 04/18/2013 11:44 AM, Alex Leach wrote: > Thank you for the quick response! > > On Thu, 18 Apr 2013 15:24:09 +0100, Jim Bosch wrote: > > If you go with writing your own PyTypeObject, you will indeed have a lot more control, but it will greatly limit how much Boost.Python you can use (no class_, for instance, at least), and you'll need to dive deep into the Boost.Python implementation to learn how and when you can use it. I'd only consider recommending this approach if you wanted to wrap one or two simple classes this way and then use regular Boost.Python for everything else. > > > No class_<..> type would be a problem, as I've already exposed a lot of its derived classes this way.. There's got to be about 100 in total. :-\ > > I think the best solution would probably be to use shared_ptr with a custom deleter, as that gives you control over how your objects are allocated while giving Boost.Python an object it knows how to handle extremely well. One key ingredient of this is that instead of wrapping C++ constructors, you'll want to wrap factory functions that return shared_ptrs. You can even wrap such functions as Python constructors using make_constructor. > > I've already had to do this once, so I've got some experience with the technique, although can't remember exactly why it was needed. Thank you for a viable option, though! > > All that said, my first recommendation would be to try to wrap it (or at least a subset of it) without trying to get the optimal memory performance first, and only fix it if it actually is a performance problem. You might be surprised at where the time ends up going. > > > :) Lol, yes, I can see how attempting to define a new PyTypeObject could become very time-consuming! > > If I were to go this route though (I probably will), is there a Boost Python registry or something where I can register these new types? I just thought to look into list.hpp, as I figure the boost::python::list would probably make use of the PyListType, which could be thought of as a similar PyTypeObject specialisation to one I might like to create. > > You can't really register the types themselves. 
All you can do is register custom converters for them, i.e. You'll need to read the code and comments in the "converter" subdirectories of the Boost.Python source to really learn how to do that, though I think there are some how-tos scattered about the web. That would be enough to allow you to wrap functions and member functions that take these objects using boost::python::make_function and the like, and you could then add those to your type object using C API calls (PyObject_SetAttr) or their Boost.Python equivalents. Even so, you're starting down what seems like a really painful road. > ...... > > > Now I've done a little more digging, I think I have an idea of how to do it, which I'll detail now.. Any tips, pointers or advice in Boost::Python'ising it would be appreciated, as I have next to no experience with the Python-C API beyond Boost. > > Resources > --------- > > Worth mentioning that the best (simplest) resources I've found to go by, are: > > [1] - For the C-Python side: http://docs.python.org/2/extending/newtypes.html > [2] - For Boost-Python side: http://www.boost.org/doc/libs/1_53_0/boost/python/list.hpp > > > Steps to expose:- > ----------------- > > a. - Create mytypeobject.h, a raw C-Python extension type, as explained in [1]. > > a.i. - Create unit-test, by deriving such a type in Python, making a > few instances, deleting some and leaving others to the garbage > collector. > b. Create mytypeobject.hpp, where a type similar to boost::python::list is declared. Register it, with for example:- > > // From list.hpp:- > // > namespace converter > { > template <> > struct object_manager_traits > : pytype_object_manager_traits<&PyList_Type,list> > { > }; > } > Doing this sort of thing will allow you to get a Python object that's an instance of your PyTypeObject. It might be a useful bit of utility code, but it's really not directly what you want, I think, which is to be able to convert between Python instances of your class and C++ instances. > What else? > ---------- > > I very much doubt it will be this simple, as any exposed class_<> would probably still attach a typical PyObject, thereby using PyTypeObject's memory management functions, rather than that of any type object specialisation. > > > Example > ------- > > For example, let's suppose PyListObject has some functionality I need in an exposed class, and I need the garbage collector to use PyListType instead of PyTypeObject, when managing its memory. > > Is there a way to attach a PyListObject to a class_<> instance, instead of a PyObject? Perhaps an alternative to the class_<> template could be designed to use any arbitrary type, instead of the default PyTypeObject. Something which could be used like this would be cool:- > > boost::python::type_object_ > ("alt_list", "PyListObject wrapped in Boost.", > init<>()) > .def("append", PyList_Append) > /// ... > ; > > How much farther do you think I would need to dig into Boost internals to implement such functionality? Worth it? > It's pretty much definitely not worth it, IMO; you'd have to essentially duplicate and rewrite major parts of Boost.Python to support putting a custom PyTypeObject in a class_. The class_ infrastructure relies very heavily not just on its own PyTypeObject hiearchy, but also on a custom metaclass. In fact, now that I think about it, you'll probably need to do some of that even if you don't try to use class_ or something like it. 
I was originally thinking that maybe you could get away with essentially wrapping your own classes just using the Python C API directly (i.e. following the approach in the "extending and embedding" tutorial in the official Python docs), but then use Boost.Python to wrap all of your functions and handle type conversion. But even that seems like it's pretty difficult. So I guess the summary is that I think you may be making a mistake by taking this approach, but I'm sure you'll learn something either way. You've been warned ;-) Jim From beamesleach at gmail.com Thu Apr 18 20:17:28 2013 From: beamesleach at gmail.com (Alex Leach) Date: Thu, 18 Apr 2013 19:17:28 +0100 Subject: [C++-sig] Tips for exposing classes with own memory management model In-Reply-To: <51701BCD.3040706@astro.princeton.edu> References: <51701BCD.3040706@astro.princeton.edu> Message-ID: Hi, Thanks again for the fast response! On Thu, 18 Apr 2013 17:14:05 +0100, Jim Bosch wrote: > > You can't really register the types themselves. All you can do is > register custom converters for them, i.e. You'll need to read the code > and comments in the "converter" subdirectories of the Boost.Python > source to really learn how to do that, though I think there are some > how-tos scattered about the web. Registered converters do sound another good option; I have seen some nice how-tos describing their usage. Thanks again for another decent suggestion. So far, I've found the class_<> template so easy to use that I've hardly needed to touch converters, nor any form of registry. > > That would be enough to allow you to wrap functions and member functions > that take these objects using boost::python::make_function and the like, > and you could then add those to your type object using C API calls > (PyObject_SetAttr) or their Boost.Python equivalents. Even so, you're > starting down what seems like a really painful road. Yes, I think you're right; it does sound a very long and painful road! But when I use a generator to iterate through containers holding millions of instances - which I totally intend to do - I'd like the base classes of these instances to be as lightweight as possible. I don't know much (anything) about the implementations, but I've read that Cython's memoryview's are amazingly efficient at extracting lightweight instances from C/C++ containers. I think numpy arrays might be similar in this regard. From what I've been seen today, I imagine there are specialised PyTypeObject's somewhere in their midsts. > > Doing this sort of thing will allow you to get a Python object that's an > instance of your PyTypeObject. It might be a useful bit of utility > code, but it's really not directly what you want, I think, which is to > be able to convert between Python instances of your class and C++ > instances. Yes, I was hoping this memory-managed base class could be used very similarly to both Python's and Boost's 'object'. Registered converters seem like a must in this regard. > > It's pretty much definitely not worth it, IMO; you'd have to essentially > duplicate and rewrite major parts of Boost.Python to support putting a > custom PyTypeObject in a class_. The class_ infrastructure relies very > heavily not just on its own PyTypeObject hiearchy, but also on a custom > metaclass. I wouldn't want to repeat anything - I'm familiar with the principles D.R.Y. and KISS... Inheriting functionality from class_<> and overriding specific members would be preferable, assuming its constructors doesn't explicitly use PyTypeObject. 
Is that impossible, given the current class_ template inheritance chain? > In fact, now that I think about it, you'll probably need to do some of > that even if you don't try to use class_ or something like it. I was > originally thinking that maybe you could get away with essentially > wrapping your own classes just using the Python C API directly (i.e. > following the approach in the "extending and embedding" tutorial in the > official Python docs), but then use Boost.Python to wrap all of your > functions and handle type conversion. But even that seems like it's > pretty difficult. Yes, that does sound tough! I've struggled to understand the C-Python docs before, and can't waste too much time atm repeating the process.. > > So I guess the summary is that I think you may be making a mistake by > taking this approach, but I'm sure you'll learn something either way. > You've been warned ;-) Thanks for the warnings, and please excuse my naiivety. It was late last night when my real problem made me think of defining a new PyTypeObject, but I should definitely steer clear of any potential pitfalls and time-thieves atm. I should just get it working, first off... Actual problem -------------- My actual problem? (Sorry, it's hard to explain without an inheritance diagram... So I just made one - see the png attached.) Yesterday, I witnessed first hand the diamond of death multiple inheritance problem. I had extended the Base class to the memory-managed class - lets call these BaseObj and ManagedObj, respectively - with an additional 'Print(void)' method, to be exposed in Python as PyBaseObj.__repr__. ADerived (and about 100 other classes) all inherit from ManagedObj, and I thought it could be possible to call 'this->Print()', in e.g. PyADerived's methods. As the libraries I'm using don't use virtual inheritance, I soon learnt this would be impossible, but I figured I could add 'bases' to the 'class_' template constructor, and call it from C++ with: 'bp::object(aderived_obj).attr("__repr__")()'. So I've done that, which seems to work fine, but there do seem to be problems in the registry when calling certain derived class methods. I've got the old error:- Boost.Python.ArgumentError: Python argument types in PyBDerived.GetA(PyBDerived) did not match C++ signature: GetA(PyBDerived{lvalue}) The return type (which isn't mentioned in the above error) is a reference to another class - PyADerived - which also indirectly derives from ManagedObj. I can't figure out exactly why this breaks - I've played a lot with constness, to no avail - but I think the fix might be to expose ManagedObj directly, and not a class derived from it, which is what I've been doing fairly repeatedly up until now... BaseObj, however, would have to be derived, as it has pure-virtual member functions. It would probably have been much more sensible to write the Print method as a free function, and attach it to ManagedObj with def(...), rather than exposing as a derived class member function. I don't actually need any functionality in BaseObj that I can't call from a derived class, so I think I should just skip exposing that and attach the 'Print' method directly to the exposed ManagedObj... Apologies for the rambling and the lengthening emails! Thanks again for taking the time to write out your thoughts. Kind regards, Alex -- Using Opera's mail client: http://www.opera.com/mail/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: thin_wrappers.png Type: image/png Size: 30732 bytes Desc: not available URL: From beamesleach at gmail.com Fri Apr 19 14:26:20 2013 From: beamesleach at gmail.com (Alex Leach) Date: Fri, 19 Apr 2013 13:26:20 +0100 Subject: [C++-sig] Tips for exposing classes with own memory management model In-Reply-To: <51701BCD.3040706@astro.princeton.edu> References: <51701BCD.3040706@astro.princeton.edu> Message-ID: Hi again, Whilst hoping for a reply, I thought I'd add some further insights I've learnt about the current PyTypeObject scheme. On Thu, 18 Apr 2013 17:14:05 +0100, Jim Bosch wrote: > I was originally thinking that maybe you could get away with > essentially wrapping your own classes just using the Python C API > directly (i.e. following the approach in the "extending and embedding" > tutorial in the official Python docs), but then use Boost.Python to wrap > all of your functions and handle type conversion. But even that seems > like it's pretty difficult. Well, this is basically what I did... I started playing around with noddy_NoddyType[1], to see what I could learn about Boost.Python's current way of things. I don't want to break my code just yet, nor fall any more of a victim to the early optimisation problem, but here we go anyway.. Exposing a simple type ---------------------- Wrapping noddy_NoddyObject is about as simple as it gets: ///////////////////////////////////////////////////////////////// // // File: noddy.hpp #include "noddy.h" // (code from [1]) struct NoddyClass : noddy_NoddyObject { NoddyClass(void) : noddy_NoddyObject() {} ~NoddyClass() {} }; // Register converter for classes derived from NoddyClass. // i.e. Tell Boost::Python to use noddy_NoddyType members. // // N.B. For this to compile, noddy_NoddyType must be forward // declared like PyListType. // // i.e. // PyAPI_DATA(PyTypeObject) noddy_NoddyType; // // Then, don't(!) declare noddy_NoddyType as static. namespace boost { namespace python { namespace converter { template <> struct object_manager_traits : pytype_object_manager_traits<&noddy_NoddyType, NoddyClass> { }; } } } ///////////////////////////////////////////////////////////////// // // File: noddy.cpp #include "noddy.hpp" #include #include BOOST_PYTHON_MODULE(noddy) { boost::python::class_ ("Noddy", "Empty Python object using custom PyTypeObject", boost::python::init<>()[ ...CallPolicies... ]) // ... ; } ----------------- Analysis -------- Now, I'm not exactly experienced when it comes to disassembling and the like, but an easy observation to make, from the Python side, is to use sys.getsizeof() :- >>> import sys, noddy >>> nod = noddy.Noddy() >>> print sys.getsizeof(nod) 80 Okay, 80 bytes apparently. And a plain old Python object? >>> obj = object() >>> print sys.getsizeof(obj) 16 If I use only the C-Python API, compiling the code exactly as listed in [1], then:- >>> print sys.getsizeof(nod) 16 Relatively, that's quite a vast size difference (x5), so I thought I'd look for an explanation. I found it in a couple of places: the python wiki [2], and instance.hpp[3]. Sequence objects ---------------- As mentioned on the wiki, amongst other things, a wrapped object is the size of "the extra size required to allow variable-length data in the instance". This is fixed in instance.hpp, where Boost uses the 'PyObject_VAR_HEAD' macro at the top of the instance template. This is instead of the comparatively much smaller 'PyObject_HEAD' macro used in simple objects. 
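To put rough numbers on that, here is my own back-of-the-envelope reading of instance.hpp[3] - the field names are paraphrased rather than copied, and the byte counts assume a 64-bit CPython 2.7 build, so treat it as an approximation only:

#include <Python.h>

struct plain_object_sized               // roughly what a bare object() costs
{
    PyObject_HEAD                       // ob_refcnt + ob_type             ~16 bytes
};

struct approx_boost_python_instance     // roughly what a class_<>-made instance costs
{
    PyObject_VAR_HEAD                   // ob_refcnt + ob_type + ob_size   ~24 bytes
    PyObject* dict;                     // per-instance __dict__            ~8 bytes
    PyObject* weakrefs;                 // weak-reference list              ~8 bytes
    void*     objects;                  // chain of instance_holders        ~8 bytes
    char      storage[32];              // aligned holder storage + padding (size varies)
};                                      // -> lands in the ballpark of the 80 bytes above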
A fairer test then, might be to compare a Python list() to a Boost.Python-wrapped NoddyObject:- >>> print sys.getsizeof(list()) 72 The relative size difference is now much smaller (x10/9), but I can't explain the last lost 8 bytes. There are apparently "zero or more bytes of padding required to ensure that the instanceholder is properly aligned." I don't really understand why this is necessary; isn't the compiler supposed to decide how to align objects? Well, apparently numpy arrays do the same thing[4]. NumPy ----- So, what about numpy arrays? >>> print sys.getsizeof(numpy.array([])) 80 Oh, that's impressive! It's identical. With that in mind, I would believe that the object is as good as it gets for sequence types. But what about simple numpy datatypes? >>> print sys.getsizeof( numpy.float64() ) 24 Ah, now there's a noticeable difference (x10/3). But, I think Boost objects could be made identical in this respect, with some time and dedication... ------------------ It doesn't appear that anyone has ever had an issue with this design, but it seems to me that there is large room for improvement in memory efficiency, when it comes to simple data-types managing only one thing at a time. Notes ----- * I can't see any effect from the object_manager_traits<> call. Doesn't seem to matter if I use it or not, but that's probably because I haven't done anything special with noddy_NoddyType. * I tried some different CallPolicies on the class_<> init method: the default; return_internal_reference; and copy_const_reference. None of these affect object size. * A couple of open source extension modules where I've seen PyTypeObject's used: Numpy API docs [4] and PythonQt source code [5]. ------------------ Now I'm guessing that there are no plans to improve the memory usage of bp::object's. From what Jim has said and from what I've seen in 'class.hpp', 'object_base.hpp' and other header files, it would probably require some dramatic modifications. But could it be as simple as adding a template to instance.hpp? Probably not.. It's not currently in my capacity to rewrite large portions of boost python; I'm far too new to Boost and C++ in general to begin to attempt this really, and I've got a thesis to write atm, too... Back to previous use case ------------------------- From what I've seen in my extension library (and using the classes from my last email), every ManagedObj could be designated as a simple datatype; there are (far fewer) dedicated containers for managing multiple instances of them; these would require using the current instance system, but most of my exposed classes would benefit from a smaller Python instance. If anyone has any thoughts or ideas on how to squeeze extra performance from simple PyTypeObject's, please do share! Kind regards, Alex [1] - http://docs.python.org/2/extending/newtypes.html [2] - http://wiki.python.org/moin/boost.python/InternalDataStructures [3] - http://www.boost.org/doc/libs/1_53_0/boost/python/object/instance.hpp [4] - http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html [5] - http://pythonqt.sourceforge.net/PythonQtClassWrapper_8h_source.html From christian.staudt at kit.edu Mon Apr 22 13:42:28 2013 From: christian.staudt at kit.edu (Christian Staudt) Date: Mon, 22 Apr 2013 13:42:28 +0200 Subject: [C++-sig] building Boost.Python with Message-ID: Hello cplusplus-sig, I would very much like to provide my C++ project with a Python 3 interface. Boost.Python seems like a good option, but I have trouble getting started. 
I believe I need to build Boost.Python from source against my Python3.3 installation, but I am running into problems which I wasn't able to solve with the help of [1]. Is it appropriate on this list to ask for support in the build process? Would anyone be willing to look into the issue? Chris [1]: http://www.boost.org/doc/libs/1_53_0/libs/python/doc/building.html -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4923 bytes Desc: not available URL: From talljimbo at gmail.com Mon Apr 22 15:46:30 2013 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 22 Apr 2013 09:46:30 -0400 Subject: [C++-sig] building Boost.Python with In-Reply-To: References: Message-ID: <51753F36.2060600@gmail.com> On 04/22/2013 07:42 AM, Christian Staudt wrote: > Hello cplusplus-sig, > > I would very much like to provide my C++ project with a Python 3 interface. Boost.Python seems like a good option, but I have trouble getting started. I believe I need to build Boost.Python from source against my Python3.3 installation, but I am running into problems which I wasn't able to solve with the help of [1]. > > Is it appropriate on this list to ask for support in the build process? Would anyone be willing to look into the issue? > I think it's entirely appropriate, and it's probably worthwhile to post more details, but speaking for myself, I know little about either Boost.Build or Python 3.3, and I suspect there may not be a lot of expertise on this list for the combination. I'm not sure where else to look, though - but I see that you've also posted this question on Stack Overflow, and I think that and here are probably the best places I can think of. Jim From Andreas.Kostyrka at kapsch.net Mon Apr 22 21:13:24 2013 From: Andreas.Kostyrka at kapsch.net (Kostyrka (External user) Andreas) Date: Mon, 22 Apr 2013 21:13:24 +0200 Subject: [C++-sig] pygccxml Message-ID: Hi! I just wondered where the authorative repository for pygccxml/pyplusplus are? The sf.net ones seem to lack commits for rather long periods of time. TIA, AndreasThe information contained in this e-mail message is privileged and confidential and is for the exclusive use of the addressee. The person who receives this message and who is not the addressee, one of his employees or an agent entitled to hand it over to the addressee, is informed that he may not use, disclose or reproduce the contents thereof, and is kindly asked to notify the sender and delete the e-mail immediately. From 217534 at gmail.com Wed Apr 24 00:33:47 2013 From: 217534 at gmail.com (=?ISO-8859-1?Q?Jan_M=FCller?=) Date: Wed, 24 Apr 2013 00:33:47 +0200 Subject: [C++-sig] pylab not properly working when embedded into Qt Message-ID: <51770C4B.9040001@gmail.com> Hi! I try to embed python into a Qt C++ application. I don't need any speciality, just run some prepared scripts from time to time. E.g. I have a function which should do some plotting through pylab. When I call this function ("run", see below) from the main (GUI) thread, everything works fine and as expected. But as soon as I spawn a new thread to run this function in it, it stops to work ('hangs' forever?) Strangely, the problem only shows up when I call 'import pylab'. It seems everything else which I tested (all other non-pylab related python code) works fine even when executed from any thread. 
I need to run the python scripts in a different thread than the main thread (which also controls the GUI) to keep the application responsive while running longer scripts. int run() { Py_Initialize(); PyGILState_STATE state = PyGILState_Ensure(); PyRun_SimpleString("import pylab; pylab.plot([1,2]); pylab.show();"); PyGILState_Release(state); Py_Finalize(); } // the function from the Qt GUI-thread (main thread) works fine // (with any python string in PyRun_SimpleString() ) run(); // but this blocks forever at 'import pylab' std::thread t = std::thread(run); t.join(); (I call Py_Initialize only once in the program) Could this be a threading issue? I don't think so, because even when the qt app uses multiple threads, only one of it accesses python variables. It seems to me, pylab does not like it when another thread controls the GUIs!?! Is there a work around/fix? What do I miss? I tried putting Py_Initialize/Py_Finalize also into the main thread, but it didn't made any difference. Many thanks for any hints! Regards, Jan From Andreas.Kostyrka at kapsch.net Wed Apr 24 15:15:11 2013 From: Andreas.Kostyrka at kapsch.net (Kostyrka (External user) Andreas) Date: Wed, 24 Apr 2013 15:15:11 +0200 Subject: [C++-sig] pylab not properly working when embedded into Qt In-Reply-To: <51770C4B.9040001@gmail.com> References: <51770C4B.9040001@gmail.com> Message-ID: <1635364C2ABD024BAF4859C82AB100611C10D57E7F@S060VS20.kapsch.co.at> Just to make sure, pylab.show() interacts with the GUI, right? Most GUIs, including Qt if I remember right, are not multithreaded, meaning you've got one GUI thread running the GUI main event loop, and any manipulation of the GUI does need to happen from there. (One usually has mechanisms to inject events/work into the event loop, and to push out work into other threads, ...) My recommendation would be to run it with three separate PyRun_SimpleString() and see where it blocks. Some locking considerations: * import does internally locking. * If pylab.show() does use Qt GUI functionality, it will almost burn (that can also include failure modes like blocking) for sure. * If pylab.plot() is the long running thing, then consider calling pylab.plot() in a separate thread, and when done pylab.plot() in the GUI/main thread. The usual way to keep a GUI responsive during long work, if work involves the GUI is to make "yield to process GUI events" calls in you inner loop. (the exact naming depends again on the GUI). This way the GUI is handled and stays responsive, while your work can proceed. Regards, Andreas The information contained in this e-mail message is privileged and confidential and is for the exclusive use of the addressee. The person who receives this message and who is not the addressee, one of his employees or an agent entitled to hand it over to the addressee, is informed that he may not use, disclose or reproduce the contents thereof, and is kindly asked to notify the sender and delete the e-mail immediately. 
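To make the "three separate PyRun_SimpleString()" probe concrete, a minimal sketch follows. My assumptions: Py_Initialize() and PyEval_InitThreads() have already been called once, from the main thread, and Py_Finalize() is left until application shutdown rather than being called per invocation.

#include <Python.h>

// Split the single script into three calls so the hang can be located.
int run_probe()
{
    PyGILState_STATE state = PyGILState_Ensure();
    int rc = 0;
    rc |= PyRun_SimpleString("import pylab");        // blocks here -> import / GUI-backend selection
    rc |= PyRun_SimpleString("pylab.plot([1, 2])");  // blocks here -> the actual plotting work
    rc |= PyRun_SimpleString("pylab.show()");        // blocks here -> GUI call outside the GUI thread
    PyGILState_Release(state);
    return rc;                                       // 0 on success, non-zero if any call failed
}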
From Holger.Joukl at LBBW.de Mon Apr 29 16:46:26 2013 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Mon, 29 Apr 2013 16:46:26 +0200 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: <516F4111.26650.4255D8A9@s_sourceforge.nedprod.com> References: , <5169DE10.20163.2D4AB992@s_sourceforge.nedprod.com>, <516F4111.26650.4255D8A9@s_sourceforge.nedprod.com> Message-ID: Hi, > From: "Niall Douglas" > On 17 Apr 2013 at 17:13, Holger Joukl wrote: > > > // the global per-thread exception storage > > boost::unordered_map exception_map; > > You can't assume pthread_t is of integral type. You can a thread_t > (from C11) I believe. You may not care on your supported platforms > though. I see. If it isn't an integral type I suppose I'd need to supply a custom hash implementation to be able to use unordered_map. > > throw std::runtime_error("throwing up"); > > If you're going to use Boost's exception_ptr implementation, you > really ought to throw using Boost's exception throw macro. Otherwise > stuff become unreliable. Right. This should then rather be throw boost::enable_current_exception(std::runtime_error("throwing up")); Looks like unfortunately Boost.Python does not use this itself so a packaged_task/future will throw unknown_exception for the deferred invocation of a boost::python::error_already_set-throwing call. I could work around this in my case by wrapping the original call in try-catch and re-raise like try { this->get_override("onMsg")(bp::ptr(listener), boost::ref(msg)); } catch (const bp::error_already_set& e) { // enable support for cloning the current exception, to avoid having // a packaged_task/future throw an unknown_exception boost::enable_current_exception(bp::error_already_set()); } > > [...] > > If you're just going to do this, I'd suggest you save yourself some > hassle and look into packaged_task which is a std::function combined > with a std::future. It takes care of the exception trapping and > management for you. Do bear in mind there is absolutely no reason you > can't use a future within a single thread to traverse some state over > third party binary blob code, indeed I do this in my own code at > times e.g. if Qt, which doesn't like exceptions, stands between my > exception throwing code at the end of a Qt signal and my code which > called Qt. > > > [...] > > Why not using thread local storage? Using thread local storage of a queue of deferred futures might then look s.th. 
like this: $ cat dispatch_main_packaged_task_tls_queue.cpp // File: dispatch_main_packaged_task_tls_queue.cpp #include #include #include #include #include // header-only #include // header-only #include // header-only #include #include "dispatch.h" // global thread local store of void-returning task results // (per-thread queue of futures) class CURRENT_THREAD_STATE { public: typedef boost::unique_future unique_future_t; typedef boost::shared_ptr unique_future_ptr_t; typedef boost::lockfree::spsc_queue< unique_future_ptr_t, boost::lockfree::capacity<10> > future_queue_t; typedef boost::thread_specific_ptr< future_queue_t > tls_t; private: // the internal per-thread futures queue storage static tls_t _current_thread_tls; public: // package a task and defer it // (to a thread-local queue of of void-returning futures) template static void deferred(Fn func, A0 const & arg1) { // need a nullary function object as an arg to packaged_task so // use boost::bind boost::packaged_task pt(boost::bind(func, arg1)); unique_future_t fut = pt.get_future(); future_queue_t * queue_ptr = _current_thread_tls.get(); if (queue_ptr == NULL) { queue_ptr = new future_queue_t; _current_thread_tls.reset(queue_ptr); } unique_future_ptr_t fut_ptr(new unique_future_t(boost::move(fut))); queue_ptr->push(fut_ptr); pt(); } // retrieve the deferred task results static void get_deferred(void) { future_queue_t * queue_ptr = _current_thread_tls.get(); if (queue_ptr != NULL) { unique_future_ptr_t fut_ptr; if (queue_ptr->pop(fut_ptr)) { if (fut_ptr) { fut_ptr->get(); } } } } }; // don't forget the definition of the static member variable CURRENT_THREAD_STATE::tls_t CURRENT_THREAD_STATE::_current_thread_tls; void callback(cb_arg_t arg) { std::cout << "--> CPP callback arg=" << arg << std::endl; std::cout << "<-- CPP callback arg=" << arg << std::endl; } // this callback raises an exception void callback_with_exception(cb_arg_t arg) { std::cout << "--> exception-throwing CPP callback arg=" << arg << std::endl; std::string errormsg("throwing up "); errormsg += arg; std::cout << "errormsg=" << errormsg << std::endl; throw std::runtime_error(errormsg); std::cout << "<-- exception-throwing CPP callback arg=" << arg << std::endl; } // this calls the exception raising callback but has packaged_task manage // the result and possible exceptions void guarded_callback_with_exception(cb_arg_t arg) { std::cout << "--> guarded exception-throwing CPP callback arg=" << arg << std::endl; CURRENT_THREAD_STATE::deferred(&callback_with_exception, arg); std::cout << "<-- guarded exception-throwing CPP callback arg=" << arg << std::endl; } int main(void) { std::cout << "--> CPP main" << std::endl; cb_arg_t s = "CPP main callback argument"; std::cout << std::endl << "invoking callback" << std::endl; invoke(&callback, s); std::cout << std::endl << "invoking guarded callback" << std::endl; invoke(&guarded_callback_with_exception, s); try { std::cout << "rethrowing exception from global tls" << std::endl; CURRENT_THREAD_STATE::get_deferred(); } catch (const std::exception& exc) { std::cout << "caught rethrown callback exception: " << exc.what() << std::endl; } for (int i=0; i<3; ++i) { std::string cb_string_arg = std::string("CPP main cb arg ") + boost::lexical_cast(i); std::cout << std::endl << "invoking guarded callback (run " << i << ")" << std::endl; invoke(&guarded_callback_with_exception, cb_string_arg.c_str()); } for (int i=0; i<3; ++i) { try { std::cout << "rethrowing exception from global tls (run " << i << ")" << std::endl; 
CURRENT_THREAD_STATE::get_deferred(); } catch (const std::exception& exc) { std::cout << "caught rethrown callback exception: " << exc.what () << std::endl; } } // this is expected to segfault iff the underlying C lib isn't compiled with // exception support std::cout << std::endl << "invoking exception-throwing callback" << std::endl; std::cout << "(will segfault iff c lib is not exception-enabled)" << std::endl; try { invoke(&callback_with_exception, s); } catch (const std::exception& exc) { std::cout << "caught callback exception: " << exc.what() << std::endl; } std::cout << std::endl; std::cout << "<-- CPP main" << std::endl; return 0; } 0 $ BOOST_VERSION=1.53.0; BOOST_DIR=boost_${BOOST_VERSION//./_}; echo $BOOST_DIR; /apps/local/gcc/4.6.1/bin/g++ -pthreads -I. -L. -R. -R /apps/prod/gcc/4.6.1/lib/ -R /var/tmp/BUILD/gcc/$BOOST_DIR/stage/gcc-4.6.1/py2.7/boost/$BOOST_VERSION/lib -L /var/tmp/BUILD/gcc/$BOOST_DIR/stage/gcc-4.6.1/py2.7/boost/$BOOST_VERSION/lib -I /var/tmp/BUILD/gcc/$BOOST_DIR/ -ldispatch -lboost_thread -lboost_system -lboost_atomic dispatch_main_packaged_task_tls_queue.cpp && ./a.out boost_1_53_0 --> CPP main invoking callback --> invoke(298752, CPP main callback argument) --> CPP callback arg=CPP main callback argument <-- CPP callback arg=CPP main callback argument <-- invoke(298752, CPP main callback argument) invoking guarded callback --> invoke(299312, CPP main callback argument) --> guarded exception-throwing CPP callback arg=CPP main callback argument --> exception-throwing CPP callback arg=CPP main callback argument errormsg=throwing up CPP main callback argument <-- guarded exception-throwing CPP callback arg=CPP main callback argument <-- invoke(299312, CPP main callback argument) rethrowing exception from global tls caught rethrown callback exception: throwing up CPP main callback argument invoking guarded callback (run 0) --> invoke(299312, CPP main cb arg 0) --> guarded exception-throwing CPP callback arg=CPP main cb arg 0 --> exception-throwing CPP callback arg=CPP main cb arg 0 errormsg=throwing up CPP main cb arg 0 <-- guarded exception-throwing CPP callback arg=CPP main cb arg 0 <-- invoke(299312, CPP main cb arg 0) invoking guarded callback (run 1) --> invoke(299312, CPP main cb arg 1) --> guarded exception-throwing CPP callback arg=CPP main cb arg 1 --> exception-throwing CPP callback arg=CPP main cb arg 1 errormsg=throwing up CPP main cb arg 1 <-- guarded exception-throwing CPP callback arg=CPP main cb arg 1 <-- invoke(299312, CPP main cb arg 1) invoking guarded callback (run 2) --> invoke(299312, CPP main cb arg 2) --> guarded exception-throwing CPP callback arg=CPP main cb arg 2 --> exception-throwing CPP callback arg=CPP main cb arg 2 errormsg=throwing up CPP main cb arg 2 <-- guarded exception-throwing CPP callback arg=CPP main cb arg 2 <-- invoke(299312, CPP main cb arg 2) rethrowing exception from global tls (run 0) caught rethrown callback exception: throwing up CPP main cb arg 0 rethrowing exception from global tls (run 1) caught rethrown callback exception: throwing up CPP main cb arg 1 rethrowing exception from global tls (run 2) caught rethrown callback exception: throwing up CPP main cb arg 2 invoking exception-throwing callback (will segfault iff c lib is not exception-enabled) --> invoke(298904, CPP main callback argument) --> exception-throwing CPP callback arg=CPP main callback argument errormsg=throwing up CPP main callback argument terminate called after throwing an instance of 'std::runtime_error' what(): throwing up CPP main callback 
argument Abort (core dumped) (core dump expected for the unguarded case as I did not compile the C lib with exception support) Notes: - not generalized, supports only 1-argument void-returning callables - Boost doesn't list Boost.Atomic as a non-header-only lib but you need to compile & link with it (for lockfree) - need -lboost_system when using -lboost_thread - could also use a per-thread future instead of a queue of futures if there's no possibility of or no interest in a subsequent result overwriting a previous result > Keep a count of nesting levels, so if a callback calls a callback > which calls a callback etc. Ok so no out-of-the-box functionality. > Another possibly useful idea is to have the C libs dispatcher > dispatch packaged_task's into a per-thread lock free queue which you > then dispatch on control return when the C library isn't in the way > anymore. Call it deferred callbacks :) Interesting. So I'd basically transfer the actual callback invocation to exception-capable-land. I feel a bit out of my depths here now wrt to all that boost magic but I've definitely learned a bunch :) Still quite unsure if I shouldn't just keep it really simple and go with the original idea of catching boost::python::error_already_set and re-raising if (PyErr_Occurred()) (and not add a dependency to Boost.Thread, Boost.System and possibly Boost.Atomic)... Thanks again for this wealth of useful hints and suggestions, Holger Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From beamesleach at gmail.com Mon Apr 29 18:26:38 2013 From: beamesleach at gmail.com (Alex Leach) Date: Mon, 29 Apr 2013 17:26:38 +0100 Subject: [C++-sig] Custom PyTypeObjects In-Reply-To: <51744852.4080407@astro.princeton.edu> References: <51701BCD.3040706@astro.princeton.edu> <51744852.4080407@astro.princeton.edu> Message-ID: Dear list, A thread I started was originally meant to discuss how to use C++ memory management methods (operator new, delete etc.) with a boost python instance. Rather than dwelling on the concern, I've been (successfully) wrapping other code since, but have now arrived at a separate conundrum, which I think could be addressed by the same conceptual solution. This time I've found a working attempt at a solution here on this list[1], and was hoping that more generic, template-ised versions could be introduced into Boost.. [1] - http://mail.python.org/pipermail/cplusplus-sig/2007-August/012438.html The message has turned into a bit of an essay, so I'll summarise what I've written here:- * Python objects and protocols - they're not all the same. * Python Buffers - An example and attempt at exposing one. * C++ IO streams - exposing buffered object interfaces to Python. * Customising PyTypeObjects already used in Boost Python. * "There should be one-- and preferably only one --obvious way to do it." * Summary What's in an Object? -------------------- What I think it boils down to is a lack of support for the different type objects defined in the Python C-API Abstract[2] and Concrete[3] Object Layers. The problem in [1] was related to PyBufferProcs and PyBufferObjects. How can an object representing a buffer be properly exposed to Python? The PyBuffer* structs were designed with this in mind, but are now deprecated in favour of memory view objects [4]. Either way, a `grep` of the Boost Python header and source files show no sign of either API being made of use. 
[2] - http://docs.python.org/2/c-api/abstract.html [3] - http://docs.python.org/2/c-api/concrete.html [4] - http://docs.python.org/2/c-api/buffer.html#memoryview-objects A buffered solution ------------------- The solution from [1] makes it about as simple as possible for the client / Python registration code to expose a return type that is managed by a PyBuffer_Type-like PyTypeObject. A custom to-python converter is registered and return_value_policy used. However, this is still fairly cumbersome compared to current Boost Python usage, as the C-Python API needs to be used directly and a custom PyTypeObject defined, for any return-type that should use a different type protocol. The solution also goes nowhere to providing the functionality a Python buffer expects, but instead just demonstrates how one might use a new PyTypeObject. A standards-compliant solution ------------------------------ With the C++ standard library in mind, I was wondering what boost python might be able to do with IO streams. I have a family of C++ classes that use iostream-like operators to serialise objects into either XML, plain text, or binary formats. Providing this functionality via a buffered object seems to be the appropriate solution... Using boost python to expose such an interface though, looks non-trivial, to say the least. A boost-friendly solution might be to recognise boost::asio::buffer[6] objects, perhaps using boost::mpl statements in the to-python converter registrations. I'm still trying to get to grips with standard library templates personally, so would prefer if classes derived from ios_base could automatically have their '<<' and '>>' operators exposed at compile time, depending on whether they are read-only or read-write. An exposed seek function would also be useful, when one is available in the C++ type. Specialised PyTypeObjects ------------------------- Discussing each of the different object types is too large a subject to describe in full here, but would it not be sensible for Boost Python to make it easier to expose other PyTypeObjects? The NumPy C-API exposes 8 public and 4 private type specialisations[5], for representing clearly different types of data. These are essentially PyTypeObjects conforming to the API defined in the C-Python object layers documentation[2,3]. With quite a lot more code, Boost Python could potentially provide capability to specialise the type objects for a number of pre-defined base types, by providing custom HolderGenerators[6] for each type specialisation. These HolderGenerators can be referred to by creating corresponding `return_value_policy`s. This is what the solution from [1] does, by defining both a new HolderGenerator and a corresponding return_value_policy. This concept is not problem-free, however. In my case, I'd like to tie a C++ class's streaming interface directly to the PyTypeObject. For Python 2.x this would mean populating a new PyTypeObject's tp_as_buffer attribute to a PyBufferProcs struct. The code from [1] could be modified to do this, but it would take quite a lot more work. (It has..) For Python2.7 and above, there are of course the new buffer and memoryview APIs, but I haven't really read up on or done anything with them yet... 
[5] - http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html
[6] - http://www.boost.org/doc/libs/1_53_0/libs/python/doc/v2/HolderGenerator.html

A generalised solution
----------------------

To answer my question from the previous thread I started here, on how to use a custom PyTypeObject on an exposed class_<> hierarchy, I think the way to do this is to use `pytype_object_manager_traits`, as is done in str.hpp, list.hpp, etc. e.g.:-

namespace converter
{
    template <>
    struct object_manager_traits<str>
        : pytype_object_manager_traits<&PyUnicode_Type, str>
    {
    };
}

This seems to be the best way to register a PyTypeObject to a C++ class with Boost. But it does require a tremendous amount of work when wanting to use PyTypeObjects that should use STL functionality.

C++ IO streams
--------------

Mapping C++ STL functions to PyTypeObject attributes[7] does not appear to have been done at all in Boost Python, in so far as I can tell. Of course there are the standard objects, bp::str, list, etc., which use core Python's respective PyTypeObjects as instance managers, like above, but it doesn't seem like there is a robust way to replace a PyTypeObject's function pointers with STL-conforming implementations. I suppose it is possible to edit the PyTypeObject after getting it with `object.get_type()`, but that seems a bit of an inefficient, run-time hack.

I was playing around with the code from [1] over the weekend, and have started to map the C++ iostream template functions to a PyTypeObject's `tp_as_buffer` member struct, to expose buffered access to C++ formatted stream methods through a PyBufferProcs struct[8]. Admittedly, this was a bit of a pointless exercise, as the buffer protocol has been removed in Python 3, but I am currently developing with Python 2.7 and wanted to try out an initial, working implementation where a custom PyTypeObject is used.

For std::i/ostream, there is some production code available that can perform Python file-like object conversions. In particular, the two subsequent replies to this message[9] here on this list mention open-source libraries that can already do this. And from the code listed in [1], I've made available yet another (partially complete) implementation[10].

[7] - http://docs.python.org/2/c-api/typeobj.html#
[8] - http://docs.python.org/2/c-api/typeobj.html#PyBufferProcs
[9] - http://mail.python.org/pipermail/cplusplus-sig/2010-March/015411.html
[10] - https://github.com/alexleach/bp_helpers

Moving forward
--------------

Assuming Boost Python follows the Zen of Python, there should be one - and only one - obvious way to achieve what I want: currently, that is to expose a future-proof, STL-compliant iostream interface through Boost Python. I don't think any of the above implementations are compatible with Python 3, since I don't think any of them use the new Python buffer or memoryview APIs, but I'd like to make the switch soon myself. I'm sure adding buffer support to Boost Python would be valuable for a number of users. From a backwards-compatibility perspective, it would probably be good to have both the old and the new buffer APIs included in Boost Python, selected with a preprocessor macro on the Python version. Memoryviews are a relatively fancy and new feature, but buffers have been around for ages, so it would be good if they were supported for basically all versions of Python. Ideally, though, we would also have memoryview functionality in v2.7+, too.
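(As a point of comparison for the iostream discussion above: the low-tech fallback that needs no custom PyTypeObject at all is to route operator<< through a free function and attach it as __str__. "Serialisable" below is a made-up class of mine, not something from the libraries referenced above.)

#include <boost/python.hpp>
#include <ostream>
#include <sstream>
#include <string>

struct Serialisable { int x; };

std::ostream& operator<<(std::ostream& os, Serialisable const& s)
{
    return os << "Serialisable(" << s.x << ")";
}

// Free function doing the formatting; attached to the class below.
std::string serialisable_str(Serialisable const& s)
{
    std::ostringstream oss;
    oss << s;
    return oss.str();
}

BOOST_PYTHON_MODULE(streaming_sketch)
{
    using namespace boost::python;
    class_<Serialisable>("Serialisable")
        .def_readwrite("x", &Serialisable::x)
        .def("__str__", &serialisable_str);   // str(obj) / print obj now uses operator<<
}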
One way to rule them all ------------------------ Now, I've discovered a number of ways to write to_python converters, and am not sure what is the "one obvious way" to define a new PyTypeObject's API. I would be grateful for feedback on which should be the preferred way to expose a class with a custom PyTypeObject. Here are the methods I've looked into:- 1. indexing_suite Perhaps my favourite way I found to expose a to_python converter, was with boost python's indexing_suite, as I did for std::list[11] (also attached to a msg on this list, earlier this month). From the client's perspective, all that needs to be done is to instantiate a template. For examples, see the C++ test code[12]. However, I haven't really looked into how the converter is registered internally, as the base classes take care of that. Either way, the indexing suite functions are only attached to the PyObject, not its respective PyTypeObject. [11] - https://github.com/alexleach/bp_helpers/blob/master/include/boost_helpers/make_list.hpp [12] - https://github.com/alexleach/bp_helpers/blob/master/src/tests/test_make_list.cpp 2. class_<..> The code for the class_ template, its bases and typedefs is really quite advanced, but it can't be said that it is inflexible. Still, I haven't found an "obvious way" to replace a class's object manager. I get a runtime warning if a to_python_converter is registered as well as a class. bp::init has an operator[] method, which can be used to specify a CallPolicy, but I haven't managed to get that to change an instance's base type. The registry is probably the way to do this, but for me at least, the registry is very opaque, so I haven't found a good way to edit or replace a PyTypeObject, either during or after an exposed class_<> has been initialised. 3. to_python_converter This is how the solution in [1] enables to-python (PyObject) conversion, and is also how I've been doing it in the testing code I modified from there[13-15]. A corresponding Conversion class seems necessary to write, for each new type of PyTypeObject. e.g. as done in return_opaque_pointer.hpp and opaque_pointer_converter.hpp. [13] - https://github.com/alexleach/bp_helpers/blob/master/src/tests/test_buffer_object.cpp [14] - https://github.com/alexleach/bp_helpers/blob/master/include/boost_helpers/return_buffer_object.hpp [15] - https://github.com/alexleach/bp_helpers/blob/master/include/boost_helpers/buffer_pointer_converter.hpp 4. Return value policies and HolderGenerators In functions and methods where a CallPolicy can be specified, as I've said already, a custom CallPolicy can be used to refer to a custom HolderGenerator. These specify which PyTypeObject is used for managing the python-converted object, but can a custom return value policy and holder be specified with the class_<> template? I sure would like to find a way... However, alone, this doesn't seem to put a type converter in the registry. I thought that the MakeHolder::execute function should only need to be called once, but my code is currently calling it every time I want a new Python instance. So, I think I must not be registering the class' type id properly[14]... Then there are the lvalue and rvalue Python converters, which admittedly I don't know much about. There's also some other concepts I haven't mentioned above, like install_holders[16], for example, and whatever is done when you add shared_ptr to the class_ template's arguments. 
[16] - http://www.boost.org/doc/libs/1_53_0/libs/python/doc/v2/instance_holder.html#instance_holder-spec Summary ------- Should the ability to expose C++ istreams and ostreams be added to Boost Python? How should this be done? I thought that having a chainable return_value_policy for both istreams and ostreams would be great. That way they could be both used in conjunction for an iostream, with the functionality just incrementally added to a base PyTypeObject. But I don't see how one could attach additional PyObject methods, like done by a class_ template's def methods. What about memoryviews? If someone was to go ahead and write converters for Python memoryviews, are there any C++ standards-compliant classes that could be accommodated? i.e. Are there any classes defined in the C++ standard for multidimensional, buffered memory access? Which, if any would be an appropriate match to a Python memoryview? I guess that nested std::vectors and lists might be good candidates, but I stand to be corrected. Apologies for the length this became and thank you for sticking with me this far. Any advice, suggestions, pointers to code or documentation I've probably overlooked or neglected, or even criticism would be appreciated. Further discussion on how best to improve Boost Python as it is would be great! I do like to contribute to open source communities when possible, but I am strained for time... Thanks again! Kind regards, Alex From s_sourceforge at nedprod.com Tue Apr 30 03:29:00 2013 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Mon, 29 Apr 2013 21:29:00 -0400 Subject: [C++-sig] question on boost.python exception mechanisms In-Reply-To: References: , <516F4111.26650.4255D8A9@s_sourceforge.nedprod.com>, Message-ID: <517F1E5C.24777.2664A368@s_sourceforge.nedprod.com> > I feel a bit out of my depths here now wrt to all that boost magic > but I've definitely learned a bunch :) Much of the "boost magic" we talked about is now plain C++11. It's very worth while to read "The C++ standard library" by Josuttis from cover to cover (it's about 1000 pages, get the C++11 edition). The hours spent reading it are very quickly repaid in hundreds of hours saved for the rest of your career. A tedious read, but worth completing. Niall -- Any opinions or advice expressed here do NOT reflect those of my employer BlackBerry Inc. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ -------------- next part -------------- A non-text attachment was scrubbed... Name: SMime.p7s Type: application/x-pkcs7-signature Size: 6061 bytes Desc: not available URL: From pythonstuff at handi.dyndns.org Tue Apr 30 23:03:56 2013 From: pythonstuff at handi.dyndns.org (The Novice Coder) Date: Tue, 30 Apr 2013 15:03:56 -0600 Subject: [C++-sig] Embedding Python Question Message-ID: <13629d2c421882f360addea8c70fba08@handi.dyndns.org> Hi, hi! New user, and probably making some silly mistakes, but a few hours with search engines have not provided an answer... Problem: I'm trying to embed Python routines in a DLL and it's not working. 
Platform and versions:
Python 2.7.3 (I have been unable to get boost to compile/link to python 3)
Boost 1.54 (svn 84094)
Windows 7 (x32) - MSVC 9

Detailed problem:
When calling Py_Initialize() an assertion fails and the program crashes in object.c line 65 -
assert((op->_ob_prev == NULL) == (op->_ob_next == NULL));

Test Code:
Py_Initialize(); /* Failure point */
m_main_module = boost::python::import("__main__");
m_main_namespace = m_main_module.attr("__dict__");
boost::python::exec("result = 5 ** 2", m_main_namespace);
int Tester = boost::python::extract<int>(m_main_namespace["result"]);

Values:
+ op            0x05be172c __Py_NoneStruct {_ob_next=0x00000011 _ob_prev=0x00000000 ob_refcnt=17 ...}  _object *
+ op->_ob_next  0x00000011 {_ob_next=??? _ob_prev=??? ob_refcnt=??? ...}  _object *
+ op->_ob_prev  0x00000000 {_ob_next=??? _ob_prev=??? ob_refcnt=??? ...}  _object *

Partial Call Stack:
msvcr90d.dll!__wassert() + 0xb64 bytes
python27_d.dll!_Py_AddToAllObjects(_object * op=0x05be172c, int force=0) Line 65 + 0x2d bytes  C
python27_d.dll!_PyBuiltin_Init() Line 2680 + 0x2d bytes  C
python27_d.dll!Py_InitializeEx(int install_sigs=1) Line 214 + 0x5 bytes  C
python27_d.dll!Py_Initialize() Line 377 + 0x7 bytes  C
pySim_plugd.dll!PySimProcessor::PySimProcessor() Line 73 + 0x8 bytes  C++

Attempted solutions:
Tried adding call to Py_Initialize() from main() - Call succeeded, but no effect on module.
Tried moving call to Py_Initialize() from main() - Python code fails to execute.
Tried duplicating same code in different application - succeeded. =(

Thanks for reading!
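P.S. For reference, the shape I'm trying to get to - initialise once per process, guarded, and no Py_Finalize() in the per-call path. Whether that has any bearing on the assertion above, I don't know.

#include <boost/python.hpp>

void run_snippet()
{
    if (!Py_IsInitialized())
        Py_Initialize();                   // guard so a plugin constructor can't re-init

    namespace bp = boost::python;
    bp::object main_module    = bp::import("__main__");
    bp::object main_namespace = main_module.attr("__dict__");
    bp::exec("result = 5 ** 2", main_namespace);
    int result = bp::extract<int>(main_namespace["result"]);
    (void)result;                          // 25
    // Py_Finalize() deliberately left out here - once, at shutdown, if at all.
}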