From vkruglikov at numenta.com  Sat Aug 6 05:43:25 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Sat, 6 Aug 2016 09:43:25 +0000
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions
Message-ID:

I added the ability to create a manylinux wheel for the nupic.core project
(github.com/numenta/nupic.core). However, in testing, I found that some
exception-handling logic that used to work now fails when the wheel runs on
Ubuntu 16.04, which uses the gcc/g++ 5.4.0 toolchain. One example in
particular is the failure to catch the exception std::ios::failure.

This makes it impractical to create manylinux wheels from legacy C++ code
that, as in the case of nupic.core, also pulls in large amounts of
third-party code. It's impractical because the incompatibilities may not be
limited to this one exception class, and the actual failures are
unreasonably difficult to simulate in testing, making it difficult, if not
impossible, to find every occurrence in one's own code, let alone in the
third-party code it links with.

It turns out that, among other things, "the C++11 standard mandated an ABI
change for std::ios_base::failure, by giving it a std::system_error base
class that wasn't present in C++03" (see
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66145). The 4.8.2 compiler
toolchain on the manylinux docker image supports C++11 in spirit, but
doesn't implement the new ABI (its std::ios_base::failure is derived from
std::exception); however, the 5.4.0 toolchain on Ubuntu 16.04 implements
the new ABI (derives from std::system_error, etc.), and its libstdc++ is
compiled with the new toolchain. So the std::ios_base::failure type
compiled into the manylinux wheel doesn't match the std::ios_base::failure
type that's thrown by the libstdc++ library on Ubuntu 16.04.

The following simple app demonstrates the issue:

```
#include <iostream>
#include <fstream>

int main() {
  try {
    std::ifstream input("notafile");
    input.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    input.get();
  } catch(const std::ios::failure &e) {
    std::cout << "caught a std::ios::failure: what=" << e.what() << '\n';
  }
}
```

First, we build and run it on the manylinux docker image:

$ docker run -it -v /home/vitaly/localbuilds/ios-base-failure:/ios-base-failure quay.io/pypa/manylinux1_x86_64 bash
[root at 39eecd65e630 ios-base-failure]# g++ -std=c++11 test.cpp
[root at 39eecd65e630 ios-base-failure]# ./a.out
caught a std::ios::failure: what=basic_ios::clear

As you can see, running it in the manylinux container works correctly: it
catches std::ios::failure and prints the expected message to stdout.
However, when we run the same executable on Ubuntu 16.04, we get a very
different result; the exception is not caught:

g++ (Ubuntu 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609
('Ubuntu', '16.04', 'xenial')
vitaly at ubuntuvm:~/localbuilds/ios-base-failure$ ./a.out
terminate called after throwing an instance of 'std::ios_base::failure[abi:cxx11]'
  what():  basic_ios::clear: iostream error
Aborted (core dumped)

Vitaly

From vkruglikov at numenta.com  Mon Aug 8 17:09:57 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Mon, 8 Aug 2016 21:09:57 +0000
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions
Message-ID:

Anyone have a clue how to work around this ABI issue in manylinux wheels?
Many thanks!
From: Wheel-builders on behalf of Vitaly Kruglikov
Date: Saturday, August 6, 2016 at 2:43 AM
To: "wheel-builders at python.org"
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions

[original message of August 6 quoted in full -- trimmed; see above]
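A minimal sketch of one narrow mitigation, for code you control:
std::exception is ABI-stable across the GCC 4.x/5.x transition, so adding a
fallback handler for it catches both the old std::ios_base::failure and the
new [abi:cxx11] variant thrown by a newer libstdc++. This does not fix the
underlying mismatch, and it does not help with third-party code that catches
the iostream exception internally.

```
#include <fstream>
#include <iostream>
#include <exception>
#include <typeinfo>

int main() {
  try {
    std::ifstream input("notafile");
    input.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    input.get();
  } catch (const std::ios_base::failure &e) {
    // Hit when the thrown failure uses the same ABI this binary was built against.
    std::cout << "caught std::ios_base::failure: what=" << e.what() << '\n';
  } catch (const std::exception &e) {
    // Fallback: also hit when the runtime libstdc++ throws the other ABI's
    // failure type, since both variants derive from the ABI-stable std::exception.
    std::cout << "caught std::exception (" << typeid(e).name() << "): what="
              << e.what() << '\n';
  }
}
```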
From gqmelo at gmail.com  Mon Aug 8 20:32:36 2016
From: gqmelo at gmail.com (Guilherme Melo)
Date: Tue, 09 Aug 2016 00:32:36 +0000
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions
In-Reply-To:
References:
Message-ID:

Hi Vitaly,

I was curious about your problem and I've been able to reproduce it by
compiling with gcc 4.8.2 on CentOS 5, and with gcc 4.8.5 and gcc 4.9.3 on
Ubuntu. It is not even necessary to pass -std=c++11; the bug happens either
way.

> this one exception class and the actual failures are unreasonably
> difficult to simulate in testing, making it difficult, if not impossible,
> to find all occurrences in one's own code, not to mention 3rd party code
> that it links with.

I don't think there are that many incompatibilities. The libstdc++ shipped
by gcc 5.1.0+ should be backward compatible, but as you found out this case
is indeed a bug.

As an example, at the company I work for we have hundreds of conda packages
compiled with gcc 4.8.5. We recently switched to gcc 5.4.0 for our
applications and did not have to rebuild any of the dependencies (actually
we are still compiling the dependencies with 4.8.5).

But regarding your problem, I don't see much that can be done until they
fix this:

- Maybe statically link libstdc++?
- Maybe manylinux1 should provide a way to ship libstdc++ with wheels for
  such cases?

From vkruglikov at numenta.com  Tue Aug 9 01:51:03 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Tue, 9 Aug 2016 05:51:03 +0000
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions
In-Reply-To:
References:
Message-ID:

I am experimenting with static libstdc++, but am still having the same
issue of the std::ios::failure exception not getting caught. It may be
because the references to libstdc++ are getting linked in as externals
that, at runtime, get resolved against a dynamically-loaded libstdc++. I
will try version scripts next, to remove everything from the exported
interface except the single python extension initialization function.

Thanks,
Vitaly

From: Guilherme Melo
Date: Monday, August 8, 2016 at 5:32 PM
To: Vitaly Kruglikov, "wheel-builders at python.org"
Subject: Re: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions

[Guilherme's message of August 8 quoted in full -- trimmed; see above]
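For anyone following along, here is a rough sketch of the combination
described above -- statically linking libstdc++/libgcc and using a linker
version script so that nothing but the extension's init symbol is exported.
The file and symbol names below are illustrative, not nupic.core's actual
ones.

```
# Sketch only: file and symbol names are made up for illustration.
# 1. A version script exporting only the Python 2 module init symbol:
cat > extension.map <<'EOF'
{
  global: init_engine_internal;
  local:  *;
};
EOF

# 2. Link the extension with libstdc++/libgcc statically and hide everything else:
g++ -shared -o _engine_internal.so *.o \
    -static-libstdc++ -static-libgcc \
    -Wl,--version-script=extension.map \
    -Wl,--exclude-libs,ALL
```

Without the version script (or -Wl,--exclude-libs,ALL), the statically
linked libstdc++ symbols remain in the dynamic symbol table, where the
runtime linker can still preempt them with the system libstdc++'s
definitions -- which appears to be the symbol-preemption effect reported
later in this thread.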
From pombredanne at nexb.com  Fri Aug 12 08:06:02 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 14:06:02 +0200
Subject: [Wheel-builders] Manylinux, OSX wheel building framework for travis-ci
Message-ID:

A while back, Matthew Brett wrote:
> I've been experimenting with a submodule to support building and
> testing OSX and manylinux wheels from the same repository, using
> travis-ci.
> The README explains the idea:
> https://github.com/matthew-brett/multibuild#utilities-for-building-on-travis-ci-with-osx-and-linux

Very nice. I will for sure steal code from you!

For now I am playing with a simple (and drafty) build loop using appveyor
for Windows and travis for Linux/Mac, and I publish the wheels on Bintray:
https://github.com/pombreda/thirdparty

The scripts and the .travis/appveyor yaml files are crappily organized, but
they work. I have not yet incorporated Olivier Grisel's and team's
manylinux work, but this is the next step, as I selfishly need wheels for
Linux/Mac/Windows for my scancode-toolkit. Eventually I will also build
other pure native libraries like libarchive and 7z for all these OSes there
too (including some that need cygwin or mingw to build on Windows). And
then I will need to ensure that the Bintray repo has at least a PyPI-simple
layout.

--
Cordially
Philippe Ombredanne

From pombredanne at nexb.com  Fri Aug 12 09:19:10 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 15:19:10 +0200
Subject: [Wheel-builders] Manylinux, OSX wheel building framework for travis-ci
In-Reply-To:
References:
Message-ID:

Last month, Matthew Brett wrote:
> I've been experimenting with a submodule to support building and
> testing OSX and manylinux wheels from the same repository, using
> travis-ci.
> The README explains the idea:
> https://github.com/matthew-brett/multibuild#utilities-for-building-on-travis-ci-with-osx-and-linux

One thing: I am not convinced by your configuration using git submodules.
Could there be a simpler way, either by vendoring your scripts in the repo
that needs to be built, or by packaging the scripts as a wheel or sdist
somehow?

--
Cordially
Philippe Ombredanne

From pombredanne at nexb.com  Fri Aug 12 09:35:04 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 15:35:04 +0200
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
Message-ID:

Dear wheelers:

What if we came up with a blessed and maintained way to easily create
wheels for Python packagers that would, with minimal effort, build on
Linux, Mac and Windows? I know there are many efforts to handle this for
Mac, for manylinux, and for Windows with various build environments. Some
are trying to support Mac and Linux together....

What if we provided a repo that would have it all for the most common use
cases (at least common for me): Linux, Mac and Windows on 32 and 64 bits
for CPython 2.7 and recent 3.x?

And what if we would even publish a public wheelhouse and make it super
easy for PyPI publishers to have their wheels built with a simple PR to
that repo, without having to maintain the whole setup themselves?
--
Cordially
Philippe Ombredanne

From msarahan at gmail.com  Fri Aug 12 09:50:28 2016
From: msarahan at gmail.com (Michael Sarahan)
Date: Fri, 12 Aug 2016 13:50:28 +0000
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID:

What you describe is almost exactly what conda-forge
(https://conda-forge.github.io/) is - but it is for conda packages, not
wheels. I'm not sure how readily you or anyone else could adapt it to build
wheels (also, or instead), but I can't imagine it being exceedingly
difficult.

On Fri, Aug 12, 2016 at 8:39 AM Philippe Ombredanne wrote:
> Dear wheelers:
> What if we came up with a blessed and maintained way to easily create
> wheels for Python packagers that would, with minimal effort, build on
> Linux, Mac and Windows?
> [rest of the proposal quoted in full -- trimmed; see above]

From pombredanne at nexb.com  Fri Aug 12 10:27:53 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 16:27:53 +0200
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID:

On Fri, Aug 12, 2016 at 3:50 PM, Michael Sarahan wrote:
> What you describe is almost exactly what conda-forge
> (https://conda-forge.github.io/) is - but it is for conda packages, not
> wheels. I'm not sure how readily you or anyone else could adapt it to
> build wheels (also or instead), but I can't imagine it being exceedingly
> difficult.

This is indeed very close, and bits could be reused, since this part seems
BSD-licensed.... and the recipe template at
https://github.com/conda-forge/staged-recipes/ in a conda context is about
the same as the combination of the many Linux, Mac and Windows wheel
building repos out there.

But conda is not wheel, unfortunately. A fine tool and format, but neither
wheel nor FLOSS end to end afaik.

--
Cordially
Philippe Ombredanne

From msarahan at gmail.com  Fri Aug 12 10:39:55 2016
From: msarahan at gmail.com (Michael Sarahan)
Date: Fri, 12 Aug 2016 14:39:55 +0000
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID:

I did not mean to start a conda vs. wheel discussion, but I feel the need
to correct you on conda: it is absolutely end-to-end FLOSS (BSD, 3-clause):

https://github.com/conda/conda/blob/master/LICENSE.txt
https://github.com/conda/conda-build/blob/master/LICENSE.txt

Repositories can be created by simply creating an index (the `conda index`
command), which can be served by any http server - you need not depend on
Continuum as a central repository any more than you depend on PyPI as a
central repository.
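To make the self-hosted-channel point concrete, a rough sketch -- directory
and package names are made up, and the exact conda/conda-build invocations
may differ between versions:

```
# Build an index for a locally hosted channel (conda index ships with conda-build):
conda index ./my-channel/linux-64

# Serve the channel directory with any static web server, e.g.:
cd my-channel && python -m SimpleHTTPServer 8000

# Install from the self-hosted channel instead of a central repository:
conda install -c http://localhost:8000 mypackage
```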
If licensing concerns are preventing you from using conda, I hope you'll
reconsider. Otherwise, many people still prefer wheels/pip/virtualenv.
That's fine; I don't mean to criticize that. To each their own. I hope that
conda-forge might provide a useful example for improving wheel building
capabilities.

On Fri, Aug 12, 2016 at 9:28 AM Philippe Ombredanne wrote:
> This is indeed very close, and bits could be reused, since this part
> seems BSD-licensed....
> [rest of the message quoted in full -- trimmed; see above]

From jjhelmus at gmail.com  Fri Aug 12 10:41:18 2016
From: jjhelmus at gmail.com (Jonathan Helmus)
Date: Fri, 12 Aug 2016 09:41:18 -0500
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID: <0f737d1d-9b01-3e2d-b56b-26a861d31818@gmail.com>

On 08/12/2016 09:27 AM, Philippe Ombredanne wrote:
> FLOSS end to end afaik

Both conda the software [1] and conda-forge the infrastructure [2] are BSD
licensed, and are as much FLOSS as the wheel format.

Cheers,

- Jonathan Helmus

[1] https://github.com/conda/conda/blob/master/LICENSE.txt
[2] https://github.com/conda-forge/staged-recipes/blob/master/LICENSE

From pombredanne at nexb.com  Fri Aug 12 11:06:51 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 17:06:51 +0200
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID:

On Fri, Aug 12, 2016 at 4:39 PM, Michael Sarahan wrote:
> I did not mean to start a conda vs. wheel discussion, but I feel the
> need to correct you on conda: it is absolutely end-to-end FLOSS (BSD,
> 3-clause)
> [rest of the message quoted in full -- trimmed; see above]

Sorry for my naive confusion. I recalled seeing proprietary license terms
at http://repo.continuum.io/, which is a commercial offering that uses
conda but is not conda itself, and I naively confused and conflated the
two.... My bad. Accept my apologies.

I still prefer wheels, and that is the topic here anyway!
--
Cordially
Philippe Ombredanne

From pombredanne at nexb.com  Fri Aug 12 11:08:17 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 17:08:17 +0200
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To: <0f737d1d-9b01-3e2d-b56b-26a861d31818@gmail.com>
References: <0f737d1d-9b01-3e2d-b56b-26a861d31818@gmail.com>
Message-ID:

On Fri, Aug 12, 2016 at 4:41 PM, Jonathan Helmus wrote:
> On 08/12/2016 09:27 AM, Philippe Ombredanne wrote:
>> FLOSS end to end afaik
>
> Both conda the software [1] and conda-forge the infrastructure [2] are
> BSD licensed and are as much FLOSS as the wheel format.

See my previous post. I was confused. Accept my apologies.

--
Cordially
Philippe Ombredanne

From matthew.brett at gmail.com  Fri Aug 12 13:49:53 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 12 Aug 2016 10:49:53 -0700
Subject: [Wheel-builders] Making it easy for every package publisher to create multi-os wheels
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Aug 12, 2016 at 6:35 AM, Philippe Ombredanne wrote:
> Dear wheelers:
> What if we came up with a blessed and maintained way to easily create
> wheels for Python packagers that would, with minimal effort, build on
> Linux, Mac and Windows? I know there are many efforts to handle this
> for Mac, for manylinux, and for Windows with various build environments.
> Some are trying to support Mac and Linux together....

Yes, https://github.com/matthew-brett/multibuild supports OSX and Linux;
it's now used by most scientific Python packages and some others. Some
repos (e.g. numpy) match that with an appveyor script so they can build
all the wheels in one shot. For us scientific types, the lack of an
open-source-compatible Fortran compiler on Windows is currently a
show-stopper, hence the lack of development there.

Cheers,

Matthew

From matthew.brett at gmail.com  Fri Aug 12 14:02:40 2016
From: matthew.brett at gmail.com (Matthew Brett)
Date: Fri, 12 Aug 2016 11:02:40 -0700
Subject: [Wheel-builders] Manylinux, OSX wheel building framework for travis-ci
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Aug 12, 2016 at 6:19 AM, Philippe Ombredanne wrote:
> One thing: I am not convinced by your configuration using git
> submodules. Could there be a simpler way, either by vendoring your
> scripts in the repo that needs to be built, or by packaging the
> scripts as a wheel or sdist somehow?

The code is nearly all bash scripts, and I suspect you'll run into trouble
unless the scripts are in a sub-directory of your travis build, so the
submodule seemed a natural way to get you there. For a wheel or other
Python package, I'd only be using the Python packaging system to unpack
the code somewhere and make that 'somewhere' findable by travis-ci, so it
seems like extra complexity for not much gain. Is there something specific
you were worried about with the submodule?

Cheers,

Matthew
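For anyone who hasn't tried the submodule route, the wiring looks roughly
like the sketch below. The .travis.yml entry points shown in the comments
are from memory of the multibuild README and should be treated as
assumptions; the submodule commands themselves are plain git.

```
# Add multibuild as a submodule of the repo whose wheels you want to build:
git submodule add https://github.com/matthew-brett/multibuild.git multibuild
git commit -m "Add multibuild submodule"

# .travis.yml then sources the helper scripts from the submodule directory
# and calls its bash entry points, roughly (names from memory -- check the
# multibuild README for the real ones):
#
#   install:
#     - source multibuild/common_utils.sh
#     - source multibuild/travis_steps.sh
#     - before_install
#   script:
#     - build_wheel $REPO_DIR $PLAT
#     - install_run $PLAT
```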
From pombredanne at nexb.com  Fri Aug 12 14:29:17 2016
From: pombredanne at nexb.com (Philippe Ombredanne)
Date: Fri, 12 Aug 2016 20:29:17 +0200
Subject: [Wheel-builders] Manylinux, OSX wheel building framework for travis-ci
In-Reply-To:
References:
Message-ID:

On Fri, Aug 12, 2016 at 8:02 PM, Matthew Brett wrote:
> The code is nearly all bash scripts, and I suspect you'll run into
> trouble unless the scripts are in a sub-directory of your travis
> build, so the submodule seemed a natural way to get you there. [...]
> Is there something specific you were worried about with the submodule?

Just a personal distaste for their complexity; nothing I can overcome.

--
Cordially
Philippe Ombredanne

From vkruglikov at numenta.com  Wed Aug 17 03:02:02 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Wed, 17 Aug 2016 07:02:02 +0000
Subject: [Wheel-builders] error: signalfd.h not found when building manylinux wheel
In-Reply-To:
References:
Message-ID:

From: Nathaniel Smith
Date: Sunday, July 10, 2016 at 11:08 AM
To: Vitaly Kruglikov
Cc: "wheel-builders at python.org"
Subject: Re: [Wheel-builders] error: signalfd.h not found when building manylinux wheel

On Jul 10, 2016 10:20 AM, "Vitaly Kruglikov" wrote:
>
> On 7/9/16, 11:15 PM, "Nathaniel Smith" wrote:
>
> >On Fri, Jul 8, 2016 at 10:33 PM, Vitaly Kruglikov wrote:
> >>
> >> On 7/8/16, 10:07 PM, "Nathaniel Smith" wrote:
> >>
> >>>On Fri, Jul 8, 2016 at 3:23 PM, Vitaly Kruglikov wrote:
> >>>> I am attempting to build a manylinux wheel from the nupic.core
> >>>> project in github. I am using the docker image
> >>>> quay.io/pypa/manylinux1_x86_64. nupic.core builds and statically
> >>>> links against the capnproto library, which relies on signalfd.h.
> >>>> Unfortunately, the docker image quay.io/pypa/manylinux1_x86_64 does
> >>>> not provide signalfd.h, so my build fails like this:
> >>>>
> >>>> Linking CXX static library libkj.a
> >>>> [ 27%] Built target kj
> >>>> [ 29%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async.c++.o
> >>>> [ 30%] Building CXX object src/kj/CMakeFiles/kj-async.dir/async-unix.c++.o
> >>>> /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/async-unix.c++:36:26:
> >>>> fatal error: sys/signalfd.h: No such file or directory
> >>>> #include <sys/signalfd.h>
> >>>>
> >>>> What is the recommended solution for this problem?
> >>>
> >>>man signalfd says:
> >>>
> >>>VERSIONS
> >>>    signalfd() is available on Linux since kernel 2.6.22. Working
> >>>    support is provided in glibc since version 2.8.
> >>>    The signalfd4() system call (see NOTES) is available on Linux
> >>>    since kernel 2.6.27.
> >>>
> >>>CentOS 5 ships glibc 2.5, so signalfd is simply not available to
> >>>manylinux1 wheels. I guess the recommended solution is to modify your
> >>>code so that it doesn't require signalfd? It looks like you aren't
> >>>the only person to run into this this week...
> >>>
> >>> https://groups.google.com/forum/#!topic/capnproto/OpH9RtOBcZU
> >>>
> >>>and the suggestion there is to use -DKJ_USE_EPOLL=0 to tell capnproto
> >>>not to use signalfd.
> >>>
> >>>This is an interesting data point for the benefits of defining a
> >>>CentOS-6-based manylinux2, though...
> >>>
> >>>-n
> >>>
> >>>--
> >>>Nathaniel J. Smith -- https://vorpus.org
> >>
> >> Thanks Nathaniel, unfortunately it is not as simple as that.
> >> Unfortunately, capnproto is not my code, so I am somewhat limited in
> >> what I can do with it. You also replied to a similar question
> >> concerning pipe2, SOCK_NONBLOCKING, etc. Those are actually all tied
> >> together. I also tried building capnproto with -DKJ_USE_EPOLL=0 to
> >> get around the signalfd dependency, and that's what triggered the
> >> "pipe2, SOCK_NONBLOCKING, etc. was not declared in this scope"
> >> compiler errors. It turns out that pipe2, etc. are not available on
> >> CentOS 5 either. So, -DKJ_USE_EPOLL=0 or not, the compilation fails
> >> due to missing headers or symbols on CentOS 5.
> >>
> >> I am now trying to create a patch for capnproto to enable the build
> >> to go through, but it is messy, as other parts of capnproto rely on
> >> some of the related headers. What a mess!
> >
> >Sorry to hear that :-/.
> >
> >Unfortunately, there's not really anything the manylinux1 image can do
> >to help, because the manylinux1 spec says that manylinux1 wheels must
> >work with CentOS 5, so by definition code that requires signalfd
> >and/or pipe2 cannot be built into a manylinux1 wheel.
> >
> >If it helps, pipe2(pipefds, <flags>) can be straightforwardly replaced by
> >
> >  pipe(pipefds)
> >  fcntl(pipefds[0], F_SETFD, <flags>)
> >  fcntl(pipefds[1], F_SETFD, <flags>)
> >
> >(The pipe2 version is atomic with respect to forks, which is why it was
> >added, but if you don't have pipe2 then the fcntl version is generally
> >going to be fine outside of some fairly obscure cases.)
> >
> >Your other option is to lie and cheat :-). Technically you *can* build
> >on CentOS 6 and then manually rename your .whl file to claim to be a
> >manylinux1 wheel, and PyPI will accept it and pip will install it.
> >(auditwheel will scream at you, but nothing forces you to listen to
> >auditwheel.) Officially I can't recommend this. Unofficially, if you
> >go ahead and do it anyway, then please let us know how it goes,
> >because your users' reactions will be a very useful data point for
> >designing the manylinux2 spec ;-).
> >
> >(A "manylinux1" wheel that was built on CentOS 6 should work for
> >almost every user; AFAIK the one exception is users on CentOS 5.
> >Unfortunately it turns out that we don't have any data on how many
> >such users are out there [1], but they're certainly fairly rare...)
> >
> >-n
> >
> >[1] https://github.com/pypa/pip/pull/3836
> >
> >--
> >Nathaniel J. Smith -- https://vorpus.org
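Spelling out the pipe()/fcntl() replacement sketched above (the flag
arguments were lost to HTML scrubbing in the archive), a minimal stand-in
might look like the following. The helper name is made up, and the split
between F_SETFD (for FD_CLOEXEC) and F_SETFL (for O_NONBLOCK) is a
refinement of the fcntl calls, not code taken from capnproto.

```
#include <fcntl.h>
#include <unistd.h>

// Non-atomic stand-in for pipe2(fds, flags) on platforms without it
// (e.g. CentOS 5 / glibc 2.5). Unlike pipe2, the flags are applied after
// the descriptors already exist, so this is not atomic across fork().
int make_pipe(int fds[2], bool close_on_exec, bool non_blocking) {
  if (pipe(fds) != 0)
    return -1;
  for (int i = 0; i < 2; ++i) {
    if (close_on_exec)
      fcntl(fds[i], F_SETFD, FD_CLOEXEC);   // descriptor flag
    if (non_blocking)
      fcntl(fds[i], F_SETFL, O_NONBLOCK);   // file status flag
  }
  return 0;
}
```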
> Thanks, Nathaniel, for following up here and also in
> https://github.com/pypa/manylinux/issues/75. I think that your
> suggestion about using CentOS 6 would involve forking the
> https://github.com/pypa/manylinux repo, replacing centos:5.11 with
> centos:6 and building a new image, including running
> build_scripts/build.sh, right?

That would be a reasonable approach, yeah. You'll probably find you need
to tweak a few other things - like, I think the toolchain repo we're using
is also centos5-specific. Good luck and let us know how it goes!

-n

Hi Nathaniel, here is the update you asked for: I ended up following your
recommendation and created my own manylinux docker image from CentOS 6.8
by forking the manylinux github repo into Numenta's github repo
https://github.com/numenta/manylinux. The README there explains it in more
detail.

After addressing the initial build issues, I ran into two additional
problems that initially prevented cross-platform execution of the python
extension: (1) runtime symbol preemption and (2) C++ ABI incompatibility.
The problem details are described in the comment
https://discourse.numenta.org/t/segmentation-fault-while-running-basic-swarm/877/23?u=vkruglikov,
and my solution is described in
https://discourse.numenta.org/t/segmentation-fault-while-running-basic-swarm/877/24?u=vkruglikov.

So far, after building the nupic.bindings extension using my custom
manylinux docker image (having solved those additional symbol preemption
and C++ ABI compatibility problems), the wheel has passed all nupic tests
on CentOS 6.8, Ubuntu 14.04, and Ubuntu 16.04.

FYI and thanks for your support,
Vitaly

From vkruglikov at numenta.com  Wed Aug 17 03:06:40 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Wed, 17 Aug 2016 07:06:40 +0000
Subject: [Wheel-builders] manylinux wheel troubles using C++ and streaming exceptions
Message-ID:

>>>>>>>>
[original message of August 6 quoted in full -- trimmed; see the top of
this thread]
<<<<<<<<<<<<<
It turns out that this problem was due to C++ ABI incompatibility. In case
anyone runs into a similar issue, the solution is described here:
https://discourse.numenta.org/t/segmentation-fault-while-running-basic-swarm/877/24?u=vkruglikov

Best,
Vitaly

From vkruglikov at numenta.com  Wed Aug 17 03:09:18 2016
From: vkruglikov at numenta.com (Vitaly Kruglikov)
Date: Wed, 17 Aug 2016 07:09:18 +0000
Subject: [Wheel-builders] manylinux: futex lock in capnproto hangs when running manylinux wheel on Ubuntu 16.04
In-Reply-To:
References:
Message-ID:

>>>>>>>>>>>>
On 7/25/16, 11:12 AM, "Wheel-builders on behalf of Vitaly Kruglikov" wrote:

>When I build a manylinux wheel for nupic.core
>(https://github.com/numenta/nupic.core/pull/1001), all nupic.core and
>nupic (https://github.com/numenta/nupic) tests pass on Ubuntu 14.04.
>However, when I run nupic unit tests on Ubuntu 16.04, I always get a
>futex lock hang at
>https://github.com/sandstorm-io/capnproto/blob/v0.5.3/c%2B%2B/src/kj/mutex.c%2B%2B#L87
>(a statically-linked copy of capnproto embedded in the python extension
>.so that's part of the nupic.bindings manylinux wheel built by
>nupic.core).
>
>The extension build uses shared libs: libc.so.6, libstdc++.so.6, and
>libgcc_s.so.1. Built and running against Python 2.7.11. I use a custom
>manylinux docker image that's created from a fork of manylinux that
>replaces centos5 with centos6.8
>(https://github.com/numenta/manylinux/pull/1), as suggested in
>https://mail.python.org/pipermail/wheel-builders/2016-July/000175.html.
>This image has been pushed to quay.io/numenta/manylinux1_x86_64_centos6.
>
>The traceback to the hang looks like this:
>
>(gdb) bt
>#0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
>#1  0x00007f2e042d7d77 in kj::_::Mutex::lock (this=0x42b6610, exclusivity=<optimized out>)
>    at /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/mutex.c++:87
>#2  0x00007f2e042a658e in kj::MutexGuarded<...>::lockExclusive (this=0x42b6610)
>    at /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/kj/mutex.h:300
>#3  capnp::SchemaLoader::loadNative (this=0x42b6610, nativeSchema=0x7f2e045c1f40)
>    at /nupic.core/build/scripts/ThirdParty/Source/CapnProto/src/capnp/schema-loader.c++:2069
>#4  0x00007f2e04074761 in capnp::SchemaLoader::loadCompiledTypeAndDependencies (this=<optimized out>)
>    at /nupic.core/build/scripts/ThirdParty/Install/include/capnp/schema-loader.h:168
>#5  capnp::SchemaParser::loadCompiledTypeAndDependencies (this=<optimized out>)
>    at /nupic.core/build/scripts/ThirdParty/Install/include/capnp/schema-parser.h:83
>#6  nupic::getBuilder (pyBuilder=0x7f2e0a0a55f0) at /nupic.core/src/nupic/py_support/PyCapnp.hpp:77
>#7  0x00007f2e03fcacd3 in nupic_Network_write__SWIG_2 (self=0x3253090, pyBuilder=<optimized out>)
>    at /nupic.core/build/scripts/src/nupic/bindings/engine_internalPYTHON_wrap.cxx:5287
>#8  0x00007f2e03ff878f in _wrap_Network_write__SWIG_2 (nobjs=nobjs@entry=2, swig_obj=swig_obj@entry=0x7ffc743cf0d0)
>    at /nupic.core/build/scripts/src/nupic/bindings/engine_internalPYTHON_wrap.cxx:27690
>#9  0x00007f2e03ff8c05 in _wrap_Network_write (self=0x0, args=<optimized out>)
>    at /nupic.core/build/scripts/src/nupic/bindings/engine_internalPYTHON_wrap.cxx:27812
>#10 0x00000000004cb26d in PyEval_EvalFrameEx ()
>#11 0x00000000004c22e5 in PyEval_EvalCodeEx ()
>
>I am going to put in additional effort to isolate the issue to a small
>code footprint from the vast body of code that it's in now. However, in
>the meantime, I was hoping that someone might have run into something
>similar and might share some helpful clues about the issue, or possibly
>how to debug it efficiently.
>
>Many thanks,
>Vitaly
<<<<<<<<<<<<<<<<<<<

This problem was caused by the confluence of symbol preemption and C++ ABI
incompatibility between the manylinux toolchain and the build of the
shared libstdc++ on Ubuntu 16.04. My solution is described in
https://discourse.numenta.org/t/segmentation-fault-while-running-basic-swarm/877/24?u=vkruglikov.

FYI,
Vitaly
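For anyone verifying a similar fix, a few generic binutils checks can
confirm that a rebuilt extension exports only what it should and doesn't
silently depend on a newer libstdc++ than intended. The module and wheel
names below are hypothetical; the commands are standard readelf/nm/auditwheel
usage, not anything specific to nupic.core.

```
# Which shared libraries does the extension require at runtime?
readelf -d _engine_internal.so | grep NEEDED

# Which symbols does it export dynamically? After a version-script /
# --exclude-libs build, this should be little more than the module init symbol.
nm -D --defined-only _engine_internal.so

# Which versioned libstdc++/glibc symbols does it need? Compare against the
# whitelist in PEP 513 to see whether it still qualifies as manylinux1.
readelf -V _engine_internal.so | grep -E 'GLIBCXX|CXXABI|GLIBC_'

# Or let auditwheel run its policy check on the built wheel:
auditwheel show dist/nupic.bindings-*.whl
```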