From nate at bx.psu.edu Tue Mar 1 14:18:04 2016
From: nate at bx.psu.edu (Nate Coraor)
Date: Tue, 1 Mar 2016 14:18:04 -0500
Subject: [Wheel-builders] libpythonX.Y.so.1
Message-ID:

So I am working on building my first set of manylinux1 wheels. The first
candidate is psycopg2. I was able to generate the wheel, which brings in
all the necessary libs:

# auditwheel show dist/psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl

psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl is consistent with the
following platform tag: "linux_x86_64".

The wheel references the following external versioned symbols in
system-provided shared libraries: GLIBC_2.3.

The following external shared libraries are required by the wheel:
{
    "libc.so.6": "/lib64/libc-2.5.so",
    "libcom_err.so.2": "/lib64/libcom_err.so.2.1",
    "libcrypt.so.1": "/lib64/libcrypt-2.5.so",
    "libcrypto.so.6": "/lib64/libcrypto.so.0.9.8e",
    "libdl.so.2": "/lib64/libdl-2.5.so",
    "libgssapi_krb5.so.2": "/usr/lib64/libgssapi_krb5.so.2.2",
    "libk5crypto.so.3": "/usr/lib64/libk5crypto.so.3.1",
    "libkeyutils.so.1": "/lib64/libkeyutils-1.2.so",
    "libkrb5.so.3": "/usr/lib64/libkrb5.so.3.3",
    "libkrb5support.so.0": "/usr/lib64/libkrb5support.so.0.1",
    "liblber-2.3.so.0": "/usr/lib64/liblber-2.3.so.0.2.31",
    "libldap_r-2.3.so.0": "/usr/lib64/libldap_r-2.3.so.0.2.31",
    "libpq.so.5": "/usr/pgsql-9.5/lib/libpq.so.5.8",
    "libpthread.so.0": "/lib64/libpthread-2.5.so",
    "libresolv.so.2": "/lib64/libresolv-2.5.so",
    "libsasl2.so.2": "/usr/lib64/libsasl2.so.2.0.22",
    "libselinux.so.1": "/lib64/libselinux.so.1",
    "libsepol.so.1": "/lib64/libsepol.so.1",
    "libssl.so.6": "/lib64/libssl.so.0.9.8e",
    "libz.so.1": "/lib64/libz.so.1.2.3"
}

In order to achieve the platform tag "manylinux1_x86_64", the following
shared library dependencies will need to be eliminated: libcom_err.so.2,
libcrypto.so.6, libgssapi_krb5.so.2, libk5crypto.so.3, libkeyutils.so.1,
libkrb5.so.3, libkrb5support.so.0, liblber-2.3.so.0, libldap_r-2.3.so.0,
libpq.so.5, libresolv.so.2, libsasl2.so.2, libselinux.so.1, libsepol.so.1,
libssl.so.6, libz.so.1

[root at 03bf985a7c8a psycopg2-2.6.1]# auditwheel repair dist/psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl

Repairing psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl
Grafting: /usr/lib64/libsasl2.so.2.0.22
Grafting: /lib64/libsepol.so.1
Grafting: /usr/lib64/libk5crypto.so.3.1
Grafting: /lib64/libselinux.so.1
Grafting: /lib64/libcom_err.so.2.1
Grafting: /lib64/libz.so.1.2.3
Grafting: /usr/lib64/libldap_r-2.3.so.0.2.31
Grafting: /usr/pgsql-9.5/lib/libpq.so.5.8
Grafting: /usr/lib64/libkrb5support.so.0.1
Grafting: /lib64/libkeyutils-1.2.so
Grafting: /lib64/libresolv-2.5.so
Grafting: /usr/lib64/libgssapi_krb5.so.2.2
Grafting: /usr/lib64/libkrb5.so.3.3
Grafting: /usr/lib64/liblber-2.3.so.0.2.31
Grafting: /lib64/libssl.so.0.9.8e
Grafting: /lib64/libcrypto.so.0.9.8e
Setting RPATH: psycopg2/_psycopg.so
Fixed-up wheel written to /build/psycopg2-2.6.1/wheelhouse/psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl

However, attempting to use this wheel on Ubuntu 14.04 results in a failure
to load libpython2.7.so.1:

ImportError: libpython2.7.so.1.0: cannot open shared object file: No such
file or directory

PEP 513 addresses libpythonX.Y.so.1, but only to say that it does not need
to be linked, nor should it be. Is my understanding correct, then, that I
would need to fix psycopg2 myself to *not* link to libpython (and if this
works, won't there be unresolved symbols at link time)? Is there a more
general solution to this problem?

Related: some of these bundled libraries are concerning to me. Although
CentOS 5 receives updates, so its OpenSSL should be secure and bug-free,
this is still OpenSSL 0.9.8e. And bundling the SELinux and Kerberos/GSSAPI
libs also makes me a bit worried.

--nate
From njs at pobox.com Tue Mar 1 16:58:50 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 1 Mar 2016 13:58:50 -0800
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

On Tue, Mar 1, 2016 at 11:18 AM, Nate Coraor wrote:
> So I am working on building my first set of manylinux1 wheels. The first
> candidate is psycopg2. I was able to generate the wheel which brings in
> all the necessary libs:
[...]
> However, attempting to use this wheel on Ubuntu 14.04 results in a
> failure to load libpython2.7.so.1:
>
> ImportError: libpython2.7.so.1.0: cannot open shared object file: No
> such file or directory
>
> PEP 513 addresses libpythonX.Y.so.1 but only to say that it does not
> need to be linked, nor should it be. Is my understanding correct, then,
> that I would need to fix psycopg2 myself to *not* link to libpython (if
> this works - won't there be unresolved symbols at link time?)? Is there
> a more general solution to this problem?

Right, you need to not link to libpython. This does work -- it turns out
that the linker doesn't check for unresolved symbols unless you
specifically ask it to (via -Wl,--no-undefined or similar), I guess
because the people who write linkers consider what we're doing here to be
a valid use case :-).

Are you using the docker images to build these? I thought we fixed it to
not link to libpython by default, but if not, then we should... (If your
python is built with --disable-shared, which is the default when building
from source, then by default any wheels you build with that python will
not link to libpython. Unfortunately, CentOS's python is built with
--enable-shared. This is one of the reasons that the docker images build
their own python instead of using the one shipped by CentOS.)
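(For reference, whether a given interpreter was built with --enable-shared, and will therefore link extensions against libpython by default, can be checked from the interpreter itself. A minimal standard-library sketch; the keys are real sysconfig variables, but what any particular build reports will differ:)

```python
import sysconfig

# Py_ENABLE_SHARED is 1 for --enable-shared builds (like CentOS's system
# python) and 0 for --disable-shared builds (the default when building
# CPython from source, and what the manylinux docker images use). With a
# static-only build there is no shared libpython for distutils to record
# in an extension module's DT_NEEDED entries.
shared = bool(sysconfig.get_config_var("Py_ENABLE_SHARED"))

# LDLIBRARY names the libpython this interpreter was linked against,
# e.g. "libpython2.7.a" (static) vs. "libpython2.7.so" (shared).
ldlibrary = sysconfig.get_config_var("LDLIBRARY")

print(shared, ldlibrary)
```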
Also, I guess Robert is a bit distracted by thesis-ing, but I think he was
at least planning to make auditwheel check for and reject wheels that link
to libpython (or possibly repair them -- I think patchelf can do that).
Perhaps he'll speak up about that, or I'm sure he'd welcome a pull request
:-). I guess that's the more general solution you're looking for.

> Related, some of these are concerning to me. Although CentOS 5 receives
> updates, so its OpenSSL should be secure and bug-free, this is still
> OpenSSL 0.9.8e. And bundling SELinux and Kerberos/GSSAPI libs also makes
> me a bit worried.

I agree, it is worrying! But there's not much that can be done about it:
if you want to make a single wheel that works across lots of Linux
distributions, and you need OpenSSL, then you need to ship OpenSSL, and
that means that you need to take responsibility for keeping an eye out for
security releases and respinning your wheels when they happen.

I guess the psycopg2 developers are already doing this for their Windows
wheels, so perhaps they'll be willing to take on that responsibility for
Linux wheels too? Or there might be things we can do to mitigate this
somewhat: e.g., one can imagine the different folks who need to ship
OpenSSL getting together and building some shared infrastructure so that
they can share the burden -- like having one set of build scripts to
update OpenSSL and rebuild all their packages. Or once the
native-libraries-in-wheels effort gets off the ground (sorry, I've been
meaning to send out an email to kick that off but have been distracted by
other things), then they'll be able to share a single OpenSSL wheel, and
that'll also centralize the update process. But fundamentally, the
unavoidable fact is that if someone's not willing to take on
responsibility for this somehow, then it's probably a bad idea for that
person to ship manylinux wheels (or Windows wheels, for that matter).

-n

--
Nathaniel J. Smith -- https://vorpus.org

From nate at bx.psu.edu Wed Mar 2 11:05:00 2016
From: nate at bx.psu.edu (Nate Coraor)
Date: Wed, 2 Mar 2016 11:05:00 -0500
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

On Tue, Mar 1, 2016 at 4:58 PM, Nathaniel Smith wrote:
> [...]
>
> Are you using the docker images to build these? I thought we fixed it
> to not link to libpython by default, but if not then we should... (If
> your python is built with --disable-shared, which is the default when
> building from source, then by default any wheels you build with that
> python will not link to libpython. Unfortunately CentOS's python is
> built with --enable-shared. This is one of the reasons that the docker
> images build their own python instead of using the one shipped by
> CentOS.)

I'm using the docker image and the purpose-built Python, yeah.
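(One quick way to confirm this from inside a running interpreter, rather than via ldd, is to look at what the dynamic linker actually mapped into the process. A Linux-only sketch, not from the thread; the helper name is made up:)

```python
def shared_libpython_loaded():
    """Report whether a shared libpython is mapped into this process.

    Linux-only diagnostic (hypothetical helper, not part of auditwheel):
    /proc/self/maps lists every file mmap()ed into the process, including
    shared libraries pulled in by the dynamic linker. Returns None where
    /proc is unavailable (i.e. not Linux).
    """
    try:
        with open("/proc/self/maps") as maps:
            return any("libpython" in line for line in maps)
    except (IOError, OSError):
        return None

print(shared_libpython_loaded())
```

A statically linked interpreter (like the one the manylinux image builds) reports False here, because libpython is compiled into the executable rather than loaded as a separate shared object.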
It's a container that I started a few weeks ago, though, so I can pull a
new version of the image and see if it's fixed. The Python in my version
of the image definitely has a shared libpython in the purpose-built
Python build:

libpython2.7.so.1.0 => /opt/2.7.11/lib/libpython2.7.so.1.0 (0x00007fab9bdfd000)

> Also, I guess Robert is a bit distracted by thesis-ing, but I think he
> was at least planning to make auditwheel check for and reject wheels
> that link to libpython (or possibly repair them -- I think patchelf
> can do that). [...]

A good idea, but if the docker image isn't supposed to produce wheels that
link libpython, that's also a good general solution for me. =)

> [...] But fundamentally, the unavoidable fact is that if someone's not
> willing to take on responsibility for this somehow, then probably it's
> a bad idea for that person to ship manylinux wheels (or windows wheels,
> for that matter).

Sure, this makes sense to me. I wonder if it would be a good idea to have
something like Conda in the docker image that would make installing
up-to-date versions of these sorts of dependencies trivial.

Thanks,
--nate

P.S., the archives for this list aren't showing up on mail.python.org;
could you take a look in the list admin settings and see if they're
enabled?

From nate at bx.psu.edu Wed Mar 2 12:23:08 2016
From: nate at bx.psu.edu (Nate Coraor)
Date: Wed, 2 Mar 2016 12:23:08 -0500
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

On Wed, Mar 2, 2016 at 11:05 AM, Nate Coraor wrote:
> [...]
> The Python in my version of the image definitely has a shared libpython
> in the purpose-built Python build:
>
> libpython2.7.so.1.0 => /opt/2.7.11/lib/libpython2.7.so.1.0 (0x00007fab9bdfd000)

Yeah, sure enough, this is just due to an out-of-date image. The one I was
using had only the shared libpython; the latest one has only the static. A
psycopg2 wheel built in the new image is working.

Two further questions now that this works for me:

1. With SOABI tag support now available for Python 2.X, would a PR adding
UCS-2 builds of Python to the docker image be accepted?

2. Is anyone working on the pip side of the changes necessary to install
manylinux1 wheels? If not, I'll do this.
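(For context, the pip-side check is essentially the sample code from PEP 513: ask the C library at runtime for its version, and treat anything that is not glibc >= 2.5 as incompatible. A paraphrased sketch:)

```python
import ctypes

def have_compatible_glibc(major, minimum_minor):
    # Paraphrased from the sample code in PEP 513: query glibc itself for
    # its version at runtime. musl-based systems (e.g. Alpine) do not
    # export gnu_get_libc_version, so they are reported as incompatible.
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return False
    gnu_get_libc_version.restype = ctypes.c_char_p
    version_str = gnu_get_libc_version()
    if not isinstance(version_str, str):
        version_str = version_str.decode("ascii")  # bytes on Python 3
    version = [int(piece) for piece in version_str.split(".")[:2]]
    # manylinux1 requires glibc 2.5 or newer within the same major series
    return version[0] == major and version[1] >= minimum_minor

print(have_compatible_glibc(2, 5))
```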
Also, I wonder if we should vendor auditwheel into wheel itself so you can
directly build manylinux1 wheels.

> [...]

From rmcgibbo at gmail.com Wed Mar 2 12:57:09 2016
From: rmcgibbo at gmail.com (Robert T. McGibbon)
Date: Wed, 2 Mar 2016 09:57:09 -0800
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

Hey guys,

Sorry I'm out of the loop. Nathaniel correctly diagnosed the problem as
thesis stuff. Quick responses below:

> 1. With SOABI tag support now available for Python 2.X, would a PR
> adding UCS-2 builds of Python to the docker image be accepted?

+1 from me. I think there's an issue open for this. You just need to
decide something rational for the directory layout of which Python goes
where and what the symlinks will be.

I think this will also require getting the UCS-2 detection code properly
removed from auditwheel (https://github.com/pypa/auditwheel/pull/17).
Sorry I haven't done that yet.

> 2. Is anyone working on the pip side of the changes necessary to
> install manylinux1 wheels?
> If not, I'll do this.

IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497).
I think adding your voice there might help move it along. I think that PR
is ready to merge, but just lacking the last necessary push from a pip
maintainer to merge it.

> Also, I wonder if we should vendor auditwheel into wheel itself so you
> can directly build manylinux1 wheels.

IMO it's too early for that. auditwheel is not mature, and besides you
really need the docker env. On that note, any help getting auditwheel in
better shape -- especially UX-wise, if you have any free cycles -- would
be totally awesome.

On Wed, Mar 2, 2016 at 9:23 AM, Nate Coraor wrote:
> [...]

_______________________________________________
Wheel-builders mailing list
Wheel-builders at python.org
https://mail.python.org/mailman/listinfo/wheel-builders

--
-Robert

From olivier.grisel at ensta.org Wed Mar 2 16:21:55 2016
From: olivier.grisel at ensta.org (Olivier Grisel)
Date: Wed, 2 Mar 2016 22:21:55 +0100
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

> IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497).
> I think adding your voice there might help move it along. I think that
> PR is ready to merge, but just lacking the last necessary push from a
> pip maintainer to merge it.

If you manage to build manylinux1 wheels for psycopg2, it would be great
to publish them to a public URL and report that you can install them
successfully on various Linux versions (e.g. old and recent Debian,
Ubuntu, Fedora, Arch...) using this branch of pip.

It would also be great to test on a non-compatible variant of Linux
(e.g. Alpine Linux) that this version of pip ignores those manylinux1
wheels as expected.

I think that would help convince the pip maintainers that this PR is
ready for merge.

--
Olivier

From nate at bx.psu.edu Thu Mar 3 11:36:49 2016
From: nate at bx.psu.edu (Nate Coraor)
Date: Thu, 3 Mar 2016 11:36:49 -0500
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

On Wed, Mar 2, 2016 at 12:57 PM, Robert T.
McGibbon wrote:

> Hey guys,
>
> Sorry I'm out of the loop. Nathaniel correctly diagnosed the problem as
> thesis stuff. Quick responses below:
>
> > 1. With SOABI tag support now available for Python 2.X, would a PR
> > adding UCS-2 builds of Python to the docker image be accepted?
>
> +1 from me. I think there's an issue open for this. You just need to
> decide something rational for the directory layout of which Python goes
> where and what the symlinks will be.
>
> I think this will also require getting the UCS-2 detection code properly
> removed from auditwheel (https://github.com/pypa/auditwheel/pull/17).
> Sorry I haven't done that yet.

Okay, I'll get going on these.

> > 2. Is anyone working on the pip side of the changes necessary to
> > install manylinux1 wheels? If not, I'll do this.
>
> IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497).
> I think adding your voice there might help move it along. I think that
> PR is ready to merge, but just lacking the last necessary push from a
> pip maintainer to merge it.
>
> > Also, I wonder if we should vendor auditwheel into wheel itself so you
> > can directly build manylinux1 wheels.
>
> IMO it's too early for that. auditwheel is not mature, and besides you
> really need the docker env. On that note, any help getting auditwheel in
> better shape -- especially UX-wise if you have any free cycles -- would
> be totally awesome.

What do you have in mind?

--nate

From nate at bx.psu.edu Thu Mar 3 11:42:25 2016
From: nate at bx.psu.edu (Nate Coraor)
Date: Thu, 3 Mar 2016 11:42:25 -0500
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

On Wed, Mar 2, 2016 at 4:21 PM, Olivier Grisel wrote:
> If you manage to build manylinux1 wheels for psycopg2 it would be
> great to publish them to a public URL and report that you can install
> them successfully on various linux versions (e.g. old and recent
> debian, ubuntu, fedora, arch...) using this branch of pip.

I'm getting our wheel build system
(https://github.com/galaxyproject/starforge) set up to build manylinux1
wheels in an automated fashion. Here's the cp27mu wheel:

http://www.bx.psu.edu/~nate/wheels/

More to come once the UCS-2 Pythons are ready in the image.

> It would also be great to test on non-compatible variant of linux
> (e.g. alpine linux) that this version of pip ignores those manylinux1
> wheels as expected.
>
> I think that would help convince the pip maintainers that this PR is
> ready for merge.

From rmcgibbo at gmail.com Thu Mar 3 12:08:57 2016
From: rmcgibbo at gmail.com (Robert T. McGibbon)
Date: Thu, 3 Mar 2016 09:08:57 -0800
Subject: [Wheel-builders] libpythonX.Y.so.1
In-Reply-To: References: Message-ID:

> What do you have in mind?

I'm not 100% sure. One thing I was thinking is that the information could
be returned from `auditwheel show` kind of like a "report card". To be
more concrete, the manylinux1 policy has a couple of different prongs,
like symbol versions, external libraries, etc., so it could be presented
as an easy-to-understand table: "Your wheel passes on X, but needs work on
Y", or something. I was also thinking about using ANSI colors in the
output to make it easier to visually parse.

Another thing is that currently, `auditwheel repair` just vendors all
shared libraries into the wheel.
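(A quick way to eyeball what ended up vendored: a wheel is just a zip file, so its contents can be listed directly. Hypothetical helper, matching on filenames only, unlike auditwheel's actual ELF analysis:)

```python
import posixpath
import zipfile

def bundled_shared_objects(wheel_path):
    """List the shared objects inside a wheel.

    Hypothetical helper for inspecting what `auditwheel repair` grafted
    in; wheels are zip archives, so this just scans member names for a
    ".so" component rather than reading any ELF metadata.
    """
    with zipfile.ZipFile(wheel_path) as wheel:
        return sorted(
            name for name in wheel.namelist()
            if ".so" in posixpath.basename(name)
        )
```

On the repaired psycopg2 wheel above, this would list `psycopg2/_psycopg.so` plus each grafted library.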
I think Nathaniel mentioned that it might be a good idea to be a little more discerning -- maybe require the user to list libraries individually or something. Obviously there's often a licensing concern with vendoring in external shared libraries that might be GPL, so we probably want to somehow, through the UX, make sure that the user is aware of the consequences of their choices and isn't put in a situation where they might accidentally violate the GPL without their knowledge or intent. -Robert On Thu, Mar 3, 2016 at 8:36 AM, Nate Coraor wrote: > On Wed, Mar 2, 2016 at 12:57 PM, Robert T. McGibbon > wrote: > >> Hey guys, >> >> Sorry I'm out of the loop. Nathaniel correctly diagnosed the problem as >> thesis stuff. Quick responses below: >> >> > 1. With SOABI tag support now available for Python 2.X, would a PR >> adding UCS-2 builds of Python to the docker image be accepted? >> >> +1 from me. I think there's an issue open for this. You just need to >> decide something rational for the directory layout of which Python goes >> where and what the symlinks will be. >> >> I think this will also require getting the UCS-2 detection code properly >> removed from auditwheel (https://github.com/pypa/auditwheel/pull/17). >> Sorry I haven't done that yet. >> > > Okay, I'll get going on these. > > >> > 2. Is anyone working on the pip side of the changes necessary to >> install manylinux1 wheels? If not, I'll do this. >> >> IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497). >> I think adding your voice there might help move it along. I think that PR >> is ready to merge, but just lacking the last necessary push from a pip >> maintainer to merge it. >> >> >> > Also, I wonder if we should vendor auditwheel into wheel itself so you >> can directly build manylinux1 wheels. >> >> IMO it's too early for that. auditwheel is not mature, and besides you >> really need the docker env. 
On that note, any help getting auditwheel in >> better shape -- especially UX-wise if you have any free cycles -- would be >> totally awesome. >> > > What do you have in mind? > > --nate > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Mar 3 18:01:44 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 3 Mar 2016 15:01:44 -0800 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: On Wed, Mar 2, 2016 at 1:21 PM, Olivier Grisel wrote: >> IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497). I think adding your voice there might help move it along. I think that PR is ready to merge, but just lacking the last necessary push from a pip maintainer to merge it. > > If you manage to build manylinux1 wheels for psycopg2 it would be > great to publish them to a public URL and report that you can install > them successfully on various linux versions (e.g. old and recent > debian, ubuntu, fedora, arch...) using this branch of pip. > > It would also be great to test on non-compatible variant of linux > (e.g. alpine linux) that this version of pip ignores those manylinux1 > wheels as expected. > > I think that would help convince the pip maintainers that this PR is > ready for merge. FYI from IRC (#pypa-dev on Freenode) just now:

12:50:31> hello dstufft, are you planning to release 8.1 soon ?
12:50:38> xafer: yes
12:50:47> before monday
12:51:07> primarily for https://github.com/pypa/pip/pull/3497
12:53:42> ok, so not today ? :)
12:57:18> xafer: Depends on how I feel once the vicodin wears off and if there's pending stuff for 8.1 that will get completed in time if I wait a day or two

[...]

13:29:41> xafer: fwiw, the reason I'm planning on doing 8.1 prior to monday, is there's a good chance we can get pip 8.1 in Ubuntu 16.04 if we're released prior to monday

-- Nathaniel J.
Smith -- https://vorpus.org From njs at pobox.com Thu Mar 3 19:21:47 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 3 Mar 2016 16:21:47 -0800 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: Also, regarding the pypi enablement (allowing manylinux wheels onto pypi): njs: if someone writes PRs for pypi legacy and warehouse I can probably review/merge/deploy them this weekend tbh otherwise, it'll be whenever I get around to it -n On Thu, Mar 3, 2016 at 3:01 PM, Nathaniel Smith wrote: > On Wed, Mar 2, 2016 at 1:21 PM, Olivier Grisel wrote: >>> IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497). I think adding your voice there might help move it along. I think that PR is ready to merge, but just lacking the last necessary push from a pip maintainer to merge it. >> >> If you manage to build manylinux1 wheels for psycopg2 it would be >> great to publish them to a public URL and report that you can install >> them successfully on various linux versions (e.g. old and recent >> debian, ubuntu, fedora, arch...) using this branch of pip. >> >> It would also be great to test on non-compatible variant of linux >> (e.g. alpine linux) that this version of pip ignores those manylinux1 >> wheels as expected. >> >> I think that would help convince the pip maintainers that this PR is >> ready for merge. > > FYI from IRC (#pypa-dev on Freenode) just now: > > 12:50:31> hello dstufft, are you planning to release 8.1 soon ? > 12:50:38> xafer: yes > 12:50:47> before monday > 12:51:07> primarily for https://github.com/pypa/pip/pull/3497 > 12:53:42> ok, so not today ? :) > 12:57:18> xafer: Depends on how I feel once the vicodin > wears off and if there's pending stuff for 8.1 that will get completed > in time if I wait a day or two > > [...] 
> > 13:29:41> xafer: fwiw, the reason I'm planning on doing 8.1 > prior to monday, is there's a good chance we can get pip 8.1 in Ubuntu > 16.04 if we're released prior to monday > > -- > Nathaniel J. Smith -- https://vorpus.org -- Nathaniel J. Smith -- https://vorpus.org From olivier.grisel at ensta.org Fri Mar 4 11:00:33 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Fri, 4 Mar 2016 17:00:33 +0100 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: #3497 has been merged into the pip develop branch \o/. -- Olivier From nate at bx.psu.edu Fri Mar 4 11:03:12 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 4 Mar 2016 11:03:12 -0500 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: Congrats! On Fri, Mar 4, 2016 at 11:00 AM, Olivier Grisel wrote: > #3497 has been merged into the pip develop branch \o/. > > -- > Olivier > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Mar 4 11:23:59 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 4 Mar 2016 08:23:59 -0800 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: On Mar 4, 2016 8:00 AM, "Olivier Grisel" wrote: > > #3497 has been merged into the pip develop branch \o/. WOOT Thanks Robert for writing the patch, and Olivier for so patiently shepherding it through! ...so, now who wants to write the pypi patches? :-) (See: https://bitbucket.org/pypa/pypi/issues/385/implement-pep-513 https://github.com/pypa/warehouse ) -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nate at bx.psu.edu Fri Mar 4 17:02:18 2016 From: nate at bx.psu.edu (Nate Coraor) Date: Fri, 4 Mar 2016 17:02:18 -0500 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: On Wed, Mar 2, 2016 at 4:21 PM, Olivier Grisel wrote: > > IIRC, there is an open PR to pip (https://github.com/pypa/pip/pull/3497). > I think adding your voice there might help move it along. I think that PR > is ready to merge, but just lacking the last necessary push from a pip > maintainer to merge it. > > If you manage to build manylinux1 wheels for psycopg2 it would be > great to publish them to a public URL and report that you can install > them successfully on various linux versions (e.g. old and recent > debian, ubuntu, fedora, arch...) using this branch of pip. > > It would also be great to test on non-compatible variant of linux > (e.g. alpine linux) that this version of pip ignores those manylinux1 > wheels as expected. > I think that would help convince the pip maintainers that this PR is > ready for merge. > So this wasn't really required now that it's been merged, but I wanted to prove for my own sake that these wheels are going to work. Indeed they did, with flying colors. psycopg2 wheels built on the modified manylinux1 image (modified to include UCS-2 Python builds) and assembled with auditwheel install and work (with SSL!) on as-bare-as-possible images of Debian 7 and 8, Ubuntu 12.04 and 14.04, Fedora 21, CentOS 6 and 7, openSUSE 13.2, and refused to install on Alpine. I tested with as many "standard" Python and setuptools versions as possible (e.g. Python 2 and 3 from APT on Debianish systems; standard 2.6 plus 2.7 and 3.3 from SCL on CentOS 6, standard 2.7 plus 3.3 from EPEL on CentOS 7, etc.). None of these standard builds are UCS-2, so it would be good to test the wheels on some of these systems with a UCS-2 Python, but I ran out of time for that. It would probably be a decent idea to test them with Enthought and Anaconda as well. 
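The install/refuse behavior described above (manylinux1 wheels installing on the glibc-based distros but being rejected on Alpine) is driven by the glibc probe that PEP 513 recommends installers perform: a manylinux1 wheel is acceptable when the runtime glibc is at least CentOS 5's 2.5, and musl-based Alpine has no glibc at all. A rough sketch along the lines of the PEP's reference check (not pip's actual code):

```python
# Sketch of the glibc compatibility probe suggested by PEP 513 for
# deciding whether a platform may install manylinux1 wheels.
import ctypes
import re

def glibc_version_string():
    # Returns e.g. "2.19" on Ubuntu 14.04, or None on non-glibc
    # platforms such as Alpine (musl), where manylinux1 must be refused.
    try:
        process_namespace = ctypes.CDLL(None)
        gnu_get_libc_version = process_namespace.gnu_get_libc_version
    except (OSError, AttributeError):
        return None
    gnu_get_libc_version.restype = ctypes.c_char_p
    return gnu_get_libc_version().decode("ascii")

def glibc_version_ok(version_str, required=(2, 5)):
    # manylinux1 targets CentOS 5, whose glibc is 2.5.
    m = re.match(r"(?P<major>[0-9]+)\.(?P<minor>[0-9]+)", version_str)
    if m is None:
        return False
    found = (int(m.group("major")), int(m.group("minor")))
    return found >= required
```

On any of the glibc distros listed above, `glibc_version_ok(glibc_version_string())` comes back True; on Alpine, `glibc_version_string()` returns None and the wheel is skipped.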
Wheels are here: https://depot.galaxyproject.org/nate/wheelhouse/ Build/test scripts/logs are here: https://gist.github.com/natefoo/dae16c8669388cd20406 And I PR'd the changes for UCS-2 Python builds in the Docker image: https://github.com/pypa/manylinux/pull/35 --nate > > -- > Olivier > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sun Mar 6 16:30:56 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 6 Mar 2016 13:30:56 -0800 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: On Fri, Mar 4, 2016 at 8:23 AM, Nathaniel Smith wrote: > On Mar 4, 2016 8:00 AM, "Olivier Grisel" wrote: >> >> #3497 has been merged into the pip develop branch \o/. > > WOOT > > Thanks Robert for writing the patch, and Olivier for so patiently > shepherding it through! > > ...so, now who wants to write the pypi patches? :-) > > (See: > https://bitbucket.org/pypa/pypi/issues/385/implement-pep-513 > https://github.com/pypa/warehouse > ) I went ahead and did it: https://bitbucket.org/pypa/pypi/pull-requests/105/allow-manylinux-wheels-as-per-pep-513/diff https://github.com/pypa/warehouse/pull/1011 So now we're just waiting for Donald to have a moment to review and deploy. -n -- Nathaniel J. Smith -- https://vorpus.org From stefanv at berkeley.edu Sun Mar 6 16:51:57 2016 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Sun, 6 Mar 2016 13:51:57 -0800 Subject: [Wheel-builders] libpythonX.Y.so.1 In-Reply-To: References: Message-ID: Nicely done, thanks Nathaniel. Stéfan On Mar 6, 2016 13:31, "Nathaniel Smith" wrote: > On Fri, Mar 4, 2016 at 8:23 AM, Nathaniel Smith wrote: > > On Mar 4, 2016 8:00 AM, "Olivier Grisel" > wrote: > >> > >> #3497 has been merged into the pip develop branch \o/. > > > > WOOT > > > > Thanks Robert for writing the patch, and Olivier for so patiently > > shepherding it through! > > > > ...so, now who wants to write the pypi patches?
:-) > > > > (See: > > https://bitbucket.org/pypa/pypi/issues/385/implement-pep-513 > > https://github.com/pypa/warehouse > > ) > > I went ahead and did it: > > https://bitbucket.org/pypa/pypi/pull-requests/105/allow-manylinux-wheels-as-per-pep-513/diff > https://github.com/pypa/warehouse/pull/1011 > > So now we're just waiting for Donald to have a moment to review and deploy. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Mon Mar 7 07:34:51 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 7 Mar 2016 13:34:51 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels Message-ID: Hi Matthew, It is my understanding that you have built manylinux1 wheels for the latest numpy and scipy releases at: https://nipy.bic.berkeley.edu/manylinux Those builds embed openblas but I don't see how to find the version number. Other downstream projects such as scikit-learn might need to run on the same openblas version. How did you build those wheels, do you have travis config somewhere or do you use docker locally? Do you plan to update the travis config script for the matrix entry that generates wheels to generate manylinux1 wheels instead? https://github.com/numpy/numpy/blob/master/.travis.yml#L62 At the moment those dev wheels are uploaded to: http://travis-dev-wheels.scipy.org/ The goal is to make it possible for downstream projects to test against numpy master on travis but they need to be careful to install atlas as done in numpy's .travis. Using manylinux1 wheels would make it easier to set up CI for downstream projects without having to worry about atlas. Do you plan to generate a clib_openblas wheel in the longer term?
This would reduce the size of scipy, pandas and scikit-learn manylinux1 wheels and increase installation speed for users of the scipy stack. -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From matthew.brett at gmail.com Mon Mar 7 14:39:43 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 7 Mar 2016 11:39:43 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Hi, On Mon, Mar 7, 2016 at 4:34 AM, Olivier Grisel wrote: > Hi Matthew, > > It is my understanding that you have built manylinux1 wheels for the > latest numpy and scipy releases at: > > https://nipy.bic.berkeley.edu/manylinux I have. I was testing out the manylinux build system, trying to get a set of wheels we can use on travis-ci for different versions of Ubuntu. > Those builds embed openblas but I don't see how to find the version > number. Other downstream projects such as scikit-learn might need to > run on the same openblas version. > > How did you build those wheels, do you have travis config somewhere or > do you use docker locally? I am using docker locally for now. My build scripts are here: https://github.com/matthew-brett/manylinux-builds > Do you plan to update the travis config script for the matrix entry > that generate wheels to generate manylinux1 wheels instead? You mean, within the travis-ci docker container, open up another docker with the manylinux image? > https://github.com/numpy/numpy/blob/master/.travis.yml#L62 > > At the moment those dev wheels are uploaded to: > > http://travis-dev-wheels.scipy.org/ > > The goal is to make it possible for downstream projects to test > against numpy master on travis but they need to be careful to install > atlas as done in numpy's .travis. Using manylinux1 wheels would make > it easier to setup CI for dowstream project without having to worry > about atlas. Yes, true. > Do you plan to generate a clib_openblas wheel in the longer term? 
> This would reduce the size of scipy, pandas and scikit-learn > manylinux1 wheels and increase installation speed for users of the > scipy stack. There's the main sticking point(s). First - should we be using openblas for the manylinux wheels? We know of at least one bug that is not yet fixed in openblas master, and our vague suspicion is that there may be quite a few more: https://github.com/xianyi/OpenBLAS/issues/783 As you know, for Windows, we dropped back to ATLAS for now : https://github.com/numpy/numpy/issues/5479 https://mail.scipy.org/pipermail/numpy-discussion/2016-March/075125.html Second - how should we go about naming / building / distributing our external (non-Python) dependencies? Cheers, Matthew From olivier.grisel at ensta.org Mon Mar 7 15:29:22 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 7 Mar 2016 21:29:22 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: >> Do you plan to update the travis config script for the matrix entry >> that generate wheels to generate manylinux1 wheels instead? > > You mean, within the travis-ci docker container, open up another > docker with the manylinux image? I mean changing the .travis.yml config of numpy to enable the docker service so as to use the manylinux1 image to build and run tests on manylinux1 wheels for the master branch of numpy. > Second - how should we go about naming / building / distributing our > external (non-Python) dependencies? I think Nathaniel suggested clib_<libname>, that is clib_openblas in our case. > First - should we be using openblas for the manylinux wheels? We know > of at least one bug that is not yet fixed in openblas master, and our > vague suspicion is that there may be quite a few more: Alright, we can defer the discussion to externalize the BLAS/LAPACK implementation to later. Let's get some wheel out with embedded openblas for now.
-- Olivier From matthew.brett at gmail.com Mon Mar 7 15:33:38 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 7 Mar 2016 12:33:38 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Hi, On Mon, Mar 7, 2016 at 12:29 PM, Olivier Grisel wrote: >>> Do you plan to update the travis config script for the matrix entry >>> that generate wheels to generate manylinux1 wheels instead? >> >> You mean, within the travis-ci docker container, open up another >> docker with the manylinux image? > > I mean changing the .travis.yml config of numpy to enable the docker > service so as to use the manylinux1 image to build and run tests on > manylinux1 wheels for the master branch of numpy. Do you happen to have any good examples to hand? Sorry to be lazy, I haven't used docker in travis yet. >> Second - how should we go about naming / building / distributing our >> external (non-Python) dependencies? > > I think Nathaniel suggested clib_<libname>, that is clib_openblas > in our case. The naming step is certainly the easiest! But there are other problems. The most obvious one is - what package structure should we go for? A prefix within the unpacked directory? (clib_openblas/lib clib_openblas/include ...)? >> First - should we be using openblas for the manylinux wheels? We know >> of at least one bug that is not yet fixed in openblas master, and our >> vague suspicion is that there may be quite a few more: > > Alright, we can defer the discussion to externalize the BLAS/LAPACK > implementation to later. Let's get some wheel out with embedded > openblas for now. What about the problems with openblas? Should we instead get a wheel out with embedded ATLAS?
Cheers, Matthew From njs at pobox.com Mon Mar 7 20:27:21 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 7 Mar 2016 17:27:21 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On Mon, Mar 7, 2016 at 12:33 PM, Matthew Brett wrote: > What about the problems with openblas? Should we instead get a wheel > out with embedded ATLAS? Probably this is a discussion for the numpy list rather than wheel-builders, but I'd definitely vote for doing the same thing on as many platforms as possible. Trying to simultaneously support ATLAS on Windows + Accelerate on OSX + OpenBLAS on Linux is just silly :-). More on topic: the manylinux1 PR for warehouse (the next generation of PyPI) just got merged and deployed. The interesting thing about this is that the warehouse test install at warehouse.python.org is backed by the same database as pypi.python.org, so uploads made to warehouse will appear in pypi. Result: IIUC an incantation like this should work to upload manylinux1 wheels to pypi, right now: twine upload -r warehouse.python.org *-manylinux1*.whl -n -- Nathaniel J. Smith -- https://vorpus.org From matthew.brett at gmail.com Mon Mar 7 20:50:08 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 7 Mar 2016 17:50:08 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On Mon, Mar 7, 2016 at 5:27 PM, Nathaniel Smith wrote: > On Mon, Mar 7, 2016 at 12:33 PM, Matthew Brett wrote: >> What about the problems with openblas? Should we instead get a wheel >> out with embedded ATLAS? > > Probably this is a discussion for the numpy list rather than > wheel-builders, but I'd definitely vote for doing the same thing on as > many platforms as possible. Trying to simultaneously support ATLAS on > Windows + Accelerate on OSX + OpenBLAS on Linux is just silly :-). But - specifically - do you think we should use ATLAS or OpenBLAS on manylinux...? 
Matthew From njs at pobox.com Mon Mar 7 21:34:05 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 7 Mar 2016 18:34:05 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: I guess ATLAS? On Mar 7, 2016 5:50 PM, "Matthew Brett" wrote: > On Mon, Mar 7, 2016 at 5:27 PM, Nathaniel Smith wrote: > > On Mon, Mar 7, 2016 at 12:33 PM, Matthew Brett > wrote: > >> What about the problems with openblas? Should we instead get a wheel > >> out with embedded ATLAS? > > > > Probably this is a discussion for the numpy list rather than > > wheel-builders, but I'd definitely vote for doing the same thing on as > > many platforms as possible. Trying to simultaneously support ATLAS on > > Windows + Accelerate on OSX + OpenBLAS on Linux is just silly :-). > > But - specifically - do you think we should use ATLAS or OpenBLAS on > manylinux...? > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmail.com Tue Mar 8 04:44:59 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Tue, 8 Mar 2016 10:44:59 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Concerning OpenBLAS: as there is a lot of ongoing work at OpenBLAS develop I think it is worth discussing https://github.com/xianyi/OpenBLAS/issues/783#issuecomment-190457525 and https://github.com/numpy/numpy/issues/5479#issuecomment-184472378 and potentially other bugs in a separate OpenBLAS issue with the OpenBLAS developers. OpenBLAS is the most promising free and optimized implementation today. ATLAS *may* be used in future for 32bit, as OpenBLAS won't get new assembler kernels for x86 (32bit Intel) anymore (@wernsaar, priv. comm.). For 64bit architectures the picture is different. It would be unfortunate to refuse OpenBLAS due to this. Carl 2016-03-08 3:34 GMT+01:00 Nathaniel Smith : > I guess ATLAS?
> On Mar 7, 2016 5:50 PM, "Matthew Brett" wrote: > >> On Mon, Mar 7, 2016 at 5:27 PM, Nathaniel Smith wrote: >> > On Mon, Mar 7, 2016 at 12:33 PM, Matthew Brett >> wrote: >> >> What about the problems with openblas? Should we instead get a wheel >> >> out with embedded ATLAS? >> > >> > Probably this is a discussion for the numpy list rather than >> > wheel-builders, but I'd definitely vote for doing the same thing on as >> > many platforms as possible. Trying to simultaneously support ATLAS on >> > Windows + Accelerate on OSX + OpenBLAS on Linux is just silly :-). >> >> But - specifically - do you think we should use ATLAS or OpenBLAS on >> manylinux...? >> >> Matthew >> > > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefanv at berkeley.edu Tue Mar 8 05:13:38 2016 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Tue, 8 Mar 2016 02:13:38 -0800 Subject: [Wheel-builders] Python 2.7, Python 3.5 on quay.io/pypa/manylinux1 Message-ID: Hey, everyone. I would like to build manylinux wheels for scikit-image v0.12. At the moment the default version of Python installed on that image is 2.4 (!), so I will need 2.6, 2.7, 3.4 and 3.5 to build a minimal set. Do you have any experience in installing these? I presume it cannot easily be done via yum without a special outside repository? Any advice appreciated!
Thanks, Stéfan From olivier.grisel at ensta.org Tue Mar 8 07:56:55 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 8 Mar 2016 13:56:55 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: If we can run all the scipy stack tests (say for instance numpy, scipy, pandas, scikit-learn, scikit-image, statsmodels) with the openblas built on the manylinux1 docker image using Matthew's script on a variety of boxes, then I am fine with using openblas. If running the tests reveals unresolved bugs / crashes in OpenBLAS, then I think we should go with atlas in the short term and re-examine that decision in a couple of months. Matthew, FYI to run docker in travis, you just need to enable the docker service in .travis.yml:

services:
  - docker

as done in: https://github.com/pypa/manylinux/blob/master/.travis.yml#L3 Then you can use the docker command line to run or build containers within a travis job. Let me know if you extend your scripts to build and upload wheels for the missing projects (scikit-learn, scikit-image, statsmodels and maybe others). I can run the tests on some cloud VMs and a couple of old and newer workstations at my work. -- Olivier Grisel From olivier.grisel at ensta.org Tue Mar 8 09:40:16 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 8 Mar 2016 15:40:16 +0100 Subject: [Wheel-builders] Python 2.7, Python 3.5 on quay.io/pypa/manylinux1 In-Reply-To: References: Message-ID: Hi Stéfan, The Python interpreters are installed in the /opt folder of the image.
More details on how to use them are in the readme of the github repo that builds the image: https://github.com/pypa/manylinux Matthew is working on a bunch of build scripts for many projects more or less related to numpy: https://github.com/matthew-brett/np-wheel-builder You might be interested in this thread as well: https://mail.python.org/pipermail/wheel-builders/2016-March/000017.html -- Olivier From olivier.grisel at ensta.org Tue Mar 8 09:43:00 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 8 Mar 2016 15:43:00 +0100 Subject: [Wheel-builders] Python 2.7, Python 3.5 on quay.io/pypa/manylinux1 In-Reply-To: References: Message-ID: BTW in general, if you want an external library, you need to build it from the source tarball as the centos 5 packages are likely to be very old. -- Olivier Grisel From olivier.grisel at ensta.org Tue Mar 8 10:19:43 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 8 Mar 2016 16:19:43 +0100 Subject: [Wheel-builders] pip 8.1 with manylinux1 support is likely to make it into Ubuntu Xenial LTS Message-ID: https://launchpad.net/ubuntu/xenial/amd64/python3-pip Xenial is the future Ubuntu 16.04 LTS. -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From stefanv at berkeley.edu Tue Mar 8 13:24:18 2016 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Tue, 8 Mar 2016 10:24:18 -0800 Subject: [Wheel-builders] Python 2.7, Python 3.5 on quay.io/pypa/manylinux1 In-Reply-To: References: Message-ID: On 8 March 2016 at 06:40, Olivier Grisel wrote: > The Python interpreters are installed the /opt folder of the image. > More details on how to use in the readme of the github repo that > builds the image: > > https://github.com/pypa/manylinux Thanks for the helpful tips, Olivier and Robert!
Stéfan From matthew.brett at gmail.com Tue Mar 8 13:37:17 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 8 Mar 2016 10:37:17 -0800 Subject: [Wheel-builders] Built ATLAS etc RPMS Message-ID: Hi, At great personal expense (of time, boredom and frustration) I built 64-bit SSE2 atlas / lapack / unoptimized BLAS RPMs using the latest Fedora RPM source packages as templates. Build script: https://github.com/matthew-brett/manylinux-builds/blob/master/build_atlas_rpm.sh RPMS: http://nipy.bic.berkeley.edu/manylinux/rpms/x86_64/ Cheers, Matthew From matthew.brett at gmail.com Tue Mar 8 13:53:34 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 8 Mar 2016 10:53:34 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Hi, On Tue, Mar 8, 2016 at 4:56 AM, Olivier Grisel wrote: > If we can run all the scipy stack tests (say for instance numpy, > scipy, pandas, scikit-learn, scikit-image, statsmodel) with the > openblas built on the manylinux1 docker image using Matthew's script > on a variety of boxes, then I am fine with using openblas. If running > the tests reveals unresolved bugs / crashes in OpenBLAS, then I think > we should go with atlas in the short term and re-examine that decision > in a couple of months. At the moment, we know of https://github.com/xianyi/OpenBLAS/issues/783 which is not yet fixed in master. I see that Zhang Xianyi has set up OpenBLAS buildbot runs already: https://github.com/xianyi/OpenBLAS/issues/785 I guess we could add to those with nightly build / test runs. We need to decide what to do now though. Should we work on building up some heavy-duty CI to convince ourselves OpenBLAS is reliable and commit after that, or should we accept the risk now, on the basis that we will have some chance of errors / crashes?
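A nightly driver of the sort Matthew describes doesn't need to be fancy: run each project's test suite in a fresh interpreter (so an OpenBLAS crash can't take down the whole driver) and collect pass/fail results. A hypothetical sketch only; the project list and the `<pkg>.test()` invocation convention are illustrative, not every listed project exposes one:

```python
# Minimal sketch of a nightly smoke-test driver for a wheel set.
import subprocess
import sys

# Projects whose test suites would gate an OpenBLAS-based wheel set.
PROJECTS = ["numpy", "scipy", "pandas", "statsmodels"]

def run_suite(project):
    # Each suite runs in a fresh interpreter so that a hard crash
    # (e.g. an OpenBLAS segfault) only fails that one project.
    cmd = [sys.executable, "-c", "import {0}; {0}.test()".format(project)]
    return subprocess.call(cmd) == 0

def report(results):
    # results: {project_name: passed_bool} -> human-readable summary
    return {name: ("PASS" if ok else "FAIL") for name, ok in results.items()}
```

A cron job could then run `report({p: run_suite(p) for p in PROJECTS})` against each candidate wheelhouse and mail the summary.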
> Matthew, FYI to run docker in travis, you just need to enable the > docker service in .travis.yml: > > services: > - docker > > as done in: https://github.com/pypa/manylinux/blob/master/.travis.yml#L3 > > Then you can use the docker command line to run or build containers > within a travis job. Thanks - yes - should have thought of that one. > Let me know if you extend your scripts to build and upload wheels for > the missing projects (scikit-learn, scikit-image, statsmodel and > maybe others). I can run the tests on some cloud VMs and a couple of > old and newer workstations at my work. I would love to work out some good way of setting up CI for this - a conversation or thread would help a lot - I'm proceeding in a rather ad-hoc way at the moment. Cheers, Matthew From matthew.brett at gmail.com Tue Mar 8 14:15:14 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 8 Mar 2016 11:15:14 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On Tue, Mar 8, 2016 at 10:53 AM, Matthew Brett wrote: > Hi, > > On Tue, Mar 8, 2016 at 4:56 AM, Olivier Grisel wrote: >> If we can run all the scipy stack tests (say for instance numpy, >> scipy, pandas, scikit-learn, scikit-image, statsmodel) with the >> openblas built on the manylinux1 docker image using Matthew's script >> on a variety of boxes, then I am fine with using openblas. If running >> the tests reveals unresolved bugs / crashes in OpenBLAS, then I think >> we should go with atlas in the short term and re-examine that decision >> in a couple of months. > > At the moment, we know of > https://github.com/xianyi/OpenBLAS/issues/783 which is not yet fixed > in master. > > I see that Zhang Xianyi has set up OpenBLAS buildbot runs already: > > https://github.com/xianyi/OpenBLAS/issues/785 > > I guess we could add to those with nightly build / test runs. > > We need to decide what to do now though. 
Should we work on building > up some heavy-duty CI to convince ourselves OpenBLAS is reliable and > commit after that, or should we accept the risk now, on the basis that > we will have some chance of errors / crashes? Specifically - if we could run the heavy-duty CI now, with some version of OpenBLAS, where the numpy scipy scikit-learn pandas statsmodels tests all pass, on a range of machines, would that be enough to make us commit to OpenBLAS, both now and in the long term? Matthew From stefanv at berkeley.edu Tue Mar 8 14:50:55 2016 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Tue, 8 Mar 2016 11:50:55 -0800 Subject: [Wheel-builders] Built ATLAS etc RPMS In-Reply-To: References: Message-ID: On 8 March 2016 at 10:37, Matthew Brett wrote: > At great personal expense (of time, boredom and frustration) I built My skimage build just complained that it needed BLAS, so thanks Matthew! Stéfan From njs at pobox.com Tue Mar 8 15:30:06 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 8 Mar 2016 12:30:06 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On Mar 8, 2016 11:16, "Matthew Brett" wrote: > > On Tue, Mar 8, 2016 at 10:53 AM, Matthew Brett wrote: > > Hi, > > > > On Tue, Mar 8, 2016 at 4:56 AM, Olivier Grisel wrote: > >> If we can run all the scipy stack tests (say for instance numpy, > >> scipy, pandas, scikit-learn, scikit-image, statsmodel) with the > >> openblas built on the manylinux1 docker image using Matthew's script > >> on a variety of boxes, then I am fine with using openblas. If running > >> the tests reveals unresolved bugs / crashes in OpenBLAS, then I think > >> we should go with atlas in the short term and re-examine that decision > >> in a couple of months. > > > > At the moment, we know of > > https://github.com/xianyi/OpenBLAS/issues/783 which is not yet fixed > > in master.
> > > > I see that Zhang Xianyi has set up OpenBLAS buildbot runs already: > > > > https://github.com/xianyi/OpenBLAS/issues/785 > > > > I guess we could add to those with nightly build / test runs. > > > > We need to decide what to do now though. Should we work on building > > up some heavy-duty CI to convince ourselves OpenBLAS is reliable and > > commit after that, or should we accept the risk now, on the basis that > > we will have some chance of errors / crashes? > > Specifically - if we could run the heavy-duty CI now, with some > version of OpenBLAS, where the numpy scipy scikit-learn pandas > statsmodels tests all pass, on a range of machines, would that be > enough to make us commit to OpenBLAS, both now and in the long term? I think a lot of the people who might care about this aren't on this mailing list? -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Tue Mar 8 17:18:54 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 8 Mar 2016 23:18:54 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Indeed, let's move back this discussion to the numpy ML. -- Olivier From stefanv at berkeley.edu Wed Mar 9 02:28:42 2016 From: stefanv at berkeley.edu (Stéfan van der Walt) Date: Tue, 8 Mar 2016 23:28:42 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On 7 March 2016 at 17:27, Nathaniel Smith wrote: > Result: IIUC an incantation like this should work to upload manylinux1 > wheels to pypi, right now: > > twine upload -r warehouse.python.org *-manylinux1*.whl Unfortunately not. First, there's a bug in twine that won't let you use warehouse.python.org, but even after that is fixed they've removed the upload endpoint.
Stéfan From njs at pobox.com Wed Mar 9 18:30:51 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 9 Mar 2016 15:30:51 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: On Tue, Mar 8, 2016 at 11:28 PM, Stéfan van der Walt wrote: > On 7 March 2016 at 17:27, Nathaniel Smith wrote: >> Result: IIUC an incantation like this should work to upload manylinux1 >> wheels to pypi, right now: >> >> twine upload -r warehouse.python.org *-manylinux1*.whl > > Unfortunately not. First, there's a bug in twine that won't let you > use warehouse.python.org, but even after that is fixed they've removed > the upload endpoint. On further investigation, it looks like this isn't quite true, it's just that uploading is very obscure. Specifically, what you have to do is: 1) Create a ~/.pypirc with contents like [distutils] index-servers = pypi warehouse [pypi] username:XXX password:XXX [warehouse] repository:https://warehouse.python.org/pypi username:XXX password:XXX 2) Upload with a command like: twine -r warehouse upload ... Two further issues we noticed: a) Somehow Stéfan got a wheel whose platform tag was 'linux_x86_64.manylinux1_x86_64' (I guess auditwheel did this?). That's wrong -- it should just be 'manylinux1_x86_64' (because warehouse will reject uploads like 'linux_x86_64.manylinux1_x86_64', and because the 'linux' tag claims that it works on *all* linuxes, so even if you were allowed to upload it then it would just break people's systems). b) Once we got past that, there's some bug that causes an error message like "HTTPError: 400 Client Error: requires: Invalid Requirement. for url: https://warehouse.python.org/pypi". Not sure what's going on with that yet -- initial impression is that it's some sort of bug in warehouse's attempt to validate the Requires: or Requires-Dist: fields in the wheel. It's been reported upstream.
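The rule behind rejection (a) can be stated compactly: a wheel's platform tag field may be "compressed" (several tags joined with dots, meaning the wheel claims compatibility with each of them), and no component may be a bare `linux_*` tag. A simplified sketch of that rule — not warehouse's actual implementation:

```python
def is_uploadable_platform_tag(platform_tag):
    """Return True if a wheel platform tag should be accepted for upload.

    The tag may be several dot-joined tags.  A bare linux_* tag claims
    to work on *every* Linux, so its presence anywhere in the list
    makes the wheel unacceptable.
    """
    return all(not tag.startswith("linux_")
               for tag in platform_tag.split("."))
```

For example, `manylinux1_x86_64` passes, while both `linux_x86_64` and `linux_x86_64.manylinux1_x86_64` are rejected.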
If anyone out there wants to start uploading your own manylinux1 wheels, then feel free to give it a try and let us know how it goes :-). There's some chance you'll hit the requirements bug, but then again, you might not. -n -- Nathaniel J. Smith -- https://vorpus.org From msarahan at gmail.com Wed Mar 9 18:49:18 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Wed, 09 Mar 2016 23:49:18 +0000 Subject: [Wheel-builders] C++ ABI v5 Message-ID: Howdy, Are you all aware of the looming move to the newer C++ ABI for GCC? http://developers.redhat.com/blog/2015/02/05/gcc5-and-the-c11-abi/ This won't have humongous impact on Python as a whole, as it affects only C++ packages. However, for those packages, I think we'll see breakage on new platforms. Importantly, the upcoming Ubuntu 16.04 release will use that newer ABI (15.10 already does, but 16.04 is LTS). Many distros are ending up recompiling their entire package libraries. Just curious - how (and when) will the manylinux docker image tackle this? We (Continuum) don't have a solid answer yet. We are looking into a docker image similar to manylinux that uses a newer GCC that can output either ABI: https://github.com/ContinuumIO/docker-images/pull/20 We welcome feedback and discussion. Best, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Mar 9 19:16:26 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 9 Mar 2016 16:16:26 -0800 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: Hi Michael, On Wed, Mar 9, 2016 at 3:49 PM, Michael Sarahan wrote: > Howdy, > > Are you all aware of the looming move to the newer C++ ABI for GCC? > http://developers.redhat.com/blog/2015/02/05/gcc5-and-the-c11-abi/ > > This won't have humongous impact on Python as a whole, as it affects only > C++ packages. However, for those packages, I think we'll see breakage on > new platforms. 
Importantly, the upcoming Ubuntu 16.04 release will use that > newer ABI (15.10 already does, but 16.04 is LTS). Many distros are ending > up recompiling their entire package libraries. To check if I'm understanding the problem correctly... it's not that C++ packages will be broken in general, right? The specific situation where things break is: 1) you distribute a C++ library that's built using the old ABI, AND 2) your C++ library has a public API/ABI that involves some of parts of the stdlib whose ABI changed (e.g. std::string), AND 3) your users running Ubuntu 15.10 (for example) want to compile their own code against your C++ library's API/ABI If all these things happen, then the users' builds will fail by default (because the users' compiler will default to using the new ABI, but your package is built using the old ABI), and your users will have to add -D_GLIBCXX_USE_CXX11_ABI=0 to their compile lines. And then AFAICT there are fundamentally two strategies one could use to deal with this: - modify your C++ library so that it exports both old- and new-versions of its ABI from the same .so simultaneously. This is what libstdc++ does, and what that blog post describes how to do. OTOH this requires non-trivial modifications to the package source, and it's unlikely that any distributor is going to try and go through and hack up a giant C++ library to do this (since the cases we're worried about here are like... Qt and LLVM, right? And now that I think about it I'm not even sure whether Qt's ABI is affected, since they go to some effort to avoid using standard library classes in their public ABI -- QString instead of std::string, etc.) - make a build of your C++ library that just uses the new ABI, and figure out some way to manage the resulting hassles involved in keep track of the two builds -- can they be installed simultaneously? how do you make sure that only systems with a new libstdc++ get the new version of the library? 
or do you switch to distributing libstdc++ yourself and push everyone to the new ABI, even if they're on an old distro? if you do, then can RHEL5 system gcc build against a newer version of libstdc++, or would this mean that you need to start shipping gcc as well?) > Just curious - how (and when) will the manylinux docker image tackle this? > We (Continuum) don't have a solid answer yet. We are looking into a docker > image similar to manylinux that uses a newer GCC that can output either ABI: > https://github.com/ContinuumIO/docker-images/pull/20 The manylinux image doesn't really give much consideration to the problem of distributing non-Python C++ libraries, at least so far... I guess at some point there may be wheels that just contain LLVM or Qt, but we're not there yet :-). In any case, the hard problems to me all seem to be at the distro/package management level, not at the docker image level -- I think once one has decided how one wants to solve those problems, then the docker image part should follow? -n -- Nathaniel J. Smith -- https://vorpus.org From msarahan at gmail.com Wed Mar 9 19:33:09 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Thu, 10 Mar 2016 00:33:09 +0000 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: Hi Nathaniel, Thanks for your feedback. It sounds like you think this won't be much of an issue, and I really hope you're right! This feels something like the y2k bug to me: apocalyptic predictions, but perhaps ultimately little impact. The issues we have seen to a limited extent are: - User runs some of our software, which uses our (old) C++ ABI - In that process, our old libstdc++ now comes higher in priority than the system one - later, some system library goes looking in libstdc++ (ours) and barfs, taking down the whole process. This is really a runtime concern, unfortunately, not a build-time linking problem. 
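One way to see the two ABIs side by side: symbols compiled against the new ABI live in the `std::__cxx11` namespace, so the literal string `__cxx11` appears in their mangled names — which is what a check like `nm -D libfoo.so | grep __cxx11` keys on. A small Python sketch of that heuristic; the two example manglings below are illustrative `std::string::assign` symbols, not taken from any particular library:

```python
def uses_new_cxx11_abi(mangled_symbols):
    """Heuristic: True if any symbol was built against the post-GCC-5
    libstdc++ ABI.  In practice the symbol names would come from
    running e.g. `nm -D` on the shared library being checked."""
    return any("__cxx11" in sym for sym in mangled_symbols)

# std::string::assign(char const*), old (pre-GCC-5) ABI:
OLD_ABI_SYMBOL = "_ZNSs6assignEPKc"
# The same method under the new ABI, mangled inside std::__cxx11:
NEW_ABI_SYMBOL = "_ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE6assignEPKc"
```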
I'm less concerned about the problem of telling people how to compile things correctly - package builders are smart people. I just don't like the thought of accidental crashes when people have no idea what an ABI is. This is a distro/package management problem, to be sure, but I think it's relevant for this group (and certainly for Continuum) to think about, since we (manylinux-builders) are the packagers, and consumers of wheels that use c++ may run into this. I think the docker image PR I posted handles this in a reasonable way (providing two similar GCC's, each with different default C++ ABI settings) - and then we have to track which libstdc++ we send to people accordingly. This is sort of your second proposal, I think. We're counting on some modification to Conda in order to help obviate the user's need to know what an ABI is, or which one they have. If anyone wants to follow this work, or get involved, let me know. Best, Michael On Wed, Mar 9, 2016 at 6:16 PM Nathaniel Smith wrote: > Hi Michael, > > On Wed, Mar 9, 2016 at 3:49 PM, Michael Sarahan > wrote: > > Howdy, > > > > Are you all aware of the looming move to the newer C++ ABI for GCC? > > http://developers.redhat.com/blog/2015/02/05/gcc5-and-the-c11-abi/ > > > > This won't have humongous impact on Python as a whole, as it affects only > > C++ packages. However, for those packages, I think we'll see breakage on > > new platforms. Importantly, the upcoming Ubuntu 16.04 release will use > that > > newer ABI (15.10 already does, but 16.04 is LTS). Many distros are > ending > > up recompiling their entire package libraries. > > To check if I'm understanding the problem correctly... it's not that > C++ packages will be broken in general, right? The specific situation > where things break is: > > 1) you distribute a C++ library that's built using the old ABI, AND > 2) your C++ library has a public API/ABI that involves some of parts > of the stdlib whose ABI changed (e.g. 
std::string), AND > 3) your users running Ubuntu 15.10 (for example) want to compile their > own code against your C++ library's API/ABI > > If all these things happen, then the users' builds will fail by > default (because the users' compiler will default to using the new > ABI, but your package is built using the old ABI), and your users will > have to add -D_GLIBCXX_USE_CXX11_ABI=0 to their compile lines. > > And then AFAICT there are fundamentally two strategies one could use > to deal with this: > - modify your C++ library so that it exports both old- and > new-versions of its ABI from the same .so simultaneously. This is what > libstdc++ does, and what that blog post describes how to do. OTOH this > requires non-trivial modifications to the package source, and it's > unlikely that any distributor is going to try and go through and hack > up a giant C++ library to do this (since the cases we're worried about > here are like... Qt and LLVM, right? And now that I think about it I'm > not even sure whether Qt's ABI is affected, since they go to some > effort to avoid using standard library classes in their public ABI -- > QString instead of std::string, etc.) > > - make a build of your C++ library that just uses the new ABI, and > figure out some way to manage the resulting hassles involved in keep > track of the two builds -- can they be installed simultaneously? how > do you make sure that only systems with a new libstdc++ get the new > version of the library? or do you switch to distributing libstdc++ > yourself and push everyone to the new ABI, even if they're on an old > distro? if you do, then can RHEL5 system gcc build against a newer > version of libstdc++, or would this mean that you need to start > shipping gcc as well?) > > > Just curious - how (and when) will the manylinux docker image tackle > this? > > We (Continuum) don't have a solid answer yet. 
We are looking into a > docker > > image similar to manylinux that uses a newer GCC that can output either > ABI: > > https://github.com/ContinuumIO/docker-images/pull/20 > > The manylinux image doesn't really give much consideration to the > problem of distributing non-Python C++ libraries, at least so far... I > guess at some point there may be wheels that just contain LLVM or Qt, > but we're not there yet :-). In any case, the hard problems to me all > seem to be at the distro/package management level, not at the docker > image level -- I think once one has decided how one wants to solve > those problems, then the docker image part should follow? > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Mar 9 19:58:17 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 9 Mar 2016 16:58:17 -0800 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: Hi Michael, On Wed, Mar 9, 2016 at 4:33 PM, Michael Sarahan wrote: > Hi Nathaniel, > > Thanks for your feedback. It sounds like you think this won't be much of an > issue, and I really hope you're right! This feels something like the y2k > bug to me: apocalyptic predictions, but perhaps ultimately little impact. > > The issues we have seen to a limited extent are: > > - User runs some of our software, which uses our (old) C++ ABI > - In that process, our old libstdc++ now comes higher in priority than the > system one > - later, some system library goes looking in libstdc++ (ours) and barfs, > taking down the whole process. > > This is really a runtime concern, unfortunately, not a build-time linking > problem. I'm less concerned about the problem of telling people how to > compile things correctly - package builders are smart people. I just don't > like the thought of accidental crashes when people have no idea what an ABI > is. Oh, I see, I didn't realize you were shipping libstdc++. 
Right, shadowing the system libstdc++ with an old version is probably a bad idea :-). If you're shipping it anyway, then it seems like the solution to this particular problem is "just" to ship a newer version of libstdc++ that provides both ABIs? (I believe Julia uses a similar solution.) Actually building this libstdc++ may be tricky, but once you've done that then the runtime issue is solved, right? (At least with respect to this ABI break. In the longer run, this particular problem with shipping libstdc++ seems extra nasty, because there are a whole class of changes that the libstdc++ developers would *not* consider to be ABI breaks, but that will still break in your case. E.g. if they change the internal representation of some opaque type, and add new methods / new versions of methods that access this internal representation -- which they do, from time to time -- then everything is fine so long as you only have a single libstdc++ in process, but as soon as you have two libstdc++'s in the same ELF namespace with one partially shadowing the other, you are likely to have a bad time.) > This is a distro/package management problem, to be sure, but I think it's > relevant for this group (and certainly for Continuum) to think about, since > we (manylinux-builders) are the packagers, and consumers of wheels that use > c++ may run into this. I think the docker image PR I posted handles this in > a reasonable way (providing two similar GCC's, each with different default > C++ ABI settings) - and then we have to track which libstdc++ we send to > people accordingly. This is sort of your second proposal, I think. We're > counting on some modification to Conda in order to help obviate the user's > need to know what an ABI is, or which one they have. If you're shipping libstdc++ yourself, then why do you need to track the distro packages and provide multiple GCCs, etc.? Surely you just tell everyone to use the (new, backwards compatible) libstdc++ that you ship? 
-n -- Nathaniel J. Smith -- https://vorpus.org From msarahan at gmail.com Wed Mar 9 22:50:32 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Thu, 10 Mar 2016 03:50:32 +0000 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: Indeed about shadowing. That is not the default. If people install libgcc, they currently get 4.8. I will look into creating a libstdc++ that has both abi's. I didn't realize that was a possibility. Thanks! Best, Michael On Wed, Mar 9, 2016, 18:58 Nathaniel Smith wrote: > Hi Michael, > > On Wed, Mar 9, 2016 at 4:33 PM, Michael Sarahan > wrote: > > Hi Nathaniel, > > > > Thanks for your feedback. It sounds like you think this won't be much > of an > > issue, and I really hope you're right! This feels something like the y2k > > bug to me: apocalyptic predictions, but perhaps ultimately little impact. > > > > The issues we have seen to a limited extent are: > > > > - User runs some of our software, which uses our (old) C++ ABI > > - In that process, our old libstdc++ now comes higher in priority than > the > > system one > > - later, some system library goes looking in libstdc++ (ours) and barfs, > > taking down the whole process. > > > > This is really a runtime concern, unfortunately, not a build-time linking > > problem. I'm less concerned about the problem of telling people how to > > compile things correctly - package builders are smart people. I just > don't > > like the thought of accidental crashes when people have no idea what an > ABI > > is. > > Oh, I see, I didn't realize you were shipping libstdc++. Right, > shadowing the system libstdc++ with an old version is probably a bad > idea :-). > > If you're shipping it anyway, then it seems like the solution to this > particular problem is "just" to ship a newer version of libstdc++ that > provides both ABIs? (I believe Julia uses a similar solution.) 
> Actually building this libstdc++ may be tricky, but once you've done > that then the runtime issue is solved, right? > > (At least with respect to this ABI break. In the longer run, this > particular problem with shipping libstdc++ seems extra nasty, because > there are a whole class of changes that the libstdc++ developers would > *not* consider to be ABI breaks, but that will still break in your > case. E.g. if they change the internal representation of some opaque > type, and add new methods / new versions of methods that access this > internal representation -- which they do, from time to time -- then > everything is fine so long as you only have a single libstdc++ in > process, but as soon as you have two libstdc++'s in the same ELF > namespace with one partially shadowing the other, you are likely to > have a bad time.) > > > This is a distro/package management problem, to be sure, but I think it's > > relevant for this group (and certainly for Continuum) to think about, > since > > we (manylinux-builders) are the packagers, and consumers of wheels that > use > > c++ may run into this. I think the docker image PR I posted handles > this in > > a reasonable way (providing two similar GCC's, each with different > default > > C++ ABI settings) - and then we have to track which libstdc++ we send to > > people accordingly. This is sort of your second proposal, I think. > We're > > counting on some modification to Conda in order to help obviate the > user's > > need to know what an ABI is, or which one they have. > > If you're shipping libstdc++ yourself, then why do you need to track > the distro packages and provide multiple GCCs, etc.? Surely you just > tell everyone to use the (new, backwards compatible) libstdc++ that > you ship? > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Wed Mar 9 23:34:00 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 9 Mar 2016 20:34:00 -0800 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: On Wed, Mar 9, 2016 at 7:50 PM, Michael Sarahan wrote: > Indeed about shadowing. That is not the default. If people install libgcc, > they currently get 4.8. > > I will look into creating a libstdc++ that has both abi's. I didn't realize > that was a possibility. Thanks! Your only options are "libstdc++ with the old abi" and "libstdc++ with both abis". There is no way to get a libstdc++ that only has the new abi without the old abi. -n -- Nathaniel J. Smith -- https://vorpus.org From msarahan at gmail.com Wed Mar 9 23:56:55 2016 From: msarahan at gmail.com (Michael Sarahan) Date: Thu, 10 Mar 2016 04:56:55 +0000 Subject: [Wheel-builders] C++ ABI v5 In-Reply-To: References: Message-ID: I see that now in the red hat article I linked. Don't know how I missed it before. Thanks again. Michael On Wed, Mar 9, 2016, 22:34 Nathaniel Smith wrote: > On Wed, Mar 9, 2016 at 7:50 PM, Michael Sarahan > wrote: > > Indeed about shadowing. That is not the default. If people install > libgcc, > > they currently get 4.8. > > > > I will look into creating a libstdc++ that has both abi's. I didn't > realize > > that was a possibility. Thanks! > > Your only options are "libstdc++ with the old abi" and "libstdc++ with > both abis". There is no way to get a libstdc++ that only has the new > abi without the old abi. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From olivier.grisel at ensta.org Thu Mar 10 03:16:11 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 10 Mar 2016 09:16:11 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: 2016-03-10 0:30 GMT+01:00 Nathaniel Smith : > > a) Somehow St?fan got a wheel whose platform tag was the > 'linux_x86_64.manylinux1_x86_64' (I guess auditwheel did this?). > That's wrong -- it should just be 'manylinux1_x86_64' (because > warehouse will reject uploads like 'linux_x86_64.manylinux1_x86_64', > and because the 'linux' tag claims that it works on *all* linuxes, so > even if you were allowed to upload it then it would just break > people's systems). Yes `auditwheel addtag` did this. I think we need to have an option named repairtag, fixtag or applytag instead. Or even just have `auditwheel repair` change the tag without having a second command for that. -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From rmcgibbo at gmail.com Thu Mar 10 10:26:32 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Thu, 10 Mar 2016 07:26:32 -0800 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: If you get a chance, can you open an issue on auditwheel's tracker for this so I don't forget? On Thu, Mar 10, 2016 at 12:16 AM, Olivier Grisel wrote: > 2016-03-10 0:30 GMT+01:00 Nathaniel Smith : > > > > a) Somehow St?fan got a wheel whose platform tag was the > > 'linux_x86_64.manylinux1_x86_64' (I guess auditwheel did this?). > > That's wrong -- it should just be 'manylinux1_x86_64' (because > > warehouse will reject uploads like 'linux_x86_64.manylinux1_x86_64', > > and because the 'linux' tag claims that it works on *all* linuxes, so > > even if you were allowed to upload it then it would just break > > people's systems). > > Yes `auditwheel addtag` did this. I think we need to have an option > named repairtag, fixtag or applytag instead. 
Or even just have > `auditwheel repair` change the tag without having a second command for > that. > > -- > Olivier > http://twitter.com/ogrisel - http://github.com/ogrisel > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -- -Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Thu Mar 10 10:32:54 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 10 Mar 2016 16:32:54 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: Done: https://github.com/pypa/auditwheel/issues/19 From njs at pobox.com Tue Mar 15 11:58:44 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 15 Mar 2016 08:58:44 -0700 Subject: [Wheel-builders] manylinux deployment update In-Reply-To: References: Message-ID: The pypi enablement logic is now deployed to the testpypi server. (This is a public scratch copy of pypi at testpypi.python.org -- it doesn't share any data with the real pypi, so you can do whatever to it without worrying you'll break anything real.) And thanks to how the legacy pypi stack is implemented, there is no automated testing at all, so it's entirely possible that my PR broke everything :-) https://bitbucket.org/pypa/pypi/pull-requests/106/import-wheel-platform-checking-logic-from/diff If you have some wheels sitting around, manylinux and otherwise, please try uploading them to testpypi and confirm that it works! Once we have that confirmation then we can hit the switch to deploy the change to real pypi. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Tue Mar 15 15:01:33 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 15 Mar 2016 20:01:33 +0100 Subject: [Wheel-builders] manylinux deployment update In-Reply-To: References: Message-ID: Great! 
I did some tests by uploading some Cython wheels from Matthew's collection with twine, along with the official macosx wheel from the official pypi archive. https://testpypi.python.org/pypi/Cython I checked that wheel filenames with tags such as manylinux1_x86_64.linux_x86_64 and linux_x86_64 are rejected as expected (while manylinux1_x86_64 alone works, obviously). I also checked that pip 8.1 is able to install the wheel from that index: # pip install -i https://testpypi.python.org/pypi/ cython Collecting cython Downloading https://testpypi.python.org/packages/cp35/C/Cython/Cython-0.23.4-cp35-cp35m-manylinux1_x86_64.whl (6.2MB) 100% |████████████████████████████████| 6.2MB 153kB/s Installing collected packages: cython Successfully installed cython-0.23.4 # cython --version Cython version 0.23.4 Everything looks fine to me. -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From njs at pobox.com Thu Mar 17 21:43:55 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 17 Mar 2016 18:43:55 -0700 Subject: [Wheel-builders] changes to the layout of the manylinux docker image Message-ID: Hi all, Before hitting merge I wanted to get a few more eyes on this pull request from Nate Coraor to add narrow-unicode builds and other cleanups to the manylinux docker images: https://github.com/pypa/manylinux/pull/35 The main change from the end-user point of view is that the pre-installed python interpreters have been rearranged. Before: /opt/ contained one subdirectory for each python version (e.g. /opt/2.7.11), plus one symlink (e.g. /opt/2.7), plus there's other stuff in /opt as well. After: /opt/ contains one symlink named after the python SO-ABI version (e.g. /opt/2.7m, /opt/2.7mu), and the actual pythons are installed in /opt/python/ with names like /opt/python/2.7.11m. (The "SO-ABI tag" concept comes from PEP 3149, and is the same thing you see in wheel filenames like cp27mu.
The "m" is due to this weird thing where python upstream has decided to mark all builds that use "pymalloc" with an "m", even though that's the standard default option that everyone uses. So every build is marked "m".) Crucially, /opt/python/*/bin/python now gives a list of all python interpreters, /opt/python/*/bin/pip gives a list of all pips, etc. This will probably break existing scripts that expect /opt/2.7 to exist; OTOH the use of SO-ABI tags does seem pretty reasonable. Details: https://github.com/pypa/manylinux/pull/35#issuecomment-193817820 Anyway, I'll go ahead and merge in a day or two if no-one objects... -n -- Nathaniel J. Smith -- https://vorpus.org From stefanv at berkeley.edu Sun Mar 20 17:03:57 2016 From: stefanv at berkeley.edu (=?UTF-8?Q?St=C3=A9fan_van_der_Walt?=) Date: Sun, 20 Mar 2016 14:03:57 -0700 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows Message-ID: Hi, everyone I would like to distribute scikit-image wheels on Windows. To build these I require numpy and scipy, but where they come from is not too important. However, we have a bigger problem when it comes to users installing the skimage wheels, because they would also need numpy and scipy wheels to be available. From what I understand, we currently have the tools for building these on 2.7 and 3.4, but not on 3.5. What is the latest state of things, and what is my current best course of action? Stéfan From njs at pobox.com Sun Mar 20 18:30:46 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 20 Mar 2016 15:30:46 -0700 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: On Sun, Mar 20, 2016 at 2:03 PM, Stéfan van der Walt wrote: > Hi, everyone > > I would like to distribute scikit-image wheels on Windows. To build > these I require numpy and scipy, but where they come from is not too > important.
> > However, we have a bigger problem when it comes to users installing > the skimage wheels, because they would also need numpy and scipy > wheels to be available. > > From what I understand, we currently have the > tools for building these onr 2.7 and 3.4, but not on 3.5. > > What is the latest state of things, and what is my current best course > of action? For scipy that's pretty much the state of things, yeah. (Or, we're on the verge of being able to post 2.7 and 3.4 scipy wheels on Windows -- there's some tests of that happening right now on the mingwpy mailing list.) For numpy we already have 2.7 / 3.4 / 3.5 wheels uploaded (using somewhat-slow BLAS, but they work). As for your best course of action, though, I don't think the availability of numpy/scipy wheels changes anything? For 'pip install scikit-image' to work when only an sdist is available (the current situation), then numpy and scipy need to be already installed (IIUC). For 'pip install scikit-image' to to work when a wheel is available (the hopeful future situation), then numpy and scipy need to be either: - available as wheels - automatically buildable (very unlikely on windows) - already installed So it seems like uploading scikit-image wheels right now will only improve the range of things that work. In particular, they'll mean that 'pip install scikit-image' starts working usefully for people who have already installed numpy and scipy via Gohlke's builds, or via conda. Obviously it will be even better when scipy wheels are available, but incremental progress is still progress :-) I guess the one downside is that right now, if someone who is missing the crucial things tries to do 'pip install scikit-image' then your setup.py can give a nice error message explaining, whereas if a scikit-image wheel is available then that person will still be doomed, but may not get as nice an error message telling them so. 
(I.e., they'll get whatever error message you get when you try to do 'pip install scipy' on Windows.) -n -- Nathaniel J. Smith -- https://vorpus.org From ralf.gommers at gmail.com Mon Mar 21 02:16:39 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 21 Mar 2016 07:16:39 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: On Mon, Mar 21, 2016 at 7:16 AM, Ralf Gommers wrote: > > > On Sun, Mar 20, 2016 at 11:30 PM, Nathaniel Smith wrote: > >> On Sun, Mar 20, 2016 at 2:03 PM, St?fan van der Walt >> wrote: >> > Hi, everyone >> > >> > I would like to distribute scikit-image wheels on Windows. To build >> > these I require numpy and scipy, but where they come from is not too >> > important. >> > >> > However, we have a bigger problem when it comes to users installing >> > the skimage wheels, because they would also need numpy and scipy >> > wheels to be available. >> > >> > From what I understand, we currently have the >> > tools for building these onr 2.7 and 3.4, but not on 3.5. >> > >> > What is the latest state of things, and what is my current best course >> > of action? >> >> For scipy that's pretty much the state of things, yeah. (Or, we're on >> the verge of being able to post 2.7 and 3.4 scipy wheels on Windows -- >> there's some tests of that happening right now on the mingwpy mailing >> list.) For numpy we already have 2.7 / 3.4 / 3.5 wheels uploaded >> (using somewhat-slow BLAS, but they work). >> >> As for your best course of action, though, I don't think the >> availability of numpy/scipy wheels changes anything? >> >> For 'pip install scikit-image' to work when only an sdist is available >> (the current situation), then numpy and scipy need to be already >> installed (IIUC). 
>> >> For 'pip install scikit-image' to work when a wheel is available >> (the hopeful future situation), then numpy and scipy need to be >> either: >> - available as wheels >> - automatically buildable (very unlikely on windows) >> - already installed >> >> So it seems like uploading scikit-image wheels right now will only >> improve the range of things that work. In particular, they'll mean >> that 'pip install scikit-image' starts working usefully for people who >> have already installed numpy and scipy via Gohlke's builds, or via >> conda. Obviously it will be even better when scipy wheels are >> available, but incremental progress is still progress :-) >> >> I guess the one downside is that right now, if someone who is missing >> the crucial things tries to do 'pip install scikit-image' then your >> setup.py can give a nice error message explaining, whereas if a >> scikit-image wheel is available then that person will still be doomed, >> but may not get as nice an error message telling them so. (I.e., >> they'll get whatever error message you get when you try to do 'pip >> install scipy' on Windows.) >> > > One other thing that would improve things for users is to change the scikit-image > install instructions. > > They should start by pointing to Anaconda/Canopy/WinPython, and explaining > that that's definitely the better option for users (rather than pip) at > this point in time. > > Then, http://scikit-image.org/download says to use ``pip install -U > scikit-image``, which is never the right command - it will attempt to > upgrade numpy and scipy. Remove the -U or add --no-deps. > > Ralf > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From olivier.grisel at ensta.org Mon Mar 21 04:16:40 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Mon, 21 Mar 2016 09:16:40 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: I like what they did for the nilearn installation instructions: http://nilearn.github.io/introduction.html#installing-nilearn -- Olivier From cmkleffner at gmail.com Mon Mar 21 04:44:26 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Mon, 21 Mar 2016 09:44:26 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: Hi Stéfan, I carried out a temporary upload of 4 scipy-0.17.0 windows wheels on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads. These scipy wheels are built against and need the numpy wheels available on PYPI. Python-3.5 wheels are not available yet however. The reason is explained in the-vs-14-2015-runtime Please be aware that the 32-bit versions of scipy still emit a lot of errors (mostly Arpack). See https://gist.github.com/carlkl/9e9aa45f49fedb1a1ef7 Carl 2016-03-21 9:16 GMT+01:00 Olivier Grisel : > I like what they did for the nilearn installation instructions: > > http://nilearn.github.io/introduction.html#installing-nilearn > > -- > Olivier > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthew.brett at gmail.com Mon Mar 21 12:43:47 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 21 Mar 2016 09:43:47 -0700 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: On Mon, Mar 21, 2016 at 1:44 AM, Carl Kleffner wrote: > Hi Stéfan, > > I carried out a temporary upload of 4 scipy-0.17.0 windows wheels on > https://bitbucket.org/carlkl/mingw-w64-for-python/downloads. These scipy > wheels are built against and need the numpy wheels available on PYPI. > Python-3.5 wheels are not available yet however. The reason is explained in > the-vs-14-2015-runtime > > Please be aware that the 32-bit versions of scipy still emit a lot of > errors (mostly Arpack). See > https://gist.github.com/carlkl/9e9aa45f49fedb1a1ef7 Carl - do you think it would make sense to build a scipy for 3.5 linking against the msvcrt runtime? Thanks for doing the builds, Matthew From ralf.gommers at gmail.com Mon Mar 21 14:11:12 2016 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Mon, 21 Mar 2016 19:11:12 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: On Mon, Mar 21, 2016 at 9:16 AM, Olivier Grisel wrote: > I like what they did for the nilearn installation instructions: > > http://nilearn.github.io/introduction.html#installing-nilearn That's nice (except the pip command is equally wrong). The tab-widget-thingy is worth stealing. Ralf -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cmkleffner at gmail.com Mon Mar 21 15:57:35 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Mon, 21 Mar 2016 20:57:35 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: 2016-03-21 17:43 GMT+01:00 Matthew Brett : > On Mon, Mar 21, 2016 at 1:44 AM, Carl Kleffner > wrote: > > Hi Stéfan, > > > > I carried out a temporary upload of 4 scipy-0.17.0 windows wheels on > > https://bitbucket.org/carlkl/mingw-w64-for-python/downloads. These scipy > > wheels are built against and need the numpy wheels available on PYPI. > > Python-3.5 wheels are not available yet however. The reason is explained > in > > the-vs-14-2015-runtime > > > > Please be aware that the 32-bit versions of scipy still emit a lot of > > errors (mostly Arpack). See > > https://gist.github.com/carlkl/9e9aa45f49fedb1a1ef7 > > Carl - do you think it would make sense to build a scipy for 3.5 > linking against the msvcrt runtime? > I tried exactly this recently, but stumbled over npymath.lib (MSVC) during the build process. It seems that the VS 2015 static lib format cannot be used by binutils. I will try it again with npymath.a (mingwpy build). Thanks for doing the builds, > > Matthew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmkleffner at gmail.com Mon Mar 21 18:35:50 2016 From: cmkleffner at gmail.com (Carl Kleffner) Date: Mon, 21 Mar 2016 23:35:50 +0100 Subject: [Wheel-builders] Questions re: scipy / skimage wheels on Windows In-Reply-To: References: Message-ID: On https://bitbucket.org/carlkl/mingw-w64-for-python/downloads you can now download python-3.5 scipy wheels as well. These 3.5 binaries are linked against the good old msvcrt, as this seems to work for scipy. At least for running the tests. 
And don't forget: you need to install the numpy ATLAS wheel builds from PYPI (thanks Matthew). The 32-bit binaries have more or less the same problems as with 3.4 and 2.7 - mainly with Arpack: FAILED (KNOWNFAIL=98, SKIP=1657, errors=172, failures=27) The 64-bit binaries show 6 failures with test_continuous_basic FAILED (KNOWNFAIL=98, SKIP=1657, failures=6) Have fun Carl 2016-03-21 20:57 GMT+01:00 Carl Kleffner : > > > 2016-03-21 17:43 GMT+01:00 Matthew Brett : > >> On Mon, Mar 21, 2016 at 1:44 AM, Carl Kleffner >> wrote: >> > Hi Stéfan, >> > >> > I carried out a temporary upload of 4 scipy-0.17.0 windows wheels on >> > https://bitbucket.org/carlkl/mingw-w64-for-python/downloads. These >> scipy >> > wheels are built against and need the numpy wheels available on PYPI. >> > Python-3.5 wheels are not available yet however. The reason is >> explained in >> > the-vs-14-2015-runtime >> > >> > Please be aware that the 32-bit versions of scipy still emit a lot of >> > errors (mostly Arpack). See >> > https://gist.github.com/carlkl/9e9aa45f49fedb1a1ef7 >> >> Carl - do you think it would make sense to build a scipy for 3.5 >> linking against the msvcrt runtime? >> > > I tried exactly this recently, but stumbled over npymath.lib (MSVC) during > the build process. > It seems that the VS 2015 static lib format cannot be used by binutils. I > will try it again > with npymath.a (mingwpy build). > > Thanks for doing the builds, >> >> Matthew >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From olivier.grisel at ensta.org Tue Mar 22 09:45:43 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 22 Mar 2016 14:45:43 +0100 Subject: [Wheel-builders] manylinux1 wheels are now accepted in production PyPI Message-ID: https://bitbucket.org/pypa/pypi/pull-requests/106/import-wheel-platform-checking-logic-from/diff -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From njs at pobox.com Tue Mar 22 10:47:10 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 22 Mar 2016 07:47:10 -0700 Subject: [Wheel-builders] manylinux1 wheels are now accepted in production PyPI In-Reply-To: References: Message-ID: \o/ On Mar 22, 2016 06:46, "Olivier Grisel" wrote: > > https://bitbucket.org/pypa/pypi/pull-requests/106/import-wheel-platform-checking-logic-from/diff > > -- > Olivier > http://twitter.com/ogrisel - http://github.com/ogrisel > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Wed Mar 23 05:44:34 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 23 Mar 2016 10:44:34 +0100 Subject: [Wheel-builders] BLAS/LAPACK for manylinux1 wheels In-Reply-To: References: Message-ID: FYI the "auditwheel repair" command now fixes the tag information (both in the filename and the WHEEL metadata) by default. This fix is available in the master branch of auditwheel (not yet released, nor part of the manylinux docker images). 
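In case a concrete picture helps: the filename half of that retagging just swaps the platform tag, which is the final dashed component of a wheel's name. Here is a rough illustrative sketch - not auditwheel's actual code, which of course also rewrites the Tag entries in the WHEEL metadata inside the archive:

```python
def retag_wheel(filename, new_plat="manylinux1_x86_64"):
    """Swap the platform tag (the last dashed component) of a wheel
    filename, e.g. ...-linux_x86_64.whl -> ...-manylinux1_x86_64.whl."""
    stem, ext = filename.rsplit(".", 1)
    if ext != "whl":
        raise ValueError("not a wheel filename: %r" % filename)
    # Wheel names are name-version-pyver-abi-platform, so the platform
    # tag is always the final component.
    parts = stem.split("-")
    parts[-1] = new_plat
    return "-".join(parts) + ".whl"

print(retag_wheel("psycopg2-2.6.1-cp27-cp27mu-linux_x86_64.whl"))
# psycopg2-2.6.1-cp27-cp27mu-manylinux1_x86_64.whl
```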
https://github.com/pypa/auditwheel/pull/21 -- Olivier From olivier.grisel at ensta.org Wed Mar 23 07:00:36 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 23 Mar 2016 12:00:36 +0100 Subject: [Wheel-builders] Built ATLAS etc RPMS In-Reply-To: References: Message-ID: What about creating and pushing 2 new docker containers with those atlas packages pre-installed to make it easier for the scipy stack projects to build and test manylinux1 wheels on their master branch? How long does the atlas build last? Is it feasible on travis or circle ci? -- Olivier Grisel From matthew.brett at gmail.com Wed Mar 23 11:45:13 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 23 Mar 2016 08:45:13 -0700 Subject: [Wheel-builders] Built ATLAS etc RPMS In-Reply-To: References: Message-ID: Hi, On Wed, Mar 23, 2016 at 4:00 AM, Olivier Grisel wrote: > What about creating and pushing 2 new docker containers with those > atlas packages pre-installed to make it easier for the scipy stack > projects to build and test manylinux1 wheels on their master branch? > > How long does the atlas build last? Is it feasible on travis or circle ci? No, the build took many hours. Also, it's optimizing for the particular machine, so building on a virtual machine with unknown other loads, where CPU throttling might be operating, would likely make a total mess of the timing that ATLAS uses to choose the fastest algorithms. Matthew From matthew.brett at gmail.com Mon Mar 28 16:42:39 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 13:42:39 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts Message-ID: I'm installing a manylinux wheel on a Debian sid machine I have. 
For the manylinux wheel, but not a local build, I get the following error from `nosetests numpy.f2py`:

```
======================================================================
ERROR: test_kind.TestKind.test_all
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", line 381, in setUp
    try_run(self.inst, ('setup', 'setUp'))
  File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", line 471, in try_run
    return func()
  File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 358, in setUp
    module_name=self.module_name)
  File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 78, in wrapper
    memo[key] = func(*a, **kw)
  File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", line 149, in build_module
    __import__(module_name)
ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /tmp/tmpsFHJXE/_test_ext_module_5405.so)
```

Anyone out there with insight as to what's going on? Cheers, Matthew From matthew.brett at gmail.com Mon Mar 28 17:12:28 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 14:12:28 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 1:42 PM, Matthew Brett wrote: > I'm installing a manylinux wheel on a Debian sid machine I have. 
For > the manylinux wheel, but not a local build, I get the following error > from `nosetests numpy.f2py`: > > ``` > ====================================================================== > ERROR: test_kind.TestKind.test_all > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", > line 381, in setUp > try_run(self.inst, ('setup', 'setUp')) > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", > line 471, in try_run > return func() > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > line 358, in setUp > module_name=self.module_name) > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > line 78, in wrapper > memo[key] = func(*a, **kw) > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > line 149, in build_module > __import__(module_name) > ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > version `GFORTRAN_1.4' not found (required by > /tmp/tmpsFHJXE/_test_ext_module_5405.so) > ``` > > Anyone out there with insight as to what's going on? I guess what might be happening, is that the built f2py module should be linking against the system libgfortran, but in fact is finding the shipped gfortran. 
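To make that guess concrete, here is a toy model of the search order I suspect is in play (a big simplification of the real ld.so rules - DT_RPATH vs DT_RUNPATH, link scopes, and so on - with hypothetical paths): an RPATH entry picked up from an already-loaded object such as numpy's multiarray.so gets consulted before the default system directories, so the bundled CentOS-5-era libgfortran shadows the system copy even though only the system copy provides GFORTRAN_1.4.

```python
def find_library(name, rpath_dirs, env_dirs, default_dirs, available):
    # Toy resolver: RPATH entries (including ones inherited from
    # already-loaded objects) are searched before LD_LIBRARY_PATH,
    # which is searched before the default system directories.
    for d in rpath_dirs + env_dirs + default_dirs:
        if name in available.get(d, []):
            return d + "/" + name
    return None

# Hypothetical layout mirroring the failure: both copies exist, but
# only the system one would export the GFORTRAN_1.4 version node.
available = {
    "site-packages/numpy/.libs": ["libgfortran.so.3"],  # bundled, too old
    "/usr/lib/x86_64-linux-gnu": ["libgfortran.so.3"],  # system, new enough
}
found = find_library(
    "libgfortran.so.3",
    rpath_dirs=["site-packages/numpy/.libs"],  # e.g. from multiarray.so
    env_dirs=[],                               # LD_LIBRARY_PATH unset
    default_dirs=["/usr/lib/x86_64-linux-gnu"],
    available=available,
)
print(found)  # the old bundled copy wins
```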
Matthew From matthew.brett at gmail.com Mon Mar 28 17:29:17 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 14:29:17 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 2:12 PM, Matthew Brett wrote: > On Mon, Mar 28, 2016 at 1:42 PM, Matthew Brett wrote: >> I'm installing a manylinux wheel on a Debian sid machine I have. For >> the manylinux wheel, but not a local build, I get the following error >> from `nosetests numpy.f2py`: >> >> ``` >> ====================================================================== >> ERROR: test_kind.TestKind.test_all >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", >> line 381, in setUp >> try_run(self.inst, ('setup', 'setUp')) >> File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", >> line 471, in try_run >> return func() >> File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> line 358, in setUp >> module_name=self.module_name) >> File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> line 78, in wrapper >> memo[key] = func(*a, **kw) >> File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> line 149, in build_module >> __import__(module_name) >> ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >> version `GFORTRAN_1.4' not found (required by >> /tmp/tmpsFHJXE/_test_ext_module_5405.so) >> ``` >> >> Anyone out there with insight as to what's going on? 
> > I guess what might be happening, is that the built f2py module should > be linking against the system libgfortran, but in fact is finding the > shipped gfortran. I replicated the error with manylinux wheels on travis-ci, by adding `apt-get install gfortran` to the setup: https://travis-ci.org/matthew-brett/manylinux-testing/jobs/119075844#L278 Matthew From njs at pobox.com Mon Mar 28 17:30:43 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 28 Mar 2016 14:30:43 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mar 28, 2016 14:20, "Matthew Brett" wrote: > > On Mon, Mar 28, 2016 at 1:42 PM, Matthew Brett wrote: > > I'm installing a manylinux wheel on a Debian sid machine I have. For > > the manylinux wheel, but not a local build, I get the following error > > from `nosetests numpy.f2py`: > > > > ``` > > ====================================================================== > > ERROR: test_kind.TestKind.test_all > > ---------------------------------------------------------------------- > > Traceback (most recent call last): > > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", > > line 381, in setUp > > try_run(self.inst, ('setup', 'setUp')) > > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", > > line 471, in try_run > > return func() > > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > > line 358, in setUp > > module_name=self.module_name) > > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > > line 78, in wrapper > > memo[key] = func(*a, **kw) > > File "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", > > line 149, in build_module > > __import__(module_name) > > ImportError: 
/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > > version `GFORTRAN_1.4' not found (required by > > /tmp/tmpsFHJXE/_test_ext_module_5405.so) > > ``` > > > > Anyone out there with insight as to what's going on? > > I guess what might be happening, is that the built f2py module should > be linking against the system libgfortran, but in fact is finding the > shipped gfortran. I think this diagnosis is correct, but I don't know why it would be happening. The newly compiled module should be getting loaded into a fresh ELF context and find the system gfortran. We're not adding the .libs dir to LD_LIBRARY_PATH, right? Are we somehow adding the .libs dir to the built module's rpath? Some things to try: - run with LD_DEBUG=libs - check LD_LIBRARY_PATH - use readelf on the _test_ext_module.so to see if it has an rpath set -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Mon Mar 28 18:12:12 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 15:12:12 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 2:30 PM, Nathaniel Smith wrote: > On Mar 28, 2016 14:20, "Matthew Brett" wrote: >> >> On Mon, Mar 28, 2016 at 1:42 PM, Matthew Brett >> wrote: >> > I'm installing a manylinux wheel on a Debian sid machine I have. 
For >> > the manylinux wheel, but not a local build, I get the following error >> > from `nosetests numpy.f2py`: >> > >> > ``` >> > ====================================================================== >> > ERROR: test_kind.TestKind.test_all >> > ---------------------------------------------------------------------- >> > Traceback (most recent call last): >> > File >> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", >> > line 381, in setUp >> > try_run(self.inst, ('setup', 'setUp')) >> > File >> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", >> > line 471, in try_run >> > return func() >> > File >> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> > line 358, in setUp >> > module_name=self.module_name) >> > File >> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> > line 78, in wrapper >> > memo[key] = func(*a, **kw) >> > File >> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >> > line 149, in build_module >> > __import__(module_name) >> > ImportError: >> > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >> > version `GFORTRAN_1.4' not found (required by >> > /tmp/tmpsFHJXE/_test_ext_module_5405.so) >> > ``` >> > >> > Anyone out there with insight as to what's going on? >> >> I guess what might be happening, is that the built f2py module should >> be linking against the system libgfortran, but in fact is finding the >> shipped gfortran. > > I think this diagnosis is correct, but I don't know why it would be > happening. The newly compiled module should be getting loaded into a fresh > ELF context and find the system gfortran. 
We're not adding the .libs dir to > LD_LIBRARY_PATH, right? Are we somehow adding the .libs dir to the built > module's rpath? > > Some things to try: > - run with LD_DEBUG=libs > - check LD_LIBRARY_PATH > - use readelf on the _test_ext_module.so to see if it has an rpath set

I can't replicate this error in a fresh Python process:

ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /tmp/tmpjqst5B/_test_ext_module_5405.so)

$ cd /tmp/tmpjqst5B
$ python -c 'import _test_ext_module_5405'

No sign of a custom rpath:

$ readelf -d _test_ext_module_5405.so

Dynamic section at offset 0x5d60 contains 29 entries:
  Tag                Type           Name/Value
 0x0000000000000001 (NEEDED)        Shared library: [libpython2.7.so.1.0]
 0x0000000000000001 (NEEDED)        Shared library: [libgfortran.so.3]
 0x0000000000000001 (NEEDED)        Shared library: [libm.so.6]
 0x0000000000000001 (NEEDED)        Shared library: [libgcc_s.so.1]
 0x0000000000000001 (NEEDED)        Shared library: [libquadmath.so.0]
 0x0000000000000001 (NEEDED)        Shared library: [libc.so.6]
 0x000000000000000c (INIT)          0x17c0
 0x000000000000000d (FINI)          0x41e0
 0x0000000000000019 (INIT_ARRAY)    0x205d40
 0x000000000000001b (INIT_ARRAYSZ)  8 (bytes)
 0x000000000000001a (FINI_ARRAY)    0x205d48
 0x000000000000001c (FINI_ARRAYSZ)  8 (bytes)
 0x000000006ffffef5 (GNU_HASH)      0x1f0
 0x0000000000000005 (STRTAB)        0x9c8
 0x0000000000000006 (SYMTAB)        0x2a8
 0x000000000000000a (STRSZ)         1328 (bytes)
 0x000000000000000b (SYMENT)        24 (bytes)
 0x0000000000000003 (PLTGOT)        0x206000
 0x0000000000000002 (PLTRELSZ)      1128 (bytes)
 0x0000000000000014 (PLTREL)        RELA
 0x0000000000000017 (JMPREL)        0x1358
 0x0000000000000007 (RELA)          0xfe0
 0x0000000000000008 (RELASZ)        888 (bytes)
 0x0000000000000009 (RELAENT)       24 (bytes)
 0x000000006ffffffe (VERNEED)       0xf90
 0x000000006fffffff (VERNEEDNUM)    1
 0x000000006ffffff0 (VERSYM)        0xef8
 0x000000006ffffff9 (RELACOUNT)     18
 0x0000000000000000 (NULL)          0x0

Stopping in pdb: $ nosetests 
numpy.f2py --pdb ..........................................................................................................................................................................................................................................................................................................................................................................> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py(151)build_module() -> __import__(module_name) (Pdb) os.environ['LD_LIBRARY_PATH'] *** KeyError: 'LD_LIBRARY_PATH' (Pdb) module_name '_test_ext_module_5405' (Pdb) import _test_ext_module_5405 *** ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /tmp/tmp8KkcW2/_test_ext_module_5405.so) Running LD_DEBUG=libs nosetests /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/test_kind.py:TestKind.test_all gives no test error, but does show this: 18543: find library=libgfortran.so.3 [0]; searching 18543: search path=/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs (RPATH from file /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/multiarray.so) 18543: trying file=/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3 Matthew From matthew.brett at gmail.com Mon Mar 28 18:17:47 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 15:17:47 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 3:12 PM, Matthew Brett wrote: > On Mon, Mar 28, 2016 at 2:30 PM, Nathaniel Smith wrote: >> On Mar 28, 2016 14:20, "Matthew Brett" wrote: >>> >>> On Mon, Mar 
28, 2016 at 1:42 PM, Matthew Brett >>> wrote: >>> > I'm installing a manylinux wheel on a Debian sid machine I have. For >>> > the manylinux wheel, but not a local build, I get the following error >>> > from `nosetests numpy.f2py`: >>> > >>> > ``` >>> > ====================================================================== >>> > ERROR: test_kind.TestKind.test_all >>> > ---------------------------------------------------------------------- >>> > Traceback (most recent call last): >>> > File >>> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/case.py", >>> > line 381, in setUp >>> > try_run(self.inst, ('setup', 'setUp')) >>> > File >>> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/nose/util.py", >>> > line 471, in try_run >>> > return func() >>> > File >>> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >>> > line 358, in setUp >>> > module_name=self.module_name) >>> > File >>> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >>> > line 78, in wrapper >>> > memo[key] = func(*a, **kw) >>> > File >>> > "/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py", >>> > line 149, in build_module >>> > __import__(module_name) >>> > ImportError: >>> > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>> > version `GFORTRAN_1.4' not found (required by >>> > /tmp/tmpsFHJXE/_test_ext_module_5405.so) >>> > ``` >>> > >>> > Anyone out there with insight as to what's going on? >>> >>> I guess what might be happening, is that the built f2py module should >>> be linking against the system libgfortran, but in fact is finding the >>> shipped gfortran. >> >> I think this diagnosis is correct, but I don't know why it would be >> happening. 
The newly compiled module should be getting loaded into a fresh >> ELF context and find the system gfortran. We're not adding the .libs dir to >> LD_LIBRARY_PATH, right? Are we somehow adding the .libs dir to the built >> module's rpath? >> >> Some things to try: >> - run with LD_DEBUG=libs >> - check LD_LIBRARY_PATH >> - use readelf on the _test_ex_module.so to see if it has an rpath set > > I can't replicate this error in a fresh Python process: > > ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > version `GFORTRAN_1.4' not found (required by > /tmp/tmpjqst5B/_test_ext_module_5405.so) > > $ cd /tmp/tmpjqst5B > $ python -c 'import _test_ext_module_5405' > > No sign of a custom rpath: > > $ readelf -d _test_ext_module_5405.so > > Dynamic section at offset 0x5d60 contains 29 entries: > Tag Type Name/Value > 0x0000000000000001 (NEEDED) Shared library: [libpython2.7.so.1.0] > 0x0000000000000001 (NEEDED) Shared library: [libgfortran.so.3] > 0x0000000000000001 (NEEDED) Shared library: [libm.so.6] > 0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1] > 0x0000000000000001 (NEEDED) Shared library: [libquadmath.so.0] > 0x0000000000000001 (NEEDED) Shared library: [libc.so.6] > 0x000000000000000c (INIT) 0x17c0 > 0x000000000000000d (FINI) 0x41e0 > 0x0000000000000019 (INIT_ARRAY) 0x205d40 > 0x000000000000001b (INIT_ARRAYSZ) 8 (bytes) > 0x000000000000001a (FINI_ARRAY) 0x205d48 > 0x000000000000001c (FINI_ARRAYSZ) 8 (bytes) > 0x000000006ffffef5 (GNU_HASH) 0x1f0 > 0x0000000000000005 (STRTAB) 0x9c8 > 0x0000000000000006 (SYMTAB) 0x2a8 > 0x000000000000000a (STRSZ) 1328 (bytes) > 0x000000000000000b (SYMENT) 24 (bytes) > 0x0000000000000003 (PLTGOT) 0x206000 > 0x0000000000000002 (PLTRELSZ) 1128 (bytes) > 0x0000000000000014 (PLTREL) RELA > 0x0000000000000017 (JMPREL) 0x1358 > 0x0000000000000007 (RELA) 0xfe0 > 0x0000000000000008 (RELASZ) 888 (bytes) > 0x0000000000000009 (RELAENT) 24 (bytes) > 
0x000000006ffffffe (VERNEED) 0xf90 > 0x000000006fffffff (VERNEEDNUM) 1 > 0x000000006ffffff0 (VERSYM) 0xef8 > 0x000000006ffffff9 (RELACOUNT) 18 > 0x0000000000000000 (NULL) 0x0 > > Stopping in pdb: > > $ nosetests numpy.f2py --pdb > ..........................................................................................................................................................................................................................................................................................................................................................................> > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/util.py(151)build_module() > -> __import__(module_name) > (Pdb) os.environ['LD_LIBRARY_PATH'] > *** KeyError: 'LD_LIBRARY_PATH' > (Pdb) module_name > '_test_ext_module_5405' > (Pdb) import _test_ext_module_5405 > *** ImportError: > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > version `GFORTRAN_1.4' not found (required by > /tmp/tmp8KkcW2/_test_ext_module_5405.so) > > Running > > LD_DEBUG=libs nosetests > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/f2py/tests/test_kind.py:TestKind.test_all > > gives no test error, but does show this: > > 18543: find library=libgfortran.so.3 [0]; searching > 18543: search > path=/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs > (RPATH from file > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/multiarray.so) > 18543: trying > file=/home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3 And: $ python -c 'import _test_ext_module_5403' # No error $ python -c 'import numpy; import _test_ext_module_5403' # Error after importing numpy Traceback (most 
recent call last): File "", line 1, in ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by ./_test_ext_module_5403.so) Matthew From njs at pobox.com Mon Mar 28 18:33:22 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 28 Mar 2016 15:33:22 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mar 28, 2016 15:18, "Matthew Brett" wrote: [...] > And: > > $ python -c 'import _test_ext_module_5403' # No error > $ python -c 'import numpy; import _test_ext_module_5403' # Error > after importing numpy > Traceback (most recent call last): > File "", line 1, in > ImportError: /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > version `GFORTRAN_1.4' not found (required by > ./_test_ext_module_5403.so) Can you rerun this with LD_DEBUG=all and put the full output somewhere, like a gist or attachment or something? -n From matthew.brett at gmail.com Mon Mar 28 18:39:16 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 15:39:16 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: > On Mar 28, 2016 15:18, "Matthew Brett" wrote: > [...]
>> And: >> >> $ python -c 'import _test_ext_module_5403' # No error >> $ python -c 'import numpy; import _test_ext_module_5403' # Error >> after importing numpy >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: >> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >> version `GFORTRAN_1.4' not found (required by >> ./_test_ext_module_5403.so) > > Can you rerun this with LD_DEBUG=all and put the full output somewhere, like > a gist or attachment or something? Bit big for a gist : does this work? https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 Matthew From insertinterestingnamehere at gmail.com Mon Mar 28 18:58:51 2016 From: insertinterestingnamehere at gmail.com (Ian Henriksen) Date: Mon, 28 Mar 2016 22:58:51 +0000 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 4:39 PM Matthew Brett wrote: > On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: > > On Mar 28, 2016 15:18, "Matthew Brett" wrote: > > [...] > >> And: > >> > >> $ python -c 'import _test_ext_module_5403' # No error > >> $ python -c 'import numpy; import _test_ext_module_5403' # Error > >> after importing numpy > >> Traceback (most recent call last): > >> File "", line 1, in > >> ImportError: > >> > /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: > >> version `GFORTRAN_1.4' not found (required by > >> ./_test_ext_module_5403.so) > > > > Can you rerun this with LD_DEBUG=all and put the full output somewhere, > like > > a gist or attachment or something? > > Bit big for a gist : does this work? 
> https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 > > Matthew > _______________________________________________ > Wheel-builders mailing list > Wheel-builders at python.org > https://mail.python.org/mailman/listinfo/wheel-builders Just a guess on what's going on here, but if you're using the system gfortran available on CentOS 5, it's going to be linking against a different libgfortran ABI version. The shared object version specifiers make it so that the older libgfortran shows up as "not found" rather than just having the dynamic linker find the copies of libgfortran with the newer ABI on newer systems. Best, -Ian Henriksen From njs at pobox.com Mon Mar 28 20:02:38 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 28 Mar 2016 17:02:38 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 3:39 PM, Matthew Brett wrote: > On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: >> On Mar 28, 2016 15:18, "Matthew Brett" wrote: >> [...] >>> And: >>> >>> $ python -c 'import _test_ext_module_5403' # No error >>> $ python -c 'import numpy; import _test_ext_module_5403' # Error >>> after importing numpy >>> Traceback (most recent call last): >>> File "", line 1, in >>> ImportError: >>> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>> version `GFORTRAN_1.4' not found (required by >>> ./_test_ext_module_5403.so) >> >> Can you rerun this with LD_DEBUG=all and put the full output somewhere, like >> a gist or attachment or something? > > Bit big for a gist : does this work? > https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 Ha, oops, yeah, the per-symbol debugging chatter is probably a bit much, huh :-) ...But, that .log file doesn't seem to contain either of the strings "test_ext_module" or "GFORTRAN_1.4".
Are you sure it's a log of the same thing...? (If re-running then I guess LD_DEBUG=files,scopes,versions is probably sufficient and will generate less output than LD_DEBUG=all.) -n -- Nathaniel J. Smith -- https://vorpus.org From matthew.brett at gmail.com Mon Mar 28 20:04:58 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 17:04:58 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 5:02 PM, Nathaniel Smith wrote: > On Mon, Mar 28, 2016 at 3:39 PM, Matthew Brett wrote: >> On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: >>> On Mar 28, 2016 15:18, "Matthew Brett" wrote: >>> [...] >>>> And: >>>> >>>> $ python -c 'import _test_ext_module_5403' # No error >>>> $ python -c 'import numpy; import _test_ext_module_5403' # Error >>>> after importing numpy >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> ImportError: >>>> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>>> version `GFORTRAN_1.4' not found (required by >>>> ./_test_ext_module_5403.so) >>> >>> Can you rerun this with LD_DEBUG=all and put the full output somewhere, like >>> a gist or attachment or something? >> >> Bit big for a gist : does this work? >> https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 > > Ha, oops, yeah, the per-symbol debugging chatter is probably a bit much, huh :-) > > ...But, that .log file doesn't seem to contain either of the strings > "test_ext_module" or "GFORTRAN_1.4". Are you sure it's a log of the > same thing...? > > (If re-running then I guess LD_DEBUG=files,scopes,versions is probably > sufficient and will generate less output than LD_DEBUG=all.) 
Yes, I'm sure it's a log of the same thing - I said this before - but I do not see the error when running prefixed with LD_DEBUG Matthew From njs at pobox.com Mon Mar 28 20:14:08 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 28 Mar 2016 17:14:08 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 5:04 PM, Matthew Brett wrote: > On Mon, Mar 28, 2016 at 5:02 PM, Nathaniel Smith wrote: >> On Mon, Mar 28, 2016 at 3:39 PM, Matthew Brett wrote: >>> On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: >>>> On Mar 28, 2016 15:18, "Matthew Brett" wrote: >>>> [...] >>>>> And: >>>>> >>>>> $ python -c 'import _test_ext_module_5403' # No error >>>>> $ python -c 'import numpy; import _test_ext_module_5403' # Error >>>>> after importing numpy >>>>> Traceback (most recent call last): >>>>> File "", line 1, in >>>>> ImportError: >>>>> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>>>> version `GFORTRAN_1.4' not found (required by >>>>> ./_test_ext_module_5403.so) >>>> >>>> Can you rerun this with LD_DEBUG=all and put the full output somewhere, like >>>> a gist or attachment or something? >>> >>> Bit big for a gist : does this work? >>> https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 >> >> Ha, oops, yeah, the per-symbol debugging chatter is probably a bit much, huh :-) >> >> ...But, that .log file doesn't seem to contain either of the strings >> "test_ext_module" or "GFORTRAN_1.4". Are you sure it's a log of the >> same thing...? >> >> (If re-running then I guess LD_DEBUG=files,scopes,versions is probably >> sufficient and will generate less output than LD_DEBUG=all.) > > Yes, I'm sure it's a log of the same thing - I said this before - but > I do not see the error when running prefixed with LD_DEBUG Ah, I did miss that detail. But... even so...
that log shows every single shared object that was loaded, and so the fact that _test_ext_module_5403 doesn't show up in the output seems to say Python never even tried to do dlopen("_test_ext_module_5403"), whether successfully or unsuccessfully. It's like the 'import _test_ext_module_5403' wasn't even executed at all. I *guess* that could be some bizarro bug where enabling LD_DEBUG totally changes python's execution path, but I hope not :-/ Any other ideas? -n -- Nathaniel J. Smith -- https://vorpus.org From matthew.brett at gmail.com Mon Mar 28 20:27:05 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Mon, 28 Mar 2016 17:27:05 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 5:14 PM, Nathaniel Smith wrote: > On Mon, Mar 28, 2016 at 5:04 PM, Matthew Brett wrote: >> On Mon, Mar 28, 2016 at 5:02 PM, Nathaniel Smith wrote: >>> On Mon, Mar 28, 2016 at 3:39 PM, Matthew Brett wrote: >>>> On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: >>>>> On Mar 28, 2016 15:18, "Matthew Brett" wrote: >>>>> [...] >>>>>> And: >>>>>> >>>>>> $ python -c 'import _test_ext_module_5403' # No error >>>>>> $ python -c 'import numpy; import _test_ext_module_5403' # Error >>>>>> after importing numpy >>>>>> Traceback (most recent call last): >>>>>> File "", line 1, in >>>>>> ImportError: >>>>>> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>>>>> version `GFORTRAN_1.4' not found (required by >>>>>> ./_test_ext_module_5403.so) >>>>> >>>>> Can you rerun this with LD_DEBUG=all and put the full output somewhere, like >>>>> a gist or attachment or something? >>>> >>>> Bit big for a gist : does this work? 
>>>> https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 >>> >>> Ha, oops, yeah, the per-symbol debugging chatter is probably a bit much, huh :-) >>> >>> ...But, that .log file doesn't seem to contain either of the strings >>> "test_ext_module" or "GFORTRAN_1.4". Are you sure it's a log of the >>> same thing...? >>> >>> (If re-running then I guess LD_DEBUG=files,scopes,versions is probably >>> sufficient and will generate less output than LD_DEBUG=all.) >> >> Yes, I"m sure it's a log of the same thing - I said this before - but >> I do not see the error when running prefixed with LD_DEBUG > > Ah, I did miss that detail. But... even so... that log shows every > single shared object that was loaded, and so the fact that > _test_ext_module_5403 doesn't show up in the output seems to say > Python never even tried to do dlopen("_test_ext_module_5403"), whether > successfully or unsuccessfully. It's like the 'import > _test_ext_module_5403' wasn't even executed at all. I *guess* that > could be some bizarro bug where enabling LD_DEBUG totally changes > python's execution path, but I hope not :-/ Any other ideas? Here's the output from: LD_DEBUG=files,scopes,versions python -c 'import numpy; import _test_ext_module_5403' >& ~/ld-debug-import.log https://www.dropbox.com/s/76ph39fede4uecu/ld-debug-import.log?dl=0 More useful? 
Matthew From matthew.brett at gmail.com Tue Mar 29 05:10:02 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 29 Mar 2016 02:10:02 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Mon, Mar 28, 2016 at 5:27 PM, Matthew Brett wrote: > On Mon, Mar 28, 2016 at 5:14 PM, Nathaniel Smith wrote: >> On Mon, Mar 28, 2016 at 5:04 PM, Matthew Brett wrote: >>> On Mon, Mar 28, 2016 at 5:02 PM, Nathaniel Smith wrote: >>>> On Mon, Mar 28, 2016 at 3:39 PM, Matthew Brett wrote: >>>>> On Mon, Mar 28, 2016 at 3:33 PM, Nathaniel Smith wrote: >>>>>> On Mar 28, 2016 15:18, "Matthew Brett" wrote: >>>>>> [...] >>>>>>> And: >>>>>>> >>>>>>> $ python -c 'import _test_ext_module_5403' # No error >>>>>>> $ python -c 'import numpy; import _test_ext_module_5403' # Error >>>>>>> after importing numpy >>>>>>> Traceback (most recent call last): >>>>>>> File "", line 1, in >>>>>>> ImportError: >>>>>>> /home/mb312/dev_trees/2fd9d9a29e022c297634/manylinux-test/lib/python2.7/site-packages/numpy/core/../.libs/libgfortran.so.3: >>>>>>> version `GFORTRAN_1.4' not found (required by >>>>>>> ./_test_ext_module_5403.so) >>>>>> >>>>>> Can you rerun this with LD_DEBUG=all and put the full output somewhere, like >>>>>> a gist or attachment or something? >>>>> >>>>> Bit big for a gist : does this work? >>>>> https://www.dropbox.com/s/7cdnl3zb4dr5i90/ld_debug.log.gz?dl=0 >>>> >>>> Ha, oops, yeah, the per-symbol debugging chatter is probably a bit much, huh :-) >>>> >>>> ...But, that .log file doesn't seem to contain either of the strings >>>> "test_ext_module" or "GFORTRAN_1.4". Are you sure it's a log of the >>>> same thing...? >>>> >>>> (If re-running then I guess LD_DEBUG=files,scopes,versions is probably >>>> sufficient and will generate less output than LD_DEBUG=all.) 
>>> >>> Yes, I"m sure it's a log of the same thing - I said this before - but >>> I do not see the error when running prefixed with LD_DEBUG >> >> Ah, I did miss that detail. But... even so... that log shows every >> single shared object that was loaded, and so the fact that >> _test_ext_module_5403 doesn't show up in the output seems to say >> Python never even tried to do dlopen("_test_ext_module_5403"), whether >> successfully or unsuccessfully. It's like the 'import >> _test_ext_module_5403' wasn't even executed at all. I *guess* that >> could be some bizarro bug where enabling LD_DEBUG totally changes >> python's execution path, but I hope not :-/ Any other ideas? > > Here's the output from: > > LD_DEBUG=files,scopes,versions python -c 'import numpy; import > _test_ext_module_5403' >& ~/ld-debug-import.log > > https://www.dropbox.com/s/76ph39fede4uecu/ld-debug-import.log?dl=0 You can replicate fairly simply with: pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com numpy cd numpy/f2py/tests/src/kind f2py -c foo.f90 -m foo python -c 'import numpy; import foo' Matthew From olivier.grisel at ensta.org Tue Mar 29 07:29:21 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 29 Mar 2016 13:29:21 +0200 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: > You can replicate fairly simply with: > > pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com > numpy > cd numpy/f2py/tests/src/kind > f2py -c foo.f90 -m foo > python -c 'import numpy; import foo' I can replicate. 
I can also force the use of the system fortran with LD_PRELOAD (on ubuntu 14.04): LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libgfortran.so.3' python -c 'import numpy; import foo' -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From matthew.brett at gmail.com Tue Mar 29 13:50:56 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Tue, 29 Mar 2016 10:50:56 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Tue, Mar 29, 2016 at 4:29 AM, Olivier Grisel wrote: >> You can replicate fairly simply with: >> >> pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com >> numpy >> cd numpy/f2py/tests/src/kind >> f2py -c foo.f90 -m foo >> python -c 'import numpy; import foo' > > I can replicate. I can also force the use of the system fortran with > LD_PRELOAD (on ubuntu 14.04): > > LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libgfortran.so.3' python -c > 'import numpy; import foo' This one is obviously a show-stopper - I believe that it means that any module compiled against gfortran (and presumably any shipped lib that numpy loads) will break after numpy has been imported. I don't understand the linux link rules well enough to know how to fix. I guess we could rename the libraries that we ship / vendor into libs? (numpy_libgfortran.so etc). Matthew From njs at pobox.com Tue Mar 29 17:51:02 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 29 Mar 2016 14:51:02 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Tue, Mar 29, 2016 at 10:50 AM, Matthew Brett wrote: > On Tue, Mar 29, 2016 at 4:29 AM, Olivier Grisel > wrote: >>> You can replicate fairly simply with: >>> >>> pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com >>> numpy >>> cd numpy/f2py/tests/src/kind >>> f2py -c foo.f90 -m foo >>> python -c 'import numpy; import foo' >> >> I can replicate. 
I can also force the use of the system fortran with >> LD_PRELOAD (on ubuntu 14.04): >> >> LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libgfortran.so.3' python -c >> 'import numpy; import foo' > > This one is obviously a show-stopper - I believe that it means that > any module compiled against gfortran (and presumably any shipped lib > that numpy loads) will break after numpy has been imported. > > I don't understand the linux link rules well enough to know how to > fix. I guess we could rename the libraries that we ship / vendor into > libs? (numpy_libgfortran.so etc). Yeah, I'm not at all sure what's going on here and whether it's intentional or a bug in glibc, and we should figure out what's actually happening and why for our own understanding, but I'm pretty confident that this is going to be either the solution or the workaround :-). I guess the general approach is that auditwheel should assign a unique name to every vendored library. Some possible naming strategies: -libgfortran.so -libgfortran.so -libgfortran.so I guess the truncated-sha256 strategy might be best because if there are lots of manylinux wheels all vendoring the same copy of libgfortran, they will (or at least might) end up sharing it in memory instead of mmap'ing multiple different copies of the identical .so file, which is good? And this is certainly safe if the different copies of libgfortran all have the same sha256. And then auditwheel should use patchelf --replace-needed to modify the DT_NEEDED in the various extension modules to point to the new name, at the same time it's changing the RPATH. (--replace-needed doesn't seem to be mentioned in most of the patchelf docs I can find, but it is described here: https://github.com/NixOS/patchelf) -n -- Nathaniel J. 
Smith -- https://vorpus.org From njs at pobox.com Tue Mar 29 17:52:11 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 29 Mar 2016 14:52:11 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Tue, Mar 29, 2016 at 2:51 PM, Nathaniel Smith wrote: > On Tue, Mar 29, 2016 at 10:50 AM, Matthew Brett wrote: >> On Tue, Mar 29, 2016 at 4:29 AM, Olivier Grisel >> wrote: >>>> You can replicate fairly simply with: >>>> >>>> pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com >>>> numpy >>>> cd numpy/f2py/tests/src/kind >>>> f2py -c foo.f90 -m foo >>>> python -c 'import numpy; import foo' >>> >>> I can replicate. I can also force the use of the system fortran with >>> LD_PRELOAD (on ubuntu 14.04): >>> >>> LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libgfortran.so.3' python -c >>> 'import numpy; import foo' >> >> This one is obviously a show-stopper - I believe that it means that >> any module compiled against gfortran (and presumably any shipped lib >> that numpy loads) will break after numpy has been imported. >> >> I don't understand the linux link rules well enough to know how to >> fix. I guess we could rename the libraries that we ship / vendor into >> libs? (numpy_libgfortran.so etc). > > Yeah, I'm not at all sure what's going on here and whether it's > intentional or a bug in glibc, and we should figure out what's > actually happening and why for our own understanding, but I'm pretty > confident that this is going to be either the solution or the > workaround :-). > > I guess the general approach is that auditwheel should assign a unique > name to every vendored library. 
Some possible naming strategies: > -libgfortran.so > -libgfortran.so > -libgfortran.so > > I guess the truncated-sha256 strategy might be best because if there > are lots of manylinux wheels all vendoring the same copy of > libgfortran, they will (or at least might) end up sharing it in memory > instead of mmap'ing multiple different copies of the identical .so > file, which is good? And this is certainly safe if the different > copies of libgfortran all have the same sha256. > > And then auditwheel should use patchelf --replace-needed to modify the > DT_NEEDED in the various extension modules to point to the new name, > at the same time it's changing the RPATH. (--replace-needed doesn't > seem to be mentioned in most of the patchelf docs I can find, but it > is described here: https://github.com/NixOS/patchelf) https://github.com/pypa/auditwheel/issues/24 -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Wed Mar 30 04:13:31 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 30 Mar 2016 01:13:31 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Tue, Mar 29, 2016 at 2:51 PM, Nathaniel Smith wrote: > Yeah, I'm not at all sure what's going on here and whether it's > intentional or a bug in glibc, and we should figure out what's > actually happening and why for our own understanding, It appears that glibc's loader is like Windows loader, and if it's looking for libfoo and there's already some library called libfoo loaded then it will short-circuit the regular library lookup rules and use whatever libfoo was previously found, even if that is not the libfoo that would be found now using the currently relevant lookup rules. IMO this is a bug, plus it appears to be documented absolutely nowhere (at least on Windows this is a well-documented misfeature), but we'll see what upstream says... 
https://sourceware.org/bugzilla/show_bug.cgi?id=19884 https://github.com/njsmith/rpath-weirdness -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Wed Mar 30 04:44:54 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 30 Mar 2016 01:44:54 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: On Tue, Mar 29, 2016 at 2:51 PM, Nathaniel Smith wrote: > On Tue, Mar 29, 2016 at 10:50 AM, Matthew Brett wrote: >> On Tue, Mar 29, 2016 at 4:29 AM, Olivier Grisel >> wrote: >>>> You can replicate fairly simply with: >>>> >>>> pip install -f https://d9a97980b71d47cde94d-aae005c4999d7244ac63632f8b80e089.ssl.cf2.rackcdn.com >>>> numpy >>>> cd numpy/f2py/tests/src/kind >>>> f2py -c foo.f90 -m foo >>>> python -c 'import numpy; import foo' >>> >>> I can replicate. I can also force the use of the system fortran with >>> LD_PRELOAD (on ubuntu 14.04): >>> >>> LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libgfortran.so.3' python -c >>> 'import numpy; import foo' >> >> This one is obviously a show-stopper - I believe that it means that >> any module compiled against gfortran (and presumably any shipped lib >> that numpy loads) will break after numpy has been imported. >> >> I don't understand the linux link rules well enough to know how to >> fix. I guess we could rename the libraries that we ship / vendor into >> libs? (numpy_libgfortran.so etc). > > Yeah, I'm not at all sure what's going on here and whether it's > intentional or a bug in glibc, and we should figure out what's > actually happening and why for our own understanding, but I'm pretty > confident that this is going to be either the solution or the > workaround :-). > > I guess the general approach is that auditwheel should assign a unique > name to every vendored library. 
Some possible naming strategies: > -libgfortran.so > -libgfortran.so > -libgfortran.so > > I guess the truncated-sha256 strategy might be best because if there > are lots of manylinux wheels all vendoring the same copy of > libgfortran, they will (or at least might) end up sharing it in memory > instead of mmap'ing multiple different copies of the identical .so > file, which is good? And this is certainly safe if the different > copies of libgfortran all have the same sha256. > > And then auditwheel should use patchelf --replace-needed to modify the > DT_NEEDED in the various extension modules to point to the new name, > at the same time it's changing the RPATH. (--replace-needed doesn't > seem to be mentioned in most of the patchelf docs I can find, but it > is described here: https://github.com/NixOS/patchelf) So generally speaking this workaround is definitely the right workaround, and I just implemented and tested it in this repository and it works: https://github.com/njsmith/rpath-weirdness But, in the process I discovered that the text described above is missing a crucial step -- we need to also patch the DT_SONAME of the renamed+vendored library, as described here: https://github.com/pypa/auditwheel/issues/24 -n -- Nathaniel J. Smith -- https://vorpus.org From olivier.grisel at ensta.org Wed Mar 30 05:01:00 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 30 Mar 2016 11:01:00 +0200 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: FYI, I tried to repeat this experiment with anaconda's numpy instead of Matthew's manylinux1 numpy wheel and I could reproduce exactly the same crash. 
-- Olivier From olivier.grisel at ensta.org Wed Mar 30 05:07:59 2016 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 30 Mar 2016 11:07:59 +0200 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: Actually with anaconda I get the error even when I do not import numpy first: $ ~/anaconda3/bin/python -m numpy.f2py -c foo.f90 -m foo &> /dev/null $ ~/anaconda3/bin/python -c "import foo" Traceback (most recent call last): File "", line 1, in ImportError: /home/ogrisel/anaconda3/bin/../lib/libgfortran.so.3: version `GFORTRAN_1.4' not found (required by /home/ogrisel/code/numpy/numpy/f2py/tests/src/kind/foo.cpython-35m-x86_64-linux-gnu.so) ... -- Olivier From jjhelmus at gmail.com Wed Mar 30 10:58:06 2016 From: jjhelmus at gmail.com (Jonathan Helmus) Date: Wed, 30 Mar 2016 09:58:06 -0500 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: References: Message-ID: <56FBE97E.90707@gmail.com> On 03/30/2016 04:07 AM, Olivier Grisel wrote: > Actually with anaconda I get the error even when I do not import numpy first: > > $ ~/anaconda3/bin/python -m numpy.f2py -c foo.f90 -m foo &> /dev/null > $ ~/anaconda3/bin/python -c "import foo" > Traceback (most recent call last): > File "", line 1, in > ImportError: /home/ogrisel/anaconda3/bin/../lib/libgfortran.so.3: > version `GFORTRAN_1.4' not found (required by > /home/ogrisel/code/numpy/numpy/f2py/tests/src/kind/foo.cpython-35m-x86_64-linux-gnu.so) > ... > > -- > Olivier Those of us working on conda-forge have run into this exact issue and are aware of it. The problem seems to stem from the fact that if you ship a runtime library (libgfortran, but also libstdc++ or libgcc) which shadows a system library, you can (will?) get unresolved symbols at runtime if the system library is newer than the one you ship. Discussion of the topic can be found in issue 29 of conda-forge/conda-forge.github.io [1].
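The shadowing failure described in this thread can be modeled in miniature. The sketch below is purely illustrative: the function, the data structures, and the version tags are invented for this example, and glibc's real resolver works on ELF verneed/verdef entries rather than Python dictionaries.

```python
# Hypothetical model of the "GFORTRAN_1.4 not found" failure mode.
# Names, structure, and version tags are invented for illustration only.

def try_load(required_versions, loaded_by_soname, search_path):
    """Resolve (soname, version-tag) requirements the way the thread
    observes glibc behaving: if a library with that soname is already
    loaded, it is reused and the search path is never consulted, so a
    version tag missing from the loaded copy is a hard failure."""
    errors = []
    for soname, version in required_versions:
        provided = loaded_by_soname.get(soname)
        if provided is None:
            # Not loaded yet: fall back to the copy found on the search path.
            provided = search_path.get(soname, set())
        if version not in provided:
            errors.append("version `%s' not found in %s" % (version, soname))
    return errors

# The vendored libgfortran (loaded first via numpy's RPATH) defines only
# older tags; the system copy also defines GFORTRAN_1.4 (tags illustrative).
loaded = {"libgfortran.so.3": {"GFORTRAN_1.0"}}
system = {"libgfortran.so.3": {"GFORTRAN_1.0", "GFORTRAN_1.4"}}

# A module built with the system gfortran requires GFORTRAN_1.4:
errs = try_load([("libgfortran.so.3", "GFORTRAN_1.4")], loaded, system)
```

The point the model makes concrete: because the vendored copy is loaded first and cached under its soname, the newer system copy is never consulted, and the missing version tag surfaces as an ImportError.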
I believe the suggestions of renaming the libgfortran in the wheel should fix this issue. I think the same issue may crop up if you have an extension module which links against libstdc++ and perhaps libgcc. I do not have any examples of this, just something to keep in mind. For completeness, I should mention that to address this problem, conda-forge is currently planning on shipping a very recent version (GCC 5.2) of the various runtime libraries which are linked against a very old glibc for compatibility. Cheers, - Jonathan Helmus [1] https://github.com/conda-forge/conda-forge.github.io/issues/29 From njs at pobox.com Wed Mar 30 22:18:11 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 30 Mar 2016 19:18:11 -0700 Subject: [Wheel-builders] Error from numpy wheel - any thoughts In-Reply-To: <56FBE97E.90707@gmail.com> References: <56FBE97E.90707@gmail.com> Message-ID: On Mar 30, 2016 8:01 AM, "Jonathan Helmus" wrote: > > On 03/30/2016 04:07 AM, Olivier Grisel wrote: >> >> Actually with anaconda I get the error even when I do not import numpy first: >> >> $ ~/anaconda3/bin/python -m numpy.f2py -c foo.f90 -m foo &> /dev/null >> $ ~/anaconda3/bin/python -c "import foo" >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: /home/ogrisel/anaconda3/bin/../lib/libgfortran.so.3: >> version `GFORTRAN_1.4' not found (required by >> /home/ogrisel/code/numpy/numpy/f2py/tests/src/kind/foo.cpython-35m-x86_64-linux-gnu.so) >> ... >> >> -- >> Olivier > > > Those of us working on conda-forge have run into this exact issue and are aware of it. The problem seems to stem from the fact that if you ship a runtime library, libgfortran but also libstdc++ or libgcc, which shadows a system library, you can (will?) get unresolved symbols at runtime if the system library is newer than the one you ship.
Discussion of the topic can be found in issue 29 of conda-forge/conda-forge.github.io [1]. > > I believe the suggestions of renaming the libgfortran in the wheel should fix this issue. I think the same issue may crop up if you have an extension module which links against libstdc++ and perhaps libgcc. I do not have any examples of this, just something to keep in mind. It will happen any time you have two different not-quite-compatible versions of the same library being used by different python extensions in the same process. So vendoring libraries is a particularly common place where this will arise, as is system library versus distributed libraries, but it's pretty general. > For completeness, I should mention that to address this problem, conda-forge is currently planning on shipping a very recent version (GCC 5.2) of the various runtime libraries which are linked against a very old glibc for compatibility. I'm not an expert on conda, but this is probably not the solution I would have chosen? It seems like there will always be some users who have an even-more-recent GCC, and eventually all users will -- so it's embedding a kind of time bomb in all your conda envs. Doesn't conda already have a binary postprocessing step where it inserts rpaths into build artifacts? I'd probably consider whether it would be possible to apply a library renaming step at this stage too. For whatever it's worth. -n
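The renaming scheme the thread converges on (unique, content-hashed names for vendored libraries, applied with patchelf --set-soname and --replace-needed) can be sketched in a few lines of Python. This is an illustration of the idea, not auditwheel's actual implementation; the helper names and the exact layout of the new name are invented here.

```python
import hashlib

def hashed_soname(old_soname, lib_bytes, digest_len=8):
    """Derive a unique name for a vendored library by embedding a
    truncated sha256 of its contents, so two wheels vendoring a
    byte-identical .so agree on the name (preserving possible in-memory
    sharing), while different builds can never shadow one another.
    E.g. libgfortran.so.3 -> libgfortran-<tag>.so.3 (layout invented)."""
    tag = hashlib.sha256(lib_bytes).hexdigest()[:digest_len]
    stem, _, rest = old_soname.partition(".so")
    return "{}-{}.so{}".format(stem, tag, rest)

def patchelf_commands(ext_module, old_soname, new_soname):
    """The two patchelf invocations discussed in the thread: rewrite the
    vendored copy's DT_SONAME, and rewrite DT_NEEDED in each extension
    module that links against it (paths here are illustrative)."""
    return [
        ["patchelf", "--set-soname", new_soname, ".libs/" + new_soname],
        ["patchelf", "--replace-needed", old_soname, new_soname, ext_module],
    ]
```

With identical library contents the hash, and therefore the name, is identical across wheels, which keeps the memory-sharing property mentioned above while guaranteeing that differently built copies get distinct sonames.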