From robertc at robertcollins.net Wed Jul 1 03:44:23 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 1 Jul 2015 13:44:23 +1200 Subject: [Distutils] markers, <= and python 2.7.10 Message-ID: So - 426 markers specify string comparisons only. This is now broken in the wild :) We need to figure out: - what comparisons we should allow (e.g. - versions, or tuples, or ?) - what the migration strategy for early adopters looks like - do we change the meaning of python_version - do we define a new symbol name - do we break existing markers, or force some type gymnastics (e.g. casting the user side of the marker to a sequence of components)? https://bitbucket.org/pypa/setuptools/commits/e01e9a77fad5 https://github.com/pypa/pip/issues/2943 -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From reinout at vanrees.org Wed Jul 1 04:45:13 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 01 Jul 2015 04:45:13 +0200 Subject: [Distutils] New buildout release needed Message-ID: Hi, I've done quite a lot of work on buildout in the last two or three weeks. Merging pull requests and also submitting a couple of my own. If you look at the list of pull requests (https://github.com/buildout/buildout/pulls) you'll see a bunch that need further work ("doc or tests needed") and a handful of "plus or minus" pull requests, dealing with plus-minus stuff regarding sections. I don't use that much and I haven't looked at those yet. Perhaps someone else wants to? I've also set up travis-ci.org caching so that the builds on travis run two to three times faster. And I've fixed several issues dealing with non-ascii filenames. Apparently, if you install pyramid (for instance), buildout will fail to run. Apparently it is enough to install something like mr.bob to break buildout totally. 
It is now fixed in https://github.com/buildout/buildout/pull/250 There are two things that need doing now that I cannot do: - Review https://github.com/buildout/buildout/pull/248 with a number of bootstrap fixes. The fixes themselves aren't controversial, I think (adding a version, for instance). The one thing I want feedback on is that buildout now deletes the contents of the develop-eggs/ directory when it bootstraps. This helps a lot with faulty left-over egg-links in corner cases. The old osc.recipe.sysegg recipe for instance used to reliably wreck my buildouts (syseggrecipe is the replacement that does the right thing). Zapping the develop-eggs contents on bootstrap helps solve several problems. In my case, I really want this change to go in as it might mean the difference between my company using buildout or not... I'm getting tired of saying "oh, remove the develop-eggs directory contents". - Make a new zc.buildout, zc.recipe.egg and bootstrap.py release. See issue https://github.com/buildout/buildout/issues/249 . Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From wichert at wiggy.net Wed Jul 1 06:08:47 2015 From: wichert at wiggy.net (Wichert Akkerman) Date: Wed, 1 Jul 2015 06:08:47 +0200 Subject: [Distutils] New buildout release needed In-Reply-To: References: Message-ID: On 01 Jul 2015, at 04:45, Reinout van Rees wrote: > And I've fixed several issues dealing with non-ascii filenames. Apparently, if you install pyramid (for instance), buildout will fail to run. Apparently it is enough to install something like mr.bob to break buildout totally. It is now fixed in https://github.com/buildout/buildout/pull/250 That change looks wrong to me. Why would os.walk() need unicode on Python 2? One of the really nice things about Python 2 is that you do not need to pretend filenames are unicode.
I suspect the real problem there is that you are somehow getting a unicode component in your paths/dirnames, which you should not do on Python 2. Wichert. From ncoghlan at gmail.com Wed Jul 1 06:33:03 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 1 Jul 2015 14:33:03 +1000 Subject: [Distutils] markers, <= and python 2.7.10 In-Reply-To: References: Message-ID: On 1 July 2015 at 11:44, Robert Collins wrote: > So - 426 markers specify string comparisons only. This is now broken > in the wild :) > > We need to figure out: > - what comparisons we should allow (e.g. - versions, or tuples, or ?) > - what the migration strategy for early adopters looks like > - do we change the meaning of python_version > - do we define a new symbol name > - do we break existing markers, or force some type gymnastics > (e.g. casting the user side of the marker to a sequence of > components)? > > https://bitbucket.org/pypa/setuptools/commits/e01e9a77fad5 > https://github.com/pypa/pip/issues/2943 From a logistics perspective, I think it makes sense to pull environment markers out into their own PEP so we can standardise them independently of the full PEP 426 specification. From a "how to fix it?" perspective, I think it makes sense to say that any marker ending in "_version" is compared using PEP 440 version ordering semantics rather than lexical ordering. My rationale for that is: 1. In the simple X.Y.Z cases, lexical string comparisons and PEP 440 give the same answer 2. In the more complex cases (like 2.7.10), PEP 440 gives the right answer, while lexical string comparison fails 3. Anything handling environment markers already needs to be able to handle PEP 440 semantics anyway Cheers, Nick.
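Nick's point about lexical versus version ordering is easy to demonstrate: plain string comparison mis-orders 2.7.10, while comparing the numeric release components (roughly what PEP 440 ordering does for simple X.Y.Z releases) gets it right. A minimal sketch; `release_tuple` is an illustrative helper, not part of any marker implementation:

```python
def release_tuple(version):
    # Split "X.Y.Z" into a tuple of ints, e.g. "2.7.10" -> (2, 7, 10).
    return tuple(int(part) for part in version.split("."))

# Lexical string comparison: "2.7.10" sorts before "2.7.9" -- wrong,
# because '1' < '9' when the strings are compared character by character.
print("2.7.10" <= "2.7.9")                                  # True (incorrect)

# Numeric tuple comparison gives the PEP 440 answer for these cases.
print(release_tuple("2.7.10") <= release_tuple("2.7.9"))    # False (correct)
```

This is exactly why `python_version <= "2.7.9"` style markers broke in the wild once Python 2.7.10 was released.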
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From reinout at vanrees.org Wed Jul 1 12:53:03 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 01 Jul 2015 12:53:03 +0200 Subject: [Distutils] New buildout release needed In-Reply-To: References: Message-ID: Wichert Akkerman schreef op 01-07-15 om 06:08: > On 01 Jul 2015, at 04:45, Reinout van Rees wrote: >> >And I've fixed several issues dealing with non-ascii filenames. Apparently, if you install pyramid (for instance), buildout will fail to run. Apparently it is enough to install something like mr.bob to break buildout totally. It is now fixed in https://github.com/buildout/buildout/pull/250 > That change looks wrong to me. Why would os.walk() need unicode on Python 2? One of the really nice things about Python 2 is that you do not need to pretend filenames are unicode. I suspect the real problem there is that you are somehow getting a unicode component in a path/dirnames, which you should not do on Python 2. Your comment made me re-visit my fix. You're right that it normally should work. I did some more pdb'ing, now with the assumption that there must be something else that's wrong that I overlooked. Bingo! Turns out the actual problem is the .encode() that happens before the hash gets updated. A hash needs bytes, not unicode. Thus the .encode(). But on Python 2, it is already bytes. And on Python 2 the .encode() raises a UnicodeDecodeError when there are non-ascii characters. Solution: a small if statement that only calls .encode() when it is unicode. See https://github.com/buildout/buildout/pull/252 Can you look at that one?
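The fix Reinout describes can be sketched like this (a hypothetical reconstruction, not the actual code in pull request 252); in Python 3 terms, only text needs encoding before it is fed to a hash, while bytes must pass through untouched:

```python
import hashlib

def feed_hash(hasher, data):
    # hashlib needs bytes; encode only when given text. On Python 2 the
    # check would be `isinstance(data, unicode)` instead of `str`.
    if isinstance(data, str):
        data = data.encode("utf-8")
    hasher.update(data)

digest = hashlib.md5()
feed_hash(digest, u"non-ascii caf\u00e9")  # text: encoded first
feed_hash(digest, b"already bytes")        # bytes: used as-is
print(digest.hexdigest())
```

The unconditional `.encode()` failed on Python 2 because calling `.encode()` on a non-ascii byte string triggers an implicit ascii decode first, which is where the UnicodeDecodeError came from.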
Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From jim at jimfulton.info Wed Jul 1 13:02:28 2015 From: jim at jimfulton.info (Jim Fulton) Date: Wed, 1 Jul 2015 07:02:28 -0400 Subject: [Distutils] New buildout release needed In-Reply-To: References: Message-ID: On Tue, Jun 30, 2015 at 10:45 PM, Reinout van Rees wrote: > Hi, > > I've done quite a lot of work on buildout in the last two or three weeks. > Merging pull requests and also submitting a couple of my own. Thanks! > > If you look at the list of pull requests > (https://github.com/buildout/buildout/pulls) you'll see a bunch that need > further work ("doc or tests needed") and a handful of "plus or minus" pull > requests, dealing with plus-minus stuff regarding sections. I don't use that > much and I haven't looked at those yet. Perhaps someone else wants to? > > I've also set up travis-ci.org caching so that the builds on travis run two > to three times faster. > > And I've fixed several issues dealing with non-ascii filenames. Apparently, > if you install pyramid (for instance), buildout will fail to run. Apparently > it is enough to install something like mr.bob to break buildout totally. It > is now fixed in https://github.com/buildout/buildout/pull/250 > > > There are two things that need doing now that I cannot do: > > > - Review https://github.com/buildout/buildout/pull/248 with a number of > bootstrap fixes. The fixes themselves aren't controversial, I think (adding > a version, for instance). The one thing I want feedback on is that buildout > now deletes the contents of the develop-eggs/ directory when it bootstraps. > This helps a lot with faulty left-over egg-links in corner cases. The old > osc.recipe.sysegg recipe for instance used to reliably wreak my buildouts > (syseggrecipe is the replacement that does the right thing). 
Zapping the > develop-eggs contents on bootstrap helps solve several problems. In my case, > I really want this change to go in as it might mean the difference between > my company using buildout or not... I'm getting tired of saying "oh, remove > the develop-eggs directory contents". > > - Make a new zc.buildout, zc.recipe.egg and bootstrap.py release. See issue > https://github.com/buildout/buildout/issues/249 . You can make a bootstrap release by merging to the bootstrap_release branch. I can make releases this weekend, or I can empower you. Which would you prefer? Jim -- Jim Fulton http://jimfulton.info From reinout at vanrees.org Wed Jul 1 14:01:16 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 01 Jul 2015 14:01:16 +0200 Subject: [Distutils] New buildout release needed In-Reply-To: References: Message-ID: <5593D68C.3060403@vanrees.org> Jim Fulton schreef op 01-07-15 om 13:02: > You can make a bootstrap release by merging to the bootstrap_release branch. > > I can make releases this weekend, or I can empower you. Which would you prefer? I wouldn't mind making the release :-) My pypi username is 'reinout'. Anything special I should be aware of (apart from the separate git tags for zc.recipe.egg)? Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From reinout at vanrees.org Wed Jul 1 16:40:55 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 01 Jul 2015 16:40:55 +0200 Subject: [Distutils] New buildout release needed In-Reply-To: <5593D68C.3060403@vanrees.org> References: <5593D68C.3060403@vanrees.org> Message-ID: Reinout van Rees schreef op 01-07-15 om 14:01: > Jim Fulton schreef op 01-07-15 om 13:02: >> You can make a bootstrap release by merging to the bootstrap_release >> branch. >> >> I can make releases this weekend, or I can empower you. Which would >> you prefer? 
> I wouldn't mind making the release :-) My pypi username is 'reinout'. The new bootstrap.py is online and zc.recipe.egg 2.0.2 and zc.buildout 2.4.0 have been released! I've also done a little bit of cleanup in the list of issues. If anyone has an old issue in there that can be closed or amended: please do so. Otherwise I'm stuck replying "is this one still valid" to quite a few 2013 tickets :-) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From donald at stufft.io Wed Jul 1 17:37:15 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 1 Jul 2015 11:37:15 -0400 Subject: [Distutils] markers, <= and python 2.7.10 In-Reply-To: References: Message-ID: On July 1, 2015 at 12:33:29 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 1 July 2015 at 11:44, Robert Collins wrote: > > So - 426 markers specify string comparisons only. This is now broken > > in the wild :) > > > > We need to figure out: > > - what comparisons we should allow (e.g. - versions, or tuples, or ?) > > - what the migration strategy for early adopters looks like > > - do we change the meaning of python_version > > - do we define a new symbol name > > - do we break existing markers, or force some type gymnastics > > (e.g. casting the user side of the marker to a sequence of > > components)? > > > > https://bitbucket.org/pypa/setuptools/commits/e01e9a77fad5 > > https://github.com/pypa/pip/issues/2943 > > From a logistics perspective, I think it makes sense to pull > environment markers out into their own PEP so we can standardise them > independently of the full PEP 426 specification. Agreed, especially since they are already being used in the wild, so it makes sense to get them standardized sooner rather than later. > > From a "how to fix it?"
perspective, I think it makes sense to say > that any marker ending in "_version" is compared using PEP 440 version > ordering semantics rather than lexical ordering > > My rationale for that is: > > 1. In the simple X.Y.Z cases, lexical string comparisons and PEP 440 > give the same answer > 2. In the more complex cases (like 2.7.10), PEP 440 gives the right > answer, while lexical string comparison fails > 3. Anything handling environment markers already needs to be able to > handle PEP 440 semantics anyway > Also agreed. Perhaps the ``_version`` markers should just support the full range of specifiers in PEP 440. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From chris at simplistix.co.uk Wed Jul 1 19:10:36 2015 From: chris at simplistix.co.uk (Chris Withers) Date: Wed, 01 Jul 2015 18:10:36 +0100 Subject: [Distutils] picky 0.9.2 released! Message-ID: <55941F0C.7030201@simplistix.co.uk> Hi All, I'm pleased to announce another release of picky, a tool for checking that versions of packages used by pip are as specified in their requirements files. I wrote this tool because it's all too easy to have a requirements.txt file for your project that you think covers all the resources you're using, only to find you've been dragging in extra packages unknowingly, and you now have different versions of those in development and production, causing lots of debugging misery! Using picky is as easy as: $ pip install picky $ echo 'picky==0.9.2' >> requirements.txt $ picky If you want to update your requirements.txt based on your current environment: $ picky --update This release has the following major changes since the last announced release: * Python 3 support * Fixed handling of package 'extras' in pip output and specifications. * Fixed handling of arbitrary equality clauses in pip output and specifications.
* Correct the dependency specification of argparse so it only occurs on Python 2.6 * Check to see if pip takes --disable-pip-version-check before using it. Full docs are at http://picky.readthedocs.org/en/latest/. Source control and issue trackers are at https://github.com/Simplistix/picky. Any problems, please ask here or mail me direct! cheers, Chris PS: While I sympathise with the intentions of the pip version check, I do feel that not enough consideration has been given to safe, scripted use of pip. Of course, people who do that and don't have control over what version of pip their users may choose, like me, are doomed anyway now, so I guess no point crying over spilled milk? From oz.tiram at gmail.com Wed Jul 1 11:00:38 2015 From: oz.tiram at gmail.com (Oz Tiram) Date: Wed, 1 Jul 2015 11:00:38 +0200 Subject: [Distutils] Inconsistent package build with stdeb? Message-ID: Hi Everyone, I am trying to build a Debian package for a Python library that includes shared objects from Boost. When building on one machine, the files and the proper links are included, here is an example: I am building the package like this: (machine1) $ python setup.py --command-packages=stdeb.command bdist_deb ...
Now I check that the symbolic links and the objects are still there:

(machine1) $ dpkg -c deb_dist/python-dataextract_8200.14.0819.2015-1_all.deb | grep date
-rw-r--r-- root/root 95000 2014-08-20 05:20 ./usr/share/pyshared/dataextract/lib/libboost_date_time.so.1.53.0
rwxrwxrwx root/root 0 2015-07-01 10:46 ./usr/lib/python2.7/dist-packages/dataextract/lib/libboost_date_time.so -> libboost_date_time.so.1.53.0
rwxrwxrwx root/root 0 2015-07-01 10:46 ./usr/lib/python2.7/dist-packages/dataextract/lib/libboost_date_time.so.1.53.0 -> ../../../../../share/pyshared/dataextract/lib/libboost_date_time.so.1.53.0
rwxrwxrwx root/root 0 2015-07-01 10:46 ./usr/lib/pyshared/python2.7/dataextract/lib/libboost_date_time.so -> ../../../../python2.7/dist-packages/dataextract/lib/libboost_date_time.so

Now on the other machine, with the exact same version of python-stdeb, I issue the same build command, with a different result:

(machine2) $ dpkg -c deb_dist/python-dataextract_8200.14.0819.2015-1_all.deb | grep date
-rw-r--r-- root/root 95000 2014-08-20 05:20 ./usr/lib/python2.7/dist-packages/dataextract/lib/libboost_date_time.so
rwxrwxrwx root/root 0 2015-07-01 10:32 ./usr/lib/pyshared/python2.7/dataextract/lib/libboost_date_time.so -> ../../../../python2.7/dist-packages/dataextract/lib/libboost_date_time.so

As you can see the versioned files `libboost_date_time.so.1.53.0` are missing and I am quite puzzled why. I must also say that this machine is the exception. I have other machines where the package builds properly. Can someone please provide a clue? Thanks, Oz --- Imagine there's no countries it isn't hard to do Nothing to kill or die for And no religion too Imagine all the people Living life in peace
From seb at nmeos.net Wed Jul 1 21:01:17 2015 From: seb at nmeos.net (Sébastien Douche) Date: Wed, 01 Jul 2015 21:01:17 +0200 Subject: [Distutils] New buildout release needed In-Reply-To: References: <5593D68C.3060403@vanrees.org> Message-ID: <1435777277.1240594.312893969.56941F18@webmail.messagingengine.com> On Wed, 1 Jul 2015, at 16:40, Reinout van Rees wrote: > The new bootstrap.py is online and zc.recipe.egg 2.0.2 and zc.buildout > 2.4.0 have been released! Well done Reinout! -- Sébastien Douche Twitter: @sdouche http://douche.name From ncoghlan at gmail.com Thu Jul 2 16:34:32 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Jul 2015 00:34:32 +1000 Subject: [Distutils] markers, <= and python 2.7.10 In-Reply-To: References: Message-ID: On 2 July 2015 at 01:37, Donald Stufft wrote: > On July 1, 2015 at 12:33:29 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: >> From a "how to fix it?" perspective, I think it makes sense to say >> that any marker ending in "_version" is compared using PEP 440 version >> ordering semantics rather than lexical ordering >> >> My rationale for that is: >> >> 1. In the simple X.Y.Z cases, lexical string comparisons and PEP 440 >> give the same answer >> 2. In the more complex cases (like 2.7.10), PEP 440 gives the right >> answer, while lexical string comparison fails >> 3. Anything handling environment markers already needs to be able to >> handle PEP 440 semantics anyway > > Also agreed. Perhaps the ``_version`` markers should just support the > full range of specifiers in PEP 440. That's what I was thinking. The one slight difference is that I don't believe we'd want any special case handling of pre-releases in environment markers - if I have a Python alpha installed, I'd want it treated as equivalent to the corresponding final release, since constraint checking isn't the same as choosing the "best" version for download/installation. Cheers, Nick.
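The behaviour Nick suggests could look something like the sketch below (a hypothetical helper, not proposed spec text): for marker constraint checking, an installed pre-release such as 3.5.0a1 is compared as if it were its final release, rather than sorting before it as PEP 440 would for install candidates.

```python
import re

def release_segment(version):
    # Keep only the leading numeric release part, dropping pre-release
    # suffixes like "a1" or "rc2": "3.5.0a1" -> (3, 5, 0).
    match = re.match(r"\d+(?:\.\d+)*", version)
    return tuple(int(part) for part in match.group(0).split("."))

# Under this rule an installed 3.5.0a1 satisfies a >= "3.5.0" constraint:
print(release_segment("3.5.0a1") >= release_segment("3.5.0"))  # True
```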
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Thu Jul 2 18:03:38 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 2 Jul 2015 12:03:38 -0400 Subject: [Distutils] markers, <= and python 2.7.10 In-Reply-To: References: Message-ID: On July 2, 2015 at 10:34:35 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 2 July 2015 at 01:37, Donald Stufft wrote: > > On July 1, 2015 at 12:33:29 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > >> From a "how to fix it?" perspective, I think it makes sense to say > >> that any marker ending in "_version" is compared using PEP 440 version > >> ordering semantics rather than lexical ordering > >> > >> My rationale for that is: > >> > >> 1. In the simple X.Y.Z cases, lexical string comparisons and PEP 440 > >> give the same answer > >> 2. In the more complex cases (like 2.7.10), PEP 440 gives the right > >> answer, while lexical string comparison fails > >> 3. Anything handling environment markers already needs to be able to > >> handle PEP 440 semantics anyway > > > > Also agreed. Perhaps the ``_version`` markers should just support the > > full range of specifiers in PEP 440. > > That's what I was thinking. The one slight difference is that I don't > believe we'd want any special case handling of pre-releases in > environment markers - if I have a Python alpha installed, I'd want it > treated as equivalent to the corresponding final release, since > constraint checking isn't the same as choosing the "best" version for > download/installation. > > Well, even in the case for dependencies we say that a pre-release should be accepted if it's installed which would be in that vein IMO. I agree that it'd be useful to call that out explicitly though.
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Fri Jul 3 13:40:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Jul 2015 21:40:07 +1000 Subject: [Distutils] markers, <= and python 2.7.10 In-Reply-To: References: Message-ID: On 3 July 2015 at 02:03, Donald Stufft wrote: > On July 2, 2015 at 10:34:35 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: >> That's what I was thinking. The one slight difference is that I don't >> believe we'd want any special case handling of pre-releases in >> environment markers - if I have a Python alpha installed, I'd want it >> treated as equivalent to the corresponding final release, since >> constraint checking isn't the same as choosing the "best" version for >> download/installation. > > Well, even in the case for dependencies we say that a pre-release > should be accepted if it's installed which would be in that vein > IMO. I agree that it'd be useful to call that out explicitly though. +1 For the benefit of the list, note that James Polley has started a draft of this extracted PEP: https://github.com/pypa/interoperability-peps/pull/29 Aside from the version handling, I'm not aware of any other major gripes with the current environment markers, so now's the time to let me know if there are other actual problems that need to be fixed :) (I'm not worried about missing features at this point - it's hard for those to qualify as urgent given how long we've already gone without them, but handling 2.7.10 correctly is pretty important) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Mon Jul 6 18:24:48 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Jul 2015 18:24:48 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform Message-ID: <20150706182448.32cddf8d@fsol> Hello, I'm having the following errors when trying to upload a wheel to PyPI.
(yes, the version number is off - but that's beside the point here, I think):

$ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl
/home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo
warnings.warn('No PKG-INFO found for package: %s' % self.package_name)
Uploading distributions to https://pypi.python.org/pypi
Uploading llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform

$ twine upload dist/llvmlite-0.0.0-py3-none-linux_x86_64.whl
/home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo
warnings.warn('No PKG-INFO found for package: %s' % self.package_name)
Uploading distributions to https://pypi.python.org/pypi
Uploading llvmlite-0.0.0-py3-none-linux_x86_64.whl
HTTPError: 400 Client Error: Binary wheel for an unsupported platform

Thanks Antoine. From graffatcolmingov at gmail.com Mon Jul 6 18:35:52 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 6 Jul 2015 11:35:52 -0500 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150706182448.32cddf8d@fsol> References: <20150706182448.32cddf8d@fsol> Message-ID: On Mon, Jul 6, 2015 at 11:24 AM, Antoine Pitrou wrote: > > Hello, > > I'm having the following errors when trying to upload a wheel to PyPI.
> (yes, the version number is off - but that's besides the point here, > I think): > > $ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > /home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo > warnings.warn('No PKG-INFO found for package: %s' % self.package_name) > Uploading distributions to https://pypi.python.org/pypi > Uploading llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > HTTPError: 400 Client Error: Binary wheel for an unsupported platform > > $ twine upload dist/llvmlite-0.0.0-py3-none-linux_x86_64.whl > /home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo > warnings.warn('No PKG-INFO found for package: %s' % self.package_name) > Uploading distributions to https://pypi.python.org/pypi > Uploading llvmlite-0.0.0-py3-none-linux_x86_64.whl > HTTPError: 400 Client Error: Binary wheel for an unsupported platform Unrelated to your problem here (which I think is due in part to the lack of classifiers for binaries for *nix systems (or something along those lines that Nick or Donald will be more familiar with)), I filed a twine bug for those warnings: https://github.com/pypa/twine/issues/114 I've never seen those before, so I'm sorry for the noise in that output. 
Cheers, Ian From p.f.moore at gmail.com Mon Jul 6 20:03:19 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 6 Jul 2015 19:03:19 +0100 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150706182448.32cddf8d@fsol> References: <20150706182448.32cddf8d@fsol> Message-ID: On 6 July 2015 at 17:24, Antoine Pitrou wrote: > (yes, the version number is off - but that's besides the point here, > I think): > > $ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > /home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo > warnings.warn('No PKG-INFO found for package: %s' % self.package_name) > Uploading distributions to https://pypi.python.org/pypi > Uploading llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > HTTPError: 400 Client Error: Binary wheel for an unsupported platform PyPI does not support uploading binary wheels for Linux. This is a deliberate restriction because the tags supported by the wheel spec are not fine-grained enough to specify compatibility between the myriad of Linux variations. The intention is that once a resolution to that problem is found, this restriction will be lifted. In the meantime wheels are fine on PyPI for Windows and OSX, and on Linux for private use. It's only public distribution of Linux wheels where care is needed.
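The restriction Paul describes amounts to inspecting the platform tag at the end of the wheel filename. A toy sketch of that kind of check (hypothetical, not PyPI's actual code):

```python
def rejects_wheel(filename):
    # Wheel filenames end in {python}-{abi}-{platform}.whl; the rule at
    # the time was simply: reject anything with a linux platform tag.
    platform_tag = filename[:-len(".whl")].split("-")[-1]
    return platform_tag.startswith("linux")

print(rejects_wheel("llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl"))  # True
print(rejects_wheel("llvmlite-0.0.0-py2.py3-none-win_amd64.whl"))     # False
```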
Paul From solipsis at pitrou.net Mon Jul 6 20:18:08 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Jul 2015 20:18:08 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> Message-ID: <20150706201808.3f631734@fsol> On Mon, 6 Jul 2015 19:03:19 +0100 Paul Moore wrote: > On 6 July 2015 at 17:24, Antoine Pitrou wrote: > > (yes, the version number is off - but that's besides the point here, > > I think): > > > > $ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > > /home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: UserWarning: No PKG-INFO found for package: pkginfo > > warnings.warn('No PKG-INFO found for package: %s' % self.package_name) > > Uploading distributions to https://pypi.python.org/pypi > > Uploading llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl > > HTTPError: 400 Client Error: Binary wheel for an unsupported platform > > PyPI does not support uploading binary wheels for Linux. This is a > deliberate restriction because the tags supported by the wheel spec > are not fine enough grained to specify compatibility between the > myriad of Linux variations. The intention is that once a resolution to > that problem is found, this restriction will be lifted. What if packagers take care of working around the issue? (for example by building on a suitably old Linux platform, as we already do for Conda packages) Regards Antoine. From p.f.moore at gmail.com Mon Jul 6 23:34:38 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 6 Jul 2015 22:34:38 +0100 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150706201808.3f631734@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> Message-ID: On 6 July 2015 at 19:18, Antoine Pitrou wrote: > What if packagers take care of working around the issue? 
> (for example by building on a suitably old Linux platform, as we > already do for Conda packages) At the moment it's just a simple "if the wheel is for Linux, reject it" test. As to whether that's too conservative, one of the Linux guys would need to comment. Maybe the issue is simply that we can't be sure people will take the care that you do, and the risk of people getting broken installs is too high? Paul From solipsis at pitrou.net Mon Jul 6 23:46:04 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 6 Jul 2015 23:46:04 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> Message-ID: <20150706234604.1a505171@fsol> On Mon, 6 Jul 2015 22:34:38 +0100 Paul Moore wrote: > On 6 July 2015 at 19:18, Antoine Pitrou wrote: > > What if packagers take care of working around the issue? > > (for example by building on a suitably old Linux platform, as we > > already do for Conda packages) > > At the moment it's just a simple "if the wheel is for Linux, reject it" test. > > As to whether that's too conservative, one of the Linux guys would > need to comment. Maybe the issue is simply that we can't be sure > people will take the care that you do, and the risk of people getting > broken installs is too high? Then how about a warning, or a rejection by default with a well-known way to bypass it? Regards Antoine. From ubernostrum at gmail.com Tue Jul 7 07:09:11 2015 From: ubernostrum at gmail.com (James Bennett) Date: Mon, 6 Jul 2015 22:09:11 -0700 Subject: [Distutils] Phantom release/file and now can't upload Message-ID: Earlier tonight I was trying to upload a new version (1.1) of https://pypi.python.org/pypi/django-contact-form Initially tried 'setup.py sdist' followed by 'twine upload -s' of the resulting tarball. Twine reported success, but no new release or file appeared on PyPI. Tried 'setup.py sdist upload'. 
That reported success, but no new release or file appeared on PyPI. I tried quite a lot of things after that and kept getting reports of success... and no new release or file on PyPI. I even went and copy/pasted a sample from the distutils documentation using 'setup.py sdist upload -r' to force PyPI just in case it was somehow using the wrong index. Eventually some combination of things must have at least partly worked, because information about a 1.1 release of django-contact-form appeared on PyPI... except the file itself still did not appear, and any attempt to upload returned: HTTPError: 400 Client Error: This filename has previously been used, you should use a different version. I am absolutely stumped as to what's going on. I've tried deleting and re-creating the 1.1 release to re-upload, but I'm stuck at being able to create only an empty release with no file, since PyPI will not allow the file to be uploaded. While I try to track down what happened, who has the power to override PyPI's disallowed filename restriction for this? As far as I can tell, the file was *never* listed on PyPI and so could not have been downloaded by anyone, and after struggling with this for a few hours I'm not at all in the mood to bump versions just to get PyPI to cooperate. From ncoghlan at gmail.com Tue Jul 7 15:53:59 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Jul 2015 23:53:59 +1000 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150706234604.1a505171@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> Message-ID: On 7 July 2015 at 07:46, Antoine Pitrou wrote: > On Mon, 6 Jul 2015 22:34:38 +0100 > Paul Moore wrote: >> On 6 July 2015 at 19:18, Antoine Pitrou wrote: >> > What if packagers take care of working around the issue?
>> > (for example by building on a suitably old Linux platform, as we >> > already do for Conda packages) >> >> At the moment it's just a simple "if the wheel is for Linux, reject it" test. >> >> As to whether that's too conservative, one of the Linux guys would >> need to comment. Maybe the issue is simply that we can't be sure >> people will take the care that you do, and the risk of people getting >> broken installs is too high? > > Then how about a warning, or a rejection by default with a well-known > way to bypass it? Unfortunately, the compatibility tagging for Linux wheels is currently so thoroughly inadequate that even in tightly controlled environments having a wheel file escape from its "intended" target platforms can cause hard to debug problems. There was a good proposal not that long ago to add a "platform tag override" capability to both pip (for installation) and bdist_wheel (for publication), but I don't know what became of that. If we had that system, then I think it would be reasonable to allow Linux uploads with a "pypi_linux_x86_64" override tag - they'd never be installed by default, but folks could opt in to allowing them. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Tue Jul 7 16:07:08 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 7 Jul 2015 16:07:08 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> Message-ID: <20150707160708.5b2e44b8@fsol> On Tue, 7 Jul 2015 23:53:59 +1000 Nick Coghlan wrote: > On 7 July 2015 at 07:46, Antoine Pitrou wrote: > > On Mon, 6 Jul 2015 22:34:38 +0100 > > Paul Moore wrote: > >> On 6 July 2015 at 19:18, Antoine Pitrou wrote: > >> > What if packagers take care of working around the issue? 
> >> > (for example by building on a suitably old Linux platform, as we > >> > already do for Conda packages) > >> > >> At the moment it's just a simple "if the wheel is for Linux, reject it" test. > >> > >> As to whether that's too conservative, one of the Linux guys would > >> need to comment. Maybe the issue is simply that we can't be sure > >> people will take the care that you do, and the risk of people getting > >> broken installs is too high? > > > > Then how about a warning, or a rejection by default with a well-known > > way to bypass it? > > Unfortunately, the compatibility tagging for Linux wheels is currently > so thoroughly inadequate that even in tightly controlled environments > having a wheel file escape from its "intended" target platforms can > cause hard to debug problems. I'm not sure what you're pointing to, could you elaborate a bit? For the record, building against a well-known, old glibc + gcc has served the Anaconda platform well. Regards Antoine. From ncoghlan at gmail.com Tue Jul 7 16:22:29 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Jul 2015 00:22:29 +1000 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150707160708.5b2e44b8@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> <20150707160708.5b2e44b8@fsol> Message-ID: On 8 July 2015 at 00:07, Antoine Pitrou wrote: > On Tue, 7 Jul 2015 23:53:59 +1000 > Nick Coghlan wrote: >> Unfortunately, the compatibility tagging for Linux wheels is currently >> so thoroughly inadequate that even in tightly controlled environments >> having a wheel file escape from its "intended" target platforms can >> cause hard to debug problems. > > I'm not sure what you're pointing to, could you elaborate a bit? 
That was a reference to a case of someone building for Debian (I think), and then having one of their wheel files end up installed on a CentOS system and wondering why things weren't working. > For the record, building against a well-known, old glibc + gcc has > served the Anaconda platform well. The key problem is that there's no straightforward way for us to verify that folks are actually building against a suitably limited set of platform APIs that all Linux distros in widespread use provide. And when it inevitably fails (which it will, Python and PyPI are too popular for it not to), the folks running into the problem are unlikely to be able to diagnose what has happened, and even once we figure out what has gone wrong, we'd be left having to explain how to blacklist wheel files, and the UX would just generally be terrible (and the burden of dealing with that would fall on the pip and PyPI maintainers, not on the individual projects publishing insufficiently conservative Linux wheel files). That's why the platform override tags are such a nice idea, as it becomes possible to start iterating on possible solutions to the problem without affecting the default installation UX in the near term. Cheers, Nick. 
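The override-tag idea Nick describes layers on top of PEP 425 compatibility matching: an installer accepts a wheel only if one of the (python, abi, platform) tag triples encoded in its filename appears in the installer's list of supported tags. A minimal sketch of that acceptance test (simplified: it ignores optional build tags, but does expand PEP 425's '.'-separated compressed tag sets):

```python
import itertools

def wheel_tags(filename):
    """Expand the (python, abi, platform) tag triples from a wheel filename.

    PEP 425 allows each tag position to be a '.'-separated compressed set,
    e.g. 'py2.py3-none-any' names two triples.
    """
    stem = filename[:-len(".whl")]
    # name-version-python-abi-platform; the tags are the last three fields
    py, abi, plat = stem.split("-")[-3:]
    return {
        triple
        for triple in itertools.product(py.split("."), abi.split("."), plat.split("."))
    }

def compatible(filename, supported):
    """True if any tag triple of the wheel is in the installer's supported set."""
    return not wheel_tags(filename).isdisjoint(supported)

# Example supported set for a hypothetical CPython 3.4 on 64-bit Linux:
supported = {("py3", "none", "any"), ("cp34", "cp34m", "linux_x86_64")}
print(compatible("llvmlite-0.6.0-py2.py3-none-any.whl", supported))           # True
print(compatible("llvmlite-0.6.0-py2.py3-none-linux_x86_64.whl", supported))  # False
```

An override tag would then just be one more platform string that a user explicitly adds to (or that PyPI requires in) the set being matched against.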
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Tue Jul 7 17:02:40 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 7 Jul 2015 11:02:40 -0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> <20150707160708.5b2e44b8@fsol> Message-ID: On July 7, 2015 at 10:22:55 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 8 July 2015 at 00:07, Antoine Pitrou wrote: > > On Tue, 7 Jul 2015 23:53:59 +1000 > > Nick Coghlan wrote: > >> Unfortunately, the compatibility tagging for Linux wheels is currently > >> so thoroughly inadequate that even in tightly controlled environments > >> having a wheel file escape from its "intended" target platforms can > >> cause hard to debug problems. > > > > I'm not sure what you're pointing to, could you elaborate a bit? > > That was a reference to a case of someone building for Debian (I > think), and then having one of their wheel files end up installed on a > CentOS system and wondering why things weren't working. > > > For the record, building against a well-known, old glibc + gcc has > > served the Anaconda platform well. > > The key problem is that there's no straightforward way for us to > verify that folks are actually building against a suitably limited set > of platform APIs that all Linux distros in widespread use provide. > > And when it inevitably fails (which it will, Python and PyPI are too > popular for it not to), the folks running into the problem are > unlikely to be able to diagnose what has happened, and even once we > figure out what has gone wrong, we'd be left having to explain how to > blacklist wheel files, and the UX would just generally be terrible > (and the burden of dealing with that would fall on the pip and PyPI > maintainers, not on the individual projects publishing insufficiently > conservative Linux wheel files). 
> > That's why the platform override tags are such a nice idea, as it > becomes possible to start iterating on possible solutions to the > problem without affecting the default installation UX in the near > term.
pip 7+ actually has the UI for blacklisting binary packages now, primarily to ask "no don't build a wheel for X", but it also functions to ask *not* to accept wheels for a particular project from PyPI. In my mind, the biggest reason to not just open up the ability to upload even generic linux wheels right now is the lack of a safe-ish default. I think if we added a few things:
* Default to per platform tags (e.g. ubuntu_14_04), but allow this to be customized and also accept "Generic" Linux wheels as well.
* Put the libc into the file name as well, since it's reasonable to build a "generic" linux wheel that statically links all dependencies (afaik), however it does not really work to statically link glibc. This means that even if you build against an old version of glibc, if you're running on a Linux that *doesn't* use glibc (like Alpine which uses MUSL) you'll run into problems.
I think that it is entirely possible to build a generic linux wheel that will work on any Linux built with the same libc of the same or newer version, however I think that you have to be careful if you do it. You have to ensure all your dependencies are statically linked (if you have any) and you have to ensure that you build against an old enough Linux (likely some form of CentOS).
* Side question, since I don't actually know how a computer works: Is it even possible to have a CPython extension link against a different libc than CPython itself is linked against? What if static linking is involved, since there are non-glibc libcs which actually do support static linking?
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From solipsis at pitrou.net Tue Jul 7 17:25:46 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 7 Jul 2015 17:25:46 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> <20150707160708.5b2e44b8@fsol> Message-ID: <20150707172546.5507a976@fsol> On Tue, 7 Jul 2015 11:02:40 -0400 Donald Stufft wrote: > In my mind, the biggest reason to not just open up the ability to upload even > generic linux wheels right now is the lack of a safe-ish default. I think if > we added a few things: > > * Default to per platform tags (e.g. ubuntu_14_04), but allow this to be > customized and also accept "Generic" Linux wheels as well. That would be cool :) > * Put the libc into the file name as well since it's reasonable to build a > "generic" linux wheel that statically links all dependencies (afaik), however > it does not really work to statically link glibc. True. For example, here is the meat of a build of llvmlite on Linux:
$ ldd miniconda3/pkgs/llvmlite-0.6.0-py34_5/lib/python3.4/site-packages/llvmlite/binding/libllvmlite.so
        linux-vdso.so.1 => (0x00007ffeacefd000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8c9e2f5000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8c9e0d7000)
        librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8c9decf000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8c9dccb000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8c9d9c5000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8c9d7ae000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8c9d3ea000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f8c9fcee000)
It embeds LLVM and has no dynamic reference to anything beside the most basic runtime libraries (libstdc++ is statically linked in).
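A manual ldd inspection like the one above can be turned into an automated gate. The sketch below is an assumption about how such a check might look (not an existing tool): it parses `ldd` output and flags any dynamic dependency outside a whitelist of basic runtime libraries, so an over-linked build fails loudly before upload.

```python
# Libraries it is reasonable to expect on any glibc-based Linux.
# The set is illustrative; a real policy would need careful debate.
BASE_RUNTIME = {
    "linux-vdso.so.1", "libz.so.1", "libpthread.so.0", "librt.so.1",
    "libdl.so.2", "libm.so.6", "libgcc_s.so.1", "libc.so.6",
    "ld-linux-x86-64.so.2",
}

def unexpected_deps(ldd_output, allowed=BASE_RUNTIME):
    """Return dependency sonames in `ldd` output that are not whitelisted."""
    deps = set()
    for line in ldd_output.splitlines():
        line = line.strip()
        if not line:
            continue
        # Lines look like 'libm.so.6 => /lib/... (0x...)'
        # or '/lib64/ld-linux-x86-64.so.2 (0x...)'.
        soname = line.split(" => ")[0].split()[0]
        deps.add(soname.rsplit("/", 1)[-1])  # keep the bare soname
    return sorted(deps - allowed)

sample = """\
        linux-vdso.so.1 =>  (0x00007ffeacefd000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8c9e2f5000)
        libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f00)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8c9d3ea000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f8c9fcee000)
"""
print(unexpected_deps(sample))  # ['libstdc++.so.6'] -> should have been statically linked
```

In practice the input would come from running `ldd` (or a proper ELF parser) over every `.so` inside the built wheel.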
The .so file doesn't even require Python (but the rest of llvmlite, which loads the .so file using ctypes, does need Python, of course). We use a similar strategy on Windows and OS X. > This means that even if > you build against an old version of glibc if you're running on a Linux that > *doesnt* use glibc (like Alpine which uses MUSL) you'll run into problems. glibc vs. non-glibc is yet a different concern IMHO. Mainstream Linux setups use glibc. > You have to ensure all your > dependencies are statically linked (if you have any) and you have to ensure > that you build against an old enough Linux (likely some form of CentOS). Yes, we use CentOS 5... > * Side question, since I don't actually know how a computer works: Is it even > possible to have a CPython extension link against a different libc than > CPython itself is linked against? Half-incompetent answer here: I think link-time it would be fine. Then at library load time it depends on whether the actual system glibc is ABI/API-compatible with the one the C extension (and/or CPython itself) was linked with. Regards Antoine. From richard at python.org Wed Jul 8 01:23:20 2015 From: richard at python.org (Richard Jones) Date: Wed, 8 Jul 2015 09:23:20 +1000 Subject: [Distutils] Phantom release/file and now can't upload In-Reply-To: References: Message-ID: This is very strange - perhaps there's a caching issue or something, but there's a file present on that release when I look now :/ On 7 July 2015 at 15:09, James Bennett wrote: > Earlier tonight I was trying to upload a new version (1.1) of > https://pypi.python.org/pypi/django-contact-form > > Initially tried 'setup.py sdist' followed by 'twine upload -s' of the > resulting tarball. Twine reported success, but no new release or file > appeared on PyPI. Tried 'setup.py sdist upload'. That reported success, but > no new release or file appeared on PyPI. > > I tried quite a lot of things after that and kept getting reports of > success...
and no new release or file on PyPI. I even went and copy/pasted > a sample from the distutils documentation using 'setup.py sdist upload -r' > to force PyPI just in case it was somehow using the wrong index. > > Eventually some combination of things must have at least partly worked, > because information about a 1.1 release of django-contact-form appeared on > PyPI... except the file itself still did not appear, and any attempt to > upload returned: > > HTTPError: 400 Client Error: This filename has previously been used, you > should use a different version. > > I am absolutely stumped as to what's going on. I've tried deleting and > re-creating the 1.1 release to re-upload, but I'm stuck at being able to > create only an empty release with no file, since PyPI will not allow the > file to be uploaded. > > While I try to track down what happened, who has the power to override > PyPI's disallowed filename restriction for this? As far as I can tell, the > file was *never* listed on PyPI and so could not have been downloaded by > anyone, and after struggling with this for a few hours I'm not at all in > the mood to bump versions just to get PyPI to cooperate. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ubernostrum at gmail.com Wed Jul 8 01:28:04 2015 From: ubernostrum at gmail.com (James Bennett) Date: Tue, 7 Jul 2015 16:28:04 -0700 Subject: [Distutils] Phantom release/file and now can't upload In-Reply-To: References: Message-ID: The 1.1 release exists, but PyPI thinks -- at least when it shows me the package-owner interface -- that there are no files for that release. 
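One way to cross-check what PyPI has actually recorded, independently of the package-owner web interface, is the JSON endpoint at `https://pypi.python.org/pypi/<name>/json`, whose `"releases"` key maps version strings to lists of file records. A small helper (a sketch; the sample data below is made up to mirror the phantom-release situation) that lists the filenames recorded for a release:

```python
def release_files(pkg_json, version):
    """Return the filenames PyPI has recorded for one release.

    `pkg_json` is the parsed /pypi/<name>/json response; its "releases"
    key maps version strings to lists of per-file records.
    """
    return sorted(f["filename"] for f in pkg_json.get("releases", {}).get(version, []))

# Response shaped like PyPI's JSON (data invented for illustration):
sample = {
    "releases": {
        "1.0": [{"filename": "django-contact-form-1.0.tar.gz"}],
        "1.1": [],  # the "phantom" case: release exists, but no files
    }
}
print(release_files(sample, "1.0"))  # ['django-contact-form-1.0.tar.gz']
print(release_files(sample, "1.1"))  # []
```

A live check would fetch the URL with `urllib` and `json.loads` the body before calling the helper, which sidesteps any caching in the HTML pages.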
Here's a screenshot of what I see: https://dl.dropboxusercontent.com/u/408510/django-contact-form-n-file.png Checked the 1.0 release just in case it went there, and that only shows the 1.0 package, not the 1.1 package. On Tue, Jul 7, 2015 at 4:23 PM, Richard Jones wrote: > This is very strange - perhaps there's a caching issue or something, but > there's a file present on that release when I look now :/ > > On 7 July 2015 at 15:09, James Bennett wrote: > >> Earlier tonight I was trying to upload a new version (1.1) of >> https://pypi.python.org/pypi/django-contact-form >> >> Initially tried 'setup.py sdist' followed by 'twine upload -s' of the >> resulting tarball. Twine reported success, but no new release or file >> appeared on PyPI. Tried 'setup.py sdist upload'. That reported success, but >> no new release or file appeared on PyPI. >> >> I tried quite a lot of things after that and kept getting reports of >> success... and no new release or file on PyPI. I even went and copy/pasted >> a sample from the distutils documentation using 'setup.py sdist upload -r' >> to force PyPI just in case it was somehow using the wrong index. >> >> Eventually some combination of things must have at least partly worked, >> because information about a 1.1 release of django-contact-form appeared on >> PyPI... except the file itself still did not appear, and any attempt to >> upload returned: >> >> HTTPError: 400 Client Error: This filename has previously been used, you >> should use a different version. >> >> I am absolutely stumped as to what's going on. I've tried deleting and >> re-creating the 1.1 release to re-upload, but I'm stuck at being able to >> create only an empty release with no file, since PyPI will not allow the >> file to be uploaded. >> >> While I try to track down what happened, who has the power to override >> PyPI's disallowed filename restriction for this? 
As far as I can tell, the >> file was *never* listed on PyPI and so could not have been downloaded by >> anyone, and after struggling with this for a few hours I'm not at all in >> the mood to bump versions just to get PyPI to cooperate. >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Wed Jul 8 01:32:37 2015 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 07 Jul 2015 19:32:37 -0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150706201808.3f631734@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/06/2015 02:18 PM, Antoine Pitrou wrote: > On Mon, 6 Jul 2015 19:03:19 +0100 Paul Moore > wrote: >> On 6 July 2015 at 17:24, Antoine Pitrou >> wrote: >>> (yes, the version number is off - but that's besides the point >>> here, I think): >>> >>> $ twine upload dist/llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl >>> /home/antoine/.local/lib/python3.4/site-packages/pkginfo/installed.py:53: >>> UserWarning: No PKG-INFO found for package: pkginfo >>> warnings.warn('No PKG-INFO found for package: %s' % >>> self.package_name) Uploading distributions to >>> https://pypi.python.org/pypi Uploading >>> llvmlite-0.0.0-py2.py3-none-linux_x86_64.whl HTTPError: 400 Client >>> Error: Binary wheel for an unsupported platform >> >> PyPI does not support uploading binary wheels for Linux. This is a >> deliberate restriction because the tags supported by the wheel spec >> are not fine enough grained to specify compatibility between the >> myriad of Linux variations. The intention is that once a resolution >> to that problem is found, this restriction will be lifted. 
> > What if packagers take care of working around the issue? (for example > by building on a suitably old Linux platform, as we already do for > Conda packages) Compared to Windows, and even somewhat OS/X, the win for uploading wheels to PyPI is miniscule: pretty much everybody developing on Linux has or can get the toolchain required to build a wheel from an sdist, and can share those built wheels across the hosts she knows to be compatible. A-foolish-consistency'ly, Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBAgAGBQJVnGGVAAoJEPKpaDSJE9HY5McP/0MCer5wS0MygSHw/UGrcvKN 2pDfGdFsucjY2cR5Wr/R28NEvTPzrmBghPAw4x47rNRpNWBOHvRMyOu3UWKJIeMI r/elBcP9te26jA7o/clqkfnVNKGjcoDSis+YkyhZAfhEDNGXQAaSf8uNFsVDKMTj W8ggxe0DraXHImE0X4rahu2a7Svd7yWIMtRv5eMP+HjfLA9ouQUuFNYXCuXDmTIS 5SJ+XkXjToKy0DSqPkknESVcCnF6sDjI1STo4dPi65QNlUQomv9m3TwLN9Ak+bx6 u0vM8R594PXMcb8cTnBXdmDbdDhQRym7Wy2Fr5zYs7nwJ8x13d2QKs5fQ/QhmUF5 DtyWeQekhK1+wupt6NQ8tXgu9jx9SV81XtvZSAp9SAVS9asC7BUjLAZEh9r3K07b dTkaY719vFUioiYQazDjIrkMLxSKjGbBgkve78tkjDtfwvKDOBqofFEmcsBv8yh0 wA5/iYJ78Wmr2rti5d4/JwLU5Tc+NkkYcw1W0bRUgi+GX8vYtYQ03i36f33Kyf3f z6n8rquZhGDIcPjbrEmveBJxErJTu/3ifuIy6NnTwCFmQNOl+HpqtJFtYS1a3NUR 0s9eivFgl6vwExN2KywYW9N/6tCcWyB0qvECG6tB/a+Ao3iW8KOciyyjaQZWwOFx glybBbAdFgtqaLafaJnQ =NgK8 -----END PGP SIGNATURE----- From vinay_sajip at yahoo.co.uk Wed Jul 8 01:32:29 2015 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 7 Jul 2015 23:32:29 +0000 (UTC) Subject: [Distutils] ANN: distlib 0.2.1 released on PyPI Message-ID: <1521363010.1182798.1436311949075.JavaMail.yahoo@mail.yahoo.com> I've just released version 0.2.1 of distlib on PyPI [1]. For newcomers, distlib is a library of packaging functionality which is intended to be usable as the basis for third-party packaging tools. 
The main changes in this release are as follows:
* Fixed issue #58: Return a Distribution instance or None from locate().
* Fixed issue #59: Skipped special keys when looking for versions.
* Improved behaviour of PyPIJSONLocator to be analogous to that of other locators.
* Added resource iterator functionality.
* Fixed issue #71: Updated launchers to decode shebangs using UTF-8. This allows non-ASCII pathnames to be correctly handled.
* Ensured that the executable written to shebangs is normcased.
* Changed ScriptMaker to work better under Jython.
* Changed the mode setting method to work better under Jython.
* Changed get_executable() to return a normcased value.
* Handled multiple-architecture wheel filenames correctly.
A more detailed change log is available at [2]. Please try it out, and if you find any problems or have any suggestions for improvements, please give some feedback using the issue tracker! [3] Regards, Vinay Sajip [1] https://pypi.python.org/pypi/distlib/0.2.1 [2] https://goo.gl/K5Spsp [3] https://bitbucket.org/pypa/distlib/issues/new -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Wed Jul 8 01:43:48 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 8 Jul 2015 01:43:48 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> Message-ID: <20150708014348.19e4dd61@fsol> On Tue, 07 Jul 2015 19:32:37 -0400 Tres Seaver wrote: > > Compared to Windows, and even somewhat OS/X, the win for uploading wheels > to PyPI is miniscule: pretty much everybody developing on Linux has or > can get the toolchain required to build a wheel from an sdist, and can > share those built wheels across the hosts she knows to be compatible. That's a dramatically uninformed statement, to put it politely...
Some packages have difficult-to-meet build dependencies, and can also take a long time to do so. llvmlite, the package I'm talking about, builds against the development libraries for LLVM 3.6 (a non-trivial download and install, assuming you can find binaries of that LLVM version for your OS version.... otherwise, count ~20 minutes to compile it with a modern quad-core CPU). We regularly have bug reports from people failing to compile the package on Linux (and OS X), which is why we are considering the option of pre-built binary wheels (in addition to the conda packages we already provide, and which some people are reluctant to use). Regards Antoine. From cournape at gmail.com Wed Jul 8 04:04:48 2015 From: cournape at gmail.com (David Cournapeau) Date: Wed, 8 Jul 2015 06:04:48 +0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150706234604.1a505171@fsol> <20150707160708.5b2e44b8@fsol> Message-ID: On Tue, Jul 7, 2015 at 7:02 PM, Donald Stufft wrote: > On July 7, 2015 at 10:22:55 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > > On 8 July 2015 at 00:07, Antoine Pitrou wrote: > > > On Tue, 7 Jul 2015 23:53:59 +1000 > > > Nick Coghlan wrote: > > >> Unfortunately, the compatibility tagging for Linux wheels is currently > > >> so thoroughly inadequate that even in tightly controlled environments > > >> having a wheel file escape from its "intended" target platforms can > > >> cause hard to debug problems. > > > > > > I'm not sure what you're pointing to, could you elaborate a bit? > > > > That was a reference to a case of someone building for Debian (I > > think), and then having one of their wheel files end up installed on a > > CentOS system and wondering why things weren't working. > > > > > For the record, building against a well-known, old glibc + gcc has > > > served the Anaconda platform well. 
> > > > The key problem is that there's no straightforward way for us to > > verify that folks are actually building against a suitably limited set > > of platform APIs that all Linux distros in widespread use provide. > > > > And when it inevitably fails (which it will, Python and PyPI are too > > popular for it not to), the folks running into the problem are > > unlikely to be able to diagnose what has happened, and even once we > > figure out what has gone wrong, we'd be left having to explain how to > > blacklist wheel files, and the UX would just generally be terrible > > (and the burden of dealing with that would fall on the pip and PyPI > > maintainers, not on the individual projects publishing insufficiently > > conservative Linux wheel files). > > > > That's why the platform override tags are such a nice idea, as it > > becomes possible to start iterating on possible solutions to the > > problem without affecting the default installation UX in the near > > term. > > > > > > pip 7+ actually has the UI for blacklisting binary packages now, primarily > to > ask "no don't build a wheel for X", but it also functions to ask *not* to > accept wheels for a particular project from PyPI. > > In my mind, the biggest reason to not just open up the ability to upload > even > generic linux wheels right now is the lack of a safe-ish default. I think > if > we added a few things: > > * Default to per platform tags (e.g. ubuntu_14_04), but allow this to be > customized and also accept "Generic" Linux wheels as well. > * Put the libc into the file name as well since it's reasonable to build a > "generic" linux wheel that statically links all dependencies (afaik), > however > it does not really work to statically link glibc. This means that even if > you build against an old version of glibc if you're running on a Linux > that > *doesnt* use glibc (like Alpine which uses MUSL) you'll run into > problems. 
> > I think that it is entirely possible to build a generic linux wheel that > will > work on any Linux built the same libc* of the same or newer version, > however I > think that you have to be careful if you do it. You have to ensure all your > dependencies are statically linked (if you have any) and you have to ensure > that you build against an old enough Linux (likely some form of CentOS). > > * Side question, since I don't actually know how a computer works: Is it > even > possible to have a CPython extension link against a different libc than > CPython itself is linked against? What if static linking is involved > since > there are non glibc libcs which actually do support static linking? > You can use versioned symbols to manage some of those issues ( https://www.kernel.org/pub/software/libs/glibc/hjl/compat/). Some softwares even include their own libc, but this is getting hair fast. The common solution is to do what Antoine mentioned: build on the lowest common denominator, and reduce the dependencies on the system as much as possible. To be honest, glibc is rarely your problem: the kernel is actually more problematic (some common python packages don't build on 2.6.18 anymore), and C++ even more so. E.g. llvm 3.6 will not build on gcc 4.1 (the version of centos 5), so you need a new g++ which means a new libtstdc++. I am biased, but that's the kind of things where you may want to work with "professional" providers with people paid to work on those boring but critical issues. David > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Wed Jul 8 04:13:18 2015 From: richard at python.org (Richard Jones) Date: Wed, 8 Jul 2015 12:13:18 +1000 Subject: [Distutils] Phantom release/file and now can't upload In-Reply-To: References: Message-ID: Ah, sorry, I misunderstood. I believe I've removed the offending entry from the database that's blocking you uploading. Please try again. 
Richard On 8 July 2015 at 09:28, James Bennett wrote: > The 1.1 release exists, but PyPI thinks -- at least when it shows me the > package-owner interface -- that there are no files for that release. > > Here's a screenshot of what I see: > > https://dl.dropboxusercontent.com/u/408510/django-contact-form-n-file.png > > Checked the 1.0 release just in case it went there, and that only shows > the 1.0 package, not the 1.1 package. > > > On Tue, Jul 7, 2015 at 4:23 PM, Richard Jones wrote: > >> This is very strange - perhaps there's a caching issue or something, but >> there's a file present on that release when I look now :/ >> >> On 7 July 2015 at 15:09, James Bennett wrote: >> >>> Earlier tonight I was trying to upload a new version (1.1) of >>> https://pypi.python.org/pypi/django-contact-form >>> >>> Initially tried 'setup.py sdist' followed by 'twine upload -s' of the >>> resulting tarball. Twine reported success, but no new release or file >>> appeared on PyPI. Tried 'setup.py sdist upload'. That reported success, but >>> no new release or file appeared on PyPI. >>> >>> I tried quite a lot of things after that and kept getting reports of >>> success... and no new release or file on PyPI. I even went and copy/pasted >>> a sample from the distutils documentation using 'setup.py sdist upload -r' >>> to force PyPI just in case it was somehow using the wrong index. >>> >>> Eventually some combination of things must have at least partly worked, >>> because information about a 1.1 release of django-contact-form appeared on >>> PyPI... except the file itself still did not appear, and any attempt to >>> upload returned: >>> >>> HTTPError: 400 Client Error: This filename has previously been used, you >>> should use a different version. >>> >>> I am absolutely stumped as to what's going on. 
I've tried deleting and >>> re-creating the 1.1 release to re-upload, but I'm stuck at being able to >>> create only an empty release with no file, since PyPI will not allow the >>> file to be uploaded. >>> >>> While I try to track down what happened, who has the power to override >>> PyPI's disallowed filename restriction for this? As far as I can tell, the >>> file was *never* listed on PyPI and so could not have been downloaded by >>> anyone, and after struggling with this for a few hours I'm not at all in >>> the mood to bump versions just to get PyPI to cooperate. >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Wed Jul 8 11:39:13 2015 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 08 Jul 2015 05:39:13 -0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150708014348.19e4dd61@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/07/2015 07:43 PM, Antoine Pitrou wrote: > That's a dramatically uninformed statement, to put it politely... > Some packages have difficult-to-meet build dependencies, and can also > take a long time to do so. In the general case, it is *exactly* those projects which are going to trip people up when you upload their binary wheels to PyPI: there will be no way to know that the compiled-in stuff will work on "any" Linux, and solving the problem of "which" Linux variants a given wheel can support is intractable. 
At least for performance-snesitev codes, building on an old, "least-common-denominator" platform has been unsat: squeezing out the most peformance on a newer machine requires compiling on exactly that platform, using the best compiler available for it (and maybe tweak options differently than are even possible on the LCD platform). > llvmlite, the package I'm talking about, builds against the > development libraries for LLVM 3.6 (a non-trivial download and > install, assuming you can find binaries of that LLVM version for your > OS version.... otherwise, count ~20 minutes to compile it with a > modern quad-core CPU). We regularly have bug reports from people > failing to compile the package on Linux (and OS X), which is why we > are considering the option of pre-built binary wheels (in addition to > the conda packages we already provide, and which some people are > reluctant to use). A conda pacakge solves the problem by pinning all the underlying non-Python dependencies to "known-good" versions, which makes it the right choice for folks who cannot / won't build it themselves. Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBAgAGBQJVnO/AAAoJEPKpaDSJE9HY01kQAJV3Jht+KBdBfWn9w5G+/bD9 /8XRweaHFl+jBhFc+NEKjZg1Nfcz+bF8PHvC/unsUF4hBMJyeMtAadutDYVvlbOb hjUnY7BF94ssP1HcNJW9x7eQfKiwQqdOxr+4r15YkYGf0osW/JJ3SXYj/R9GwQ1c d/ZlTFtbL+fZaEEwUHS8pr3J2hx9HELFPQI3VCdt7AomNqGMoM92UDPXcyOvLUTB OfswrojVM2g1NJclvVEbd0FXIO/ScQeDYVd767LIynMbv4xQoB8/Bs9B1RBEj+gj ZphfRFtGssEHiNKN0Txk9Z22aYqQhlmxiJJx4mqT5qaSyY15iG34WsBh3gwqaDvR o2VaWMDAtxunqqiB2E1NXGzPH5InivalG1laPxYs2SZJMsYn0M3y0FDgNaV0vHuO JC3U1ckVm/oeuLhmaHmi/qzfaFTAxo4JPPcTxYnAdhVWqmWOQJRz/q20acpUu7Cx ZrezhYOVeDkF5AB+XyU6O4e7a0zB7r9jKKSIn/6EJHrMQ+l744U3nT+mJ31CIIO2 ZAYAm/ZOQK8MtSBAi9rVLzKmkyOAnUX/Fnil6uBYHAeBiIRy+BLExuzcT/0bNBXi Gfhzky8/rw7RCdBUTHeWY1y+2cZpq3BzFDSZqXMPT79SA3I+kQ9wAyyRI0x5tB6s 0ujHYOv331tJooipuYrY =JH2E -----END PGP SIGNATURE----- From solipsis at pitrou.net Wed Jul 8 13:10:00 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 8 Jul 2015 13:10:00 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> Message-ID: <20150708131000.57c76b71@fsol> On Wed, 08 Jul 2015 05:39:13 -0400 Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 07/07/2015 07:43 PM, Antoine Pitrou wrote: > > > That's a dramatically uninformed statement, to put it politely... > > Some packages have difficult-to-meet build dependencies, and can also > > take a long time to do so. 
> > In the general case, it is *exactly* those projects which are going to > trip people up when you upload their binary wheels to PyPI: there will > be no way to know that the compiled-in stuff will work on "any" Linux, > and solving the problem of "which" Linux variants a given wheel can > support is intractable. Seriously, how this is even supposed to be relevant? The whole point is to produce best-effort packages that work on still-supported mainstream distros, not any arbitrary "Linux" setup. Instead of lecturing people about what is in your opinion "intractable", how about you just shut up, if you don't have anything constructive to contribute? Regards Antoine. From tseaver at palladion.com Wed Jul 8 19:05:45 2015 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 08 Jul 2015 13:05:45 -0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150708131000.57c76b71@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/08/2015 07:10 AM, Antoine Pitrou wrote: > Seriously, how this is even supposed to be relevant? The whole point > is to produce best-effort packages that work on still-supported > mainstream distros, not any arbitrary "Linux" setup. I'm arguing that allowing PyPI uploads of binary wheels for Linux will be actively harmful. The chance that hundreds of project maintainers can get the dance you are suggesting right is effectively nil: it requires them to compile wheels only on some unspecified LCD platform which most of them will have no access to. Once uploaded, mis-built wheels will cause no end of pain for the hapless users who try to use them. conda *is* the right solution for distributing cross-platform binaries for the relatively small in number (but not importance) hard-to-build packages.
> Instead of lecturing people about what is in your opinion > "intractable", how about you just shut up, if you don't have anything > constructive to contribute? Really? Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBAgAGBQJVnVhpAAoJEPKpaDSJE9HYnxIP/0b9dCEHarpzzzZP+fam04i2 a1sdxLn5RodGcUF9kid4KsdtOMc3D1sKf0yLbuBA7z1ctMYsPDYPcmvlo4cW2iSJ WHYaBm1nJ1JUIxBDagWl5IhIO+aMtRmBQ+UnRj2vQi6Clga/d0M0T1+lNhAJy4hs SogpFx3SayW+BotThqY1JGvAaKyM5hGYARAQIlrxWk2Ei6pff4mNAdXd3t1hjqg0 Y6PlBVCo9APFtbtO2AyzdVY5Mp4D6QqDcdj3NhjHsukvPTlpgv1dk+jJgPJfTxCw Gx0Gv2Zt673y9QNOer6Lp3A/jkLG6w5mnY9K3lid+wx3Pv4wd3NNy+O2FuopKHZc RdE1mWVe5W4ufSJVhdaXTOj5Eww9KnL5SH3KbTYBbDST7c9pFM0tnk/8Lq+4OU0z soGCrZtH5mjof7LPDkvVh/j/mrLeRo+Wz9JsaAb8+PZVDuBSEfHKDxoKcmO0VzAx weJE24eNfqCLzbQKhaIiXWA21D2jE1KxdUuzf0qJwwlkPaa4BmvMzJ0iKMeZYFf4 lYnBhNBJhZSGDTPpXRiM3Q73bzxvz3OYY/+6lcZU+LYx8LIHmQMrgHDddh2E9FvQ BC4sfIBGt+LTbZPSsK0xp+MbX1yoW4TcXZeA/fe8/e2GY0UFgKQRD44Fa0hRdU7b QXabMbyCHLKEKxc+xmXF =Lrn9 -----END PGP SIGNATURE----- From solipsis at pitrou.net Wed Jul 8 21:06:31 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 8 Jul 2015 21:06:31 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> Message-ID: <20150708210631.3cb2b464@fsol> On Wed, 08 Jul 2015 13:05:45 -0400 Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 07/08/2015 07:10 AM, Antoine Pitrou wrote: > > > Seriously, how this is even supposed to be relevant? The whole point > > is to produce best-effort packages that work on still-supported > > mainstream distros, not any arbitrary "Linux" setup. 
> > I'm arguing that allowing PyPI uploads of binary wheels for Linux will be > actively harmful. There is no point in reinstating an argument that has already been made and discussed in the other subthread (of course, you would have to read it first to know that). Regards Antoine. From holger at merlinux.eu Thu Jul 9 14:25:23 2015 From: holger at merlinux.eu (holger krekel) Date: Thu, 9 Jul 2015 12:25:23 +0000 Subject: [Distutils] devpi-{server-2.2.2,web-2.4.0,client-2.3.0} releases Message-ID: <20150709122523.GY28148@merlinux.eu> We just released devpi-server-2.2.2, devpi-web-2.4.0 and devpi-client-2.3.0, core parts of the private pypi package management and testing system. Among the highlights are support for distributed testing with "devpi test --detox", new status pages at "/+status" for replica and master sites and support for configuring upload formats when running "devpi upload". None of the changes require an export/import cycle on the server side if you used devpi-server-2.2.X before. However, please read the respective changelog entries below for some notes and potentially backward-incompatible changes. See the home page for docs and tutorials: http://doc.devpi.net have fun, Holger Krekel and Florian Schulze contracting: http://merlinux.eu

server-2.2.2
------------

- make replica thread more robust by catching more exceptions
- Remove duplicates in plugin version info
- track timestamps for event processing and replication and expose in /+status
- implement devpiweb_get_status_info hook for devpi-web >= 2.4.0 status messages
- UPGRADE NOTE: if devpi-web is installed, you have to request ``application/json`` for ``/+status``, or you might get an HTML page.
- address issue246: refuse uploading release files if they do not contain the version that was transferred with the metadata of the upload request.
- fix issue248: prevent change of index type after creation

web-2.4.0
---------

- macros.pt: Add autofocus attribute to search field
- macros.pt and style.css: Moved "How to search?" to the right of the search button and adjusted width of search field accordingly.
- fix issue244: server status info
  - added support for status message plugin hook ``devpiweb_get_status_info``
  - macros.pt: added macros ``status`` and ``statusbadge`` and placed them below the search field.
  - added status.pt: shows server status information
- toxresults.pt: fix missing closing ``div`` tag.

client-2.3.0
------------

- fix issue247: possible password leakage to log in devpi-client
- new experimental "-d|--detox" option to run tests via the "detox" distributed testing tool instead of "tox" which runs test environments one by one.
- address issue246: make sure we use vcs-export also for building docs (and respect --no-vcs for all building activity)
- address issue246: copy VCS repo dir to temporary upload dir to help with setuptools_scm. Warn if VCS other than hg/git are used because we don't copy the repo in that case for now and thus cause incompatibility with setuptools_scm.
- (new, experimental) read a "[devpi:upload]" section from a setup.cfg file with a "formats" setting that will be taken if no "--formats" option is specified to "devpi upload". This allows specifying the default artefacts that should be created along with a project's setup.cfg file. Also you can use a ``no-vcs = True`` setting to induce the ``--no-vcs`` option.
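The ``[devpi:upload]`` feature described in the last changelog entry would live in a project's setup.cfg. The section and option names below come from the changelog entry itself; the ``formats`` value is only an illustrative guess, not taken from devpi's documentation:

```ini
# setup.cfg (illustrative values)
[devpi:upload]
# used when "devpi upload" is run without an explicit --formats option
formats = sdist.tgz,bdist_wheel
# equivalent to always passing --no-vcs
no-vcs = True
```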
From ncoghlan at gmail.com Thu Jul 9 15:50:30 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 9 Jul 2015 23:50:30 +1000 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150708210631.3cb2b464@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID: On 9 July 2015 at 05:06, Antoine Pitrou wrote: > On Wed, 08 Jul 2015 13:05:45 -0400 > Tres Seaver wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On 07/08/2015 07:10 AM, Antoine Pitrou wrote: >> >> > Seriously, how this is even supposed to be relevant? The whole point >> > is to produce best-effort packages that work on still-supported >> > mainstream distros, not any arbitrary "Linux" setup. >> >> I'm arguing that allowing PyPI uploads of binary wheels for Linux will be >> actively harmful. > > There is no point in reinstating an argument that has already been made > and discussed in the other subthread (of course, you would have to read > it first to know that). Steady on folks - prebuilt binary software distribution is *really*, *really*, hard, and we're not going to magically solve problems in a couple of days that have eluded Linux distribution vendors for over a decade. Yes, it's annoying, yes, it's frustrating, but sniping at each other when we point out the many and varied reasons it's hard won't help us to improve the experience for Python users. The key is remembering that no matter how broken you think prebuilt binary software distribution might be, it's actually worse.
And channeling Hofstadter's Law: this principle remains true, even when you attempt to take this principle into account :) If you look at various prebuilt binary ecosystems to date, there's either a central authority defining the ABI to link against: - CPython on Windows - CPython on Mac OS X - Linux distributions with centralised package review and build systems - conda - nix - MS Visual Studio - XCode - Google Play - Apple App Store Or else a relatively tightly controlled isolation layer between the application code and the host system: - JVM - .NET CLR (even Linux containers can still hit the kernel ABI compatibility issues mentioned elsewhere in the thread) As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build time API compatibility errors can be hard enough, debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers. It seems to me that one possible way to do that might be to change PyPI from whitelisting Windows and Mac OS X (as I believe it does now) to instead blacklisting all the other currently possible results from distutils.util.get_platform(). Regards, Nick. 
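The platform-tag check Nick sketches at the end of his message can be written in a few lines. This is an illustrative sketch, not pip's or PyPI's actual logic; it uses ``sysconfig.get_platform()``, which returns the same values as ``distutils.util.get_platform()``, and normalizes them the way wheel platform tags are normalized:

```python
import sysconfig

# Platform-tag prefixes PyPI accepted for binary wheels at the time:
# Windows and Mac OS X. Everything else (linux_*, etc.) was rejected.
ALLOWED_PREFIXES = ("win", "macosx")

def wheel_platform_tag():
    # e.g. "linux-x86_64" -> "linux_x86_64",
    #      "macosx-10.9-x86_64" -> "macosx_10_9_x86_64"
    return sysconfig.get_platform().replace("-", "_").replace(".", "_")

def platform_whitelisted(tag):
    # Whitelist check: accept only tags for platforms PyPI already allows.
    return tag.startswith(ALLOWED_PREFIXES)
```

Nick's alternative framing, blacklisting every other currently possible ``get_platform()`` result, would invert this check but requires an exhaustive list of the non-Windows/macOS values instead of a two-entry whitelist.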
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Thu Jul 9 16:50:48 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 9 Jul 2015 16:50:48 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID: <20150709165048.5145ec1d@fsol> On Thu, 9 Jul 2015 23:50:30 +1000 Nick Coghlan wrote: > > As Donald notes, I think we're now in a good position to start making > progress here, but the first step is going to be finding a way to > ensure that *by default*, pip on Linux ignores wheel files published > on PyPI, and requires that they be *whitelisted* in some fashion > (whether individually or categorically). That way, we know we're not > going to make the default user experience on Linux *worse* than the > status quo while we're still experimenting with how we want the > publication side of things to work. Debugging build time API > compatibility errors can be hard enough, debugging runtime A*B*I > compatibility errors is a nightmare even for seasoned support > engineers. By the way, I think there's another possibility if the Python packaging authority doesn't want to tackle this (admittedly delicate) problem: issue a public statement that Anaconda is the preferred way of installing Linux binary packages if they aren't provided (or the version is too old) by their Linux distribution of choice. It would then give more authority to software developers if they want to tell their users "don't use pip to install our code under Linux, use conda". Regards Antoine. From aclark at aclark.net Thu Jul 9 17:18:13 2015 From: aclark at aclark.net (Alex Clark) Date: Thu, 09 Jul 2015 11:18:13 -0400 Subject: [Distutils] picky 0.8 released! 
In-Reply-To: <5587E889.4090306@simplistix.co.uk> References: <5587E889.4090306@simplistix.co.uk> Message-ID: On 6/22/15 6:50 AM, Chris Withers wrote: > Hi All, > > I'm pleased to announce the first release of picky, a tool for checking > versions of packages used by pip are as specified in their requirements files. > > I wrote this tool because it's all too easy to have a requirements.txt > file for your project that you think covers all the resources you're > using, only to find you've been dragging in extra packages unknowingly, > and you now have different versions of those in development and > production, causing lots of debugging misery! > > Using picky is as easy as: > > $ pip install picky > $ echo 'picky==0.8' >> requirements.txt > $ picky > > If you want to update your requirements.txt based on your current > environment: > > $ picky --update > > Full docs are at http://picky.readthedocs.org/en/latest/. > > Source control and issue trackers are at > https://github.com/Simplistix/picky. Nice! Thanks > > Any problems, please ask here or mail me direct! > > cheers, > > Chris > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Alex Clark ?
http://aclark.net From cournape at gmail.com Thu Jul 9 18:52:06 2015 From: cournape at gmail.com (David Cournapeau) Date: Thu, 9 Jul 2015 17:52:06 +0100 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150709165048.5145ec1d@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> <20150709165048.5145ec1d@fsol> Message-ID: On Thu, Jul 9, 2015 at 3:50 PM, Antoine Pitrou wrote: > On Thu, 9 Jul 2015 23:50:30 +1000 > Nick Coghlan wrote: > > > > As Donald notes, I think we're now in a good position to start making > > progress here, but the first step is going to be finding a way to > > ensure that *by default*, pip on Linux ignores wheel files published > > on PyPI, and requires that they be *whitelisted* in some fashion > > (whether individually or categorically). That way, we know we're not > > going to make the default user experience on Linux *worse* than the > > status quo while we're still experimenting with how we want the > > publication side of things to work. Debugging build time API > > compatibility errors can be hard enough, debugging runtime A*B*I > > compatibility errors is a nightmare even for seasoned support > > engineers. > > By the way, I think there's another possibility if the Python packaging > authority doesn't want to tackle this (admittedly delicate) problem: > issue a public statement that Anaconda is the preferred way of > installing Linux binary packages if they aren't provided (or the > version is too old) by their Linux distribution of choice. > > It would then give more authority to software developers if they want > to tell their users "don't use pip to install our code under Linux, use > conda". > > I don't think it is reasonable for pypa to recommend one solution when multiple are available (though it is certainly fair to mention them). 
ActiveState, Enthought (my own employer) also provide linux binaries, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Jul 9 19:12:24 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 9 Jul 2015 19:12:24 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> <20150709165048.5145ec1d@fsol> Message-ID: <20150709191224.1eb3ee14@fsol> On Thu, 9 Jul 2015 17:52:06 +0100 David Cournapeau wrote: > I don't think it is reasonable for pypa to recommend one solution when > multiple are available (though it is certainly fair to mention them). > > ActiveState, Enthought (my own employer) also provide linux binaries, You are right, I was forgetting about them. Then mentioning them would probably work. Regards Antoine. From ncoghlan at gmail.com Fri Jul 10 00:37:42 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Jul 2015 08:37:42 +1000 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: <20150709165048.5145ec1d@fsol> References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> <20150709165048.5145ec1d@fsol> Message-ID: On 10 July 2015 at 00:50, Antoine Pitrou wrote: > On Thu, 9 Jul 2015 23:50:30 +1000 > Nick Coghlan wrote: >> >> As Donald notes, I think we're now in a good position to start making >> progress here, but the first step is going to be finding a way to >> ensure that *by default*, pip on Linux ignores wheel files published >> on PyPI, and requires that they be *whitelisted* in some fashion >> (whether individually or categorically). 
That way, we know we're not >> going to make the default user experience on Linux *worse* than the >> status quo while we're still experimenting with how we want the >> publication side of things to work. Debugging build time API >> compatibility errors can be hard enough, debugging runtime A*B*I >> compatibility errors is a nightmare even for seasoned support >> engineers. > > By the way, I think there's another possibility if the Python packaging > authority doesn't want to tackle this (admittedly delicate) problem: > issue a public statement that Anaconda is the preferred way of > installing Linux binary packages if they aren't provided (or the > version is too old) by their Linux distribution of choice. We already provide a page specifically aimed at alerting folks to their prebuilt binary options for the scientific Python stack: https://packaging.python.org/en/latest/science.html In addition to referencing the upstream conda components, that also links through to http://www.scipy.org/install.html where Anaconda and Enthought Canopy are both mentioned. (Also Pyzo, which was a new one to me, and further introduced me to a couple of interesting projects: http://www.iep-project.org/index.html & its successor http://zoof.io/, which aims to take advantage of the Project Jupyter architecture to better support multiple language runtimes) > It would then give more authority to software developers if they want > to tell their users "don't use pip to install our code under Linux, use > conda". I'd personally phrase such suggestions more along the lines of "For annoying technical reasons that folks are looking to find ways to fix, if you don't want to build from source yourself, then you'll currently need to use a Python redistributor rather than using the upstream Python Package Index directly. 
We know the conda binaries are kept up to date, but there isn't anyone we're aware of currently ensuring that up to date versions of our packages are readily available through Linux system package managers." Along those lines, while it's my personal recommendation rather than PyPA's collective recommendation, some of the references in http://www.curiousefficiency.org/posts/2015/04/stop-supporting-python26.html for "Third Party Supported" upgrade paths may prove useful. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ben+python at benfinney.id.au Fri Jul 10 01:27:37 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 10 Jul 2015 09:27:37 +1000 Subject: [Distutils] Version management for devpi (was: devpi-{server-2.2.2, web-2.4.0, client-2.3.0} releases) References: <20150709122523.GY28148@merlinux.eu> Message-ID: <85mvz4ykra.fsf@benfinney.id.au> holger krekel writes: > We just released devpi-server-2.2.2, devpi-web-2.4.0 and devpi-client-2.3.0, > core parts of the private pypi package management and testing system. Thank you for the release, and for continuing to work on this code base. What reason is there to have different code bases, with different versions, for a collection that is so clearly all related and makes sense as a single code base? It's very confusing to know what versions relate to other versions of other parts of devpi. The versions don't help to know the timeline, which defeats one of the primary purposes of versions on a code base. Since the code tends to be ready for release at the same time, why not make a single version string for the whole code base, and release it as a unit? -- \ ?Reichel's Law: A body on vacation tends to remain on vacation | `\ unless acted upon by an outside force.? 
?Carol Reichel | _o__) | Ben Finney From contact at ionelmc.ro Fri Jul 10 01:16:14 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 10 Jul 2015 02:16:14 +0300 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID: Would be quite useful to see some references and details about the vague issues being mentioned in the thread. It would help a lot the less versed engineers (like me) understand the issues at hand (and hopefully reduce the amount of disagreement overall). For example, for me it's not clear what's wrong with Antoine's proposal (compile on Centos 5) - it seemed quite sensible approach to produce a reasonably compatible binary. Some issues with kernel ABI have been mentioned - can anyone point me to some resources describing the possible problems? Is it correct to assume that it's about using vendor-specific kernel api? Also, what does Conda do to solve the binary compatibility issues and distutils or pip could never ever do (or implement)? Thanks, -- Ionel Cristian M?rie? On Thu, Jul 9, 2015 at 4:50 PM, Nick Coghlan wrote: > On 9 July 2015 at 05:06, Antoine Pitrou wrote: > > On Wed, 08 Jul 2015 13:05:45 -0400 > > Tres Seaver wrote: > >> -----BEGIN PGP SIGNED MESSAGE----- > >> Hash: SHA1 > >> > >> On 07/08/2015 07:10 AM, Antoine Pitrou wrote: > >> > >> > Seriously, how this is even supposed to be relevant? The whole point > >> > is to produce best-effort packages that work on still-supported > >> > mainstream distros, not any arbitrary "Linux" setup. > >> > >> I'm arguing that allowing PyPI uploads of binary wheels for Linux will > be > >> actively harmful. 
> > > > There is no point in reinstating an argument that has already been made > > and discussed in the other subthread (of course, you would have to read > > it first to know that). > > Steady on folks - prebuilt binary software distribution is *really*, > *really*, hard, and we're not going to magically solve problems in a > couple of days that have eluded Linux distribution vendors for over a > decade. Yes, it's annoying, yes, it's frustrating, but sniping at each > other when we point out the many and varied reasons it's hard won't > help us to improve the experience for Python users. > > The key is remembering that now matter how broken you think prebuilt > binary software distribution might be, it's actually worse. And > channeling Hofstadter's Law: this principle remains true, even when > you attempt to take this principle into account :) > > If you look at various prebuilt binary ecosystems to date, there's > either a central authority defining the ABI to link against: > > - CPython on Windows > - CPython on Mac OS X > - Linux distributions with centralised package review and build systems > - conda > - nix > - MS Visual Studio > - XCode > - Google Play > - Apple App Store > > Or else a relatively tightly controlled isolation layer between the > application code and the host system: > > - JVM > - .NET CLR > > (even Linux containers can still hit the kernel ABI compatibility > issues mentioned elsewhere in the thread) > > As Donald notes, I think we're now in a good position to start making > progress here, but the first step is going to be finding a way to > ensure that *by default*, pip on Linux ignores wheel files published > on PyPI, and requires that they be *whitelisted* in some fashion > (whether individually or categorically). That way, we know we're not > going to make the default user experience on Linux *worse* than the > status quo while we're still experimenting with how we want the > publication side of things to work. 
Debugging build time API > compatibility errors can be hard enough, debugging runtime A*B*I > compatibility errors is a nightmare even for seasoned support > engineers. > > It seems to me that one possible way to do that might be to change > PyPI from whitelisting Windows and Mac OS X (as I believe it does now) > to instead blacklisting all the other currently possible results from > distutils.util.get_platform(). > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 10 02:12:40 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 9 Jul 2015 20:12:40 -0400 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID: On July 9, 2015 at 7:41:25 PM, Ionel Cristian M?rie? (contact at ionelmc.ro) wrote: > Would be quite useful to see some references and details about the vague > issues being mentioned in the thread. It would help a lot the less versed > engineers (like me) understand the issues at hand (and hopefully reduce the > amount of disagreement overall). > > For example, for me it's not clear what's wrong with Antoine's proposal > (compile on Centos 5) - it seemed quite sensible approach to produce a > reasonably compatible binary. > > Some issues with kernel ABI have been mentioned - can anyone point me to > some resources describing the possible problems? Is it correct to assume > that it's about using vendor-specific kernel api? It is about ABI. 
I'm not an expert but essentially anything that a Wheel doesn't statically link needs to be ABI compatible. For a plain C extension that doesn't link to anything else you have primarily: * The libc that it was linked against. * The Python that it was linked against. Currently on Python 3 we have a "stable" ABI (though I don't think it covers the entire thing) which is represented in the Wheel filename, however we don't have anything to cover the libc version. The most common libc implementation in use is glibc and that is (as far as I know) basically always ABI compatible when going from an older version to a new version. However there is no guarantee that a glibc from Linux X will be ABI compatible with a glibc from Linux Z. There isn't even a guarantee that it will be glibc at all (for instance, Alpine linux uses MUSL). On top of that, you have the fact that a lot of C extensions are not self contained C extensions but instead are bindings for some other library. The problem starts becoming a lot bigger here, for example psycopg2 is a pretty popular library that links to libpq. This may or may not be ABI compatible across CentOS to Ubuntu (for instance) and trying to install one that isn't will cause breakages. Circling back to Antoine's suggestion, the problem isn't that it wouldn't actually work, because it would sometimes, the problem is that you need to be careful about using it because it only works in some situations. In order for that to work you need to make sure that the project you're compiling is either a completely self contained C-extension or that you statically link everything that you're linking against. You also need to make sure that the person compiling the Wheel does so on a sufficiently ancient glibc to cover everyone that you care about covering.
However this will still break in "weird" ways on even older versions of glibc, on Linux distributions which do not use glibc, or just because two Linux distributions decided to make their glibc slightly incompatible. This is the reason that it's currently blocked, because the edge cases are sufficiently sharp and you have to be very careful to build your Wheels in just the right way that I felt it was better to punt on it until someone puts in the effort to make it do something more resembling the right thing by default. > > Also, what does Conda do to solve the binary compatibility issues and > distutils or pip could never ever do (or implement)? > They don't do anything (to my knowledge, and hopefully Antoine or someone can correct me if I'm wrong) to solve the libc compatibility problem besides building on a sufficiently old version of CentOS that they feel comfortable calling that their minimum level of support. I assume that if I tried to run Conda on something like Alpine the default repositories would break since it doesn't use glibc. They however _do_ own the ABI of everything else, so everything from the Python you're using to the libpq to the openssl comes from their repositories. This means that they don't have to worry about the ABI differences of OpenSSL in CentOS/RHEL 5 and Ubuntu 14.04 since they'll never use them, they'll just use the OpenSSL they personally have packaged. This isn't something we can adopt because we're not trying to be another platform (in the vein of Conda); we're trying to provide a common tooling that can be used across a wide range of platforms to install Python projects. This will absolutely involve trying to figure out what the right line to walk is for us between defining a de facto platform and simply being something you "plug" into another platform.
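The glibc-versus-musl distinction Donald describes is detectable at runtime. The following is a minimal stdlib-only sketch, not anything pip did at the time; ``gnu_get_libc_version`` is a real glibc entry point, and its absence is how non-glibc systems show up:

```python
import ctypes

def host_glibc_version():
    """Return the running glibc version, e.g. "2.17", or None if the
    process isn't linked against glibc (musl on Alpine, Windows, ...)."""
    try:
        libc = ctypes.CDLL(None)  # handle to the already-loaded C library
        func = libc.gnu_get_libc_version  # only exists on glibc
    except (OSError, AttributeError, TypeError):
        return None
    func.restype = ctypes.c_char_p
    return func().decode("ascii")
```

A wheel built on CentOS 5 (glibc 2.5) would load anywhere this reports 2.5 or later, which is the "sufficiently ancient build box" strategy in a nutshell; a ``None`` result is exactly the Alpine/musl case where that strategy silently fails.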
I think the way to go about this is to figure out the best way to publish Wheels for a specific platform, focusing on identifying the platforms and providing ways for people to override that. A generic "Linux" platform is a reasonable one, but I don't think it's a reasonable default, since actually supporting it requires taking the special precautions described above. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From cournape at gmail.com Fri Jul 10 09:00:14 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 10 Jul 2015 08:00:14 +0100 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID:

On Fri, Jul 10, 2015 at 12:16 AM, Ionel Cristian Mărieș wrote:

> Would be quite useful to see some references and details about the vague
> issues being mentioned in the thread. It would help the less versed
> engineers (like me) a lot to understand the issues at hand (and hopefully
> reduce the amount of disagreement overall).
>
> For example, for me it's not clear what's wrong with Antoine's proposal
> (compile on CentOS 5) - it seemed quite a sensible approach to produce a
> reasonably compatible binary.
>
> Some issues with kernel ABI have been mentioned - can anyone point me to
> some resources describing the possible problems? Is it correct to assume
> that it's about using vendor-specific kernel APIs?

No, it is about some Python packages depending directly or indirectly on kernel features not available in the kernel on CentOS 5. For example, you can't build subprocess32 (https://pypi.python.org/pypi/subprocess32/) on CentOS 5 kernels.

> Also, what does Conda do to solve the binary compatibility issues and
> distutils or pip could never ever do (or implement)?
They do what almost everybody distributing large applications on Linux does: they "ship the world". Any large binary Python distribution provider does the same here: except for low-level X11/glibc libraries, everything else is bundled as part of the distribution. So no magic, just lots of maintenance work. David

> Thanks,
> -- Ionel Cristian Mărieș
>
> On Thu, Jul 9, 2015 at 4:50 PM, Nick Coghlan wrote:
>
>> On 9 July 2015 at 05:06, Antoine Pitrou wrote:
>> > On Wed, 08 Jul 2015 13:05:45 -0400
>> > Tres Seaver wrote:
>> >> -----BEGIN PGP SIGNED MESSAGE-----
>> >> Hash: SHA1
>> >>
>> >> On 07/08/2015 07:10 AM, Antoine Pitrou wrote:
>> >>
>> >> > Seriously, how is this even supposed to be relevant? The whole point
>> >> > is to produce best-effort packages that work on still-supported
>> >> > mainstream distros, not any arbitrary "Linux" setup.
>> >>
>> >> I'm arguing that allowing PyPI uploads of binary wheels for Linux will be
>> >> actively harmful.
>> >
>> > There is no point in restating an argument that has already been made
>> > and discussed in the other subthread (of course, you would have to read
>> > it first to know that).
>>
>> Steady on folks - prebuilt binary software distribution is *really*, *really*, hard, and we're not going to magically solve problems in a couple of days that have eluded Linux distribution vendors for over a decade. Yes, it's annoying, yes, it's frustrating, but sniping at each other when we point out the many and varied reasons it's hard won't help us improve the experience for Python users.
>>
>> The key is remembering that no matter how broken you think prebuilt binary software distribution might be, it's actually worse.
>> And channeling Hofstadter's Law: this principle remains true, even when you attempt to take this principle into account :)
>>
>> If you look at various prebuilt binary ecosystems to date, there's either a central authority defining the ABI to link against:
>>
>> - CPython on Windows
>> - CPython on Mac OS X
>> - Linux distributions with centralised package review and build systems
>> - conda
>> - nix
>> - MS Visual Studio
>> - XCode
>> - Google Play
>> - Apple App Store
>>
>> Or else a relatively tightly controlled isolation layer between the application code and the host system:
>>
>> - JVM
>> - .NET CLR
>>
>> (even Linux containers can still hit the kernel ABI compatibility issues mentioned elsewhere in the thread)
>>
>> As Donald notes, I think we're now in a good position to start making progress here, but the first step is going to be finding a way to ensure that *by default*, pip on Linux ignores wheel files published on PyPI, and requires that they be *whitelisted* in some fashion (whether individually or categorically). That way, we know we're not going to make the default user experience on Linux *worse* than the status quo while we're still experimenting with how we want the publication side of things to work. Debugging build-time API compatibility errors can be hard enough; debugging runtime A*B*I compatibility errors is a nightmare even for seasoned support engineers.
>>
>> It seems to me that one possible way to do that might be to change PyPI from whitelisting Windows and Mac OS X (as I believe it does now) to instead blacklisting all the other currently possible results from distutils.util.get_platform().
>>
>> Regards,
>> Nick.
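Nick's "ignore by default, whitelist to opt in" idea can be sketched as a simple platform-tag filter. Everything here is hypothetical (the default patterns and the function name are invented for illustration, not actual pip behaviour); it only shows the shape of a categorical-plus-individual whitelist:

```python
import fnmatch

# Hypothetical defaults mirroring what PyPI effectively allows today:
# Windows and Mac OS X platform tags pass, everything else (notably
# linux_*) is rejected unless the user opts in.
DEFAULT_ALLOWED = ["win32", "win_amd64", "macosx_*"]

def wheel_platform_allowed(platform_tag, extra_allowed=()):
    """Return True if a wheel with this platform tag may be installed.

    `extra_allowed` is the user's opt-in list; entries are fnmatch
    patterns, so a whole category ("linux_x86_64_ubuntu_*") or a single
    tag ("linux_x86_64") can be whitelisted.
    """
    patterns = list(DEFAULT_ALLOWED) + list(extra_allowed)
    return any(fnmatch.fnmatch(platform_tag, pat) for pat in patterns)
```

A user on Ubuntu could then opt in categorically with `extra_allowed=["linux_x86_64_ubuntu_*"]` while bare `linux_x86_64` wheels stay rejected by default.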
>> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From olivier.grisel at ensta.org Fri Jul 10 14:53:22 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Fri, 10 Jul 2015 14:53:22 +0200 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID:

I just checked and indeed the python exec installed by miniconda does not work on Alpine linux (launch via docker from the gliderlabs/alpine image):

# ldd /root/miniconda3/pkgs/python-3.4.3-0/bin/python3.4
    /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libpython3.4m.so.1.0 => /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0 (0x7f26bd153000)
    libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libutil.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
    libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __finite: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __rawmemchr: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinff: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isnan: symbol not found
Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinf: symbol not found

We could still have a platform or ABI tag for linux that would include some libc information to ensure that it points to a compatible glibc and provide a reference docker image to build such wheels.

We could assume that wheel binary packages should not link to any .so file from the system besides the libc.

I think the packages that have specific kernel ABI requirements are rare enough to be explicitly left outside of this first effort and let the user build those from source directly. Is there an easy way to introspect a binary to detect such kernel dependencies?

-- Olivier

From cournape at gmail.com Fri Jul 10 15:10:00 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 10 Jul 2015 14:10:00 +0100 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID:

On Fri, Jul 10, 2015 at 1:53 PM, Olivier Grisel wrote:

> I just checked and indeed the python exec installed by miniconda does
> not work on Alpine linux (launch via docker from the gliderlabs/alpine
> image):
>
> # ldd /root/miniconda3/pkgs/python-3.4.3-0/bin/python3.4
> /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> libpython3.4m.so.1.0 => /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0 (0x7f26bd153000)
> libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> libutil.so.1 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f26bd5fe000)
> Error relocating
/root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __finite: symbol not found
> Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __rawmemchr: symbol not found
> Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinff: symbol not found
> Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isnan: symbol not found
> Error relocating /root/miniconda3/pkgs/python-3.4.3-0/bin/../lib/libpython3.4m.so.1.0: __isinf: symbol not found
>
> We could still have a platform or ABI tag for linux that would include
> some libc information to ensure that it points to a compatible glibc
> and provide a reference docker image to build such wheels.
>
> We could assume that wheel binary packages should not link to any .so
> file from the system besides the libc.

This is too restrictive if you want plotting-related packages (which I suspect you are interested in ;) ). The libraries we at Enthought depend on for our packages are:

* glibc (IMO if you use a system w/o glibc, you are expected to be on your own to build packages from sources)
* X11/fontconfig
* libstdc++

Those are the ones you really do not want to ship. I don't know the proportion of packages that would work from PyPI if you could assume the system has those available through some kind of ABI/platform specifier following PEP 425.

David

-------------- next part -------------- An HTML attachment was scrubbed...
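David's short list of "safe to depend on" system libraries suggests a simple audit of what a built extension links against. The allowlist and function below are illustrative only (the soname prefixes are a sketch based on his glibc / X11/fontconfig / libstdc++ list, not a vetted policy), and the DT_NEEDED names would come from a real tool such as `ldd` or an ELF parser:

```python
# Soname prefixes considered safe per David's list: the glibc family,
# libstdc++, and the low-level X11/fontconfig stack. Anything else a
# wheel links against should be bundled or statically linked instead.
ALLOWED_PREFIXES = (
    "libc.", "libm.", "libdl.", "libpthread.", "libutil.", "librt.",
    "ld-linux",
    "libstdc++.",
    "libX11.", "libXext.", "libfontconfig.",
)

def libs_needing_bundling(needed_sonames):
    """Given a list of DT_NEEDED entries for an extension module,
    return those falling outside the allowlist (i.e. candidates for
    bundling into the wheel)."""
    return [so for so in needed_sonames
            if not so.startswith(ALLOWED_PREFIXES)]
```

Run against the psycopg2 example from earlier in the thread, a dependency list containing `libpq.so.5` would be flagged while `libc.so.6` would not.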
URL: From ethan at stoneleaf.us Fri Jul 10 19:03:34 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 10 Jul 2015 10:03:34 -0700 Subject: [Distutils] 400 Client Error: Binary wheel for an unsupported platform In-Reply-To: References: <20150706182448.32cddf8d@fsol> <20150706201808.3f631734@fsol> <20150708014348.19e4dd61@fsol> <20150708131000.57c76b71@fsol> <20150708210631.3cb2b464@fsol> Message-ID: <559FFAE6.6040907@stoneleaf.us> On 07/10/2015 12:00 AM, David Cournapeau wrote: > They do what almost everybody distributing large applications on Linux do : they > "ship the world". Any large binary python distribution provider does the same > here: except for low level X11/glibc libraries, everything else is bundled as > part of the distribution. Huh, sounds like Windows. -- ~Ethan~ From ethan at stoneleaf.us Fri Jul 10 23:38:22 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 10 Jul 2015 14:38:22 -0700 Subject: [Distutils] Making install a no-op Message-ID: <55A03B4E.7060509@stoneleaf.us> I have recently received a request to make installing enum34 a no-op on Python3.4 and later so that wheels, etc, don't have to worry about the Python version when dealing with Enum. From an enum34 point-of-view this makes sense since Enum is in the stdlib in 3.4+, and enum34 has no purpose -- but how? Is it a simple matter of checking for the Python version and raising SystemExit if enum34 is not needed? -- ~Ethan~ From randy at thesyrings.us Fri Jul 10 23:47:23 2015 From: randy at thesyrings.us (Randy Syring) Date: Fri, 10 Jul 2015 17:47:23 -0400 Subject: [Distutils] Making install a no-op In-Reply-To: <55A03B4E.7060509@stoneleaf.us> References: <55A03B4E.7060509@stoneleaf.us> Message-ID: <55A03D6B.9090702@thesyrings.us> Seems to me this would be handled in the upstream packages that are depending on enum34. IMO, it would be their responsibility to only include enum34 if their package is being installed on a python that needs it. 
To ask enum34 to be installed and then expect enum34 to not install itself seems backwards. But that's just my $0.02. *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ On 07/10/2015 05:38 PM, Ethan Furman wrote: > I have recently received a request to make installing enum34 a no-op > on Python3.4 and later so that wheels, etc, don't have to worry about > the Python version when dealing with Enum. > > From an enum34 point-of-view this makes sense since Enum is in the > stdlib in 3.4+, and enum34 has no purpose -- but how? Is it a simple > matter of checking for the Python version and raising SystemExit if > enum34 is not needed? > > -- > ~Ethan~ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Sat Jul 11 00:11:32 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 10 Jul 2015 15:11:32 -0700 Subject: [Distutils] Making install a no-op In-Reply-To: <55A03D6B.9090702@thesyrings.us> References: <55A03B4E.7060509@stoneleaf.us> <55A03D6B.9090702@thesyrings.us> Message-ID: <55A04314.1070602@stoneleaf.us> On 07/10/2015 02:47 PM, Randy Syring wrote: > Seems to me this would be handled in the upstream packages that are depending on enum34. > IMO, it would be their responsibility to only include enum34 if their package is being > installed on a python that needs it. > > To ask enum34 to be installed and then expect enum34 to not install itself seems backwards. But that's just my $0.02. You make a good point. I have changed the classifier Python3 to Python3.1, 3.2, and 3.3 -- hopefully that will make things a little easier.
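Randy's suggestion, that depending packages declare enum34 conditionally rather than enum34 refusing to install itself, can be sketched two ways in a setup.py. This is a sketch only (no actual `setup()` call, and as discussed later in the thread, the environment-marker form needs sufficiently new setuptools/pip to be honoured):

```python
import sys

# Option 1: compute the requirement list at build time. Note this bakes
# the answer into any wheel built from this setup.py, so the wheel is
# only correct for the Python family it was built on.
install_requires = []
if sys.version_info < (3, 4):
    install_requires.append("enum34")

# Option 2: an environment marker, evaluated at *install* time, so a
# single universal wheel can work across Python versions.
conditional_requires = ['enum34; python_version < "3.4"']
```

Option 2 is the one that answers Ethan's "does this work with wheels?" concern, since the condition travels with the metadata instead of being resolved on the build machine.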
The request came from someone who would like to have one wheel file for all Python2-3 versions; I know that in setup.py it's easy enough to check the version and add (or not) enum34 to the required (or dependent?) module list. Does this type of thing work with wheels? -- ~Ethan~ From fungi at yuggoth.org Sat Jul 11 00:22:50 2015 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 10 Jul 2015 22:22:50 +0000 Subject: [Distutils] Making install a no-op In-Reply-To: <55A04314.1070602@stoneleaf.us> References: <55A03B4E.7060509@stoneleaf.us> <55A03D6B.9090702@thesyrings.us> <55A04314.1070602@stoneleaf.us> Message-ID: <20150710222250.GZ2731@yuggoth.org> On 2015-07-10 15:11:32 -0700 (-0700), Ethan Furman wrote: [...] > The request came from someone who would like to have one wheel > file for all Python2-3 versions; I know that in setup.py it's easy > enough to check the version and add (or not) enum34 to the > required (or dependent?) module list. Does this type of thing work > with wheels? If you want a real-world example you can look at argparse (I can successfully pip install argparse in a Python 3.5 virtualenv even though it's been in stdlib since 2.7). If you want depending packages to do the work instead, they'll need to use environment markers with something like ;python_version<='3.3' (which really needs setuptools 17.1 or later to interpret it correctly). http://pythonhosted.org/setuptools/history.html#id5 -- Jeremy Stanley From brett at python.org Sat Jul 11 18:50:23 2015 From: brett at python.org (Brett Cannon) Date: Sat, 11 Jul 2015 16:50:23 +0000 Subject: [Distutils] Making install a no-op In-Reply-To: <55A03B4E.7060509@stoneleaf.us> References: <55A03B4E.7060509@stoneleaf.us> Message-ID: The way I handled it in importlib was to version check in setup.py and if importlib would be in the stdlib I would use an empty list to represent what to install.
That way users didn't have to use markers and need to know importlib was in some specific version of Python (basically argparse has made people expect backports to just magically work). On Fri, Jul 10, 2015, 14:38 Ethan Furman wrote: > I have recently received a request to make installing enum34 a no-op on > Python3.4 and later so that wheels, etc, don't have to worry about the > Python version when dealing with Enum. > > From an enum34 point-of-view this makes sense since Enum is in the stdlib > in 3.4+, and enum34 has no purpose -- but how? Is it a simple matter of > checking for the Python version and raising > SystemExit if enum34 is not needed? > > -- > ~Ethan~ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Sat Jul 11 18:55:23 2015 From: randy at thesyrings.us (Randy Syring) Date: Sat, 11 Jul 2015 12:55:23 -0400 Subject: [Distutils] Making install a no-op In-Reply-To: References: <55A03B4E.7060509@stoneleaf.us> Message-ID: > > The way I handled it in importlib was to version check in setup.py and if > importlib would be in the stdlib I would use an empty list to represent > what to install FWIW, that is also how I have done it in the past. Conditionally add install requirements to the list based on the current environment and then pass that to setup(). *Randy Syring* Husband | Father | Redeemed Sinner *"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)* On Sat, Jul 11, 2015 at 12:50 PM, Brett Cannon wrote: > The way I handled it in importlib was to version check in setup.py and if > importlib would be in the stdlib I would use an empty list to represent > what to install.
That way users didn't have to use markers and need to know > importlib was in some specific version of Python (basically argparse has > made people expect backports to just magically work). > > On Fri, Jul 10, 2015, 14:38 Ethan Furman wrote: > >> I have recently received a request to make installing enum34 a no-op on >> Python3.4 and later so that wheels, etc, don't have to worry about the >> Python version when dealing with Enum. >> >> From an enum34 point-of-view this makes sense since Enum is in the >> stdlib in 3.4+, and enum34 has no purpose -- but how? Is it a simple >> matter of checking for the Python version and raising >> SystemExit if enum34 is not needed? >> >> -- >> ~Ethan~ >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Wed Jul 15 06:55:50 2015 From: randy at thesyrings.us (Randy Syring) Date: Wed, 15 Jul 2015 00:55:50 -0400 Subject: [Distutils] Virtualenv bug breaking tox runs Message-ID: <55A5E7D6.4050302@thesyrings.us> I'm trying to use tox from python2 to test a python3 environment.
But I get an exception like the following:

> rsyring at loftex:~/projects/blaze/blazeutils-src$ python -m virtualenv --python python3 ~/tmp/testvenv
> Running virtualenv with interpreter /home/rsyring/bin/python3
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 14, in
>     import shutil
>   File "/opt/python34/lib/python3.4/shutil.py", line 11, in
>     import fnmatch
>   File "/opt/python34/lib/python3.4/fnmatch.py", line 15, in
>     import functools
>   File "/opt/python34/lib/python3.4/functools.py", line 21, in
>     from collections import namedtuple
>   File "/opt/python34/lib/python3.4/collections/__init__.py", line 17, in
>     from reprlib import recursive_repr as _recursive_repr
>   File "/usr/local/lib/python2.7/dist-packages/reprlib.py", line 3, in
>     from repr import *
> ImportError: No module named 'repr'

There is a bug for this in virtualenv:

https://github.com/pypa/virtualenv/issues/625

So, I realize I could use `python3 -m virtualenv` if I was running the command myself. But, tox is making the call to virtualenv.

This was working the last time I tested it but has now broken. I'm guessing that is because I upgraded tox or virtualenv?

Any pointers welcome as to the best way to resolve this issue now? Thanks in advance.

*Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From marius at gedmin.as Wed Jul 15 08:08:57 2015 From: marius at gedmin.as (Marius Gedminas) Date: Wed, 15 Jul 2015 09:08:57 +0300 Subject: [Distutils] Virtualenv bug breaking tox runs In-Reply-To: <55A5E7D6.4050302@thesyrings.us> References: <55A5E7D6.4050302@thesyrings.us> Message-ID: <20150715060857.GA10855@platonas> On Wed, Jul 15, 2015 at 12:55:50AM -0400, Randy Syring wrote: > I'm trying to use tox from python2 to test a python3 environment.
But I get > an exception like the following: > > >rsyring at loftex:~/projects/blaze/blazeutils-src$ python -m virtualenv > >--python python3 ~/tmp/testvenv > >Running virtualenv with interpreter /home/rsyring/bin/python3 > >Traceback (most recent call last): > > File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 14, in > > > > import shutil > > File "/opt/python34/lib/python3.4/shutil.py", line 11, in > > import fnmatch > > File "/opt/python34/lib/python3.4/fnmatch.py", line 15, in > > import functools > > File "/opt/python34/lib/python3.4/functools.py", line 21, in > > from collections import namedtuple > > File "/opt/python34/lib/python3.4/collections/__init__.py", line 17, in > > > > from reprlib import recursive_repr as _recursive_repr > > File "/usr/local/lib/python2.7/dist-packages/reprlib.py", line 3, in > > > > from repr import * > >ImportError: No module named 'repr' > > There is a bug for this in virtualenv: > > https://github.com/pypa/virtualenv/issues/625 > > So, I realize I could use `python3 -m virtualenv` if I was running the > command myself. But, tox is making the call to virtualenv. > > This was working the last time I tested it but has now broken. I'm guessing > that is because I upgraded tox or virtualenv? It broke now because you 'sudo pip install'ed something that depends on something that ships /usr/local/lib/python2.7/dist-packages/reprlib.py (which probably comes from pies2overrides) > Any pointers welcome as to the best way to resolve this issue now? I would recommend sudo pip uninstall pies2overrides (and whatever depended on it). A good way to avoid pain is to never ever 'sudo pip install' stuff. A different workaround would be to create a clean virtualenv, install tox into it, then run tox from that virtualenv. Marius Gedminas -- Un*x admins know what they are doing by definition. -- Bernd Petrovitsch -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From contact at ionelmc.ro Wed Jul 15 08:42:11 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Wed, 15 Jul 2015 09:42:11 +0300 Subject: [Distutils] Virtualenv bug breaking tox runs In-Reply-To: <20150715060857.GA10855@platonas> References: <55A5E7D6.4050302@thesyrings.us> <20150715060857.GA10855@platonas> Message-ID: You could try using the virtualenv rewrite (my branch) - it's well tested, albeit unreviewed. AFAIK it doesn't have the issue you've hit. Just run: pip install https://github.com/ionelmc/virtualenv/archive/develop.zip Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro On Wed, Jul 15, 2015 at 9:08 AM, Marius Gedminas wrote: > On Wed, Jul 15, 2015 at 12:55:50AM -0400, Randy Syring wrote: > > I'm trying to use tox from python2 to test a python3 environment. But I > get > > an exception like the following: > > > > >rsyring at loftex:~/projects/blaze/blazeutils-src$ python -m virtualenv > > >--python python3 ~/tmp/testvenv > > >Running virtualenv with interpreter /home/rsyring/bin/python3 > > >Traceback (most recent call last): > > > File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 14, > in > > > > > > import shutil > > > File "/opt/python34/lib/python3.4/shutil.py", line 11, in > > > import fnmatch > > > File "/opt/python34/lib/python3.4/fnmatch.py", line 15, in > > > import functools > > > File "/opt/python34/lib/python3.4/functools.py", line 21, in > > > from collections import namedtuple > > > File "/opt/python34/lib/python3.4/collections/__init__.py", line 17, > in > > > > > > from reprlib import recursive_repr as _recursive_repr > > > File "/usr/local/lib/python2.7/dist-packages/reprlib.py", line 3, in > > > > > > from repr import * > > >ImportError: No module named 'repr' > > > > There is a bug for this in virtualenv: > > > > https://github.com/pypa/virtualenv/issues/625 > > > > So, I realize I 
could use `python3 -m virtualenv` if I was running the > > command myself. But, tox is making the call to virtualenv. > > > > This was working the last time I tested it but has now broken. I'm > guessing > > that is because I upgraded tox or virtualenv? > > It broke now because you 'sudo pip install'ed something that depends on > something that ships /usr/local/lib/python2.7/dist-packages/reprlib.py > (which probably comes from pies2overrides) > > > Any pointers welcome as to the best way to resolve this issue now? > > I would recommend sudo pip uninstall pies2overrides (and whatever > depended on it). A good way to avoid pain is to never ever 'sudo pip > install' stuff. > > A different workaround would be to create a clean virtualenv, install > tox into it, then run tox from that virtualenv. > > Marius Gedminas > -- > Un*x admins know what they are doing by definition. > -- Bernd Petrovitsch > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Thu Jul 16 17:20:32 2015 From: randy at thesyrings.us (Randy Syring) Date: Thu, 16 Jul 2015 11:20:32 -0400 Subject: [Distutils] [TIP] Virtualenv bug breaking tox runs In-Reply-To: <20150715060857.GA10855@platonas> References: <55A5E7D6.4050302@thesyrings.us> <20150715060857.GA10855@platonas> Message-ID: <55A7CBC0.7030708@thesyrings.us> On 07/15/2015 02:08 AM, Marius Gedminas wrote: > So, I realize I could use `python3 -m virtualenv` if I was running the > command myself. But, tox is making the call to virtualenv. > > This was working the last time I tested it but has now broken. I'm guessing > that is because I upgraded tox or virtualenv? 
> It broke now because you 'sudo pip install'ed something that depends on > something that ships /usr/local/lib/python2.7/dist-packages/reprlib.py > (which probably comes from pies2overrides) > >> Any pointers welcome as to the best way to resolve this issue now? > I would recommend sudo pip uninstall pies2overrides (and whatever > depended on it). A good way to avoid pain is to never ever 'sudo pip > install' stuff. > > A different workaround would be to create a clean virtualenv, install > tox into it, then run tox from that virtualenv. > Thanks so much, that was exactly the problem. I've always had my system set up to have a few default packages installed globally and then use virtualenvs for everything else. Not sure where pies2overrides came from, but I can assure you it's getting booted ASAP! *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Thu Jul 16 19:41:40 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Thu, 16 Jul 2015 13:41:40 -0400 Subject: [Distutils] Working toward Linux wheel support Message-ID: Hi all, I've recently been working on adding SOABI support for Python 2.x and other pieces needed to get wheels w/ C extensions for Linux working. Here's the work for wheels: https://bitbucket.org/pypa/wheel/pull-request/54/ Based on that, I've added support for those wheels to pip here: https://github.com/natefoo/pip/tree/linux-wheels As mentioned in the wheels PR, there are some questions and decisions made that I need guidance on: - On Linux, the distro name/version (as determined by platform.linux_distribution()) will be appended to the platform string, e.g. linux_x86_64_ubuntu_14_04. This is going to be necessary to make a reasonable attempt at wheel compatibility in PyPI. But this may violate PEP 425.
- By default, wheels will be built using the most specific platform information. In practice, I build our wheels[1] using Debian Squeeze in Docker and therefore they should work on most currently "supported" Linuxes, but allowing such wheels to PyPI could still be dangerous because forward compatibility is not always guaranteed (e.g. if a SO version/name changes, or a C lib API method changes in a non-backward compatible way but the SO version/name does not change). That said, I'd be happy to make a much more generalized version of our docker-build[2] system that'd allow package authors to easily/rapidly build distro/version-specific wheels for many of the popular Linux distros. We can assume that a wheel built on a vanilla install of e.g. Ubuntu 14.04 will work on any other installation of 14.04 (this is what the distro vendors promise, anyway).

- I attempt to set the SOABI if the SOABI config var is unset, this is for Python 2, but will also be done even on Python 3. Maybe that is the wrong decision (or maybe SOABI is guaranteed to be set on Python 3).

- Do any other implementations define SOABI? PyPy does not, I did not test others. What should we do with these?

Because the project I work for[3] relies heavily on a large number of packages, some of which have complicated build-time dependencies, we have always provided them as eggs and monkeypatched platform support back into pkg_resources. Now that the PyPA has settled on wheels as the preferred binary packaging format, I am pretty heavily motivated to do the work to work out all the issues with this implementation. Thanks, --nate [1] https://wheels.galaxyproject.org/ [2] https://github.com/galaxyproject/docker-build/ [3] https://galaxyproject.org/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Fri Jul 17 10:22:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 17 Jul 2015 18:22:27 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 17 July 2015 at 03:41, Nate Coraor wrote: > Hi all, > > I've recently been working on adding SOABI support for Python 2.x and other > pieces needed to get wheels w/ C extensions for Linux working. Here's the > work for wheels: > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > Based on that, I've added support for those wheels to pip here: > > https://github.com/natefoo/pip/tree/linux-wheels > > As mentioned in the wheels PR, there are some questions and decisions made > that I need guidance on: > > - On Linux, the distro name/version (as determined by > platform.linux_distribution()) will be appended to the platform string, e.g. > linux_x86_64_ubuntu_14_04. This is going to be necessary to make a > reasonable attempt at wheel compatibility in PyPI. But this may violate PEP > 425. I think it's going beyond it in a useful way, though. At the moment, the "linux_x86_64" platform tag *under*specifies the platform - a binary extension built on Ubuntu 14.04 with default settings may not work on CentOS 7, for example. Adding in the precise distro name and version number changes that to *over*specification, but I now think we can address that through configuration settings on the installer side that allow the specification of "compatible platforms". That way a derived distribution could add the corresponding upstream distribution's platform tag and their users would be able to install the relevant wheel files by default. Rather than putting the Linux specific platform tag derivation logic directly in the tools, though, what if we claimed a file under the "/etc/python" subtree and used it to tell the tools what platform tags to use? 
For example, we could put the settings relating to package tags into "/etc/python/binary-compatibility.cfg" and allow that to be overridden on a per-virtualenv basis with a binary-compatibility.cfg file within the virtualenv. For example, we could have a section where for a given platform, we overrode both the build and install tags appropriately. For RHEL 7.1, that may look like: [linux_x86_64] build=rhel_7_1 install=rhel_7_0,rhel_7_1,centos_7_1406,centos_7_1503 Using JSON rather than an ini-style format would also work: { "linux_x86_64": { "build": "rhel_7_1", "install": ["rhel_7_0", "rhel_7_1", "centos_7_1406", "centos_7_1503"] } } The reason I like this approach is that it leaves the definition of ABI compatibility in the hands of the distros, but also makes it safe to publish Linux wheel files on PyPI (just not with the generic linux_x86_64 platform tag). > - By default, wheels will be built using the most specific platform > information. In practice, I build our wheels[1] using Debian Squeeze in > Docker and therefore they should work on most currently "supported" Linuxes, > but allowing such wheels to PyPI could still be dangerous because forward > compatibility is not always guaranteed (e.g. if a SO version/name changes, > or a C lib API method changes in a non-backward compatible way but the SO > version/name does not change). That said, I'd be happy to make a much more > generalized version of our docker-build[2] system that'd allow package > authors to easily/rapidly build distro/version-specific wheels for many of > the popular Linux distros. We can assume that a wheel built on a vanilla > install of e.g. Ubuntu 14.04 will work on any other installation of 14.04 > (this is what the distro vendors promise, anyway). Right, if we break ABI within a release, that's our fault (putting on my distro developer hat), and folks will rightly yell at us for it. I was previously wary of this approach due to the "what about derived distributions?" 
problem, but realised recently that a config file that explicitly lists known binary-compatible platforms should suffice for that. There's only a handful of systems folks are likely to want to prebuild wheels for (Debian, Ubuntu, Fedora, CentOS/RHEL, openSUSE), and a configuration-file-based system allows ABI-compatible derived distros to be handled as if they were their parent.

> - I attempt to set the SOABI if the SOABI config var is unset, this is for > Python 2, but will also be done even on Python 3. Maybe that is the wrong > decision (or maybe SOABI is guaranteed to be set on Python 3).

Python 3 should always set it, but if it's not present for some reason, deriving it makes sense.

> - Do any other implementations define SOABI? PyPy does not, I did not test > others. What should we do with these?

The implementation identifier is also included in the compatibility tags, so setting that in addition to the platform ABI tag when a wheel contains binary extensions should suffice.

> Because the project I work for[3] relies heavily on large number of > packages, some of which have complicated build-time dependencies, we have > always provided them as eggs and monkeypatched platform support back in > to > pkg_resources. Now that the PyPA has settled on wheels as the preferred > binary packaging format, I am pretty heavily motivated to do the work to > work out all the issues with this implementation.

Thank you!

Regards,
Nick. 
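The installer-side lookup against the proposed binary-compatibility.cfg can be sketched in a few lines. The JSON schema is the one proposed in the message above; the function name and the expanded tag format (base platform plus distro suffix, as in linux_x86_64_ubuntu_14_04 earlier in the thread) are illustrative, not an implemented pip API:

```python
import json

def compatible_platform_tags(config_text, base_platform):
    """Expand a generic platform tag into the distro-specific platform
    tags that this machine's config declares acceptable for install."""
    config = json.loads(config_text)
    entry = config.get(base_platform, {})
    # Append each compatible distro tag to the generic platform tag,
    # mirroring the linux_x86_64_ubuntu_14_04 naming from the thread.
    return ["{0}_{1}".format(base_platform, tag)
            for tag in entry.get("install", [])]

# The JSON example from the message, as it might appear in
# /etc/python/binary-compatibility.cfg on a RHEL 7.1 box.
sample = """
{
    "linux_x86_64": {
        "build": "rhel_7_1",
        "install": ["rhel_7_0", "rhel_7_1", "centos_7_1406", "centos_7_1503"]
    }
}
"""

acceptable = compatible_platform_tags(sample, "linux_x86_64")
```

A wheel whose platform tag appears in `acceptable` would then be considered installable, while the bare linux_x86_64 tag stays unusable on PyPI, matching the proposal's intent.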
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From chris.barker at noaa.gov Fri Jul 17 17:36:39 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 17 Jul 2015 08:36:39 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID:

TL;DR -- pip+wheel needs to address the non-python dependency issue before it can be a full solution for Linux (or anything else, really)

The long version: I think Linux wheel support is almost useless unless the pypa stack provides _something_ to handle non-python dependencies.

1) Pure Python packages work fine as source.

2) Python packages with C extensions build really easily out of the box -- so source distribution is fine (OK, I suppose some folks want to run a system without a compiler -- is this the intended use case?)

So what are the hard cases? The ones we really want binary wheels for?

- Windows, where a system compiler is a rarity: Done

- OS-X, where a system compiler is a semi-rarity, and way too many "standard" system libs aren't there (or are old crappy versions...): Almost Done.

- Packages with semi-standard dependencies: can we expect ANY Linux distro to have libfreetype, libpng, libz, libjpeg, etc? Probably, but maybe not installed (would a headless server have libfreetype?). And would those versions all be compatible (probably, if you specified a distro version)?

- Packages with non-standard non-python dependencies: libhdf5, lapack, BLAS, fortran(!) -- this is where the nightmare really is. I suspect most folks on this list will say that this is a "Scipy problem", and indeed, that's where the biggest issues are, and where systems like conda have grown up to address this. 
But at this point, I think it's really sad that the community has become fractured -- if folks start out with "I want to do scientific computing", then they get pointed to Enthought Canopy or Anaconda, and all is well (until they look for standard web development packages -- though that's getting better). But if someone starts out as a web developer, and is all happy with the PyPA stack (virtualenv, pip, etc...), then someone suggests they put some Bokeh plotting in their web site, or need to do some analytics on HDF5 files, or any number of things well supported by Python, but NOT by pip/wheel -- they are kind of stuck.

My point is that it may actually be a bad thing to solve the easy problem while keeping our fingers in our ears about the hard ones.... (la la la la, I don't need to use those packages. la la la la)

My thought: what pip+wheel needs to support much of this is the ability to specify a wheel dependency, rather than a package dependency -- i.e. "this particular wheel requires a libfreetype wheel". Then we could have binary wheels for non-python dependencies like libs (which would install the lib into pre-defined locations that could be relative-path linked to).

Sorry for the rant....

-Chris

PS: Personally, after banging my head against this for years, I've committed to conda for the moment -- working to get conda to better support the wide range of python packages. I haven't tried it on Linux, but it does exist and works well for some folks.

On Fri, Jul 17, 2015 at 1:22 AM, Nick Coghlan wrote: > On 17 July 2015 at 03:41, Nate Coraor wrote: > > Hi all, > > > > I've recently been working on adding SOABI support for Python 2.x and > other > > pieces needed to get wheels w/ C extensions for Linux working. 
Here's the > > work for wheels: > > > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > > > Based on that, I've added support for those wheels to pip here: > > > > https://github.com/natefoo/pip/tree/linux-wheels > > > > As mentioned in the wheels PR, there are some questions and decisions > made > > that I need guidance on: > > > > - On Linux, the distro name/version (as determined by > > platform.linux_distribution()) will be appended to the platform string, > e.g. > > linux_x86_64_ubuntu_14_04. This is going to be necessary to make a > > reasonable attempt at wheel compatibility in PyPI. But this may violate > PEP > > 425. > > I think it's going beyond it in a useful way, though. At the moment, > the "linux_x86_64" platform tag *under*specifies the platform - a > binary extension built on Ubuntu 14.04 with default settings may not > work on CentOS 7, for example. > > Adding in the precise distro name and version number changes that to > *over*specification, but I now think we can address that through > configuration settings on the installer side that allow the > specification of "compatible platforms". That way a derived > distribution could add the corresponding upstream distribution's > platform tag and their users would be able to install the relevant > wheel files by default. > > Rather than putting the Linux specific platform tag derivation logic > directly in the tools, though, what if we claimed a file under the > "/etc/python" subtree and used it to tell the tools what platform tags > to use? For example, we could put the settings relating to package > tags into "/etc/python/binary-compatibility.cfg" and allow that to be > overridden on a per-virtualenv basis with a binary-compatibility.cfg > file within the virtualenv. > > For example, we could have a section where for a given platform, we > overrode both the build and install tags appropriately. 
For RHEL 7.1, > that may look like: > > [linux_x86_64] > build=rhel_7_1 > install=rhel_7_0,rhel_7_1,centos_7_1406,centos_7_1503 > > Using JSON rather than an ini-style format would also work: > > { > "linux_x86_64": { > "build": "rhel_7_1", > "install": ["rhel_7_0", "rhel_7_1", "centos_7_1406", > "centos_7_1503"] > } > } > > The reason I like this approach is that it leaves the definition of > ABI compatibility in the hands of the distros, but also makes it safe > to publish Linux wheel files on PyPI (just not with the generic > linux_x86_64 platform tag). > > > - By default, wheels will be built using the most specific platform > > information. In practice, I build our wheels[1] using Debian Squeeze in > > Docker and therefore they should work on most currently "supported" > Linuxes, > > but allowing such wheels to PyPI could still be dangerous because forward > > compatibility is not always guaranteed (e.g. if a SO version/name > changes, > > or a C lib API method changes in a non-backward compatible way but the SO > > version/name does not change). That said, I'd be happy to make a much > more > > generalized version of our docker-build[2] system that'd allow package > > authors to easily/rapidly build distro/version-specific wheels for many > of > > the popular Linux distros. We can assume that a wheel built on a vanilla > > install of e.g. Ubuntu 14.04 will work on any other installation of 14.04 > > (this is what the distro vendors promise, anyway). > > Right, if we break ABI within a release, that's our fault (putting on > my distro developer hat), and folks will rightly yell at us for it. I > was previously wary of this approach due to the "what about derived > distributions?" problem, but realised recently that a config file that > explicitly lists known binary compatible platforms should suffice for > that. 
There's only a handful of systems folks are likely want to > prebuild wheels for (Debian, Ubuntu, Fedora, CentOS/RHEL, openSuse), > and a configuration file based system allows ABI compatible derived > distros to be handled as if they were their parent. > > > - I attempt to set the SOABI if the SOABI config var is unset, this is > for > > Python 2, but will also be done even on Python 3. Maybe that is the wrong > > decision (or maybe SOABI is guaranteed to be set on Python 3). > > Python 3 should always set it, but if it's not present for some > reason, deriving it makes sense. > > > - Do any other implementations define SOABI? PyPy does not, I did not > test > > others. What should we do with these? > > The implementation identifier is also included in the compatibility > tags, so setting that in addition to the platform ABI tag when a wheel > contains binary extensions should suffice. > > > Because the project I work for[3] relies heavily on large number of > > packages, some of which have complicated build-time dependencies, we have > > always provided them as eggs and monkeypatched platform support back in > to > > pkg_resources. Now that the PyPA has settled on wheels as the preferred > > binary packaging format, I am pretty heavily motivated to do the work to > > work out all the issues with this implementation. > > Thank you! > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Jul 17 17:46:43 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 17 Jul 2015 17:46:43 +0200 Subject: [Distutils] Working toward Linux wheel support References: Message-ID: <20150717174643.6e46c715@fsol>

On Fri, 17 Jul 2015 08:36:39 -0700 Chris Barker wrote: > > - Packages with non-standard non-python dependencies: libhdf5, lapack, > BLAS, fortran(!) -- this is where the nightmare really is. I suspect most > folks on this list will say that this is "Scipy Problem", and indeed, > that's where the biggest issues are, and where systems like conda have > grown up to address this. > > But at this point, I think it's really sad that the community has become > fractured -- if folks start out with "I want to do scientific computing", > then they get pointed to Enthought Canopy or Anaconda, and all is well > (until they look for standard web development packages -- though that's > getting better). But if someone starts out as a web developer, and is all > happy with the PyPA stack (virtualenv, pip, etc...), then someone suggests > they put some Bokeh plotting in their web site, or need to do > some analytics on HDF5 files, or any number of things well supported by > Python, but NOT by pip/wheel -- they are kind of stuck.

Indeed, that's the main issue here. Eventually some people will want to use llvmlite or Numba in an environment where there's also a web application serving stuff, or who knows what other combinations.

> PS: Personally, after banging my head against this for years, > I've committed to conda for the moment -- working to get conda to better > support the wide range of python packages. I haven't tried it on Linux, but > it does exist and works well for some folks.

Because Linux binary wheels don't exist, conda is even more useful on Linux...

Regards,
Antoine. 
From chris.barker at noaa.gov Fri Jul 17 17:53:08 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 17 Jul 2015 08:53:08 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <20150717174643.6e46c715@fsol> References: <20150717174643.6e46c715@fsol> Message-ID: On Fri, Jul 17, 2015 at 8:46 AM, Antoine Pitrou wrote: > Due to the fact Linux binary wheels don't exist, conda is even more > useful on Linux... > True -- though it's at least possible, and certainly easier than on Mac and Windows, to build it all yourself on Linux. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Fri Jul 17 18:50:17 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 17 Jul 2015 09:50:17 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: I think Linux wheel support is almost useless unless the pypa stack > provides _something_ to handle non-python dependencies. > I wouldn't say useless, but I tend to agree with this sentiment. I'm thinking the only way to really "compete" with the ease of Conda (for non-python dependencies) is to shift away from wheels, and instead focus on making it easier to create native distro packages (i.e. rpm, deb etc...that can easily depend on non-python dependencies) for python applications, and moreover that these packages should be "parallel installable" with the system packages, i.e. they should depend on virtual environments, not the system python. I've been working on this some personally, admittedly pretty slowly, since it's a pretty tall order to put all the pieces together Marcus > > 1) Pure Python packages work fine as source. 
> > 2) Python packages with C extensions build really easily out of the box > -- so source distribution is fine (OK, I suppose some folks want to > run a system without a compiler -- is this the intended use-case?) > > So what are the hard cases? the one we really want binary wheels for? > > - Windows, where a system compiler is a rarity: Done > > - OS-X, where a system compiler is a semi-rarity, and way too many > "standard" system libs aren't there (or are old crappy versions...) - > Almost Done. > > - Packages with semi-standard dependencies: can we expect ANY Linux > distro to have libfreetype, libpng, libz, libjpeg, etc? probably, but maybe > not installed (would a headless server have libfreetype?). And would those > version be all compatible (probably if you specified a distro version) > > - Packages with non-standard non-python dependencies: libhdf5, lapack, > BLAS, fortran(!) -- this is where the nightmare really is. I suspect most > folks on this list will say that this is "Scipy Problem", and indeed, > that's where the biggest issues are, and where systems like conda have > grown up to address this. > > But at this point, I think it's really sad that the community has become > fractured -- if folks start out with "I want to do scientific computing", > then they get pointed to Enthought Canopy or Anaconda, and all is well > (until they look for standard web development packages -- though that's > getting better). But if someone starts out as a web developer, and is all > happy with the PyPA stack (virtualenv, pip, etc...), then someone suggests > they put some Bokeh plotting in their web site, or need to do > some analytics on HDF5 files, or any number of things well supported by > Python, but NOT by pip/wheel -- they are kind of stuck. > > My point is that it may actually be a bad thing to solve the easy problem > while keeping out fingers in our ears about the hard ones.... > > (la la la la, I don't need to use those packages. 
la la la la) > > My thought: what pip+wheel needs to support much of this is the ability to > specify a wheel dependency, rather than a package dependency -- i.e. "this > particular wheel requires a libfreetype wheel". Then we could have binary > wheels for non-python dependencies like libs (which would install the lib > into pre-defined locations that could be relative path linked to) > > Sorry for the rant.... > > -Chris > > PS: Personally, after banging my head against this for years, > I've committed to conda for the moment -- working to get conda to better > support the wide range of python packages. I haven't tried it on Linux, but > it does exist and works well for some folks. > > > > On Fri, Jul 17, 2015 at 1:22 AM, Nick Coghlan wrote: > >> On 17 July 2015 at 03:41, Nate Coraor wrote: >> > Hi all, >> > >> > I've recently been working on adding SOABI support for Python 2.x and >> other >> > pieces needed to get wheels w/ C extensions for Linux working. Here's >> the >> > work for wheels: >> > >> > https://bitbucket.org/pypa/wheel/pull-request/54/ >> > >> > Based on that, I've added support for those wheels to pip here: >> > >> > https://github.com/natefoo/pip/tree/linux-wheels >> >> > >> > As mentioned in the wheels PR, there are some questions and decisions >> made >> > that I need guidance on: >> > >> > - On Linux, the distro name/version (as determined by >> > platform.linux_distribution()) will be appended to the platform string, >> e.g. >> > linux_x86_64_ubuntu_14_04. This is going to be necessary to make a >> > reasonable attempt at wheel compatibility in PyPI. But this may violate >> PEP >> > 425. >> >> I think it's going beyond it in a useful way, though. At the moment, >> the "linux_x86_64" platform tag *under*specifies the platform - a >> binary extension built on Ubuntu 14.04 with default settings may not >> work on CentOS 7, for example. 
>> >> Adding in the precise distro name and version number changes that to >> *over*specification, but I now think we can address that through >> configuration settings on the installer side that allow the >> specification of "compatible platforms". That way a derived >> distribution could add the corresponding upstream distribution's >> platform tag and their users would be able to install the relevant >> wheel files by default. >> >> Rather than putting the Linux specific platform tag derivation logic >> directly in the tools, though, what if we claimed a file under the >> "/etc/python" subtree and used it to tell the tools what platform tags >> to use? For example, we could put the settings relating to package >> tags into "/etc/python/binary-compatibility.cfg" and allow that to be >> overridden on a per-virtualenv basis with a binary-compatibility.cfg >> file within the virtualenv. >> >> For example, we could have a section where for a given platform, we >> overrode both the build and install tags appropriately. For RHEL 7.1, >> that may look like: >> >> [linux_x86_64] >> build=rhel_7_1 >> install=rhel_7_0,rhel_7_1,centos_7_1406,centos_7_1503 >> >> Using JSON rather than an ini-style format would also work: >> >> { >> "linux_x86_64": { >> "build": "rhel_7_1", >> "install": ["rhel_7_0", "rhel_7_1", "centos_7_1406", >> "centos_7_1503"] >> } >> } >> >> The reason I like this approach is that it leaves the definition of >> ABI compatibility in the hands of the distros, but also makes it safe >> to publish Linux wheel files on PyPI (just not with the generic >> linux_x86_64 platform tag). >> >> > - By default, wheels will be built using the most specific platform >> > information. In practice, I build our wheels[1] using Debian Squeeze in >> > Docker and therefore they should work on most currently "supported" >> Linuxes, >> > but allowing such wheels to PyPI could still be dangerous because >> forward >> > compatibility is not always guaranteed (e.g. 
if a SO version/name >> changes, >> > or a C lib API method changes in a non-backward compatible way but the >> SO >> > version/name does not change). That said, I'd be happy to make a much >> more >> > generalized version of our docker-build[2] system that'd allow package >> > authors to easily/rapidly build distro/version-specific wheels for many >> of >> > the popular Linux distros. We can assume that a wheel built on a vanilla >> > install of e.g. Ubuntu 14.04 will work on any other installation of >> 14.04 >> > (this is what the distro vendors promise, anyway). >> >> Right, if we break ABI within a release, that's our fault (putting on >> my distro developer hat), and folks will rightly yell at us for it. I >> was previously wary of this approach due to the "what about derived >> distributions?" problem, but realised recently that a config file that >> explicitly lists known binary compatible platforms should suffice for >> that. There's only a handful of systems folks are likely want to >> prebuild wheels for (Debian, Ubuntu, Fedora, CentOS/RHEL, openSuse), >> and a configuration file based system allows ABI compatible derived >> distros to be handled as if they were their parent. >> >> > - I attempt to set the SOABI if the SOABI config var is unset, this is >> for >> > Python 2, but will also be done even on Python 3. Maybe that is the >> wrong >> > decision (or maybe SOABI is guaranteed to be set on Python 3). >> >> Python 3 should always set it, but if it's not present for some >> reason, deriving it makes sense. >> >> > - Do any other implementations define SOABI? PyPy does not, I did not >> test >> > others. What should we do with these? >> >> The implementation identifier is also included in the compatibility >> tags, so setting that in addition to the platform ABI tag when a wheel >> contains binary extensions should suffice. 
>> >> > Because the project I work for[3] relies heavily on large number of >> > packages, some of which have complicated build-time dependencies, we >> have >> > always provided them as eggs and monkeypatched platform support back in >> to >> > pkg_resources. Now that the PyPA has settled on wheels as the preferred >> > binary packaging format, I am pretty heavily motivated to do the work to >> > work out all the issues with this implementation. >> >> Thank you! >> >> Regards, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Fri Jul 17 20:34:11 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Fri, 17 Jul 2015 20:34:11 +0200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: 2015-07-17 18:50 GMT+02:00 Marcus Smith : > > >> I think Linux wheel support is almost useless unless the pypa stack >> provides _something_ to handle non-python dependencies. > > > I wouldn't say useless, but I tend to agree with this sentiment. > > I'm thinking the only way to really "compete" with the ease of Conda (for > non-python dependencies) is to shift away from wheels, and instead focus on > making it easier to create native distro packages (i.e. 
rpm, deb etc...that > can easily depend on non-python dependencies) for python applications, and > moreover that these packages should be "parallel installable" with the > system packages, i.e. they should depend on virtual environments, not the > system python.

+1 for being able to work in isolation of the system packages (and without admin rights).

This is precisely the killer feature of conda (and virtualenv to some extent): users do not need to rely on interaction with sys admins to get up and running to set up a developer environment. Furthermore they can get as many cheap environments in parallel to develop and reproduce bugs with various versions of libraries or Python itself.

However I don't see why you would not be able to ship your non-Python dependencies as wheels. Surely it should be possible to package stateless libraries like OpenBLAS, libxml/libxslt, llvm runtimes, qt and the like as wheels.

Shipping wheels for services such as database servers like postgresql is out of scope in my opinion. For sysadmin tasks such as managing running stateful services, system packages or docker containers + orchestration are the way to go.

Still wheels should be able to address the "setup parallel dev environments" use case. When I say "developer environment" I also include "data scientist environments" that rely on ipython notebook + scipy stack libraries.

Best,

-- Olivier

From dholth at gmail.com Fri Jul 17 22:18:35 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 17 Jul 2015 20:18:35 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID:

I've recently packaged SDL2 for Windows as a wheel, without any Python code. It is a conditional dependency "if Windows" for a SDL wrapper. Very convenient. It uses a little WAF script instead of bdist_wheel to make the package. 
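A conditional "if Windows" dependency of the kind described above is expressed with an environment marker. A toy evaluator for the platform check looks like this; the package names are illustrative, and real installers use the full environment-marker grammar rather than this single sys_platform comparison:

```python
import sys

def requirement_applies(marker_platform, platform=None):
    """Return True if a dependency guarded by
    sys_platform == "<marker_platform>" should be installed on the
    given platform (defaults to the running interpreter's)."""
    current = platform if platform is not None else sys.platform
    return current == marker_platform

# An SDL wrapper could guard its binary helper wheel so that it is only
# pulled in on Windows (package name illustrative):
conditional_deps = {"sdl2-lib-demo": "win32"}

on_windows = [name for name, plat in conditional_deps.items()
              if requirement_applies(plat, platform="win32")]
on_linux = [name for name, plat in conditional_deps.items()
            if requirement_applies(plat, platform="linux2")]
```

With setuptools of that era, the same effect was typically achieved by declaring the dependency under an `extras_require` key such as `:sys_platform == "win32"`.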
https://bitbucket.org/dholth/sdl2_lib/src/tip We were talking on this list about adding more categories to wheel, to make it easier to install in abstract locations "confdir", "libdir" etc. probably per GNU convention which would map to /etc, /usr/share, and so forth based on the platform. Someone needs to write that specification. Propose we forget about Windows for the first revision, so that it is possible to get it done. The real trick is when you have to depend on something that lives outside of your packaging system, for example, it's probably easier to ship qt as a wheel than to ship libc as a wheel. Asking for specific SHA-256 hashes of all the 'ldd' shared library dependencies would be limiting. Specifying the full library names of the same a-la RPM somewhere? And as always many Linux users will find precompiled code to be a nuisance even if it does run and even if the dependency in question is difficult to compile. On Fri, Jul 17, 2015 at 2:34 PM Olivier Grisel wrote: > 2015-07-17 18:50 GMT+02:00 Marcus Smith : > > > > > >> I think Linux wheel support is almost useless unless the pypa stack > >> provides _something_ to handle non-python dependencies. > > > > > > I wouldn't say useless, but I tend to agree with this sentiment. > > > > I'm thinking the only way to really "compete" with the ease of Conda (for > > non-python dependencies) is to shift away from wheels, and instead focus > on > > making it easier to create native distro packages (i.e. rpm, deb > etc...that > > can easily depend on non-python dependencies) for python applications, > and > > moreover that these packages should be "parallel installable" with the > > system packages, i.e. they should depend on virtual environments, not the > > system python. > > +1 for being able to work in isolation of the system packages (and > without admin rights). 
> > This is precisely the killer feature of conda (and virtualenv to some > extent): users do not need to rely on interaction with sys admins to > get up and running to setup a developer environment. Furthermore they > can get as many cheap environments in parallel to develop and > reproduce bugs with various versions of libraries or Python it-self. > > However I don't see why you would not be able to ship your non-Python > dependencies as wheels. Surely it should be possible to package > stateless libraries like OpenBLAS, libxml/libxsql, llvm runtimes, qt > and the like as wheels. > > Shipping wheels for services such as database servers like postgresql > is out of the scope in my opinion. For such admin sys tasks such as > managing running stateful services, system packages or docker > containers + orchestration are the way to go. > > Still wheels should be able to address the "setup parallel dev > environments" use case. When I say "developer environment" I also > include "datascientists environment" that rely on ipython notebook + > scipy stack libraries. > > Best, > > -- > Olivier > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sat Jul 18 03:13:11 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 17 Jul 2015 18:13:11 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: <8281273334707202351@unknownmsgid> > On Jul 17, 2015, at 1:19 PM, Daniel Holth wrote: > > I've recently packaged SDL2 for Windows as a wheel, without any Python code. It is a conditional dependency "if Windows" for a SDL wrapper. Cool, though I still think we need wheel-level deps -- the dependency is on the particular binary, not the platform. But a good start. 
> > We were talking on this list about adding more categories to wheel, to make it easier to install in abstract locations "confdir", "libdir" etc. probably per GNU convention which would map to /etc, /usr/share, and so forth based on the platform.

Where would the concrete dirs be? I think inside the Python install, i.e. where everything is managed by Python. I don't think I want pip dumping stuff in /usr/local, never mind /usr. And presumably the goal is to support virtualenv anyway.

> Someone needs to write that specification. Propose we forget about Windows for the first revision, so that it is possible to get it done.

If we want Windows support in the long run -- and we do -- we should be thinking about it from the start. But if it's going in the Python-managed dirs, it doesn't have to follow Windows convention ...

> The real trick is when you have to depend on something that lives outside of your packaging system, for example, it's probably easier to ship qt as a wheel than to ship libc as a wheel.

Well, we can expect SOME base system! No system can exist without libc....

-CHB

From dholth at gmail.com Sat Jul 18 04:11:26 2015 From: dholth at gmail.com (Daniel Holth) Date: Sat, 18 Jul 2015 02:11:26 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <8281273334707202351@unknownmsgid> References: <8281273334707202351@unknownmsgid> Message-ID:

Yes, but how do you know that I compiled against the right version of libc?

On Fri, Jul 17, 2015, 9:13 PM Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > > On Jul 17, 2015, at 1:19 PM, Daniel Holth wrote: > > > > I've recently packaged SDL2 for Windows as a wheel, without any Python > code. It is a conditional dependency "if Windows" for a SDL wrapper. > > Cool, though I still think we need wheel-level deps -- the dependency > is on the particular binary, not the platform. But a good start. 
> > > > > We were talking on this list about adding more categories to wheel, to > make it easier to install in abstract locations "confdir", "libdir" etc. > probably per GNU convention which would map to /etc, /usr/share, and so > forth based on the platform. > > Where would the concrete firs be? I think inside the Python install > I.e. Where everything is managed by python . I don't think I want pip > dumping stuff in /use/local, nevermind /usr. And presumably the goal > is to support virtualenv anyway. > > > Someone needs to write that specification. Propose we forget about > Windows for the first revision, so that it is possible to get it done. > > If we want Windows support in the long run -- and we do -- we should > be thinking about it from the start. But if it's going in the > Python-managed dirs, it doesn't have to follow Windows convention ... > > > The real trick is when you have to depend on something that lives > outside of your packaging system, for example, it's probably easier to ship > qt as a wheel than to ship libc as a wheel. > > Well, we can expect SOME base system! No system can exist without libc.... > > -CHB > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea at andreabedini.com Sat Jul 18 09:00:31 2015 From: andrea at andreabedini.com (Andrea Bedini) Date: Sat, 18 Jul 2015 17:00:31 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: <20150717174643.6e46c715@fsol> Message-ID: <83276676-1442-4B1B-B7BB-15D99E260019@andreabedini.com> > On 18 Jul 2015, at 1:53 am, Chris Barker wrote: > > On Fri, Jul 17, 2015 at 8:46 AM, Antoine Pitrou wrote: > Due to the fact Linux binary wheels don't exist, conda is even more > useful on Linux... > > True -- though it's at least possible, and certainly easier than on Mac and Windows, to build it all yourself on Linux. I build(*) everything myself on OS X and I find it easy, hdf5 has never been a problem. 
(*) I am lying, homebrew provides binary installs. my 2c, Andrea -- Andrea Bedini @andreabedini, http://www.andreabedini.com See the impact of my research at https://impactstory.org/AndreaBedini use https://keybase.io/andreabedini to send me encrypted messages Key fingerprint = 17D5 FB49 FA18 A068 CF53 C5C2 9503 64C1 B2D5 9591 From p.f.moore at gmail.com Sat Jul 18 13:51:12 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 18 Jul 2015 12:51:12 +0100 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <8281273334707202351@unknownmsgid> References: <8281273334707202351@unknownmsgid> Message-ID: On 18 July 2015 at 02:13, Chris Barker - NOAA Federal wrote: >> Someone needs to write that specification. Propose we forget about Windows for the first revision, so that it is possible to get it done. > > If we want Windows support in the long run -- and we do -- we should > be thinking about it from the start. But if it's going in the > Python-managed dirs, it doesn't have to follow Windows convention ... I agree that excluding Windows is probably a mistake (differing expectations on Windows will come back to bite you if you do that). But Windows shouldn't be a huge issue as long as it's clearly noted that all directories will be within the Python-managed dirs. (Even if the system install on Unix doesn't work like this, virtualenvs on Unix have to, so that's not a Windows-specific point). Managing categories that make no sense on particular platforms (e.g. manpages on Windows) is the only other thing that I can think of that considering Windows might bring up, but again, it's not actually Windows specific (HTML Help files on Unix, for instance, would be similar - an obvious resolution is just to document that certain directories simply won't be installed on inappropriate platforms). 
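[The resolution above, where categories simply are not installed on inappropriate platforms, could be modelled as a per-platform table. The category names and relative paths below are illustrative only, not a proposed specification; everything stays inside the Python-managed prefix as discussed in this thread.]

```python
# Illustrative per-platform mapping for abstract install categories.
# A category with no entry on a platform is simply skipped there
# (manpages on Windows, for example).
CATEGORY_PATHS = {
    "posix": {
        "confdir": "etc",
        "datadir": "share",
        "mandir": "share/man",
    },
    "nt": {
        "confdir": "etc",
        "datadir": "share",
        # no "mandir": manpages are not installed on Windows
    },
}

def install_path(category, os_name, prefix="/venv"):
    """Resolve a category to a concrete path inside the Python-managed
    prefix, or None if the category is skipped on this platform."""
    subdir = CATEGORY_PATHS.get(os_name, {}).get(category)
    if subdir is None:
        return None
    return "/".join([prefix, subdir])
```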
Paul From leorochael at gmail.com Mon Jul 20 03:42:06 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Sun, 19 Jul 2015 22:42:06 -0300 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: Hi, On 17 July 2015 at 05:22, Nick Coghlan wrote: > On 17 July 2015 at 03:41, Nate Coraor wrote: > > [...] > > > > As mentioned in the wheels PR, there are some questions and decisions > made > > that I need guidance on: > > > > - On Linux, the distro name/version (as determined by > > platform.linux_distribution()) will be appended to the platform string, > e.g. > > linux_x86_64_ubuntu_14_04. This is going to be necessary to make a > > reasonable attempt at wheel compatibility in PyPI. But this may violate > PEP > > 425. > > I think it's going beyond it in a useful way, though. At the moment, > the "linux_x86_64" platform tag *under*specifies the platform - a > binary extension built on Ubuntu 14.04 with default settings may not > work on CentOS 7, for example. > > Adding in the precise distro name and version number changes that to > *over*specification, but I now think we can address that through > configuration settings on the installer side that allow the > specification of "compatible platforms". That way a derived > distribution could add the corresponding upstream distribution's > platform tag and their users would be able to install the relevant > wheel files by default. > [...] The definition of "acceptable platform tags" should list the platforms in order of preference (for example, some of the backward-compatible past releases of a Linux distro, in reverse order), so that if multiple acceptable wheels are present the closest one is selected. As some others have mentioned, this doesn't solve the problem of system dependencies. 
I.e.: a perfectly compiled lxml wheel for linux_x86_64_ubuntu_14_04, installed into Ubuntu 14.04, will still fail to work if the libxml2 and libxslt1.1 debian packages are not installed (among others). Worse, pip will gladly install such a package, and the failure will surface as a potentially cryptic error message payload to an ImportError that doesn't really make it clear what needs to be done to make the package actually work. To solve this problem, so far we've only been able to come up with two extremes: - Have the libraries contain enough metadata in their source form that we can generate true system packages from them (this doesn't really help the virtualenv case) - Carry all the dependencies. Either by static linking, or by including all dynamic libraries in the wheel, or by becoming something like Conda where we package even non-Python projects. As a further step that could be taken on top of Nate's proposed PR, but avoiding the extremes above, I like Daniel's idea of "specifying the full library names [...] '-l' RPM". Combine it with the specification of abstract locations, and we could have wheels declare something like:

- lxml wheel for linux_x86_64_ubuntu_14_04:
  - extdeps:
    - /libc.so.6
    - /libm.so.6
    - /libxml2.so.2
    - /libexslt.so.0

This also makes it possible to have wheels depend on stuff other than libraries, for example binaries or data files (imagine a lightweight version of pytz that didn't have to carry its own timezones, and depended on the host system to keep them updated). As long as we have a proper abstract location to anchor the files, we can express these dependencies without hardcoding paths as they were on the build machine. It even opens the possibility that some of these external dependencies could be provided on a per-virtualenv basis, instead of globally. Pip could then (optionally?) check the existence of these external dependencies before allowing installation of the wheel, increasing the likelihood that it will work once installed. 
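[A pre-install check along these lines could be as simple as trying to load each declared library. This is a sketch only, assuming the extdeps are plain sonames: a real implementation in an installer would resolve names against the abstract locations rather than the default loader search path.]

```python
import ctypes

def missing_extdeps(sonames):
    """Return the declared shared-library deps that cannot be loaded.

    A wheel declaring extdeps like ["libxml2.so.2", "libexslt.so.0"]
    could be checked with this before installation, turning a cryptic
    ImportError at run time into an up-front, user-friendly report.
    """
    missing = []
    for soname in sonames:
        try:
            ctypes.CDLL(soname)
        except OSError:
            missing.append(soname)
    return missing
```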
This same way of expressing external dependencies could be extended to source packages themselves. For example, the `setup()` (or whatever successor we end up with) for a PIL source package could express a dependency on '/png.h'. Or, what's more likely these days, a dependency on '/libpng12-config', which when run prints the correct invocations of gcc flags to add to the build process. The build process would then check the presence of these external build dependencies early on, allowing for much clearer error messages and precise instructions on how to provide the proper build environment. Most distros provide handy ways of querying which packages provide which files, so I believe the specification of external file dependencies to be a nice step up from where we are right now, without wading into full-system-integration territory. Leo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jul 20 07:50:00 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 20 Jul 2015 15:50:00 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 18 July 2015 at 01:36, Chris Barker wrote: > TL;DR -- pip+wheel needs to address the non-python dependency issue before > it can be a full solution for Linux (or anything else, really) > > The long version: > > I think Linux wheel support is almost useless unless the pypa stack > provides _something_ to handle non-python dependencies. > > 1) Pure Python packages work fine as source. > > 2) Python packages with C extensions build really easily out of the box > -- so source distribution is fine (OK, I suppose some folks want to > run a system without a compiler -- is this the intended use-case?) The intended use case is "Build once, deploy many times". 
This is especially important for use cases like Nate's - Galaxy has complete control over both the build environment and the deployment environment, but they *don't* want to rebuild in every analysis environment. That means all they need is a way to build a binary artifact that adequately declares its build context, and a way to retrieve those artifacts at installation time. I'm interested in the same case - I don't need to build artifacts for arbitrary versions of Linux, I mainly want to build them for the particular ABIs defined by the different Fedora and EPEL versions. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon Jul 20 08:00:34 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 20 Jul 2015 16:00:34 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 20 July 2015 at 11:42, Leonardo Rochael Almeida wrote: > To solve this problem, so far we've only been able to come up with two > extremes: > > - Have the libraries contain enough metadata in their source form that we > can generate true system packages from them (this doesn't really help the > virtualenv case) > - Carry all the dependencies. Either by static linking, or by including all > dynamic libraries in the wheel, or by becoming something like Conda where we > package even non Python projects. We keep stalling on making progress with Linux wheel files as our discussions spiral out into all the reasons why solving the general case of binary distribution is so hard. However, Nate has a specific concrete problem in needing to get artifacts from Galaxy's build servers and installing them into their analysis environments - let's help him solve that, on the assumption that some *other* mechanism will be used to manage the non-Python components. 
This approach is actually applicable to many server-based environments, as a configuration management tool like Puppet, Chef, Salt or Ansible will be used to deal with the non-Python aspects. This approach is even applicable to some "centrally managed data analysis workstation" cases. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.barker at noaa.gov Mon Jul 20 19:37:20 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 20 Jul 2015 10:37:20 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Sun, Jul 19, 2015 at 10:50 PM, Nick Coghlan wrote: > The intended use case is "Build once, deploy many times". > > This is especially important for use cases like Nate's - Galaxy has > complete control over both the build environment and the deployment > environment, but they *don't* want to rebuild in every analysis > environment. > > That means all they need is a way to build a binary artifact that > adequately declares its build context, and a way to retrieve those > artifacts at installation time. > > I'm interested in the same case - I don't need to build artifacts for > arbitrary versions of Linux, I mainly want to build them for the > particular ABIs defined by the different Fedora and EPEL versions. > sure -- but isn't that use-case already supported by wheel -- define your own wheelhouse that has the ABI you know you need, and point pip to it. Not that it would hurt to add a bit more to the filename, but it seems you either: Have a specific system definition you are building for -- so you want to give it a name. One step better than defining a wheelhouse. or You want to put it up on PyPI and have the folks with compatible systems be able to get it, and know it will work -- THAT is a big ol' can of worms that maybe you're better off going with conda.... -CHB > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Jul 20 19:39:19 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 20 Jul 2015 10:39:19 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Sun, Jul 19, 2015 at 11:00 PM, Nick Coghlan wrote: > However, Nate has a specific concrete problem in needing to get > artifacts from Galaxy's build servers and installing them into their > analysis environments - let's help him solve that, on the assumption > that some *other* mechanism will be used to manage the non-Python > components > What is there to solve here? Galaxy's build servers put all the wheels somewhere. Galaxy's analysis systems point to that place. I thought pip + wheel + wheelhouse already solved that problem? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Jul 20 20:37:44 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 20 Jul 2015 19:37:44 +0100 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 20 July 2015 at 18:37, Chris Barker wrote: > sure -- but isn't that use-case already supported by wheel -- define your > own wheelhouse that has the ABI you know you need, and point pip to it. I presume the issue is wanting to have a single shared wheelhouse for a (presumably limited) number of platforms. 
So being able to specify a (completely arbitrary) local platform name at build and install time sounds like a viable option. BUT - we have someone offering a solution that solves at least part of the problem, sufficient for their needs and a step forward from where we are. This is great news, as wheel support for Linux has always been stalled before (for whatever reason). So thank you to Nate for his work, and let's look to how we can accept it and build on it in the future. Unfortunately, I don't have any Linux knowledge in this area, so I can't offer any useful advice on the questions Nate asks. But hopefully some people on this list can. Paul From tseaver at palladion.com Tue Jul 21 05:25:21 2015 From: tseaver at palladion.com (Tres Seaver) Date: Mon, 20 Jul 2015 23:25:21 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <20150717174643.6e46c715@fsol> References: <20150717174643.6e46c715@fsol> Message-ID: On 07/17/2015 11:46 AM, Antoine Pitrou wrote: > Due to the fact Linux binary wheels don't exist, conda is even more > useful on Linux... FWIW, they exist, they just can't be published to PyPI. Private indexes (where binary compatibility is a known quantity) work fine with them. Because it nails down binary non-Python dependencies, conda (and similar tools) do fit the bill for public distribution of Python projects which have such build-time deps. Even given the "over-specified" platform tags Nick suggests, Linux wheels won't fully work, because the build-time deps won't be satisfiable *by pip*: the burden will be on each project to attempt a build and then spit out an error message trying to indicate the missing system package. Is-that-'-dev'-or-'-devel'-I-need?'ly, Tres. 
-- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From leorochael at gmail.com Tue Jul 21 17:07:05 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Tue, 21 Jul 2015 12:07:05 -0300 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: <20150717174643.6e46c715@fsol> Message-ID: Hi Tres, On 21 July 2015 at 00:25, Tres Seaver wrote: > [...] > > Even given the "over-specified" platform tags Nick suggests, Linux wheels > won't fully work, because the build-time deps won't be satisfiable *by > pip*: the burden will be on each project to attempt a build and then > spit out an error message trying to indicate the missing system package. > Actually, since they're wheels, they're already built, so installing them will work perfectly. 
Trying to use them is what's going to fail with a message like: ImportError: libxslt.so.1: cannot open shared object file: No such file or directory I do think Nate's proposal is a step forward[1], since being unable to use the package because a runtime dependency is not installed is no more of a problem than being unable to install a source package because a build dependency is not installed. And the package documentation could always specify which system packages are needed for using the wheel. If anything, the error message tends to be smaller, whereas a missing .h from a missing development package usually causes a huge stream of error messages on build, only the first of which is actually relevant. Then again, an import error could happen anywhere in the middle of running the software, so in some cases the error might not be obvious at first. My proposal (that wheels should specify the system file dependencies in terms of abstract locations) would allow pip to provide much more user-friendly information about the missing file, at the earliest possible moment, allowing the user to hunt down (or compile) the system package at the same moment as he's installing the Python package. This information is readily derived during the build process, making its inclusion in the wheel info straightforward. But I don't think my proposal should block acceptance of Nate's. [1] As long as the acceptance of the over-specified wheels is a strictly opt-in process. Some Linux folks don't like running code they haven't compiled. Regards, Leo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oscar.j.benjamin at gmail.com Tue Jul 21 18:38:35 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 21 Jul 2015 16:38:35 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Fri, 17 Jul 2015 at 16:37 Chris Barker wrote: > TL;DR -- pip+wheel needs to address the non-python dependency issue before > it can be a full solution for Linux (or anything else, really) > > > - Packages with semi-standard dependencies: can we expect ANY Linux > distro to have libfreetype, libpng, libz, libjpeg, etc? probably, but maybe > not installed (would a headless server have libfreetype?). And would those > version be all compatible (probably if you specified a distro version) > - Packages with non-standard non-python dependencies: libhdf5, lapack, > BLAS, fortran(!) > I think it would be great to just package these up as wheels and put them on PyPI. I'd really like to be able to (easily) have different BLAS libraries on a per-virtualenv basis. So numpy could depend on "blas" and there could be a few different distributions on PyPI that provide "blas" representing the different underlying libraries. If I want to install numpy with a particular one I can just do:

    pip install gotoblas  # Installs the BLAS library within Python dirs
    pip install numpy

You could have a BLAS distribution that is just a shim for a system BLAS that was installed some other way.

    pip install --install-option='--blaslib=/usr/lib/libblas' systemblas
    pip install numpy

That would give Linux distros a way to provide the BLAS library that python/pip understands without everything being statically linked and without pip needing to understand the distro package manager. Also Python packages that want BLAS can use the Python import system to locate the BLAS library, making it particularly simple for them and allowing distros to move things around as desired. I would like it if this were possible even without wheels. 
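[The shim idea sketched above, locating BLAS through the import system instead of hard-coded paths, might look roughly like this. The "systemblas" package, its attribute names, and the default soname are all hypothetical, chosen only to illustrate the mechanism.]

```python
import os

class SystemBlasShim(object):
    """Stand-in for a tiny installed shim package (e.g. systemblas).

    The shim ships no BLAS code of its own; it only records where the
    real shared library lives, so that consumers can find BLAS through
    the import system and distros can move things around as desired.
    """
    def __init__(self, libdir, soname="libblas.so.3"):
        self.libdir = libdir
        self.soname = soname

    def library_path(self):
        return os.path.join(self.libdir, self.soname)

# A consumer such as numpy would then do something like:
#   import systemblas, ctypes
#   blas = ctypes.CDLL(systemblas.library_path())
```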
I'd be happy just that the commands to download a BLAS library, compile it, install it non-globally, and configure numpy to use it would be that simple. If it worked with wheels then that'd be a massive win. -- Oscar -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Wed Jul 22 23:07:32 2015 From: bcannon at gmail.com (Brett Cannon) Date: Wed, 22 Jul 2015 21:07:32 +0000 Subject: [Distutils] dump of all PyPI project metadata available? Message-ID: When I wrote https://nothingbutsnark.svbtle.com/python-3-support-on-pypi I wrote a script to download every project's JSON metadata by scraping the simple index and then making the appropriate GET request for the JSON metadata. It worked, but somewhat of a hassle. Is there some dump somewhere that is built daily, weekly, or monthly of all the metadata on PyPI for offline analysis? -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Wed Jul 22 23:19:46 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 22 Jul 2015 16:19:46 -0500 Subject: [Distutils] dump of all PyPI project metadata available? In-Reply-To: References: Message-ID: https://github.com/dstufft/pypi-stats https://github.com/dstufft/pypi-external-stats - [ ] a flat bigquery w/ pandas.io.gbq ala GitHub Archive would be great - [ ] it's probably worth it to add RDFa to PyPi and warehouse pages (in addition to the auxiliary executed/extracted JSON) for #search On Jul 22, 2015 4:08 PM, "Brett Cannon" wrote: > When I wrote https://nothingbutsnark.svbtle.com/python-3-support-on-pypi > I wrote a script to download every project's JSON metadata by scraping the > simple index and then making the appropriate GET request for the JSON > metadata. It worked, but somewhat of a hassle. > > Is there some dump somewhere that is built daily, weekly, or monthly of > all the metadata on PyPI for offline analysis? 
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Thu Jul 23 00:12:32 2015 From: bcannon at gmail.com (Brett Cannon) Date: Wed, 22 Jul 2015 22:12:32 +0000 Subject: [Distutils] dump of all PyPI project metadata available? In-Reply-To: References: Message-ID: On Wed, Jul 22, 2015 at 2:19 PM Wes Turner wrote: > https://github.com/dstufft/pypi-stats > > https://github.com/dstufft/pypi-external-stats > I'm not quite sure what I'm supposed to get from those links, Wes, as that code still scrapes every project individually and downloads them while all I'm trying to avoid having to scrape PyPI and instead just download a single file (plus I don't want the files but just the metadata already returned by the JSON API). -Brett > - [ ] a flat bigquery w/ pandas.io.gbq ala GitHub Archive would be great > - [ ] it's probably worth it to add RDFa to PyPi and warehouse pages (in > addition to the auxiliary executed/extracted JSON) for #search > On Jul 22, 2015 4:08 PM, "Brett Cannon" wrote: > >> When I wrote https://nothingbutsnark.svbtle.com/python-3-support-on-pypi >> I wrote a script to download every project's JSON metadata by scraping the >> simple index and then making the appropriate GET request for the JSON >> metadata. It worked, but somewhat of a hassle. >> >> Is there some dump somewhere that is built daily, weekly, or monthly of >> all the metadata on PyPI for offline analysis? >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wes.turner at gmail.com Thu Jul 23 06:12:21 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 22 Jul 2015 23:12:21 -0500 Subject: [Distutils] dump of all PyPI project metadata available? In-Reply-To: References: Message-ID: On Jul 22, 2015 5:12 PM, "Brett Cannon" wrote: > > > > On Wed, Jul 22, 2015 at 2:19 PM Wes Turner wrote: >> >> https://github.com/dstufft/pypi-stats >> >> https://github.com/dstufft/pypi-external-stats > > > I'm not quite sure what I'm supposed to get from those links, Wes, as that code still scrapes every project individually and downloads them while all I'm trying to avoid having to scrape PyPI and instead just download a single file (plus I don't want the files but just the metadata already returned by the JSON API). An online query or an offline dump? > > -Brett > >> >> - [ ] a flat bigquery w/ pandas.io.gbq ala GitHub Archive would be great http://pandas.pydata.org/pandas-docs/version/0.16.2/io.html#io-bigquery >> - [ ] it's probably worth it to add RDFa to PyPi and warehouse pages (in addition to the auxiliary executed/extracted JSON) for #search https://github.com/pypa/warehouse/blob/master/warehouse/packaging/models.py https://github.com/pypa/warehouse/blob/master/tests/unit/packaging/test_models.py https://github.com/pypa/warehouse/blob/master/warehouse/packaging/views.py https://github.com/pypa/warehouse/blob/master/warehouse/templates/packaging/detail.html https://github.com/pypa/warehouse/blob/master/warehouse/routes.py https://github.com/pypa/warehouse/blob/master/tests/unit/legacy/api/test_json.py https://github.com/pypa/warehouse/blob/master/warehouse/legacy/api/json.py >> >> On Jul 22, 2015 4:08 PM, "Brett Cannon" wrote: >>> >>> When I wrote https://nothingbutsnark.svbtle.com/python-3-support-on-pypi I wrote a script to download every project's JSON metadata by scraping the simple index and then making the appropriate GET request for the JSON metadata. It worked, but somewhat of a hassle. 
>>> >>> Is there some dump somewhere that is built daily, weekly, or monthly of all the metadata on PyPI for offline analysis? >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Thu Jul 23 16:30:31 2015 From: bcannon at gmail.com (Brett Cannon) Date: Thu, 23 Jul 2015 14:30:31 +0000 Subject: [Distutils] dump of all PyPI project metadata available? In-Reply-To: References: Message-ID: On Wed, Jul 22, 2015 at 9:12 PM Wes Turner wrote: > > On Jul 22, 2015 5:12 PM, "Brett Cannon" wrote: > > > > > > > > On Wed, Jul 22, 2015 at 2:19 PM Wes Turner wrote: > >> > >> https://github.com/dstufft/pypi-stats > >> > >> https://github.com/dstufft/pypi-external-stats > > > > > > I'm not quite sure what I'm supposed to get from those links, Wes, as > that code still scrapes every project individually and downloads them while > all I'm trying to avoid having to scrape PyPI and instead just download a > single file (plus I don't want the files but just the metadata already > returned by the JSON API). > > An online query or an offline dump? > Offline dump. I literally just want a single file to download. Anyway, it's sounding like there isn't one currently so it would need to be a new feature for Warehouse. 
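[For reference, the per-project metadata being scraped here comes from PyPI's JSON API (a GET of /pypi/<project>/json), and a dump builder is essentially that request in a loop over the names from the simple index. The base URL is left as a parameter since the canonical host has changed over time; only the URL construction is exercised below, the fetch is a sketch.]

```python
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def project_json_url(project, base="https://pypi.org/pypi"):
    """URL of the JSON metadata for one project."""
    return "%s/%s/json" % (base, project)

def fetch_metadata(project):
    """Fetch one project's metadata dict; a dump builder would loop
    over every name from the simple index and write these to disk."""
    return json.load(urlopen(project_json_url(project)))
```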
-Brett > > > > -Brett > > > >> > >> - [ ] a flat bigquery w/ pandas.io.gbq ala GitHub Archive would be great > > http://pandas.pydata.org/pandas-docs/version/0.16.2/io.html#io-bigquery > > >> - [ ] it's probably worth it to add RDFa to PyPi and warehouse pages > (in addition to the auxiliary executed/extracted JSON) for #search > > https://github.com/pypa/warehouse/blob/master/warehouse/packaging/models.py > > > https://github.com/pypa/warehouse/blob/master/tests/unit/packaging/test_models.py > > https://github.com/pypa/warehouse/blob/master/warehouse/packaging/views.py > > > https://github.com/pypa/warehouse/blob/master/warehouse/templates/packaging/detail.html > > https://github.com/pypa/warehouse/blob/master/warehouse/routes.py > > > https://github.com/pypa/warehouse/blob/master/tests/unit/legacy/api/test_json.py > > https://github.com/pypa/warehouse/blob/master/warehouse/legacy/api/json.py > > >> > >> On Jul 22, 2015 4:08 PM, "Brett Cannon" wrote: > >>> > >>> When I wrote > https://nothingbutsnark.svbtle.com/python-3-support-on-pypi I wrote a > script to download every project's JSON metadata by scraping the simple > index and then making the appropriate GET request for the JSON metadata. It > worked, but somewhat of a hassle. > >>> > >>> Is there some dump somewhere that is built daily, weekly, or monthly > of all the metadata on PyPI for offline analysis? > >>> > >>> _______________________________________________ > >>> Distutils-SIG maillist - Distutils-SIG at python.org > >>> https://mail.python.org/mailman/listinfo/distutils-sig > >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Thu Jul 23 17:36:32 2015 From: richard at python.org (Richard Jones) Date: Thu, 23 Jul 2015 09:36:32 -0600 Subject: [Distutils] dump of all PyPI project metadata available? 
In-Reply-To: References: Message-ID: On 22 July 2015 at 15:07, Brett Cannon wrote: > When I wrote https://nothingbutsnark.svbtle.com/python-3-support-on-pypi > I wrote a script to download every project's JSON metadata by scraping the > simple index and then making the appropriate GET request for the JSON > metadata. It worked, but somewhat of a hassle. > > Is there some dump somewhere that is built daily, weekly, or monthly of > all the metadata on PyPI for offline analysis? > Short answer is there's nothing built into PyPI itself to do this. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Jul 24 20:52:51 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 24 Jul 2015 11:52:51 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin wrote: > > I think it would be great to just package these up as wheels and put them > on PyPI. > that's the point -- there is no way with the current spec to specify a wheel dependency as opposed to a package dependency. i.e. this particular binary numpy wheel depends on this other wheel, whereas the numpy source package does not have that dependency -- and, indeed, a wheel for one platform may have different dependencies than on other platforms. > So numpy could depend on "blas" and there could be a few different > distributions on PyPI that provide "blas" representing the different > underlying libraries. If I want to install numpy with a particular one I > can just do: > > pip install gotoblas # Installs the BLAS library within Python dirs > pip install numpy > well, different implementations of BLAS are theoretically ABI compatible, but as I understand it, it's not actually that simple, so this is particularly challenging. 
But if it were, this would be a particular trick, because then that numpy wheel would depend on _some_ BLAS wheel, but there may be more than one option -- how would you express that???? -Chris > You could have a BLAS distribution that is just a shim for a system BLAS > that was installed some other way. > > pip install --install-option='--blaslib=/usr/lib/libblas' systemblas > pip install numpy > > That would give linux distros a way to provide the BLAS library that > python/pip understands without everything being statically linked and > without pip needing to understand the distro package manager. Also python > packages that want BLAS can use the Python import system to locate the BLAS > library making it particularly simple for them and allowing distros to move > things around as desired. > > I would like it if this were possible even without wheels. I'd be happy > just that the commands to download a BLAS library, compile it, install it > non-globally, and configure numpy to use it would be that simple. If it > worked with wheels then that'd be a massive win. > > -- > Oscar > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jul 27 16:19:05 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Jul 2015 00:19:05 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 21 July 2015 at 04:37, Paul Moore wrote: > On 20 July 2015 at 18:37, Chris Barker wrote: >> sure -- but isn't that use-case already supported by wheel -- define your >> own wheelhouse that has the ABI you know you need, and point pip to it. > > I presume the issue is wanting to have a single shared wheelhouse for > a (presumably limited) number of platforms. 
So being able to specify a > (completely arbitrary) local platform name at build and install time > sounds like a viable option. While supporting multiple distros in a single repo is indeed one use case (and the one that needs to be solved to allow distribution via PyPI), the problem I'm interested in isn't the "success case" where a precompiled Linux wheel stays nicely confined to the specific environment it was built to target, but rather the failure mode where a file "escapes". Currently, there's nothing in a built Linux wheel file to indicate its *intended* target environment, which makes debugging ABI mismatches incredibly difficult. By contrast, if the wheel filename says "Fedora 22" and you're trying to run it on "Ubuntu 14.04" and getting a segfault, you have a pretty good hint as to the likely cause of your problem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From nate at bx.psu.edu Mon Jul 27 21:07:57 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Mon, 27 Jul 2015 15:07:57 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: Hi all, Thanks for the lively debate - I sent the message to start the thread and then had a week's vacation - and I appreciate the discussion that took place in the interim. I've encountered all of the problems discussed here, especially the dependency problems, both within Python and in other attempts at package management on distributed heterogeneous systems. For Galaxy's controlled ecosystem we deal with this using static linking (e.g. our psycopg2 egg is statically linked to a version of libpq5 built for the egg), but this is not an ideal solution for a variety of reasons that I doubt I need to explain. As for the Python side of things, I certainly agree with the point raised by Leonardo that an ENOENT is probably easier for most to debug than a missing Python.h.
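That trade-off (a clear runtime error versus a confusing build-time one) is also why some packages degrade gracefully instead of failing outright. A minimal sketch of that optional-extension pattern, where "_speedups" is a hypothetical compiled module name used purely for illustration:

```python
# Sketch of the optional-compiled-extension pattern: prefer the C
# implementation when its module (and underlying shared library) can be
# loaded, otherwise degrade to a slower pure-Python implementation.
# "_speedups" is a hypothetical module name, not a real extension.
try:
    from _speedups import tokenize  # C implementation, may be absent
except ImportError:
    def tokenize(text):
        """Pure-Python fallback with the same behaviour."""
        return text.split()

tokens = tokenize("pip install numpy")  # works either way
```

The cost, of course, is that the package author has to maintain both implementations.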
For what it's worth, some libraries like PyYAML have a partial solution for this: If libyaml.so.X is not found at runtime, it defaults to a pure Python implementation. This is not ideal, for sure, nor will it be possible for all packages, and it depends on the package author to implement a pure Python version, but it does avoid an outright runtime failure. I hope - and I think that Nick is advocating for this - that incremental improvements can be made, rather than what's been the case so far: identifying the myriad of problems and the shortcomings of the packaging format(s), only to stall on making progress towards a solution. As to the comments regarding our needs being met today with a wheelhouse, while this is partially true (e.g. we've got our own PyPI up at https://wheels.galaxyproject.org), we still need to settle on an overspecified tag standard and fix SOABI support in Python 2.x in order to avoid having to ship a modified wheel/pip with Galaxy. Is there any specific direction the Distutils-SIG would like me to take to continue this work? Thanks, --nate On Mon, Jul 27, 2015 at 10:19 AM, Nick Coghlan wrote: > On 21 July 2015 at 04:37, Paul Moore wrote: > > On 20 July 2015 at 18:37, Chris Barker wrote: > >> sure -- but isn't that use-case already supported by wheel -- define > your > >> own wheelhouse that has the ABI you know you need, and point pip to it. > > > > I presume the issue is wanting to have a single shared wheelhouse for > > a (presumably limited) number of platforms. So being able to specify a > > (completely arbitrary) local platform name at build and install time > > sounds like a viable option. 
> > While supporting multiple distros in a single repo is indeed one use > case (and the one that needs to be solved to allow distribution via > PyPI), the problem I'm interested in isn't the "success case" where a > precompiled Linux wheel stays nicely confined to the specific > environment it was built to target, but rather the failure mode where > a file "escapes". > > Currently, there's nothing in a built Linux wheel file to indicate its > *intended* target environment, which makes debugging ABI mismatches > incredibly difficult. By contrast, if the wheel filename says "Fedora > 22" and you're trying to run it on "Ubuntu 14.04" and getting a > segfault, you have a pretty good hint as to the likely cause of your > problem. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimfulton.info Tue Jul 28 15:00:47 2015 From: jim at jimfulton.info (Jim Fulton) Date: Tue, 28 Jul 2015 09:00:47 -0400 Subject: [Distutils] Table of contents formatting in PyPI pages generated from long descriptions Message-ID: I like to include tables of contents in my distribution long descriptions. Normally, ReST formats these with links, so that when someone clicks on an entry in the TOC, they jump to that position in the document. Recently (last month?) PyPI stopped displaying TOCs with links. Was this intentional?
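For reference, the linked TOCs in question come from ReST's ``contents`` directive; a minimal long description that should render with one looks like this (illustrative only):

```rst
My Package
==========

.. contents::

Introduction
------------

The ``contents`` directive above renders as a bulleted TOC whose
entries link to ``#introduction`` and the other section anchors.
```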
Jim -- Jim Fulton http://jimfulton.info From graffatcolmingov at gmail.com Tue Jul 28 15:10:35 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 28 Jul 2015 08:10:35 -0500 Subject: [Distutils] Table of contents formatting in PyPI pages generated from long descriptions In-Reply-To: References: Message-ID: If I remember correctly the readme project is now used to render that information. Does that project support rendering TOCs? If not, support for that may need to be added. On Jul 28, 2015 8:01 AM, "Jim Fulton" wrote: > I like to include tables of contents in my distribution long descriptions. > Normally, ReST formats these with links, so that when someone clicks on > an entry in the TOC, they jump to that position in the document. > > Recently (last month?) PyPI stopped displaying TOCs with links. > Was this intentional? > > Jim > > -- > Jim Fulton > http://jimfulton.info > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Tue Jul 28 17:02:05 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 28 Jul 2015 15:02:05 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Fri, 24 Jul 2015 at 19:53 Chris Barker wrote: > On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin < > oscar.j.benjamin at gmail.com> wrote: > >> >> I think it would be great to just package these up as wheels and put them >> on PyPI. >> > > that's the point -- there is no way with the current spec to specify a > wheel dependency as opposed to a package dependency. i.e. this particular > binary numpy wheel depends on this other wheel, whereas the numpy source > package does not have that dependency -- and, indeed, a wheel for one > platform may have different dependencies than on other platforms.
> I thought it was possible to do this with wheels. It's already possible to have wheels or sdists whose dependencies vary by platform, I thought. The BLAS dependency is different. In particular the sdist is compatible with more cases than a wheel would be so the built wheel would have a more precise requirement than the sdist. Is that not possible with pip/wheels/PyPI or is that a limitation of using setuptools to build the wheel? > So numpy could depend on "blas" and there could be a few different >> distributions on PyPI that provide "blas" representing the different >> underlying libraries. If I want to install numpy with a particular one I >> can just do: >> >> pip install gotoblas # Installs the BLAS library within Python dirs >> pip install numpy >> > > well, different implementations of BLAS are theoretically ABI compatible, > but as I understand it, it's not actually that simple, so this is > particularly challenging. > > But if it were, this would be a particular trick, because then that numpy > wheel would depend on _some_ BLAS wheel, but there may be more than one > option -- how would you express that???? > I imagined having numpy Require "blas OR openblas". Then openblas package Provides "blas". Any other BLAS library also provides "blas". If you do "pip install numpy" and "blas" is already provided then the numpy wheel installs fine. Otherwise it falls back to installing openblas. Potentially "blas" is not specific enough so the label could be "blas-gfortran" to express the ABI. -- Oscar -------------- next part -------------- An HTML attachment was scrubbed...
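Oscar's Provides/Requires idea can be made concrete with a toy resolver. This is purely illustrative -- the "OR" requirement syntax and all distribution names here are assumptions from the proposal above, not anything pip actually implements:

```python
# Toy sketch of the Provides/Requires idea: a requirement may list
# alternatives ("blas OR openblas"); it is satisfied if any installed
# distribution Provides one of them, otherwise we fall back to
# installing an alternative that exists as a real distribution.
# All distribution names are hypothetical.
PROVIDES = {
    "openblas": {"openblas", "blas"},
    "gotoblas": {"gotoblas", "blas"},
    "systemblas": {"systemblas", "blas"},
}

def resolve(requirement, installed):
    """Return the distribution to install, or None if already satisfied."""
    alternatives = [alt.strip() for alt in requirement.split(" OR ")]
    provided = set()
    for dist in installed:
        provided |= PROVIDES.get(dist, {dist})
    if any(alt in provided for alt in alternatives):
        return None  # some installed distribution already Provides it
    for alt in alternatives:
        if alt in PROVIDES:
            return alt  # fall back to installing this alternative
    return None  # nothing can satisfy the requirement
```

With this, an already-installed gotoblas satisfies numpy's "blas OR openblas" requirement, while on a clean system the openblas fallback gets picked.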
URL: From wes.turner at gmail.com Tue Jul 28 18:21:05 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 28 Jul 2015 11:21:05 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Jul 28, 2015 10:02 AM, "Oscar Benjamin" wrote: > > On Fri, 24 Jul 2015 at 19:53 Chris Barker wrote: >> >> On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin < oscar.j.benjamin at gmail.com> wrote: >>> >>> >>> I think it would be great to just package these up as wheels and put them on PyPI. >> >> >> that's the point -- there is no way with the current spec to specify a wheel dependency as opposed to a package dependency. i.e. this particular binary numpy wheel depends on this other wheel, whereas the numpy source package does not have that dependency -- and, indeed, a wheel for one platform may have different dependencies than on other platforms. > > > I thought it was possible to do this with wheels. It's already possible to have wheels or sdists whose dependencies vary by platform I thought. > > The BLAS dependency is different. In particular the sdist is compatible with more cases than a wheel would be so the built wheel would have a more precise requirement than the sdist. Is that not possible with pip/wheels/PyPI or is that a limitation of using setuptools to build the wheel? > >>> >>> So numpy could depend on "blas" and there could be a few different distributions on PyPI that provide "blas" representing the different underlying libraries. If I want to install numpy with a particular one I can just do: >>> >>> pip install gotoblas # Installs the BLAS library within Python dirs >>> pip install numpy >> >> >> well, different implementations of BLAS are theoretically ABI compatible, but as I understand it, it's not actually that simple, so this is particularly challenging.
>> >> But if it were, this would be a particular trick, because then that numpy wheel would depend on _some_ BLAS wheel, but there may be more than one option -- how would you express that???? > > > I imagined having numpy Require "blas OR openblas". Then openblas package Provides "blas". Any other BLAS library also provides "blas". If you do "pip install numpy" and "blas" is already provided then the numpy wheel installs fine. Otherwise it falls back to installing openblas. > > Potentially "blas" is not specific enough so the label could be "blas-gfortran" to express the ABI. BLAS may not be the best example, but should we expect such linked interfaces to change over time? (And e.g. be versioned dependencies with shim packages that have check functions)? ... How is an ABI constraint different from a package dependency? iiuc, ABI tags are thus combinatorial with package/wheel dependency strings? Conda/pycosat solve this with "preprocessing selectors": http://conda.pydata.org/docs/building/meta-yaml.html#preprocessing-selectors

```
linux     True if the platform is Linux
linux32   True if the platform is Linux and the Python architecture is 32-bit
linux64   True if the platform is Linux and the Python architecture is 64-bit
armv6     True if the platform is Linux and the Python architecture is armv6l
osx       True if the platform is OS X
unix      True if the platform is Unix (OS X or Linux)
win       True if the platform is Windows
win32     True if the platform is Windows and the Python architecture is 32-bit
win64     True if the platform is Windows and the Python architecture is 64-bit
py        The Python version as a two digit string (like '27').
          See also the CONDA_PY environment variable below.
py3k      True if the Python major version is 3
py2k      True if the Python major version is 2
py26      True if the Python version is 2.6
py27      True if the Python version is 2.7
py33      True if the Python version is 3.3
py34      True if the Python version is 3.4
np        The NumPy version as a two digit string (like '17').
          See also the CONDA_NPY environment variable below.

Because the selector is any valid Python expression, complicated logic is possible.
```

> > -- > Oscar > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.van.rees at zestsoftware.nl Wed Jul 29 00:40:03 2015 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Wed, 29 Jul 2015 00:40:03 +0200 Subject: [Distutils] Table of contents formatting in PyPI pages generated from long descriptions In-Reply-To: References: Message-ID: I noticed the same today. I went on a bug hunt and found that this was fixed in html5lib. From the changelog:

0.999999/1.0b7 - Released on July 7, 2015
Fix #189: fix the sanitizer to allow relative URLs again (as it did prior to 0.9999/1.0b5).

So PyPI would need to start using a newer html5lib. I have just now opened a bug report for good measure: https://bitbucket.org/pypa/pypi/issues/309/ Maurits Ian Cordasco schreef op 28-07-15 om 15:10: > If I remember correctly the readme project is now used to render that information. Does that project support rendering TOCs? If not, support for that may need to be added. > > On Jul 28, 2015 8:01 AM, "Jim Fulton" > wrote: > > I like to include tables of contents in my distribution long > descriptions. > Normally, ReST formats these with links, so that when someone clicks on > an entry in the TOC, they jump to that position in the document. > > Recently (last month?) PyPI stopped displaying TOCs with links. > Was this intentional?
> > Jim > > -- > Jim Fulton > http://jimfulton.info > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl
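A footnote on the sanitizer bug class Maurits tracked down: a URL allowlist that only accepts absolute http(s) URLs silently strips the relative "#anchor" links a ReST table of contents emits. A stdlib-only sketch of the broken versus fixed check -- illustrative only, this is not html5lib's actual code:

```python
from urllib.parse import urlparse

def strict_allowed(url):
    # Over-strict check: only absolute http(s) URLs survive, so TOC
    # fragment links like "#introduction" get stripped from the page.
    return urlparse(url).scheme in ("http", "https")

def fixed_allowed(url):
    # A fix in the spirit of the 0.999999/1.0b7 changelog entry: also
    # allow relative URLs (empty scheme), keeping in-page anchors,
    # while still rejecting e.g. javascript: URLs.
    return urlparse(url).scheme in ("", "http", "https")
```

With the strict check, `strict_allowed("#introduction")` is rejected even though the link is harmless, which matches the symptom Jim reported.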