From krabacher.rachel at gmail.com Thu Aug 6 18:05:37 2015 From: krabacher.rachel at gmail.com (Rachel Krabacher) Date: Thu, 6 Aug 2015 12:05:37 -0400 Subject: [Distutils] PIP Installation Question Message-ID: Hello! My name is Rachel and I am a master's student using Python as part of my thesis work analysis. I have found a package online and have downloaded it from github, but am having difficulties getting Python to recognize that I have downloaded the files. I am relatively new to Python and have been working in ipython notebook, so other command terminals are difficult for me. I have come across your article titled "Installing Python Modules" where you discuss using pip to install different packages. I have looked in my packages in Canopy's Package Manager and it says that I have pip 7.1.0-1 installed, but for some reason when I type in the code: python -m pip install SomePackage, I am receiving an error that pip is an invalid syntax. Again I am using ipython notebook to run this, not sure if that is contributing to the issue. Any suggestions you have would be greatly appreciated! Thank you very much! Rachel -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu Aug 6 18:16:54 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 6 Aug 2015 09:16:54 -0700 Subject: [Distutils] PIP Installation Question In-Reply-To: References: Message-ID: On Thu, Aug 6, 2015 at 9:05 AM, Rachel Krabacher wrote: > My name is Rachel and I am a master's student using Python as part of my > thesis work analysis. I have found a package online and have downloaded it > from github, but am having difficulties getting Python to recognize that I > have downloaded the files. I am relatively new to Python and have been > working in ipython notebook, so other command terminals are difficult for > me. > well, "pip" is a "shell" command, not a Python one, so you need to run it in a regular command shell, not a Python command prompt. > I have come across your article titled "Installing Python Modules" where > you discuss using pip to install different packages. I have looked in my > packages in Canopy's Package Manager > If you are using the Canopy distribution of Python, you should look at its docs for the recommended way to install packages that are not included by the Canopy Package Manager. I haven't used Canopy, so I can't help here. However, it's highly likely that it does indeed include a standard pip install, so: and it says that I have pip 7.1.0-1 installed, but for some reason when I > type in the code: python -m pip install SomePackage, > I am receiving an error that pip is an invalid syntax. > yes, because that is a "shell" command, not a Python one, so it won't run in the Python interpreter (which is what IPython is giving you). Two options: 1) It is a really good idea to get a bit familiar with using the shell (command line interface). Look online for tutorials. Here is one option: http://cli.learncodethehardway.org/book/ 2) IPython lets you pass shell commands off from inside IPython or the notebook, so you may be able to do: ! pip install SomePackage or, if the PATH isn't set right: ! python -m pip install SomePackage > Again I am using ipython notebook to run this, not sure if that is > contributing to the issue. > yes, it sure is :-) Good luck, -Chris > Any suggestions you have would be greatly appreciated! > > Thank you very much! > Rachel > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL:
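To make Chris's two options concrete side by side, here is a minimal sketch ("SomePackage" is the placeholder name from the thread; the subprocess form is an assumption about how you might script the same install from plain Python, not something from the emails above):

```
# Option 1: from a regular command shell (not the Python prompt):
#     python -m pip install SomePackage
#
# Option 2: from an IPython / notebook cell, "!" hands the rest of the
# line to the system shell:
#     !python -m pip install SomePackage
#
# The same install driven from plain Python code, pinned to the running
# interpreter so the matching pip is used:
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "SomePackage"])
```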
From nate at bx.psu.edu Wed Aug 12 22:21:27 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Wed, 12 Aug 2015 16:21:27 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: Hello all, I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg. https://bitbucket.org/pypa/wheel/pull-request/54/ If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted? Thanks, --nate On Tue, Jul 28, 2015 at 12:21 PM, Wes Turner wrote: > > On Jul 28, 2015 10:02 AM, "Oscar Benjamin" > wrote: > > > > On Fri, 24 Jul 2015 at 19:53 Chris Barker wrote: > >> > >> On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin < oscar.j.benjamin at gmail.com> wrote: > >>> > >>> > >>> I think it would be great to just package these up as wheels and put > them on PyPI. > >> > >> > >> that's the point -- there is no way with the current spec to specify a > wheel dependency as opposed to a package dependency. i.e. this particular > binary numpy wheel depends on this other wheel, whereas the numpy source > package does not have that dependency -- and, indeed, a wheel for one > platform may have different dependencies than other platforms. > > > > > > I thought it was possible to do this with wheels. It's already possible > to have wheels or sdists whose dependencies vary by platform I thought. > > > > The BLAS dependency is different. In particular the sdist is compatible > with more cases than a wheel would be so the built wheel would have a more > precise requirement than the sdist. Is that not possible with > pip/wheels/PyPI or is that a limitation of using setuptools to build the > wheel? > > > >>> > >>> So numpy could depend on "blas" and there could be a few different > distributions on PyPI that provide "blas" representing the different > underlying libraries. If I want to install numpy with a particular one I > can just do: > >>> > >>> pip install gotoblas # Installs the BLAS library within Python > dirs > >>> pip install numpy > >> > >> > >> well, different implementations of BLAS are theoretically ABI > compatible, but as I understand it, it's not actually that simple, so this > is particularly challenging. > >> > >> > >> But if it were, this would be a particular trick, because then that > numpy wheel would depend on _some_ BLAS wheel, but there may be more than > one option -- how would you express that???? > > > > > > I imagined having numpy Require "blas OR openblas". Then openblas > package Provides "blas". Any other BLAS library also provides "blas". If > you do "pip install numpy" and "blas" is already provided then the numpy > wheel installs fine. Otherwise it falls back to installing openblas. > > > > Potentially "blas" is not specific enough so the label could be > "blas-gfortran" to express the ABI. > > BLAS may not be the best example, but should we expect such linked > interfaces to change over time? (And e.g. be versioned dependencies with > shim packages that have check functions)? > > ... How is an ABI constraint different from a package dependency? > > iiuc, ABI tags are thus combinatorial with package/wheel dependency > strings? > > Conda/pycosat solve this with "preprocessing selectors" : > http://conda.pydata.org/docs/building/meta-yaml.html#preprocessing-selectors > : > > ``` > linux True if the platform is Linux > linux32 True if the platform is Linux and the Python architecture is 32-bit > linux64 True if the platform is Linux and the Python architecture is 64-bit > armv6 True if the platform is Linux and the Python architecture is armv6l > osx True if the platform is OS X > unix True if the platform is Unix (OS X or Linux) > win True if the platform is Windows > win32 True if the platform is Windows and the Python architecture is 32-bit > win64 True if the platform is Windows and the Python architecture is 64-bit > py The Python version as a two digit string (like '27'). See also the > CONDA_PY environment variable below. > py3k True if the Python major version is 3 > py2k True if the Python major version is 2 > py26 True if the Python version is 2.6 > py27 True if the Python version is 2.7 > py33 True if the Python version is 3.3 > py34 True if the Python version is 3.4 > np The NumPy version as a two digit string (like '17'). See also the > CONDA_NPY environment variable below. > Because the selector is any valid Python expression, complicated logic is > possible. > ``` > > > > > -- > > Oscar > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
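For readers following along: the idea is that a distribution vendor ships a small file telling installers which other platform tags its platform can safely consume. A purely hypothetical sketch of such a file and of how an installer might read it — the authoritative format and location are whatever the pull request above defines; the JSON layout, path, and tag names here are illustrative assumptions only:

```
# Hypothetical /etc/python/binary-compatibility.cfg (illustration only):
#
#     {
#         "linux_x86_64_ubuntu_14_04": {
#             "install": ["linux_x86_64_ubuntu_14_04", "linux_x86_64_centos_5"]
#         }
#     }
#
# A sketch of how an installer could consume it:
import json

def compatible_tags(current_tag, path="/etc/python/binary-compatibility.cfg"):
    try:
        with open(path) as f:
            cfg = json.load(f)
    except IOError:
        # No vendor-provided file: fall back to the exact platform tag.
        return [current_tag]
    return cfg.get(current_tag, {}).get("install", [current_tag])
```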
From robertc at robertcollins.net Thu Aug 13 01:49:08 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 13 Aug 2015 11:49:08 +1200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: I'm not sure what will be needed to get the PR accepted; at PyCon AU Tennessee Leuwenberg started drafting a PEP for the expression of dependencies on e.g. BLAS - it's been given number 497, and is in the packaging-peps repo; I'm working on updating it now. On 13 August 2015 at 08:21, Nate Coraor wrote: > Hello all, > > I've implemented the wheel side of Nick's suggestion from very early in this > thread to support a vendor-providable binary-compatibility.cfg. > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > If this is acceptable, I'll add support for it to the pip side. What else > should be implemented at this stage to get the PR accepted? > > Thanks, > --nate > > On Tue, Jul 28, 2015 at 12:21 PM, Wes Turner wrote: >> >> >> On Jul 28, 2015 10:02 AM, "Oscar Benjamin" >> wrote: >> > >> > On Fri, 24 Jul 2015 at 19:53 Chris Barker wrote: >> >> >> >> On Tue, Jul 21, 2015 at 9:38 AM, Oscar Benjamin >> >> wrote: >> >>> >> >>> >> >>> I think it would be great to just package these up as wheels and put >> >>> them on PyPI. >> >> >> >> >> >> that's the point -- there is no way with the current spec to specify a >> >> wheel dependency as opposed to a package dependency.
i.e. this particular >> >> binary numpy wheel depends on this other wheel, whereas the numpy source >> >> package does not have that dependency -- and, indeed, a wheel for one >> >> platform may have different dependencies than other platforms. >> > >> > >> > I thought it was possible to do this with wheels. It's already possible >> > to have wheels or sdists whose dependencies vary by platform I thought. >> > >> > The BLAS dependency is different. In particular the sdist is compatible >> > with more cases than a wheel would be so the built wheel would have a more >> > precise requirement than the sdist. Is that not possible with >> > pip/wheels/PyPI or is that a limitation of using setuptools to build the >> > wheel? >> > >> >>> >> >>> So numpy could depend on "blas" and there could be a few different >> >>> distributions on PyPI that provide "blas" representing the different >> >>> underlying libraries. If I want to install numpy with a particular one I can >> >>> just do: >> >>> >> >>> pip install gotoblas # Installs the BLAS library within Python >> >>> dirs >> >>> pip install numpy >> >> >> >> >> >> well, different implementations of BLAS are theoretically ABI >> >> compatible, but as I understand it, it's not actually that simple, so this >> >> is particularly challenging. >> >> >> >> >> >> But if it were, this would be a particular trick, because then that >> >> numpy wheel would depend on _some_ BLAS wheel, but there may be more than >> >> one option -- how would you express that???? >> > >> > >> > I imagined having numpy Require "blas OR openblas". Then openblas >> > package Provides "blas". Any other BLAS library also provides "blas". If you >> > do "pip install numpy" and "blas" is already provided then the numpy wheel >> > installs fine. Otherwise it falls back to installing openblas. >> > >> > Potentially "blas" is not specific enough so the label could be >> > "blas-gfortran" to express the ABI. >> >> BLAS may not be the best example, but should we expect such linked >> interfaces to change over time? (And e.g. be versioned dependencies with >> shim packages that have check functions)? >> >> ... How is an ABI constraint different from a package dependency? >> >> iiuc, ABI tags are thus combinatorial with package/wheel dependency >> strings? >> >> Conda/pycosat solve this with "preprocessing selectors" : >> http://conda.pydata.org/docs/building/meta-yaml.html#preprocessing-selectors >> : >> >> ``` >> linux True if the platform is Linux >> linux32 True if the platform is Linux and the Python architecture is >> 32-bit >> linux64 True if the platform is Linux and the Python architecture is >> 64-bit >> armv6 True if the platform is Linux and the Python architecture is armv6l >> osx True if the platform is OS X >> unix True if the platform is Unix (OS X or Linux) >> win True if the platform is Windows >> win32 True if the platform is Windows and the Python architecture is >> 32-bit >> win64 True if the platform is Windows and the Python architecture is >> 64-bit >> py The Python version as a two digit string (like '27'). See also the >> CONDA_PY environment variable below. >> py3k True if the Python major version is 3 >> py2k True if the Python major version is 2 >> py26 True if the Python version is 2.6 >> py27 True if the Python version is 2.7 >> py33 True if the Python version is 3.3 >> py34 True if the Python version is 3.4 >> np The NumPy version as a two digit string (like '17'). See also the >> CONDA_NPY environment variable below.
>> Because the selector is any valid Python expression, complicated logic is >> possible. >> ``` >> >> > >> > -- >> > Oscar >> > >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> > >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Thu Aug 13 02:51:07 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 12 Aug 2015 17:51:07 -0700 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: On Aug 12, 2015 16:49, "Robert Collins" wrote: > > I'm not sure what will be needed to get the PR accepted; at PyCon AU > Tennessee Leuwenberg started drafting a PEP for the expression of > dependencies on e.g. BLAS - it's been given number 497, and is in the > packaging-peps repo; I'm working on updating it now. I wanted to take a look at this PEP, but I can't seem to find it. PEP 497: https://www.python.org/dev/peps/pep-0497/ appears to be something else entirely? I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) have actively been making plans to ship a BLAS wheel on windows, and AFAICT this is totally doable now -- the blocker is windows toolchain issues, not pypa-related infrastructure. Specifically the idea is to have a wheel that contains the shared library as a regular old data file, plus a stub python package that knows how to find this data file and how to make it accessible to the linker. So numpy/__init__.py would start by calling: import pyopenblas1 # on Linux modifies LD_LIBRARY_PATH, # on Windows uses ctypes to preload... whatever pyopenblas1.enable() and then get on with things, or the build system might do: import pyopenblas1 pyopenblas1.get_header_directories() pyopenblas1.get_linker_directories() This doesn't help if you want to declare dependencies on external, system managed libraries and have those be automatically somehow provided or checked for, but to me that sounds like an impossible boil-the-ocean project anyway, while the above is trivial and should just work. -n -------------- next part -------------- An HTML attachment was scrubbed... URL:
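A sketch of what that stub package might contain — the package and function names come from the email above; the file layout, library names, and function bodies are assumptions (note that mutating LD_LIBRARY_PATH at runtime only affects child processes, so a real stub would likely preload with ctypes on Linux as well):

```
# Hypothetical pyopenblas1/__init__.py. Assumes the wheel shipped the
# shared library and headers as plain data files inside the package:
#     pyopenblas1/lib/libopenblas.so.0   (openblas.dll on Windows)
#     pyopenblas1/include/cblas.h
import ctypes
import os
import sys

_HERE = os.path.dirname(os.path.abspath(__file__))
_LIBDIR = os.path.join(_HERE, "lib")

def enable():
    # Preload so that extension modules loaded later can resolve the BLAS
    # symbols; RTLD_GLOBAL makes them visible process-wide on POSIX.
    if sys.platform == "win32":
        ctypes.CDLL(os.path.join(_LIBDIR, "openblas.dll"))
    else:
        ctypes.CDLL(os.path.join(_LIBDIR, "libopenblas.so.0"),
                    mode=ctypes.RTLD_GLOBAL)

def get_header_directories():
    # For build systems compiling against the bundled BLAS.
    return [os.path.join(_HERE, "include")]

def get_linker_directories():
    return [_LIBDIR]
```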
From jp at jamezpolley.com Thu Aug 13 03:02:09 2015 From: jp at jamezpolley.com (James Polley) Date: Thu, 13 Aug 2015 01:02:09 +0000 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: It looks like we've had a numbering collision https://github.com/pypa/interoperability-peps/pull/30/files is the PEP Tennessee is working on On Thu, Aug 13, 2015 at 10:56 AM Nathaniel Smith wrote: > On Aug 12, 2015 16:49, "Robert Collins" wrote: > > > > I'm not sure what will be needed to get the PR accepted; at PyCon AU > > Tennessee Leuwenberg started drafting a PEP for the expression of > > dependencies on e.g. BLAS - it's been given number 497, and is in the > > packaging-peps repo; I'm working on updating it now. > > I wanted to take a look at this PEP, but I can't seem to find it. PEP 497: > https://www.python.org/dev/peps/pep-0497/ > appears to be something else entirely? > > I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) > have actively been making plans to ship a BLAS wheel on windows, and AFAICT > this is totally doable now -- the blocker is windows toolchain issues, not > pypa-related infrastructure. > > Specifically the idea is to have a wheel that contains the shared library > as a regular old data file, plus a stub python package that knows how to > find this data file and how to make it accessible to the linker. So > numpy/__init__.py would start by calling: > > import pyopenblas1 > # on Linux modifies LD_LIBRARY_PATH, > # on Windows uses ctypes to preload... whatever > pyopenblas1.enable() > > and then get on with things, or the build system might do: > > import pyopenblas1 > pyopenblas1.get_header_directories() > pyopenblas1.get_linker_directories() > > This doesn't help if you want to declare dependencies on external, system > managed libraries and have those be automatically somehow provided or > checked for, but to me that sounds like an impossible boil-the-ocean > project anyway, while the above is trivial and should just work. > > -n > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Thu Aug 13 05:06:23 2015 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 12 Aug 2015 23:06:23 -0400 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: 2015-08-12 20:51 GMT-04:00 Nathaniel Smith : > > I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) > have actively been making plans to ship a BLAS wheel on windows, and AFAICT > this is totally doable now -- the blocker is windows toolchain issues, not > pypa-related infrastructure. > > Specifically the idea is to have a wheel that contains the shared library as > a regular old data file, plus a stub python package that knows how to find > this data file and how to make it accessible to the linker. So > numpy/__init__.py would start by calling: > > import pyopenblas1 > # on Linux modifies LD_LIBRARY_PATH, > # on Windows uses ctypes to preload... whatever > pyopenblas1.enable() > > and then get on with things, or the build system might do: > > import pyopenblas1 > pyopenblas1.get_header_directories() > pyopenblas1.get_linker_directories() > > This doesn't help if you want to declare dependencies on external, system > managed libraries and have those be automatically somehow provided or > checked for, but to me that sounds like an impossible boil-the-ocean project > anyway, while the above is trivial and should just work. +1 -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From robertc at robertcollins.net Thu Aug 13 05:10:15 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 13 Aug 2015 15:10:15 +1200 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: On 13 August 2015 at 12:51, Nathaniel Smith wrote: > On Aug 12, 2015 16:49, "Robert Collins" wrote: >> >> I'm not sure what will be needed to get the PR accepted; at PyCon AU >> Tennessee Leuwenberg started drafting a PEP for the expression of >> dependencies on e.g.
BLAS - it's been given number 497, and is in the >> packaging-peps repo; I'm working on updating it now. > > I wanted to take a look at this PEP, but I can't seem to find it. PEP 497: > https://www.python.org/dev/peps/pep-0497/ > appears to be something else entirely? > > I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) > have actively been making plans to ship a BLAS wheel on windows, and AFAICT > this is totally doable now -- the blocker is windows toolchain issues, not > pypa-related infrastructure. > > Specifically the idea is to have a wheel that contains the shared library as > a regular old data file, plus a stub python package that knows how to find > this data file and how to make it accessible to the linker. So > numpy/__init__.py would start by calling: > > import pyopenblas1 > # on Linux modifies LD_LIBRARY_PATH, > # on Windows uses ctypes to preload... whatever > pyopenblas1.enable() > > and then get on with things, or the build system might do: > > import pyopenblas1 > pyopenblas1.get_header_directories() > pyopenblas1.get_linker_directories() > > This doesn't help if you want to declare dependencies on external, system > managed libraries and have those be automatically somehow provided or > checked for, but to me that sounds like an impossible boil-the-ocean project > anyway, while the above is trivial and should just work. Well, have a read of the draft. It's a solved problem by e.g. conda, apt, yum, nix and many others. Uploading system .so's is certainly also an option, and I see no reason why we can't do both. I do know that distribution vendors are likely to be highly allergic to the idea of having regular shared libraries present as binaries, but that's a different discussion :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From nate at bx.psu.edu Thu Aug 13 16:07:24 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Thu, 13 Aug 2015 10:07:24 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: > On Aug 12, 2015 13:57, "Nate Coraor" wrote: > > > > Hello all, > > > > I've implemented the wheel side of Nick's suggestion from very early in > this thread to support a vendor-providable binary-compatibility.cfg. > > > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > > > If this is acceptable, I'll add support for it to the pip side. What > else should be implemented at this stage to get the PR accepted? > > From my reading of what the Enthought and Continuum folks were saying > about how they are successfully distributing binaries across different > distributions, it sounds like the additional piece that would take this > from an interesting experiment to basically-immediately-usable would be to > teach pip that if no binary-compatibility.cfg is provided, then it should > assume by default that the compatible systems whose wheels should be > installed are: (1) the current system's exact tag, > This should already be the case - the default tag will no longer be -linux_x86_64, it'd be linux_x86_64_distro_version. > (2) the special hard-coded tag "centos5". (That's what everyone actually > uses in practice, right?) > The idea here is that we should attempt to install centos5 wheels if more > specific wheels for the platform aren't available? --nate > To make this *really* slick, it would be cool if, say, David C. could make > a formal list of exactly which system libraries are important to depend on > (xlib, etc.), and we could hard-code two compatibility profiles > "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that > plus the core too-hard-to-ship libraries), and possibly teach pip how to > check whether that hard-coded core set is available. > > Compare with osx, where there are actually a ton of different ABIs but in > practice everyone distributing wheels basically sat down and picked one and > wrote some ad hoc tools to make it work, and it does: > https://github.com/MacPython/wiki/wiki/Spinning-wheels > > -n > -------------- next part -------------- An HTML attachment was scrubbed... URL:
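A sketch of how Nate's "linux_x86_64_distro_version" style default tag could be derived — an assumption for illustration; the real normalization rules live in the wheel pull request, not here:

```
# Sketch: derive a "linux_x86_64_<distro>_<version>" style tag.
import distutils.util
import platform

def distro_platform_tag():
    # e.g. "linux-x86_64" -> "linux_x86_64"
    tag = distutils.util.get_platform().replace(".", "_").replace("-", "_")
    # platform.linux_distribution() existed in 2015-era Pythons (it was
    # later deprecated and removed; the "distro" package replaces it).
    name, version, _id = platform.linux_distribution(full_distribution_name=False)
    if name:
        tag += "_{0}_{1}".format(name.lower(), version.replace(".", "_"))
    return tag  # e.g. "linux_x86_64_ubuntu_14_04"
```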
From njs at pobox.com Thu Aug 13 03:05:00 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 12 Aug 2015 18:05:00 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 12, 2015 13:57, "Nate Coraor" wrote: > > Hello all, > > I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg. > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted? From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from an interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag, (2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?) To make this *really* slick, it would be cool if, say, David C. could make a formal list of exactly which system libraries are important to depend on (xlib, etc.), and we could hard-code two compatibility profiles "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that plus the core too-hard-to-ship libraries), and possibly teach pip how to check whether that hard-coded core set is available. Compare with osx, where there are actually a ton of different ABIs but in practice everyone distributing wheels basically sat down and picked one and wrote some ad hoc tools to make it work, and it does: https://github.com/MacPython/wiki/wiki/Spinning-wheels -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu Aug 13 08:08:53 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 12 Aug 2015 23:08:53 -0700 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: On Wed, Aug 12, 2015 at 8:10 PM, Robert Collins wrote: > On 13 August 2015 at 12:51, Nathaniel Smith wrote: >> On Aug 12, 2015 16:49, "Robert Collins" wrote: >>> >>> I'm not sure what will be needed to get the PR accepted; at PyCon AU >>> Tennessee Leuwenberg started drafting a PEP for the expression of >>> dependencies on e.g. BLAS - it's been given number 497, and is in the >>> packaging-peps repo; I'm working on updating it now. >> >> I wanted to take a look at this PEP, but I can't seem to find it. PEP 497: >> https://www.python.org/dev/peps/pep-0497/ >> appears to be something else entirely?
>> >> I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) >> have actively been making plans to ship a BLAS wheel on windows, and AFAICT >> this is totally doable now -- the blocker is windows toolchain issues, not >> pypa-related infrastructure. >> >> Specifically the idea is to have a wheel that contains the shared library as >> a regular old data file, plus a stub python package that knows how to find >> this data file and how to make it accessible to the linker. So >> numpy/__init__.py would start by calling: >> >> import pyopenblas1 >> # on Linux modifies LD_LIBRARY_PATH, >> # on Windows uses ctypes to preload... whatever >> pyopenblas1.enable() >> >> and then get on with things, or the build system might do: >> >> import pyopenblas1 >> pyopenblas1.get_header_directories() >> pyopenblas1.get_linker_directories() >> Thanks to James for sending on the link! Two main thoughts, now that I've read it over: 1) The motivating example is somewhat confused -- the draft says: + The example provided in the abstract is a + hypothetical package which needs versions of numpy and scipy, both of which + must have been compiled to be aware of the ATLAS compiled set of linear algebra + libraries (for performance reasons). This sounds esoteric but is, in fact, a + routinely encountered situation which drives people towards using the + alternative packaging for scientific python environments. Numpy and scipy actually work hard to export a consistent, append-only ABI regardless of what libraries are used underneath. (This is actually by far our biggest issue with wheels -- that there's still no way to tag the numpy ABI as part of the ABI string, so in practice it's just impossible to ever have a smooth migration to a new ABI and we have no choice but to forever maintain compatibility with numpy 0.1. But that's not what this proposal addresses.) Possibly part of the confusion here is that Christoph Gohlke's popular numpy+scipy builds use a hack where instead of making the wheels self-contained via statically linking or something like that, then he ships the actual libBLAS.dll inside the numpy wheel, and then the scipy wheel has some code added that magically "knows" that there is this special numpy wheel that it can find libBLAS.dll inside and use it directly from scipy's own extensions. But this coupling is pretty much just broken, and it directly motivates the blas-in-its-own-wheel design I sketched out above. (I guess the one exception is that if you have a numpy or scipy build that dynamically links to a library like BLAS, and then another extension that links to a different BLAS with an incompatible ABI, and the two BLAS libraries have symbol name collisions, then that could be a problem because ELF is frustrating like that. But the obvious solution here is to be careful about how you do your builds -- either by using static linking, or making sure that incompatible ABIs get different symbol names.) Anyway, this doesn't particularly undermine the PEP, but it would be good to use a more realistic motivating example. 2) AFAICT, the basic goal of this PEP is to provide machinery to let one reliably build a wheel for some specific version of some specific distribution, while depending on vendor-provided libraries for various external dependencies, and providing a nice user experience (e.g., telling users explicitly which vendor-provided libraries they need to install). 
I say this because strings like "libblas1.so" or "kernel.h" do not define any fixed ABI or APIs, unless you are implicitly scoping to some particular distribution with at least some minimum version constraint. It seems like a reasonable effort at solving this problem, and I guess there are probably some people somewhere that have this problem, but my concern is that I don't actually know any of those people. The developers I know instead have the problem of, they want to be able to provide a small finite number of binaries (ideally six binaries per Python version: {32 bit, 64 bit} * {windows, osx, linux}) that together will Just Work on 99% of end-user systems. And that's the problem that Enthought, Continuum, etc., have been solving for years, and which wheels already mostly solve on windows and osx, so it seems like a reasonable goal to aim for. But I don't see how this PEP gets us any closer to that. Again, not really a criticism -- these goals aren't contradictory and it's great if pip ends up being able to handle both common and niche use cases. But I want to make sure that we're clear that these goals are different and which one each proposal is aimed at. >> This doesn't help if you want to declare dependencies on external, system >> managed libraries and have those be automatically somehow provided or >> checked for, but to me that sounds like an impossible boil-the-ocean project >> anyway, while the above is trivial and should just work. > > Well, have a read of the draft. > > It's a solved problem by e.g. conda, apt, yum, nix and many others. None of these projects allow a .deb to depend on .rpms etc. -- they all require that they own the whole world with some narrow, carefully controlled exceptions (e.g. anaconda requires some non-trivial runtime on the host system -- glibc, glib, pcre, expat, ... -- but it's a single fixed set that they've empirically determined is close enough to universally available in practice). The "boil the ocean" part is the part where everybody who wants to distribute wheels has to go around and figure out every possible permutation of ABIs on every possible external packaging system and provide separate wheels for each of them. > Uploading system .so's is certainly also an option, and I see no > reason why we can't do both. > > I do know that distribution vendors are likely to be highly allergic > to the idea of having regular shared libraries present as binaries, > but that's a different discussion :) Yeah, but basically in the same way that they're allergic to all wheels, period, so ... :-). I think in the long run the only realistic approach is for most users to either be getting blas+numpy from some external system like macports/conda/yum/... or else to be getting blas+numpy from official wheels on pypi. And neither of these two scenarios seems to benefit from the functionality described in this PEP. (Final emphasis: this is all just my own opinion based on my far-from-omniscient view of the packaging system, please tell me if I'm making some ridiculous error, or if well-actually libBLAS is special and there is some other harder case I'm not thinking of, etc.) -n -- Nathaniel J. Smith -- http://vorpus.org From stevenybw at hotmail.com Thu Aug 13 10:42:53 2015 From: stevenybw at hotmail.com (Bowen Yu) Date: Thu, 13 Aug 2015 16:42:53 +0800 Subject: [Distutils] Problem Report Message-ID: Dear Maintainers: This problem occurred when:
try to "pip install theano" And I found the problem is in distutils.command.build_scripts module's copy_scripts function, on line 106 executable = os.fsencode(executable) shebang = b"#!" + executable + post_interp + b"\n" try: shebang.decode('utf-8') actually os.fsencode will encode the path into GBK encoding on windows, it's certainly that will fail to decode via utf-8. Solution: #executable = os.fsencode(executable) (delete this line)executable = executable.encode('utf-8') Theano successfully installed after this patch. Thank you!Bowen Yu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Thu Aug 13 19:52:20 2015 From: cournape at gmail.com (David Cournapeau) Date: Thu, 13 Aug 2015 18:52:20 +0100 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 2:05 AM, Nathaniel Smith wrote: > On Aug 12, 2015 13:57, "Nate Coraor" wrote: > > > > Hello all, > > > > I've implemented the wheel side of Nick's suggestion from very early in > this thread to support a vendor-providable binary-compatibility.cfg. > > > > https://bitbucket.org/pypa/wheel/pull-request/54/ > > > > If this is acceptable, I'll add support for it to the pip side. What > else should be implemented at this stage to get the PR accepted? > > From my reading of what the Enthought and Continuum folks were saying > about how they are successfully distributing binaries across different > distributions, it sounds like the additional piece that would take this > from a interesting experiment to basically-immediately-usable would be to > teach pip that if no binary-compatibility.cfg is provided, then it should > assume by default that the compatible systems whose wheels should be > installed are: (1) the current system's exact tag, (2) the special > hard-coded tag "centos5". (That's what everyone actually uses in practice, > right?) > > To make this *really* slick, it would be cool if, say, David C. could make > a formal list of exactly which system libraries are important to depend on > (xlib, etc.), and we could hard-code two compatibility profiles > "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that > plus the core too-hard-to-ship libraries), and possibly teach pip how to > check whether that hard-coded core set is available. > So this is a basic list I got w/ a few minutes of scripting, by installing our 200 most used packages on centos 5, ldd'ing all of the .so, and filtering out a few things/bugs of some of our own packages): /usr/lib64/libatk-1.0.so.0 /usr/lib64/libcairo.so.2 /usr/lib64/libdrm.so.2 /usr/lib64/libfontconfig.so.1 /usr/lib64/libGL.so.1 /usr/lib64/libGLU.so.1 /usr/lib64/libstdc++.so.6 /usr/lib64/libX11.so.6 /usr/lib64/libXau.so.6 /usr/lib64/libXcursor.so.1 /usr/lib64/libXdmcp.so.6 /usr/lib64/libXext.so.6 /usr/lib64/libXfixes.so.3 /usr/lib64/libXft.so.2 /usr/lib64/libXinerama.so.1 /usr/lib64/libXi.so.6 /usr/lib64/libXrandr.so.2 /usr/lib64/libXrender.so.1 /usr/lib64/libXt.so.6 /usr/lib64/libXv.so.1 /usr/lib64/libXxf86vm.so.1 /usr/lib64/libz.so.1 This list should only be taken as a first idea, I can work on a more precise list including the versions if that's deemed useful. One significant issue is SSL: in theory, we (as a downstream distributor) really want to avoid distributing such a key piece of infrastructure, but in practice, there are so many versions which are incompatible across distributions that it is not an option. 
> Compare with osx, where there are actually a ton of different ABIs but in > practice everyone distributing wheels basically sat down and picked one and > wrote some ad hoc tools to make it work, and it does: > https://github.com/MacPython/wiki/wiki/Spinning-wheels > > -n > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leorochael at gmail.com Thu Aug 13 21:30:31 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Thu, 13 Aug 2015 16:30:31 -0300 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 13 August 2015 at 11:07, Nate Coraor wrote: > On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >> >> On Aug 12, 2015 13:57, "Nate Coraor" wrote: >> > >> > Hello all, >> > >> > I've implemented the wheel side of Nick's suggestion from very early in >> this thread to support a vendor-providable binary-compatibility.cfg. >> > >> > https://bitbucket.org/pypa/wheel/pull-request/54/ >> > >> > If this is acceptable, I'll add support for it to the pip side. What >> else should be implemented at this stage to get the PR accepted? >> >> From my reading of what the Enthought and Continuum folks were saying >> about how they are successfully distributing binaries across different >> distributions, it sounds like the additional piece that would take this >> from an interesting experiment to basically-immediately-usable would be to >> teach pip that if no binary-compatibility.cfg is provided, then it should >> assume by default that the compatible systems whose wheels should be >> installed are: (1) the current system's exact tag, >> > > This should already be the case - the default tag will no longer be > -linux_x86_64, it'd be linux_x86_64_distro_version. > > >> (2) the special hard-coded tag "centos5". (That's what everyone actually >> uses in practice, right?) >> > > The idea here is that we should attempt to install centos5 wheels if more > specific wheels for the platform aren't available? > Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the standard behavior for installation being the exact platform tag, and an automatic fallback to cento5. IMO, on Linux, the default should always be to opt in to the desired platform tags. We could make it so that the word `default` inside `binary-compatibility.cfg` means an exact match on the distro version, so that we could simplify the documentation. But I don't want to upgrade to pip and suddenly find myself installing binary wheels compiled by whomever for whatever platform I have no control over, even assuming the best of the package builders' intentions. And I certainly don't want centos5 wheels accidentally installed on my ubuntu servers unless I very specifically asked for them. The tiny pain inflicted by telling users to add a one-line text file in a very well known location (or two lines, for the added centos5), so that they can get the benefit of binary wheels on linux, is very small compared to the pain of repeatable install scripts suddenly behaving differently and installing binary wheels in systems that were prepared to pay the price of source installs, including the setting of build environment variables that correctly tweaked their build process. Regards, Leo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wes.turner at gmail.com Thu Aug 13 21:43:44 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 14:43:44 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 2:31 PM, "Leonardo Rochael Almeida" wrote: > > > > On 13 August 2015 at 11:07, Nate Coraor wrote: >> >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >>> >>> On Aug 12, 2015 13:57, "Nate Coraor" wrote: >>> > >>> > Hello all, >>> > >>> > I've implemented the wheel side of Nick's suggestion from very early in this thread to support a vendor-providable binary-compatibility.cfg. >>> > >>> > https://bitbucket.org/pypa/wheel/pull-request/54/ >>> > >>> > If this is acceptable, I'll add support for it to the pip side. What else should be implemented at this stage to get the PR accepted? >>> >>> From my reading of what the Enthought and Continuum folks were saying about how they are successfully distributing binaries across different distributions, it sounds like the additional piece that would take this from a interesting experiment to basically-immediately-usable would be to teach pip that if no binary-compatibility.cfg is provided, then it should assume by default that the compatible systems whose wheels should be installed are: (1) the current system's exact tag, >> >> >> This should already be the case - the default tag will no longer be -linux_x86_64, it'd be linux_x86_64_distro_version. >> >>> >>> (2) the special hard-coded tag "centos5". (That's what everyone actually uses in practice, right?) >> >> >> The idea here is that we should attempt to install centos5 wheels if more specific wheels for the platform aren't available? > > > Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the standard behavior for installation being the exact platform tag, and an automatic fallback to cento5. > > IMO, on Linux, the default should always be to opt in to the desired platform tags. > > We could make it so that the word `default` inside `binary-compatibility.cfg` means an exact match on the distro version, so that we could simplify the documentation. > > But I don't want to upgrade to pip and suddenly find myself installing binary wheels compiled by whomever for whatever platform I have no control with, even assuming the best of the package builders intentions. > > And I certainly don't want centos5 wheels accidentally installed on my ubuntu servers unless I very specifically asked for them. > > The tiny pain inflicted by telling users to add a one-line text file in a very well known location (or two lines, for the added centos5), so that they can get the benefit of binary wheels on linux, is very small compared to the pain of repeatable install scripts suddenly behaving differently and installing binary wheels in systems that were prepared to pay the price of source installs, including the setting of build environment variables that correctly tweaked their build process. Could/should this (repeatable) build configuration be specified in a JSON manifest file? What's the easiest way to build for all of these platforms? Tox w/ per-platform Dockerfile? > > Regards, > > Leo > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Fri Aug 14 03:25:59 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Aug 2015 18:25:59 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: > On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >> >> From my reading of what the Enthought and Continuum folks were saying >> about how they are successfully distributing binaries across different >> distributions, it sounds like the additional piece that would take this from >> a interesting experiment to basically-immediately-usable would be to teach >> pip that if no binary-compatibility.cfg is provided, then it should assume >> by default that the compatible systems whose wheels should be installed are: >> (1) the current system's exact tag, > > This should already be the case - the default tag will no longer be > -linux_x86_64, it'd be linux_x86_64_distro_version. > >> >> (2) the special hard-coded tag "centos5". (That's what everyone actually >> uses in practice, right?) > > The idea here is that we should attempt to install centos5 wheels if more > specific wheels for the platform aren't available? Yes. Or more generally, we should pick some common baseline build environment such that we're pretty sure wheels built there can run on 99% of end-user systems and give this environment a name. (Doesn't have to be "centos5", though IIUC CentOS 5 is what people are using for this baseline build environment right now.) That way when distros catch up and start providing binary-compatibility.cfg files, we can give tell them that this is an environment that they should try to support because it's what everyone is using, and to kick start that process we should assume it as a default until the distros do catch up. This has two benefits: it means that these wheels would actually become useful in some reasonable amount of time, and as a bonus, it would provide a clear incentive for those rare distros that *aren't* compatible to document that by starting to provide a binary-compatibility.cfg. -n -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Fri Aug 14 03:31:01 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 14 Aug 2015 13:31:01 +1200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 14 August 2015 at 13:25, Nathaniel Smith wrote: > On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >>> >>> From my reading of what the Enthought and Continuum folks were saying >>> about how they are successfully distributing binaries across different >>> distributions, it sounds like the additional piece that would take this from >>> a interesting experiment to basically-immediately-usable would be to teach >>> pip that if no binary-compatibility.cfg is provided, then it should assume >>> by default that the compatible systems whose wheels should be installed are: >>> (1) the current system's exact tag, >> >> This should already be the case - the default tag will no longer be >> -linux_x86_64, it'd be linux_x86_64_distro_version. >> >>> >>> (2) the special hard-coded tag "centos5". (That's what everyone actually >>> uses in practice, right?) >> >> The idea here is that we should attempt to install centos5 wheels if more >> specific wheels for the platform aren't available? > > Yes. 
> > Or more generally, we should pick some common baseline build > environment such that we're pretty sure wheels built there can run on > 99% of end-user systems and give this environment a name. (Doesn't > have to be "centos5", though IIUC CentOS 5 is what people are using > for this baseline build environment right now.) That way when distros > catch up and start providing binary-compatibility.cfg files, we can > give tell them that this is an environment that they should try to > support because it's what everyone is using, and to kick start that > process we should assume it as a default until the distros do catch > up. This has two benefits: it means that these wheels would actually > become useful in some reasonable amount of time, and as a bonus, it > would provide a clear incentive for those rare distros that *aren't* > compatible to document that by starting to provide a > binary-compatibility.cfg. Sounds like a reinvention of LSB, which is still a thing I think, but really didn't take the vendor world by storm. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From wes.turner at gmail.com Fri Aug 14 03:38:21 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 20:38:21 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 8:31 PM, "Robert Collins" wrote: > > On 14 August 2015 at 13:25, Nathaniel Smith wrote: > > On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: > >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: > >>> > >>> From my reading of what the Enthought and Continuum folks were saying > >>> about how they are successfully distributing binaries across different > >>> distributions, it sounds like the additional piece that would take this from > >>> a interesting experiment to basically-immediately-usable would be to teach > >>> pip that if no binary-compatibility.cfg is provided, then it should assume > >>> by default that the compatible systems whose wheels should be installed are: > >>> (1) the current system's exact tag, > >> > >> This should already be the case - the default tag will no longer be > >> -linux_x86_64, it'd be linux_x86_64_distro_version. > >> > >>> > >>> (2) the special hard-coded tag "centos5". (That's what everyone actually > >>> uses in practice, right?) > >> > >> The idea here is that we should attempt to install centos5 wheels if more > >> specific wheels for the platform aren't available? > > > > Yes. > > > > Or more generally, we should pick some common baseline build > > environment such that we're pretty sure wheels built there can run on > > 99% of end-user systems and give this environment a name. (Doesn't > > have to be "centos5", though IIUC CentOS 5 is what people are using > > for this baseline build environment right now.) That way when distros > > catch up and start providing binary-compatibility.cfg files, we can > > give tell them that this is an environment that they should try to > > support because it's what everyone is using, and to kick start that > > process we should assume it as a default until the distros do catch > > up. This has two benefits: it means that these wheels would actually > > become useful in some reasonable amount of time, and as a bonus, it > > would provide a clear incentive for those rare distros that *aren't* > > compatible to document that by starting to provide a > > binary-compatibility.cfg. 
> > Sounds like a reinvention of LSB, which is still a thing I think, but > really didn't take the vendor world by storm. LSB == "Linux System Base" It really shouldn't be too difficult to add lsb_release to the major distros and/or sys.plat* http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/book1.html http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/lsbrelease.html > > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Aug 14 03:44:23 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 14 Aug 2015 13:44:23 +1200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 14 August 2015 at 13:38, Wes Turner wrote: > > On Aug 13, 2015 8:31 PM, "Robert Collins" wrote: >> >> On 14 August 2015 at 13:25, Nathaniel Smith wrote: >> > On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: >> >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >> >>> >> >>> From my reading of what the Enthought and Continuum folks were saying >> >>> about how they are successfully distributing binaries across different >> >>> distributions, it sounds like the additional piece that would take >> >>> this from >> >>> a interesting experiment to basically-immediately-usable would be to >> >>> teach >> >>> pip that if no binary-compatibility.cfg is provided, then it should >> >>> assume >> >>> by default that the compatible systems whose wheels should be >> >>> installed are: >> >>> (1) the current system's exact tag, >> >> >> >> This should already be the case - the default tag will no longer be >> >> -linux_x86_64, it'd be linux_x86_64_distro_version. >> >> >> >>> >> >>> (2) the special hard-coded tag "centos5". (That's what everyone >> >>> actually >> >>> uses in practice, right?) >> >> >> >> The idea here is that we should attempt to install centos5 wheels if >> >> more >> >> specific wheels for the platform aren't available? >> > >> > Yes. >> > >> > Or more generally, we should pick some common baseline build >> > environment such that we're pretty sure wheels built there can run on >> > 99% of end-user systems and give this environment a name. (Doesn't >> > have to be "centos5", though IIUC CentOS 5 is what people are using >> > for this baseline build environment right now.) That way when distros >> > catch up and start providing binary-compatibility.cfg files, we can >> > give tell them that this is an environment that they should try to >> > support because it's what everyone is using, and to kick start that >> > process we should assume it as a default until the distros do catch >> > up. This has two benefits: it means that these wheels would actually >> > become useful in some reasonable amount of time, and as a bonus, it >> > would provide a clear incentive for those rare distros that *aren't* >> > compatible to document that by starting to provide a >> > binary-compatibility.cfg. >> >> Sounds like a reinvention of LSB, which is still a thing I think, but >> really didn't take the vendor world by storm. 
> > LSB == "Linux System Base" > > It really shouldn't be too difficult to add lsb_release to the major distros > and/or sys.plat* > > http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/book1.html > > http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/lsbrelease.html So its already there; the point I was making was the LSB process and guarantees, not lsb_release, which is a tiny thing. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From wes.turner at gmail.com Fri Aug 14 03:44:51 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 20:44:51 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 8:38 PM, "Wes Turner" wrote: > > > On Aug 13, 2015 8:31 PM, "Robert Collins" wrote: > > > > On 14 August 2015 at 13:25, Nathaniel Smith wrote: > > > On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: > > >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: > > >>> > > >>> From my reading of what the Enthought and Continuum folks were saying > > >>> about how they are successfully distributing binaries across different > > >>> distributions, it sounds like the additional piece that would take this from > > >>> a interesting experiment to basically-immediately-usable would be to teach > > >>> pip that if no binary-compatibility.cfg is provided, then it should assume > > >>> by default that the compatible systems whose wheels should be installed are: > > >>> (1) the current system's exact tag, > > >> > > >> This should already be the case - the default tag will no longer be > > >> -linux_x86_64, it'd be linux_x86_64_distro_version. > > >> > > >>> > > >>> (2) the special hard-coded tag "centos5". (That's what everyone actually > > >>> uses in practice, right?) > > >> > > >> The idea here is that we should attempt to install centos5 wheels if more > > >> specific wheels for the platform aren't available? > > > > > > Yes. > > > > > > Or more generally, we should pick some common baseline build > > > environment such that we're pretty sure wheels built there can run on > > > 99% of end-user systems and give this environment a name. (Doesn't > > > have to be "centos5", though IIUC CentOS 5 is what people are using > > > for this baseline build environment right now.) That way when distros > > > catch up and start providing binary-compatibility.cfg files, we can > > > give tell them that this is an environment that they should try to > > > support because it's what everyone is using, and to kick start that > > > process we should assume it as a default until the distros do catch > > > up. This has two benefits: it means that these wheels would actually > > > become useful in some reasonable amount of time, and as a bonus, it > > > would provide a clear incentive for those rare distros that *aren't* > > > compatible to document that by starting to provide a > > > binary-compatibility.cfg. > > > > Sounds like a reinvention of LSB, which is still a thing I think, but > > really didn't take the vendor world by storm. 
> > LSB == "Linux System Base" > > It really shouldn't be too difficult to add lsb_release to the major distros and/or sys.plat* Salt grains implement this functionality w/ many OS: https://github.com/saltstack/salt/blob/110cae3cdc1799bad37f81f2/salt/grains/core.py#L1229 ("osname", "osrelease") [Apache 2.0] > > http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/book1.html > > http://refspecs.linuxbase.org/LSB_5.0.0/LSB-Core-generic/LSB-Core-generic/lsbrelease.html > > > > > -Rob > > > > -- > > Robert Collins > > Distinguished Technologist > > HP Converged Cloud > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Aug 14 03:47:27 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Aug 2015 18:47:27 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 12:30 PM, Leonardo Rochael Almeida wrote: > > On 13 August 2015 at 11:07, Nate Coraor wrote: >> >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >>> [...] >>> (2) the special hard-coded tag "centos5". (That's what everyone actually >>> uses in practice, right?) >> >> >> The idea here is that we should attempt to install centos5 wheels if more >> specific wheels for the platform aren't available? > > > Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the > standard behavior for installation being the exact platform tag, and an > automatic fallback to cento5. > > IMO, on Linux, the default should always be to opt in to the desired > platform tags. > > We could make it so that the word `default` inside > `binary-compatibility.cfg` means an exact match on the distro version, so > that we could simplify the documentation. > > But I don't want to upgrade to pip and suddenly find myself installing > binary wheels compiled by whomever for whatever platform I have no control > with, even assuming the best of the package builders intentions. > > And I certainly don't want centos5 wheels accidentally installed on my > ubuntu servers unless I very specifically asked for them. > > The tiny pain inflicted by telling users to add a one-line text file in a > very well known location (or two lines, for the added centos5), so that they > can get the benefit of binary wheels on linux, is very small compared to the > pain of repeatable install scripts suddenly behaving differently and > installing binary wheels in systems that were prepared to pay the price of > source installs, including the setting of build environment variables that > correctly tweaked their build process. I think there are two issues here: 1) You don't want centos5 wheels "accidentally" installed on an ubuntu server: Fair enough, you're right; we should probably make the "this wheel should work on pretty much any linux out there" tag be something that distributors have to explicitly opt into (similar to how they have to opt into creating universal wheels), rather than having it be something you could get by just typing 'pip wheel foo' on the right (wrong) machine. 
2) You want it to be the case that if I type 'pip install foo' on a Linux machine, and pip finds both an sdist and a wheel, where the wheel is definitely compatible with the current system, then it should still always prefer the sdist unless configured otherwise: Here I disagree strongly. This is inconsistent with how things work on every other platform, it's inconsistent with how pip is being used on Linux right now with private wheelhouses, and the "tiny pain" of editing a file in /etc is a huge barrier to new users, many of whom are uncomfortable editing config files and may not have root access. -- Nathaniel J. Smith -- http://vorpus.org From wes.turner at gmail.com Fri Aug 14 03:50:04 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 20:50:04 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 8:47 PM, "Nathaniel Smith" wrote: > > On Thu, Aug 13, 2015 at 12:30 PM, Leonardo Rochael Almeida > wrote: > > > > On 13 August 2015 at 11:07, Nate Coraor wrote: > >> > >> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: > >>> > [...] > >>> (2) the special hard-coded tag "centos5". (That's what everyone actually > >>> uses in practice, right?) > >> > >> > >> The idea here is that we should attempt to install centos5 wheels if more > >> specific wheels for the platform aren't available? > > > > > > Just my opinion, but although I'm +1 on Nate's efforts, I'm -1 on both the > > standard behavior for installation being the exact platform tag, and an > > automatic fallback to cento5. > > > > IMO, on Linux, the default should always be to opt in to the desired > > platform tags. > > > > We could make it so that the word `default` inside > > `binary-compatibility.cfg` means an exact match on the distro version, so > > that we could simplify the documentation. > > > > But I don't want to upgrade to pip and suddenly find myself installing > > binary wheels compiled by whomever for whatever platform I have no control > > with, even assuming the best of the package builders intentions. > > > > And I certainly don't want centos5 wheels accidentally installed on my > > ubuntu servers unless I very specifically asked for them. > > > > The tiny pain inflicted by telling users to add a one-line text file in a > > very well known location (or two lines, for the added centos5), so that they > > can get the benefit of binary wheels on linux, is very small compared to the > > pain of repeatable install scripts suddenly behaving differently and > > installing binary wheels in systems that were prepared to pay the price of > > source installs, including the setting of build environment variables that > > correctly tweaked their build process. > > I think there are two issues here: > > 1) You don't want centos5 wheels "accidentally" installed on an ubuntu > server: Fair enough, you're right; we should probably make the "this > wheel should work on pretty much any linux out there" tag be something > that distributors have to explicitly opt into (similar to how they > have to opt into creating universal wheels), rather than having it be > something you could get by just typing 'pip wheel foo' on the right > (wrong) machine. > > 2) You want it to be the case that if I type 'pip install foo' on a > Linux machine, and pip finds both an sdist and a wheel, where the > wheel is definitely compatible with the current system, then it should > still always prefer the sdist unless configured otherwise: Here I > disagree strongly. 
This is inconsistent with how things work on every > other platform, it's inconsistent with how pip is being used on Linux > right now with private wheelhouses, and the "tiny pain" of editing a > file in /etc is a huge barrier to new users, many of whom are > uncomfortable editing config files and may not have root access. So, there would be a capability / osnamestr mapping, or just [...]? Because my libc headers are different. > > -- > Nathaniel J. Smith -- http://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Aug 14 04:14:29 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Aug 2015 19:14:29 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 6:31 PM, Robert Collins wrote: > On 14 August 2015 at 13:25, Nathaniel Smith wrote: >> On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: >>> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: >>>> >>>> From my reading of what the Enthought and Continuum folks were saying >>>> about how they are successfully distributing binaries across different >>>> distributions, it sounds like the additional piece that would take this from >>>> a interesting experiment to basically-immediately-usable would be to teach >>>> pip that if no binary-compatibility.cfg is provided, then it should assume >>>> by default that the compatible systems whose wheels should be installed are: >>>> (1) the current system's exact tag, >>> >>> This should already be the case - the default tag will no longer be >>> -linux_x86_64, it'd be linux_x86_64_distro_version. >>> >>>> >>>> (2) the special hard-coded tag "centos5". (That's what everyone actually >>>> uses in practice, right?) >>> >>> The idea here is that we should attempt to install centos5 wheels if more >>> specific wheels for the platform aren't available? >> >> Yes. >> >> Or more generally, we should pick some common baseline build >> environment such that we're pretty sure wheels built there can run on >> 99% of end-user systems and give this environment a name. (Doesn't >> have to be "centos5", though IIUC CentOS 5 is what people are using >> for this baseline build environment right now.) That way when distros >> catch up and start providing binary-compatibility.cfg files, we can >> give tell them that this is an environment that they should try to >> support because it's what everyone is using, and to kick start that >> process we should assume it as a default until the distros do catch >> up. This has two benefits: it means that these wheels would actually >> become useful in some reasonable amount of time, and as a bonus, it >> would provide a clear incentive for those rare distros that *aren't* >> compatible to document that by starting to provide a >> binary-compatibility.cfg. > > Sounds like a reinvention of LSB, which is still a thing I think, but > really didn't take the vendor world by storm. Yeah, I've been carefully not mentioning LSB because LSB is a disaster :-). But, I think this is different. IIUC, the problem with LSB is that it's trying to make it possible for big enterprise software vendors to stop saying "This RDBMS is certified to work on RHEL 6" and start saying "This RDBMS is certified to work on any distribution that meets the LSB criteria". 
But in practice this creates more risk and work for the vendor, while not actually solving any real problem -- if a customer is spending $$$$ on some enterprise database then they might as well throw in an extra $$ for a RHEL license, so the customers don't care, so the vendor doesn't either. And the folks building free software like Postgres don't care either because the distros do the support for them. So the LSB continues to limp along through the ISO process because just enough managers have been convinced that it *ought* to be useful that they continue to throw some money at it, and hey, it's probably useful to some people sometimes, just not very many people very often. We, on the other hand, are trying to solve a real problem that our users feel keenly (lots of people want to be able to distribute some little binary python extension in a way that just works for a wide range of users), and the proposed mechanism for solving this problem is not "let's form an ISO committee and hire contractors to write a Grand Unified Test Suite", it's codifying an existing working solution in the form of a wiki page or PEP or something. Of course if you have an alternative proposal than I'm all ears :-). -n P.S.: since probably not everyone on the mailing list has been following Linux inside baseball for decades, some context...: https://en.wikipedia.org/wiki/Linux_Standard_Base http://www.linuxfoundation.org/collaborate/workgroups/lsb/download https://lwn.net/Articles/152580/ http://udrepper.livejournal.com/8511.html (Last two links are from 2005, I can't really say how accurate they still are in details but they do describe some of the structural reasons why the LSB has not been massively popular) -- Nathaniel J. Smith -- http://vorpus.org From wes.turner at gmail.com Fri Aug 14 04:24:32 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 21:24:32 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 9:14 PM, "Nathaniel Smith" wrote: > > On Thu, Aug 13, 2015 at 6:31 PM, Robert Collins > wrote: > > On 14 August 2015 at 13:25, Nathaniel Smith wrote: > >> On Thu, Aug 13, 2015 at 7:07 AM, Nate Coraor wrote: > >>> On Wed, Aug 12, 2015 at 9:05 PM, Nathaniel Smith wrote: > >>>> > >>>> From my reading of what the Enthought and Continuum folks were saying > >>>> about how they are successfully distributing binaries across different > >>>> distributions, it sounds like the additional piece that would take this from > >>>> a interesting experiment to basically-immediately-usable would be to teach > >>>> pip that if no binary-compatibility.cfg is provided, then it should assume > >>>> by default that the compatible systems whose wheels should be installed are: > >>>> (1) the current system's exact tag, > >>> > >>> This should already be the case - the default tag will no longer be > >>> -linux_x86_64, it'd be linux_x86_64_distro_version. > >>> > >>>> > >>>> (2) the special hard-coded tag "centos5". (That's what everyone actually > >>>> uses in practice, right?) > >>> > >>> The idea here is that we should attempt to install centos5 wheels if more > >>> specific wheels for the platform aren't available? > >> > >> Yes. > >> > >> Or more generally, we should pick some common baseline build > >> environment such that we're pretty sure wheels built there can run on > >> 99% of end-user systems and give this environment a name. 
(Doesn't > >> have to be "centos5", though IIUC CentOS 5 is what people are using > >> for this baseline build environment right now.) That way when distros > >> catch up and start providing binary-compatibility.cfg files, we can > >> give tell them that this is an environment that they should try to > >> support because it's what everyone is using, and to kick start that > >> process we should assume it as a default until the distros do catch > >> up. This has two benefits: it means that these wheels would actually > >> become useful in some reasonable amount of time, and as a bonus, it > >> would provide a clear incentive for those rare distros that *aren't* > >> compatible to document that by starting to provide a > >> binary-compatibility.cfg. > > > > Sounds like a reinvention of LSB, which is still a thing I think, but > > really didn't take the vendor world by storm. > > Yeah, I've been carefully not mentioning LSB because LSB is a disaster > :-). But, I think this is different. > > IIUC, the problem with LSB is that it's trying to make it possible for > big enterprise software vendors to stop saying "This RDBMS is > certified to work on RHEL 6" and start saying "This RDBMS is certified > to work on any distribution that meets the LSB criteria". That's great. Is there a Dockerfile invocation for: - running the tests - building a binary in a mapped path - posting build state and artifacts to a central server > But in > practice this creates more risk and work for the vendor, while not > actually solving any real problem -- if a customer is spending $$$$ on > some enterprise database then they might as well throw in an extra $$ > for a RHEL license, so the customers don't care, so the vendor doesn't > either. And the folks building free software like Postgres don't care > either because the distros do the support for them. So the LSB > continues to limp along through the ISO process because just enough > managers have been convinced that it *ought* to be useful that they > continue to throw some money at it, and hey, it's probably useful to > some people sometimes, just not very many people very often. > > We, on the other hand, are trying to solve a real problem that our > users feel keenly (lots of people want to be able to distribute some > little binary python extension in a way that just works for a wide > range of users), and the proposed mechanism for solving this problem > is not "let's form an ISO committee and hire contractors to write a > Grand Unified Test Suite", it's codifying an existing working solution > in the form of a wiki page or PEP or something. > > Of course if you have an alternative proposal than I'm all ears :-). Required_caps = [('blas1', None), ('blas', '>= 1'), ('np17', None)] Re-post [TODO: upgrade mailman] """ ... BLAS may not be the best example, but should we expect such linked interfaces to change over time? (And e.g. be versioned dependencies with shim packages that have check functions)? ... How is an ABI constraint different from a package dependency? iiuc, ABI tags are thus combinatorial with package/wheel dependency strings? 
Conda/pycosat solve this with "preprocessing selectors" : http://conda.pydata.org/docs/building/meta-yaml.html#preprocessing-selectors : ``` linux True if the platform is Linux linux32 True if the platform is Linux and the Python architecture is 32-bit linux64 True if the platform is Linux and the Python architecture is 64-bit armv6 True if the platform is Linux and the Python architecture is armv6l osx True if the platform is OS X unix True if the platform is Unix (OS X or Linux) win True if the platform is Windows win32 True if the platform is Windows and the Python architecture is 32-bit win64 True if the platform is Windows and the Python architecture is 64-bit py The Python version as a two digit string (like '27'). See also the CONDA_PY environment variable below. py3k True if the Python major version is 3 py2k True if the Python major version is 2 py26 True if the Python version is 2.6 py27 True if the Python version is 2.7 py33 True if the Python version is 3.3 py34 True if the Python version is 3.4 np The NumPy version as a two digit string (like '17'). See also the CONDA_NPY environment variable below. Because the selector is any valid Python expression, complicated logic is possible. ``` > > -n > > P.S.: since probably not everyone on the mailing list has been > following Linux inside baseball for decades, some context...: > https://en.wikipedia.org/wiki/Linux_Standard_Base > http://www.linuxfoundation.org/collaborate/workgroups/lsb/download > https://lwn.net/Articles/152580/ > http://udrepper.livejournal.com/8511.html > (Last two links are from 2005, I can't really say how accurate they > still are in details but they do describe some of the structural > reasons why the LSB has not been massively popular) > > -- > Nathaniel J. Smith -- http://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Aug 14 04:27:06 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 14 Aug 2015 14:27:06 +1200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 14 August 2015 at 14:14, Nathaniel Smith wrote: ...> > Of course if you have an alternative proposal than I'm all ears :-). Yeah :) So, I want to dedicate some time to contributing to this discussion meaningfully, but I can't for the next few weeks - Jury duty, Kiwi PyCon and polishing up the PEP's I'm already committed to... I think the approach of being able to ask the *platform* for things needed to build-or-use known artifacts is going to enable a bunch of different answers in this space. I'm much more enthusiastic about that than doing anything that ends up putting PyPI in competition with the distribution space. My criteria for success are: - there's *a* migration path from what we have today to what we propose. Doesn't have to be good, just exist. 
- authors of scipy, numpy, cryptography etc can upload binary wheels for *linux, Mac OSX and Windows 32/64 in a safe and sane way - we don't need to do things like uploading wheels containing non-Python shared libraries, nor upload statically linked modules In fact, I think uploading regular .so files is just a huge heartache waiting to happen, so I'm almost inclined to add: - we don't support uploading external non-Python libraries [ without prejudice for changing our minds in the future] There was a post that referenced a numpy ABI, dunno if it was in this thread - I need to drill down into that, because I don't understand why that's not a regular version resolution problem, unlike the Python ABI, which pip can't install [and shouldn't be able to!] -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Fri Aug 14 04:33:51 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Aug 2015 19:33:51 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 6:50 PM, Wes Turner wrote: > So, there would be a capability / osnamestr mapping, or just [...]? > > Because my libc headers are different. Hi Wes, From the question mark I infer that this is intended as a question for me, but like most of your posts I have no idea what you're talking about -- they're telegraphic to the point of incomprehensibility. So... if I don't answer something, that's why. -n -- Nathaniel J. Smith -- http://vorpus.org From wes.turner at gmail.com Fri Aug 14 04:41:47 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 13 Aug 2015 21:41:47 -0500 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Aug 13, 2015 9:33 PM, "Nathaniel Smith" wrote: > > On Thu, Aug 13, 2015 at 6:50 PM, Wes Turner wrote: > > So, there would be a capability / osnamestr mapping, or just [...]? > > > > Because my libc headers are different. > > Hi Wes, > > From the question mark I infer that this is intended as a question for > me, but like most of your posts I have no idea what you're talking > about -- they're telegraphic to the point of incomprehensibility. > So... if I don't answer something, that's why. Two approaches: * specify specific platforms / distributions ("centos5") * specify required capabilities ("Pkg", [version_constraints], [pkg_ABI_v2, xyz]) Limitations in the status quo: * setuptools install_requires only accepts (name, [version_constraints]) > > -n > > -- > Nathaniel J. Smith -- http://vorpus.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Aug 14 06:07:10 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 13 Aug 2015 21:07:10 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau wrote: > > On Thu, Aug 13, 2015 at 2:05 AM, Nathaniel Smith wrote: >> >> On Aug 12, 2015 13:57, "Nate Coraor" wrote: >> > >> > Hello all, >> > >> > I've implemented the wheel side of Nick's suggestion from very early in >> > this thread to support a vendor-providable binary-compatibility.cfg. >> > >> > https://bitbucket.org/pypa/wheel/pull-request/54/ >> > >> > If this is acceptable, I'll add support for it to the pip side. What >> > else should be implemented at this stage to get the PR accepted?
>> >> From my reading of what the Enthought and Continuum folks were saying >> about how they are successfully distributing binaries across different >> distributions, it sounds like the additional piece that would take this from >> a interesting experiment to basically-immediately-usable would be to teach >> pip that if no binary-compatibility.cfg is provided, then it should assume >> by default that the compatible systems whose wheels should be installed are: >> (1) the current system's exact tag, (2) the special hard-coded tag >> "centos5". (That's what everyone actually uses in practice, right?) >> >> To make this *really* slick, it would be cool if, say, David C. could make >> a formal list of exactly which system libraries are important to depend on >> (xlib, etc.), and we could hard-code two compatibility profiles >> "centos5-minimal" (= just glibc and the C++ runtime) and "centos5" (= that >> plus the core too-hard-to-ship libraries), and possibly teach pip how to >> check whether that hard-coded core set is available. > > > So this is a basic list I got w/ a few minutes of scripting, by installing > our 200 most used packages on centos 5, ldd'ing all of the .so, and > filtering out a few things/bugs of some of our own packages): > > /usr/lib64/libatk-1.0.so.0 > /usr/lib64/libcairo.so.2 > /usr/lib64/libdrm.so.2 > /usr/lib64/libfontconfig.so.1 > /usr/lib64/libGL.so.1 > /usr/lib64/libGLU.so.1 > /usr/lib64/libstdc++.so.6 > /usr/lib64/libX11.so.6 > /usr/lib64/libXau.so.6 > /usr/lib64/libXcursor.so.1 > /usr/lib64/libXdmcp.so.6 > /usr/lib64/libXext.so.6 > /usr/lib64/libXfixes.so.3 > /usr/lib64/libXft.so.2 > /usr/lib64/libXinerama.so.1 > /usr/lib64/libXi.so.6 > /usr/lib64/libXrandr.so.2 > /usr/lib64/libXrender.so.1 > /usr/lib64/libXt.so.6 > /usr/lib64/libXv.so.1 > /usr/lib64/libXxf86vm.so.1 > /usr/lib64/libz.so.1 > > This list should only be taken as a first idea, I can work on a more precise > list including the versions if that's deemed useful. Cool. Here's a list of the external .so's assumed by the packages currently included in a default Anaconda install: https://gist.github.com/njsmith/6c3d3f2dbaaf526a8585 The lists look fairly similar overall -- glibc, libstdc++, Xlib. They additionally assume the availability of expat, glib, ncurses, pcre, maybe some other stuff I missed, but they ship their own versions of libz and fontconfig, and they don't seem to either ship or use cairo or atk in their default install. For defining a "standard platform", just taking the union seems reasonable -- if either project has gotten away this long with assuming some library is there, then it's probably there. Writing a little script that takes a wheel and checks whether it has any external dependencies outside of these lists, or takes a system and checks whether all these libraries are available, seems like it would be pretty trivial. > One significant issue is SSL: in theory, we (as a downstream distributor) > really want to avoid distributing such a key piece of infrastructure, but in > practice, there are so many versions which are incompatible across > distributions that it is not an option. This is mostly an issue for distributing Python itself, right? ...I hope? -n -- Nathaniel J. 
Smith -- http://vorpus.org From njs at pobox.com Fri Aug 14 09:38:12 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 14 Aug 2015 00:38:12 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 7:27 PM, Robert Collins wrote: > On 14 August 2015 at 14:14, Nathaniel Smith wrote: > ...> >> Of course if you have an alternative proposal than I'm all ears :-). > > Yeah :) > > So, I want to dedicate some time to contributing to this discussion > meaningfully, but I can't for the next few weeks - Jury duty, Kiwi > PyCon and polishing up the PEP's I'm already committed to... Totally hear that... it's not super urgent anyway. We should make it clear to Nate -- hi Nate! -- that there's no reason that solving this problem should block putting together the basic binary-compatibility.cfg infrastructure. > I think the approach of being able to ask the *platform* for things > needed to build-or-use known artifacts is going to enable a bunch of > different answers in this space. I'm much more enthusiastic about that > than doing anything that ends up putting PyPI in competition with the > distribution space. > > My criteria for success are: > > - there's *a* migration path from what we have today to what we > propose. Doesn't have to be good, just exist. > > - authors of scipy, numpy, cryptography etc can upload binary wheels > for *linux, Mac OSX and Windows 32/64 in a safe and sane way So the problem is that, IMO, "sane" here means "not building a separate wheel for every version of distro on distrowatch". So I can see two ways to do that: - my suggestion that we just pick a particular highly-compatible distro like centos 5 to build against, and make a standard list of which libraries can be assumed to be provided - the PEP-497-or-number-to-be-determined approach, in which we still have to pick a highly-compatible distro like centos 5 to build against, but each wheel has a list of which libraries from that distro it is counting on being provided I can see the appeal of the latter approach, since if you want to do the former approach right you need to be careful about exactly which libraries you're assuming are present, etc. They both could work. But in practice, you still have to pick which distro you are going to use to build, and you still have to say "when I say I need libblas.so.1, what I mean is that I need a file that is ABI-compatible with the version of libblas.so.1 that existed in centos 5 exactly, not any other libblas.so.1". And then in practice not every distro will have such a thing, so for a project like numpy that wants to make things easy for a wide variety of users, we'll still only be able to take advantage of external dependencies for libraries that are effectively universally available and compatible anyway and end up vendoring the rest... so in the end basically we'd be distributing exactly the same wheels under either of these proposals, just the latter requires a much much more complicated scheme for metadata and installation. And in practice I think the main alternative possibility if we don't come up with some solid guidance for how packages can build works-everywhere-wheels is that we'll see wheels for latest-version-of-Ubuntu-only, plus the occasional smattering of other distros, varying randomly on a project-by-project basis. Which would suck. 
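To illustrate the "little script" mentioned a few messages back, here is a rough sketch that unpacks a wheel and reports any shared-library dependencies of its extension modules that fall outside an assumed whitelist. The whitelist below is a placeholder, not a vetted baseline -- a real one would be derived from the CentOS 5 audit and the Anaconda gist above -- and the script assumes a Linux host with ldd:

    import subprocess
    import sys
    import tempfile
    import zipfile

    # Placeholder baseline of "assumed present" libraries.
    ASSUMED_PRESENT = {
        'libc.so.6', 'libm.so.6', 'libpthread.so.0', 'libdl.so.2',
        'libstdc++.so.6', 'libgcc_s.so.1', 'libz.so.1', 'libX11.so.6',
    }

    def undeclared_deps(wheel_path):
        """Yield (member, soname) pairs outside the assumed baseline."""
        with tempfile.TemporaryDirectory() as tmp:
            with zipfile.ZipFile(wheel_path) as zf:
                zf.extractall(tmp)
                members = [m for m in zf.namelist() if m.endswith('.so')]
            for member in members:
                out = subprocess.check_output(['ldd', tmp + '/' + member],
                                              universal_newlines=True)
                for line in out.splitlines():
                    parts = line.split()
                    if parts and parts[0].startswith('lib') \
                            and parts[0] not in ASSUMED_PRESENT:
                        yield member, parts[0]

    if __name__ == '__main__':
        for member, soname in undeclared_deps(sys.argv[1]):
            print('%s needs %s' % (member, soname))

The same lists could drive the other direction too -- checking an end-user system for the baseline libraries before pip selects a wheel.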
> - we don't need to do things like uploading wheels containing > non-Python shared libraries, nor upload statically linked modules > > > In fact, I think uploading regular .so files is just a huge heartache > waiting to happen, so I'm almost inclined to add: > > - we don't support uploading external non-Python libraries [ without > prejuidice for changing our minds in the future] Windows and OS X don't (reliably) have any package manager. So PyPI *is* inevitably going to contain non-Python shared libraries or statically linked modules or something like that. (And in fact it already contains such things today.) I'm not sure what the alternative would even be. This also means that projects like numpy are already forced to accept that we're on the hook for security updates in our dependencies etc., so doing it on Linux too is not really that scary. Oh, I just thought of another issue: an extremely important requirement for numpy/scipy/etc. wheels is that they be reliably installable without root access. People *really* care about this: missing your grant deadline b/c you can't upgrade some package to fix some showstopper bug b/c university IT support is not answering calls at midnight on Sunday = rather poor UX. Given that, the only situation I can see where we would ever distribute wheels that require system blas on Linux, is if we were able to do it alongside wheels that do not require system blas, and pip were clever enough to reliably always pick the latter except in cases where the system blas was actually present and working. > There was a post that referenced a numpy ABI, dunno if it was in this > thread - I need to drill down into that, because I don't understand > why thats not a regular version resolution problem,unlike the Python > ABI, which pip can't install [and shouldn't be able to!] The problem is that numpy is very unusual among Python packages in that exposes a large and widely-used *C* API/ABI: http://docs.scipy.org/doc/numpy/reference/c-api.html This means that when you build, e.g., scipy, then you get a binary that depends on things like the in-memory layout of numpy's internal objects. We'd like it to be the case that when we release a new version of numpy, pip could realize "hey, this new version says it has an incompatible ABI that will break your currently installed version of scipy -- I'd better fetch a new version of scipy as well, or at least rebuild the same version I already have". Notice that at the time scipy is built, it is not known which future version of numpy will require a rebuild. There are a lot of ways this might work on both the numpy and pip sides -- definitely fodder for a separate thread -- but that's the basic problem. -n -- Nathaniel J. Smith -- http://vorpus.org From cournape at gmail.com Fri Aug 14 11:59:02 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 14 Aug 2015 10:59:02 +0100 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 7:08 AM, Nathaniel Smith wrote: > On Wed, Aug 12, 2015 at 8:10 PM, Robert Collins > wrote: > > On 13 August 2015 at 12:51, Nathaniel Smith wrote: > >> On Aug 12, 2015 16:49, "Robert Collins" > wrote: > >>> > >>> I'm not sure what will be needed to get the PR accepted; At PyCon AU > >>> Tennessee Leuwenberg started drafting a PEP for the expression of > >>> dependencies on e.g. BLAS - its been given number 497, and is in the > >>> packaging-peps repo; I'm working on updating it now. 
> >> > >> I wanted to take a look at this PEP, but I can't seem to find it. PEP > 497: > >> https://www.python.org/dev/peps/pep-0497/ > >> appears to be something else entirely? > >> > >> I'm a bit surprised to hear that such a PEP is needed. We (= numpy devs) > >> have actively been making plans to ship a BLAS wheel on windows, and > AFAICT > >> this is totally doable now -- the blocker is windows toolchain issues, > not > >> pypa-related infrastructure. > >> > >> Specifically the idea is to have a wheel that contains the shared > library as > >> a regular old data file, plus a stub python package that knows how to > find > >> this data file and how to make it accessible to the linker. So > >> numpy/__init__.py would start by calling: > >> > >> import pyopenblas1 > >> # on Linux modifies LD_LIBRARY_PATH, > >> # on Windows uses ctypes to preload... whatever > >> pyopenblas1.enable() > >> > >> and then get on with things, or the build system might do: > >> > >> import pyopenblas1 > >> pyopenblas1.get_header_directories() > >> pyopenblas1.get_linker_directories() > >> > > Thanks to James for sending on the link! > > Two main thoughts, now that I've read it over: > > 1) The motivating example is somewhat confused -- the draft says: > > + The example provided in the abstract is a > + hypothetical package which needs versions of numpy and scipy, both of > which > + must have been compiled to be aware of the ATLAS compiled set of > linear algebra > + libraries (for performance reasons). This sounds esoteric but is, in > fact, a > + routinely encountered situation which drives people towards using the > + alternative packaging for scientific python environments. > > Numpy and scipy actually work hard to export a consistent, append-only > ABI regardless of what libraries are used underneath. (This is > actually by far our biggest issue with wheels -- that there's still no > way to tag the numpy ABI as part of the ABI string, so in practice > it's just impossible to ever have a smooth migration to a new ABI and > we have no choice but to forever maintain compatibility with numpy > 0.1. But that's not what this proposal addresses.) Possibly part of > the confusion here is that Christoph Gohlke's popular numpy+scipy > builds use a hack where instead of making the wheels self-contained > via statically linking or something like that, then he ships the > actual libBLAS.dll inside the numpy wheel, and then the scipy wheel > has some code added that magically "knows" that there is this special > numpy wheel that it can find libBLAS.dll inside and use it directly > from scipy's own extensions. But this coupling is pretty much just > broken, and it directly motivates the blas-in-its-own-wheel design I > sketched out above. > > (I guess the one exception is that if you have a numpy or scipy build > that dynamically links to a library like BLAS, and then another > extension that links to a different BLAS with an incompatible ABI, and > the two BLAS libraries have symbol name collisions, then that could be > a problem because ELF is frustrating like that. But the obvious > solution here is to be careful about how you do your builds -- either > by using static linking, or making sure that incompatible ABIs get > different symbol names.) > > Anyway, this doesn't particularly undermine the PEP, but it would be > good to use a more realistic motivating example. 
> > 2) AFAICT, the basic goal of this PEP is to provide machinery to let > one reliably build a wheel for some specific version of some specific > distribution, while depending on vendor-provided libraries for various > external dependencies, and providing a nice user experience (e.g., > telling users explicitly which vendor-provided libraries they need to > install). I say this because strings like "libblas1.so" or "kernel.h" > do not define any fixed ABI or APIs, unless you are implicitly scoping > to some particular distribution with at least some minimum version > constraint. > > It seems like a reasonable effort at solving this problem, and I guess > there are probably some people somewhere that have this problem, but > my concern is that I don't actually know any of those people. The > developers I know instead have the problem of, they want to be able to > provide a small finite number of binaries (ideally six binaries per > Python version: {32 bit, 64 bit} * {windows, osx, linux}) that > together will Just Work on 99% of end-user systems. And that's the > problem that Enthought, Continuum, etc., have been solving for years, > and which wheels already mostly solve on windows and osx, so it seems > like a reasonable goal to aim for. But I don't see how this PEP gets > us any closer to that. Again, not really a criticism -- these goals > aren't contradictory and it's great if pip ends up being able to > handle both common and niche use cases. But I want to make sure that > we're clear that these goals are different and which one each proposal > is aimed at. > > >> This doesn't help if you want to declare dependencies on external, > system > >> managed libraries and have those be automatically somehow provided or > >> checked for, but to me that sounds like an impossible boil-the-ocean > project > >> anyway, while the above is trivial and should just work. > > > > Well, have a read of the draft. > > > > Its a solved problem by e.g. conda, apt, yum, nix and many others. > > None of these projects allow a .deb to depend on .rpms etc. -- they > all require that they own the whole world with some narrow, carefully > controlled exceptions (e.g. anaconda requires some non-trivial runtime > on the host system -- glibc, glib, pcre, expat, ... -- but it's a > single fixed set that they've empirically determined is close enough > to universally available in practice). The "boil the ocean" part is > the part where everybody who wants to distribute wheels has to go > around and figure out every possible permutation of ABIs on every > possible external packaging system and provide separate wheels for > each of them. > > > Uploading system .so's is certainly also an option, and I see no > > reason why we can't do both. > > > > I do know that distribution vendors are likely to be highly allergic > > to the idea of having regular shared libraries present as binaries, > > but thats a different discussion :) > > Yeah, but basically in the same way that they're allergic to all > wheels, period, so ... :-). I think in the long run the only realistic > approach is for most users to either be getting blas+numpy from some > external system like macports/conda/yum/... or else to be getting > blas+numpy from official wheels on pypi. And neither of these two > scenarios seems to benefit from the functionality described in this > PEP. 
> > (Final emphasis: this is all just my own opinion based on my > far-from-omniscient view of the packaging system, please tell me if > I'm making some ridiculous error, or if well-actually libBLAS is > special and there is some other harder case I'm not thinking of, etc.) > From my own experience if you have a design that covers blas/lapack issues for numpy/scipy, you've solved a majority of typical binary packaging issues in the python ecosystem. Will you be there at EuroSciPy? I will spend some time at EuroSciPy to continue the work we started at PyCon around some of this (taking into account the PEP as well). David > -n > > -- > Nathaniel J. Smith -- http://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Aug 14 18:00:25 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 14 Aug 2015 09:00:25 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Wed, Aug 12, 2015 at 6:05 PM, Nathaniel Smith wrote: > (2) the special hard-coded tag "centos5". (That's what everyone actually > uses in practice, right?) > Is LSB a fantasy that never happened? I haven't followed it for years.... -CHB > Compare with osx, where there are actually a ton of different ABIs > I suppose so -- but monstrously fewer than Linux, and a very small set that are in common use. A really different problem. But yes, the consensus on what to support really helps. -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Aug 14 18:04:06 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 14 Aug 2015 09:04:06 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau wrote: > So this is a basic list I got w/ a few minutes of scripting, > could we define this list (or something like it) as "Python-Linux-Standard-Base-version X.Y" Then we have a tag to use on binary wheels, and a clearly defined way to know whether you can use them. My understanding is that Anaconda uses a "kinda old" version of Linux (CentOS?) -- and it seems to work OK, though it's not really all that well defined or documented. This could be a way to do about the same thing, but better defined and documented. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cournape at gmail.com Fri Aug 14 18:20:33 2015 From: cournape at gmail.com (David Cournapeau) Date: Fri, 14 Aug 2015 17:20:33 +0100 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Fri, Aug 14, 2015 at 5:04 PM, Chris Barker wrote: > On Thu, Aug 13, 2015 at 10:52 AM, David Cournapeau > wrote: > >> So this is a basic list I got w/ a few minutes of scripting, >> > > could we define this list (or somethign like it) as > "Python-Linux-Standard-Base-version X.Y" > > Then we have a tag to use on binary wheels, and clearly defined way to > know whether you can use them. > > My understanding tis that Anaconda uses a "kinda old" version of Linux > Z(CentOS?) -- and it seems to work OK, though it's not really all that well > defined or documented. > > This could be a way to do about the same thing, but better defined and > documented. > My suggestion would be to actually document this by simply providing a corresponding docker image (built through say packer). David > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Fri Aug 14 18:17:13 2015 From: steve.dower at python.org (Steve Dower) Date: Fri, 14 Aug 2015 09:17:13 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: <55CE1489.20801@python.org> On 14Aug2015 0038, Nathaniel Smith wrote: > Windows and OS X don't (reliably) have any package manager. So PyPI > *is* inevitably going to contain non-Python shared libraries or > statically linked modules or something like that. (And in fact it > already contains such things today.) I'm not sure what the alternative > would even be. Windows 10 has a package manager (http://blogs.technet.com/b/keithmayer/archive/2014/04/16/what-s-new-in-powershell-getting-started-with-oneget-in-one-line-with-powershell-5-0.aspx) but I don't think it will be particularly helpful here. The Windows model has always been to only share system libraries and each application should keep its own dependencies local. I actually like two ideas for Windows (not clear to me how well they apply on other platforms), both of which have been mentioned in the past: * PyPI packages that are *very* thin wrappers around a shared library For example, maybe "libpng" shows up on PyPI, and packages can then depend on it. It takes some care on the part of the publisher to maintain version-to-version compatibility (or maybe wheel/setup.py/.cfg grows a way to define vendored dependencies?) but this should be possible today. * "Packages" that contain shared sources One big problem on Windows is there's no standard place to put library sources, so build tools can't find them. If a package declared "build requires libpng.x.y source" then there could be tarballs "somewhere" (or even links to public version control) that have that version of the source, and the build tools can add the path references to include it. I don't have numbers, but I do know that once a C compiler is available the next easiest problem to solve is getting and referencing sources. 
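To make the "thin wrapper around a shared library" idea concrete, here is a minimal sketch of the runtime half of such a stub package, in the spirit of the hypothetical pyopenblas1 example quoted earlier in the thread; every name here (the pylibpng package, the .libs directory, the sonames) is invented for illustration:

    # Hypothetical stub package 'pylibpng': carries libpng as package
    # data and makes it loadable by extensions in other packages.
    import ctypes
    import os

    _HERE = os.path.dirname(os.path.abspath(__file__))
    _LIBDIR = os.path.join(_HERE, '.libs')

    def enable():
        """Preload the vendored library so dependent extensions resolve it."""
        if os.name == 'nt':
            # The Windows loader reuses an already-loaded DLL of the same
            # module name, so loading it by absolute path is enough.
            ctypes.WinDLL(os.path.join(_LIBDIR, 'libpng16.dll'))
        else:
            # On POSIX, RTLD_GLOBAL exposes the symbols to extension
            # modules that get loaded afterwards.
            ctypes.CDLL(os.path.join(_LIBDIR, 'libpng16.so.16'),
                        mode=ctypes.RTLD_GLOBAL)

    def get_include_dir():
        """Header directory, for packages building against the stub."""
        return os.path.join(_HERE, 'include')

    def get_library_dir():
        """Linker directory, mirroring the get_linker_directories() idea."""
        return _LIBDIR

A consumer would call pylibpng.enable() in its own __init__.py before importing its extension modules, and a setup.py could add get_include_dir() and get_library_dir() to the compiler search paths; keeping the wrapped library ABI-stable from version to version remains the publisher's job, as noted above.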
> Given that, the only situation I can see where we would ever > distribute wheels that require system blas on Linux, is if we were > able to do it alongside wheels that do not require system blas, and > pip were clever enough to reliably always pick the latter except in > cases where the system blas was actually present and working. I think something similar came up back when we were discussing SSE support in Windows wheels. I'd love to see packages be able to run system checks to determine their own platform string (maybe a pip/wheel extension?) before selecting and downloading a wheel. I think that would actually solve a lot of these issues. > This means that when you build, e.g., scipy, then you get a binary > that depends on things like the in-memory layout of numpy's internal > objects. We'd like it to be the case that when we release a new > version of numpy, pip could realize "hey, this new version says it has > an incompatible ABI that will break your currently installed version > of scipy -- I'd better fetch a new version of scipy as well, or at > least rebuild the same version I already have". Notice that at the > time scipy is built, it is not known which future version of numpy > will require a rebuild. There are a lot of ways this might work on > both the numpy and pip sides -- definitely fodder for a separate > thread -- but that's the basic problem. There was discussion about an "incompatible_with" metadata item at one point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible. From chris.barker at noaa.gov Fri Aug 14 22:16:09 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 14 Aug 2015 13:16:09 -0700 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <55CE1489.20801@python.org> References: <55CE1489.20801@python.org> Message-ID: On Fri, Aug 14, 2015 at 9:17 AM, Steve Dower wrote: > I actually like two ideas for Windows (not clear to me how well they apply > on other platforms), I think this same approach should be used for OS-X, not sure about Linux -- on Linux, you normally have "normal" ways to get libs. both of which have been mentioned in the past: > > * PyPI packages that are *very* thin wrappers around a shared library > > For example, maybe "libpng" shows up on PyPI, and packages can then depend > on it. It takes some care on the part of the publisher to maintain > version-to-version compatibility (or maybe wheel/setup.py/.cfg grows a > way to define vendored dependencies?) but this should be possible today. > except that, AFAICT, we have no way to describe wheel (or platform) dependent dependencies: i.e. "this particular binary wheel, for Windows, depends on the libPNG version x.y wheel" Though you could probably fairly easily patch that dependency into the wheel itself. But ideally, we would have a semi-standard place to put such stuff, and then the source package would depend on libPNG being there at build time, too, but only on Windows. (or maybe only on OS-X, or both, but not Linux, or...) Or just go with conda :-) -- conda packages depend on other conda packages -- not on other projects (i.e. not source, etc.). And you can do platform-dependent configuration, like dependencies. * "Packages" that contain shared sources > > One big problem on Windows is there's no standard place to put library > sources, so build tools can't find them.
If a package declared "build > requires libpng.x.y source" then there could be tarballs "somewhere" (or > even links to public version control) that have that version of the source, > and the build tools can add the path references to include it. > That would be the source equivalent of the above, and yes, I like that idea -- but again, you need a way to express platform-dependent dependencies. Though given that setup.py is python code, that's not too hard. There was discussion about an "incompatible_with" metadata item at one > point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible. > circular dependency hell! scipy depends on numpy, not the other way around -- so it needs to be clear which version of numpy a given version of scipy depends on. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Sat Aug 15 00:32:26 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Fri, 14 Aug 2015 18:32:26 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: <55CE1489.20801@python.org> Message-ID: <55CE6C7A.3010301@sdamon.com> On 8/14/2015 16:16, Chris Barker wrote: > On Fri, Aug 14, 2015 at 9:17 AM, Steve Dower > wrote: > > There was discussion about an "incompatible_with" metadata item at > one point. Could numpy include {incompatible_with: "scipy<x.y"} in such a release? Or would that not be possible. > > circular dependency hell! scipy depends on numpy, not the other way > around -- so it needs to be clear which version of numpy a given > version of scipy depends on. > > -CHB > I think a better spelling of that would be something along the lines of 'abi_version' - listing all the packages your new version of your module breaks... is a long list. - Alex W. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Mon Aug 17 16:07:17 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Mon, 17 Aug 2015 16:07:17 +0200 Subject: [Distutils] PEP for dependencies on libraries like BLAS In-Reply-To: References: Message-ID: Nathaniel Smith wrote on 13-08-15 at 08:08: > On Wed, Aug 12, 2015 at 8:10 PM, Robert Collins > wrote: >> On 13 August 2015 at 12:51, Nathaniel Smith wrote: >>> On Aug 12, 2015 16:49, "Robert Collins" wrote: >>> >>> This doesn't help if you want to declare dependencies on external, system >>> managed libraries and have those be automatically somehow provided or >>> checked for, but to me that sounds like an impossible boil-the-ocean project >>> anyway, while the above is trivial and should just work. >> Well, have a read of the draft. >> >> Its a solved problem by e.g. conda, apt, yum, nix and many others. > None of these projects allow a .deb to depend on .rpms etc. -- they > all require that they own the whole world
In the same way, if you activated a conda environment with some python dependencies, you could tell buildout to re-use one of the packages from there. This works, because buildout doesn't do virtualenv-style hard isolation. It "only" inserts the python packages it installed into the front of the sys.path. And with syseggrecipe, some system-wide installed eggs are explicitly included in sys.path. Question: could pip/virtualenv be made to accept something from the outside world? I'm mostly looking at .deb/.rpm packages here. It goes a bit against the pure isolation that virtualenv aims to provide, I know :-) a) pip wouldn't need to own the whole world anymore (in specific cases). b) you'd probably still want/need a mechanism to find out which .deb package you'd need for some system dependency. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From donald at stufft.io Mon Aug 17 16:15:46 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 17 Aug 2015 10:15:46 -0400 Subject: [Distutils] PEP for dependencies on libraries like BLAS In-Reply-To: References: Message-ID: On August 17, 2015 at 10:08:03 AM, Reinout van Rees (reinout at vanrees.org) wrote: > Nathaniel Smith schreef op 13-08-15 om 08:08: > > On Wed, Aug 12, 2015 at 8:10 PM, Robert Collins > > wrote: > >> On 13 August 2015 at 12:51, Nathaniel Smith wrote: > >>> On Aug 12, 2015 16:49, "Robert Collins" wrote: > >>> > >>> This doesn't help if you want to declare dependencies on external, system > >>> managed libraries and have those be automatically somehow provided or > >>> checked for, but to me that sounds like an impossible boil-the-ocean project > >>> anyway, while the above is trivial and should just work. > >> Well, have a read of the draft. > >> > >> Its a solved problem by e.g. conda, apt, yum, nix and many others. > > None of these projects allow a .deb to depend on .rpms etc. -- they > > all require that they own the whole world > > Would it help if our tools could "accept" already-externally installed > dependencies? > > As an example, we use syseggrecipe > (https://pypi.python.org/pypi/syseggrecipe) in buildout. You specify > some packages there (psycopg2, numpy, scipy, lxml to name some common > ones) and syseggrecipe tries to find them and adds them to buildout. So > IF you installed numpy/scipy as a debian package, you can include it in > your buildout. > > In the same way, if you activated a conda environment with some python > dependencies, you could tell buildout to re-use one of the packages from > there. > > This works, because buildout doesn't do virtualenv-style hard isolation. > It "only" inserts the python packages it installed into the front of the > sys.path. And with syseggrecipe, some system-wide installed eggs are > explicitly included in sys.path. > > > Question: could pip/virtualenv be made to accept something from the > outside world? I'm mostly looking at .deb/.rpm packages here. It goes a > bit against the pure isolation that virtualenv aims to provide, I know :-) > > a) pip wouldn't need to own the whole world anymore (in specific cases). > > b) you'd probably still want/need a mechanism to find out which .deb > package you'd need for some system dependency. > > pip already accepts things installs by not pip as long as they have standard Python metadata installed too, which most Linux distributions do. 
Virtual environments of course isolate you from the system so it isolates you from that too, but that can be disabled by using --system-site-packages. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From erik.m.bray at gmail.com Mon Aug 17 19:12:26 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Mon, 17 Aug 2015 13:12:26 -0400 Subject: [Distutils] Problem Report In-Reply-To: References: Message-ID: On Thu, Aug 13, 2015 at 4:42 AM, ??? wrote: > Dear Maintainers: > > This problem occurred when > 1. Windows platform > 2. Python is installed on non-Latin path (for example: path contains Chinese > character). > 3. try to "pip install theano" > > And I found the problem is in distutils.command.build_scripts module's > copy_scripts function, on line 106 > > executable = os.fsencode(executable) > shebang = b"#!" + executable + post_interp + b"\n" > try: > shebang.decode('utf-8') > > actually os.fsencode will encode the path into GBK encoding on windows, so > it's certain that it will fail to decode via utf-8. > > Solution: > > #executable = os.fsencode(executable) (delete this line) > executable = executable.encode('utf-8') > > Theano successfully installed after this patch. Hi, This is a bit tricky--I think, from the *nix perspective, using os.fsencode() looks like the correct approach here.
However, if > sys.getfilesystemencoding() != 'utf-8', and if the result of > os.fsencode(executable) is not decodable as utf-8, then that's going > to be a problem for the Python interpreter which begins reading a file > as utf-8 until it gets to the coding token. > > Unfortunately this is a bit contradictory--if the path to the > interpreter in the local filesystem encoding is not UTF-8 it is > impossible to parse that file in Python. On Windows this shouldn't > matter--I agree with your patch, that it should just write the shebang > line in UTF-8. However, on *nix systems it really should be using > os.fsencode, I think. > > I wonder if this was brought up in the discussion around PEP-263. I > feel like as long as the file encoding is declared to be the same as > whatever encoding was used the write the shebang line, that this > should be valid. However, the Python interpreter still tries to > interpret the shebang line as UTF-8, and hence falls over in your > case. This is unfortunate... There are a number of questions here, which I don't currently have time to dig into, I'm afraid: 1. The original post specifies Windows, so I'll stick to that. Unix is a whole other situation, and I won't cover that as I have no expertise there. But it will need reviewing by someone who does know. 2. Where is the shebang being used? I can think of at least 3 possibilities, and they are all parsed with different code. If it's written to a .py file executed by the user (via the launcher) it should be UTF-8 as that's what the launcher uses. If it's written to the embedded python script in a pip (distlib) single-file exe wrapper, it should probably also use UTF-8 as the distlib wrappers use code derived from the launcher code (I believe) and therefore probably also uses UTF-8. If it's an old-style setuptools 2-file exe wrapper (.exe and -script.py) then it should use whatever that exe requires - I have no idea what that might be, but UTF-8 is still the only really sane choice, it's just that the setuptools wrapper was written some time ago and may not have made that choice. Someone should check. 3. Long story short, use UTF-8, but you may need to check the code that interprets the shebang just to be sure. Any actual patch needs to be conditional on the OS as well (unless it turns out that UTF-8 is the right answer everywhere, which frankly I doubt...) Paul From reinout at vanrees.org Mon Aug 17 22:56:21 2015 From: reinout at vanrees.org (Reinout van Rees) Date: Mon, 17 Aug 2015 22:56:21 +0200 Subject: [Distutils] PEP for dependencies on libraries like BLAS In-Reply-To: References: Message-ID: Donald Stufft schreef op 17-08-15 om 16:15: > pip already accepts things installs by not pip as long as they have standard Python metadata installed too, which most Linux distributions do. Virtual environments of course isolate you from the system so it isolates you from that too, but that can be disabled by using ?system-site-packages. If I understand correctly, --system-site-packages results in the full amount of site-wide installed packages to be available. So pip will look at them all. What I am aiming at (and which syseggrecipe does for buildout) is to *selectively* and *explicitly* allow some site-wide packages to be available in the virtualenv. "Give me the global numpy and lxml, please!". 
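(A minimal sketch of that selective approach, outside of buildout: pick named distributions out of the system site-packages and put only their locations on sys.path. This assumes the Debian dist-packages path and setuptools' pkg_resources; it illustrates the idea and is not the syseggrecipe implementation. For packages installed flat into dist-packages, dist.location is the whole directory, so true per-package isolation needs more care than this.)

import sys
import pkg_resources

SYSTEM_SITE = '/usr/lib/python2.7/dist-packages'  # assumed Debian layout
WANTED = ['numpy', 'lxml']

# Scan only the system site-packages, without putting it on sys.path.
working_set = pkg_resources.WorkingSet([SYSTEM_SITE])
for name in WANTED:
    dist = working_set.find(pkg_resources.Requirement.parse(name))
    if dist is None:
        raise RuntimeError('%s is not installed system-wide' % name)
    # Prepend just this distribution's location, the way buildout does
    # for the eggs it manages itself.
    if dist.location not in sys.path:
        sys.path.insert(0, dist.location)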
Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From garyr at fidalgo.net Tue Aug 18 18:58:49 2015 From: garyr at fidalgo.net (garyr) Date: Tue, 18 Aug 2015 09:58:49 -0700 Subject: [Distutils] (no subject) Message-ID: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> I posted this on comp.lang.python but received no replies. I tried building the spammodule.c example described in the documentation section "Extending Python with C or C++." As shown the code compiles OK but generates a link error: LINK : error LNK2001: unresolved external symbol init_spam build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1 unresolved externals I tried changing the name of the initialization function spam_system to init_spam and removed the static declaration. This compiled and linked without errors but generated a system error when _spam was imported. I'm using Python 2.7.9. The same error occurs with Python 2.6.9. The code and the setup.py file are shown below. What do I need to do to fix this? setup.py: ----------------------------------------------------------------------------- from setuptools import setup, Extension setup(name='spam', version='0.1', description='test module', ext_modules=[Extension('_spam', ['spammodule.c'], include_dirs=[C:\Documents and Settings\Owner\Miniconda\include], )], ) sammodule.c -------------------------------------------------- #include static PyObject *SpamError; static PyObject * spam_system(PyObject *self, PyObject *args) { const char *command; int sts; if (!PyArg_ParseTuple(args, "s", &command)) return NULL; sts = system(command); if (sts < 0) { PyErr_SetString(SpamError, "System command failed"); return NULL; } return PyLong_FromLong(sts); } static PyMethodDef SpamMethods[] = { {"system", spam_system, METH_VARARGS, "Execute a shell command."}, {NULL, NULL, 0, NULL} /* Sentinel */ }; PyMODINIT_FUNC initspam(void) { PyObject *m; m = Py_InitModule("spam", SpamMethods); if (m == NULL) return; SpamError = PyErr_NewException("spam.error", NULL, NULL); Py_INCREF(SpamError); PyModule_AddObject(m, "error", SpamError); } From oscar.j.benjamin at gmail.com Tue Aug 18 21:51:06 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 18 Aug 2015 19:51:06 +0000 Subject: [Distutils] (no subject) In-Reply-To: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> References: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> Message-ID: Should the function be called init_spam rather than initspam? On Tue, 18 Aug 2015 19:19 garyr wrote: I posted this on comp.lang.python but received no replies. I tried building the spammodule.c example described in the documentation section "Extending Python with C or C++." As shown the code compiles OK but generates a link error: LINK : error LNK2001: unresolved external symbol init_spam build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1 unresolved externals I tried changing the name of the initialization function spam_system to init_spam and removed the static declaration. This compiled and linked without errors but generated a system error when _spam was imported. I'm using Python 2.7.9. The same error occurs with Python 2.6.9. The code and the setup.py file are shown below. What do I need to do to fix this? 
setup.py:
-----------------------------------------------------------------------------
from setuptools import setup, Extension

setup(name='spam',
      version='0.1',
      description='test module',
      ext_modules=[Extension('_spam', ['spammodule.c'],
                             include_dirs=[r'C:\Documents and Settings\Owner\Miniconda\include'],
                             )],
      )

spammodule.c
--------------------------------------------------
#include <Python.h>

static PyObject *SpamError;

static PyObject *
spam_system(PyObject *self, PyObject *args)
{
    const char *command;
    int sts;

    if (!PyArg_ParseTuple(args, "s", &command))
        return NULL;
    sts = system(command);
    if (sts < 0) {
        PyErr_SetString(SpamError, "System command failed");
        return NULL;
    }
    return PyLong_FromLong(sts);
}

static PyMethodDef SpamMethods[] = {
    {"system", spam_system, METH_VARARGS,
     "Execute a shell command."},
    {NULL, NULL, 0, NULL}        /* Sentinel */
};

PyMODINIT_FUNC
initspam(void)
{
    PyObject *m;

    m = Py_InitModule("spam", SpamMethods);
    if (m == NULL)
        return;

    SpamError = PyErr_NewException("spam.error", NULL, NULL);
    Py_INCREF(SpamError);
    PyModule_AddObject(m, "error", SpamError);
}

From oscar.j.benjamin at gmail.com Tue Aug 18 21:51:06 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 18 Aug 2015 19:51:06 +0000 Subject: [Distutils] (no subject) In-Reply-To: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> References: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> Message-ID: Should the function be called init_spam rather than initspam? On Tue, 18 Aug 2015 19:19 garyr wrote: I posted this on comp.lang.python but received no replies. I tried building the spammodule.c example described in the documentation section "Extending Python with C or C++." As shown the code compiles OK but generates a link error: LINK : error LNK2001: unresolved external symbol init_spam build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1 unresolved externals I tried changing the name of the initialization function spam_system to init_spam and removed the static declaration. This compiled and linked without errors but generated a system error when _spam was imported. I'm using Python 2.7.9. The same error occurs with Python 2.6.9. The code and the setup.py file are shown below. What do I need to do to fix this?
setup.py: ----------------------------------------------------------------------------- from setuptools import setup, Extension setup(name='spam', version='0.1', description='test module', ext_modules=[Extension('_spam', ['spammodule.c'], include_dirs=[r'C:\Documents and Settings\Owner\Miniconda\include'], )], ) spammodule.c -------------------------------------------------- #include <Python.h> static PyObject *SpamError; static PyObject * spam_system(PyObject *self, PyObject *args) { const char *command; int sts; if (!PyArg_ParseTuple(args, "s", &command)) return NULL; sts = system(command); if (sts < 0) { PyErr_SetString(SpamError, "System command failed"); return NULL; } return PyLong_FromLong(sts); } static PyMethodDef SpamMethods[] = { {"system", spam_system, METH_VARARGS, "Execute a shell command."}, {NULL, NULL, 0, NULL} /* Sentinel */ }; PyMODINIT_FUNC initspam(void) { PyObject *m; m = Py_InitModule("spam", SpamMethods); if (m == NULL) return; SpamError = PyErr_NewException("spam.error", NULL, NULL); Py_INCREF(SpamError); PyModule_AddObject(m, "error", SpamError); } _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Tue Aug 18 23:56:34 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 18 Aug 2015 21:56:34 +0000 Subject: [Distutils] (no subject) In-Reply-To: <5D2F9D0335944DC3ACF828F3F80AE2B2@owner59bf8d40c> References: <70964B5C03D9406FADF07D58354B8B2A@owner59bf8d40c> <5D2F9D0335944DC3ACF828F3F80AE2B2@owner59bf8d40c> Message-ID: Your extension module is called _spam in your setup.py so the function should be init_spam. On Tue, 18 Aug 2015 22:50 garyr wrote: > Not according to the Python documentation. > > See the Python documentation Extending and Embedding the Python Interpreter > / Extending Python with C or C++ / > A Simple Example: > > The method table must be passed to the interpreter in the module's > initialization function. The initialization function must be named > initname(), > where name is the name of the module, and should be the only non-static > item > defined in the module file: > > PyMODINIT_FUNC > initspam(void) > { > (void) Py_InitModule("spam", SpamMethods); > } > > I tried doing that and it crashed Python when I imported _spam > > ----- Original Message ----- > From: "Oscar Benjamin" > To: "garyr" ; > Sent: Tuesday, August 18, 2015 12:51 PM > Subject: Re: [Distutils] (no subject) > > > > Should the function be called init_spam rather than initspam? > > > > > > On Tue, 18 Aug 2015 19:19 garyr wrote: > > > > I posted this on comp.lang.python but received no replies. > > > > I tried building the spammodule.c example described in the documentation > > section "Extending Python with C or C++." As shown the code compiles OK > but > > generates a link error: > > > > LINK : error LNK2001: unresolved external symbol init_spam > > build\temp.win32-2.7\Release\_spam.lib : fatal error LNK1120: 1 > unresolved > > externals > > > > I tried changing the name of the initialization function spam_system to > > init_spam and removed the static declaration. This compiled and linked > > without errors but generated a system error when _spam was imported. > > > > I'm using Python 2.7.9. The same error occurs with Python 2.6.9. > > > > The code and the setup.py file are shown below. What do I need to do to > fix > > this? > > > > setup.py: > > ----------------------------------------------------------------------------- > > from setuptools import setup, Extension > > > > setup(name='spam', > > version='0.1', > > description='test module', > > ext_modules=[Extension('_spam', ['spammodule.c'], > > include_dirs=[r'C:\Documents and > > Settings\Owner\Miniconda\include'], > > )], > > ) > > > > spammodule.c > > -------------------------------------------------- > > #include <Python.h> > > static PyObject *SpamError; > > > > static PyObject * > > spam_system(PyObject *self, PyObject *args) > > { > > const char *command; > > int sts; > > > > if (!PyArg_ParseTuple(args, "s", &command)) > > return NULL; > > sts = system(command); > > if (sts < 0) { > > PyErr_SetString(SpamError, "System command failed"); > > return NULL; > > } > > return PyLong_FromLong(sts); > > } > > > > static PyMethodDef SpamMethods[] = { > > > > {"system", spam_system, METH_VARARGS, > > "Execute a shell command."}, > > {NULL, NULL, 0, NULL} /* Sentinel */ > > }; > > > > PyMODINIT_FUNC > > initspam(void) > > { > > PyObject *m; > > > > m = Py_InitModule("spam", SpamMethods); > > if (m == NULL) > > return; > > > > SpamError = PyErr_NewException("spam.error", NULL, NULL); > > Py_INCREF(SpamError); > > PyModule_AddObject(m, "error", SpamError); > > } > > > > > > > > > > > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Aug 20 12:05:34 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 20 Aug 2015 20:05:34 +1000 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: [Catching up on distutils-sig after travel] On 13 August 2015 at 16:08, Nathaniel Smith wrote: > It seems like a reasonable effort at solving this problem, and I guess > there are probably some people somewhere that have this problem, but > my concern is that I don't actually know any of those people. The > developers I know instead have the problem of, they want to be able to > provide a small finite number of binaries (ideally six binaries per > Python version: {32 bit, 64 bit} * {windows, osx, linux}) that > together will Just Work on 99% of end-user systems. And that's the > problem that Enthought, Continuum, etc., have been solving for years, > and which wheels already mostly solve on windows and osx, so it seems > like a reasonable goal to aim for. But I don't see how this PEP gets > us any closer to that. The key benefit from my perspective is that tools like pyp2rpm, conda skeleton, the Debian Python packaging tools, etc, will be able to automatically generate full dependency sets from upstream Python metadata. At the moment that's manual work which needs to be handled independently for each binary ecosystem, but there's no reason it has to be that way - we can do a better job of defining the source dependencies, and then hook into release-monitoring.org to automatically rebuild the downstream binaries (including adding new external dependencies if needed) whenever new upstream releases are published. Cheers, Nick.
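(A rough illustration of the kind of automated translation Nick describes -- the python-<name> naming convention and the lack of an exception table here are illustrative assumptions, not what pyp2rpm actually does:)

import pkg_resources

def rpm_requires(install_requires):
    # Map upstream Python requirements onto Fedora-style package names.
    # Real tools carry per-distro exception tables; this shows the shape.
    for req in pkg_resources.parse_requirements(install_requires):
        specs = ','.join(op + version for op, version in req.specs)
        yield ('python-%s %s' % (req.project_name.lower(), specs)).strip()

print(list(rpm_requires(['requests>=2.0', 'six'])))
# -> ['python-requests >=2.0', 'python-six']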
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From wes.turner at gmail.com Thu Aug 20 19:15:53 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 20 Aug 2015 12:15:53 -0500 Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support) In-Reply-To: References: Message-ID: On Aug 20, 2015 5:05 AM, "Nick Coghlan" wrote: > > [Catching up on distutils-sig after travel] > > On 13 August 2015 at 16:08, Nathaniel Smith wrote: > > It seems like a reasonable effort at solving this problem, and I guess > > there are probably some people somewhere that have this problem, but > > my concern is that I don't actually know any of those people. The > > developers I know instead have the problem of, they want to be able to > > provide a small finite number of binaries (ideally six binaries per > > Python version: {32 bit, 64 bit} * {windows, osx, linux}) that > > together will Just Work on 99% of end-user systems. And that's the > > problem that Enthought, Continuum, etc., have been solving for years, > > and which wheels already mostly solve on windows and osx, so it seems > > like a reasonable goal to aim for. But I don't see how this PEP gets > > us any closer to that. > > The key benefit from my perspective is that tools like pyp2rpm, conda > skeleton, the Debian Python packaging tools, etc, will be able to > automatically generate full dependency sets automatically from > upstream Python metadata. > > At the moment that's manual work which needs to be handled > independently for each binary ecosystem, but there's no reason it has > to be that way - we can do a better job of defining the source > dependencies, and then hook into release-monitoring.org to > automatically rebuild the downstream binaries (including adding new > external dependencies if needed) whenever new upstream releases are > published. JSON (JSON-LD) would likely be most platform compatible (and designed for interoperable graph nodes and edges with attributes). JSON-LD does not require a specific library iff the @context is not necessary. Notes about JSON-LD and interoperable software package metadata: https://mail.python.org/pipermail/distutils-sig/2015-April/026108.html > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Thu Aug 20 20:26:44 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Thu, 20 Aug 2015 14:26:44 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Fri, Aug 14, 2015 at 3:38 AM, Nathaniel Smith wrote: > On Thu, Aug 13, 2015 at 7:27 PM, Robert Collins > wrote: > > On 14 August 2015 at 14:14, Nathaniel Smith wrote: > > ...> > >> Of course if you have an alternative proposal than I'm all ears :-). > > > > Yeah :) > > > > So, I want to dedicate some time to contributing to this discussion > > meaningfully, but I can't for the next few weeks - Jury duty, Kiwi > > PyCon and polishing up the PEP's I'm already committed to... > > Totally hear that... it's not super urgent anyway. We should make it > clear to Nate -- hi Nate! -- that there's no reason that solving this > problem should block putting together the basic > binary-compatibility.cfg infrastructure. > Hi! 
I've been working on bits of this as I've also been working on, as a test case, building out psycopg2 wheels for lots of different popular distros on i386 and x86_64, UCS2 and UCS4, under Docker. As a result, it's clear that my Linux distro tagging work in wheel's pep425tags has some issues. I've been adding to this list of distributions but it's going to need a lot more work: https://bitbucket.org/pypa/wheel/pull-requests/54/soabi-2x-platform-os-distro-support-for/diff#Lwheel/pep425tags.pyT61 So I need a bit of guidance here. I've arbitrarily chosen some tags - `rhel` for example - and wonder if, like PEP 425's mapping of Python implementations to tags, a defined mapping of Linux distributions to shorthand tags is necessary (of course this would be difficult to keep up to date, but binary-compatibility.cfg would make it less relevant in the long run). Alternatively, I could simply trust and normalize platform.linux_distribution()[0], but this means that the platform tag on RHEL would be something like `linux_x86_64_red_hat_enterprise_linux_server_6_5` Finally, by *default*, the built platform tag will include whatever version information is provided in platform.linux_distribution()[1], but the "major-only" version is also included in the list of platforms, so a default debian tag might look like `linux_x86_64_debian_7_8`, but it would be possible to build (and install) `linux_x86_64_debian_7`. However, it may be the case that the default (at least for building, maybe not for installing) ought to be the major-only tag since it should really be ABI compatible with any minor release of that distro. --nate > > I think the approach of being able to ask the *platform* for things > > needed to build-or-use known artifacts is going to enable a bunch of > > different answers in this space. I'm much more enthusiastic about that > > than doing anything that ends up putting PyPI in competition with the > > distribution space. > > > > My criteria for success are: > > > > - there's *a* migration path from what we have today to what we > > propose. Doesn't have to be good, just exist. > > > > - authors of scipy, numpy, cryptography etc can upload binary wheels > > for *linux, Mac OSX and Windows 32/64 in a safe and sane way > > So the problem is that, IMO, "sane" here means "not building a > separate wheel for every version of distro on distrowatch". So I can > see two ways to do that: > - my suggestion that we just pick a particular highly-compatible > distro like centos 5 to build against, and make a standard list of > which libraries can be assumed to be provided > - the PEP-497-or-number-to-be-determined approach, in which we still > have to pick a highly-compatible distro like centos 5 to build > against, but each wheel has a list of which libraries from that distro > it is counting on being provided > > I can see the appeal of the latter approach, since if you want to do > the former approach right you need to be careful about exactly which > libraries you're assuming are present, etc. They both could work. But > in practice, you still have to pick which distro you are going to use > to build, and you still have to say "when I say I need libblas.so.1, > what I mean is that I need a file that is ABI-compatible with the > version of libblas.so.1 that existed in centos 5 exactly, not any > other libblas.so.1". 
And then in practice not every distro will have > such a thing, so for a project like numpy that wants to make things > easy for a wide variety of users, we'll still only be able to take > advantage of external dependencies for libraries that are effectively > universally available and compatible anyway and end up vendoring the > rest... so in the end basically we'd be distributing exactly the same > wheels under either of these proposals, just the latter requires a > much much more complicated scheme for metadata and installation. > > And in practice I think the main alternative possibility if we don't > come up with some solid guidance for how packages can build > works-everywhere-wheels is that we'll see wheels for > latest-version-of-Ubuntu-only, plus the occasional smattering of other > distros, varying randomly on a project-by-project basis. Which would > suck. > > > - we don't need to do things like uploading wheels containing > > non-Python shared libraries, nor upload statically linked modules > > > > > > In fact, I think uploading regular .so files is just a huge heartache > > waiting to happen, so I'm almost inclined to add: > > > > - we don't support uploading external non-Python libraries [ without > > prejudice for changing our minds in the future] > > Windows and OS X don't (reliably) have any package manager. So PyPI > *is* inevitably going to contain non-Python shared libraries or > statically linked modules or something like that. (And in fact it > already contains such things today.) I'm not sure what the alternative > would even be. > > This also means that projects like numpy are already forced to accept > that we're on the hook for security updates in our dependencies etc., > so doing it on Linux too is not really that scary. > > Oh, I just thought of another issue: an extremely important > requirement for numpy/scipy/etc. wheels is that they be reliably > installable without root access. People *really* care about this: > missing your grant deadline b/c you can't upgrade some package to fix > some showstopper bug b/c university IT support is not answering calls > at midnight on Sunday = rather poor UX. > > Given that, the only situation I can see where we would ever > distribute wheels that require system blas on Linux, is if we were > able to do it alongside wheels that do not require system blas, and > pip were clever enough to reliably always pick the latter except in > cases where the system blas was actually present and working. > > > There was a post that referenced a numpy ABI, dunno if it was in this > > thread - I need to drill down into that, because I don't understand > > why that's not a regular version resolution problem, unlike the Python > > ABI, which pip can't install [and shouldn't be able to!] > > The problem is that numpy is very unusual among Python packages in > that it exposes a large and widely-used *C* API/ABI: > > http://docs.scipy.org/doc/numpy/reference/c-api.html > > This means that when you build, e.g., scipy, then you get a binary > that depends on things like the in-memory layout of numpy's internal > objects. We'd like it to be the case that when we release a new > version of numpy, pip could realize "hey, this new version says it has > an incompatible ABI that will break your currently installed version > of scipy -- I'd better fetch a new version of scipy as well, or at > least rebuild the same version I already have". Notice that at the > time scipy is built, it is not known which future version of numpy > will require a rebuild.
There are a lot of ways this might work on > both the numpy and pip sides -- definitely fodder for a separate > thread -- but that's the basic problem. > > -n > > -- > Nathaniel J. Smith -- http://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Aug 20 20:38:49 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 20 Aug 2015 18:38:49 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: We've also considered using truncated SHA-256 hashes. Then the tag would not be readable, but it would always be the same length. On Thu, Aug 20, 2015 at 2:27 PM Nate Coraor wrote: > On Fri, Aug 14, 2015 at 3:38 AM, Nathaniel Smith wrote: > >> On Thu, Aug 13, 2015 at 7:27 PM, Robert Collins >> wrote: >> > On 14 August 2015 at 14:14, Nathaniel Smith wrote: >> > ...> >> >> Of course if you have an alternative proposal than I'm all ears :-). >> > >> > Yeah :) >> > >> > So, I want to dedicate some time to contributing to this discussion >> > meaningfully, but I can't for the next few weeks - Jury duty, Kiwi >> > PyCon and polishing up the PEP's I'm already committed to... >> >> Totally hear that... it's not super urgent anyway. We should make it >> clear to Nate -- hi Nate! -- that there's no reason that solving this >> problem should block putting together the basic >> binary-compatibility.cfg infrastructure. >> > > Hi! > > I've been working on bits of this as I've also been working on, as a test > case, building out psycopg2 wheels for lots of different popular distros on > i386 and x86_64, UCS2 and UCS4, under Docker. As a result, it's clear that > my Linux distro tagging work in wheel's pep425tags has some issues. I've > been adding to this list of distributions but it's going to need a lot more > work: > > > https://bitbucket.org/pypa/wheel/pull-requests/54/soabi-2x-platform-os-distro-support-for/diff#Lwheel/pep425tags.pyT61 > > So I need a bit of guidance here. I've arbitrarily chosen some tags - > `rhel` for example - and wonder if, like PEP 425's mapping of Python > implementations to tags, a defined mapping of Linux distributions to > shorthand tags is necessary (of course this would be difficult to keep up > to date, but binary-compatibility.cfg would make it less relevant in the > long run). > > Alternatively, I could simply trust and normalize > platform.linux_distribution()[0], but this means that the platform tag on > RHEL would be something like > `linux_x86_64_red_hat_enterprise_linux_server_6_5` > > Finally, by *default*, the built platform tag will include whatever > version information is provided in platform.linux_distribution()[1], but > the "major-only" version is also included in the list of platforms, so a > default debian tag might look like `linux_x86_64_debian_7_8`, but it would > be possible to build (and install) `linux_x86_64_debian_7`. However, it may > be the case that the default (at least for building, maybe not for > installing) ought to be the major-only tag since it should really be ABI > compatible with any minor release of that distro. > > --nate > > > >> > I think the approach of being able to ask the *platform* for things >> > needed to build-or-use known artifacts is going to enable a bunch of >> > different answers in this space. I'm much more enthusiastic about that >> > than doing anything that ends up putting PyPI in competition with the >> > distribution space. 
>> > >> > My criteria for success are: >> > >> > - there's *a* migration path from what we have today to what we >> > propose. Doesn't have to be good, just exist. >> > >> > - authors of scipy, numpy, cryptography etc can upload binary wheels >> > for *linux, Mac OSX and Windows 32/64 in a safe and sane way >> >> So the problem is that, IMO, "sane" here means "not building a >> separate wheel for every version of distro on distrowatch". So I can >> see two ways to do that: >> - my suggestion that we just pick a particular highly-compatible >> distro like centos 5 to build against, and make a standard list of >> which libraries can be assumed to be provided >> - the PEP-497-or-number-to-be-determined approach, in which we still >> have to pick a highly-compatible distro like centos 5 to build >> against, but each wheel has a list of which libraries from that distro >> it is counting on being provided >> >> I can see the appeal of the latter approach, since if you want to do >> the former approach right you need to be careful about exactly which >> libraries you're assuming are present, etc. They both could work. But >> in practice, you still have to pick which distro you are going to use >> to build, and you still have to say "when I say I need libblas.so.1, >> what I mean is that I need a file that is ABI-compatible with the >> version of libblas.so.1 that existed in centos 5 exactly, not any >> other libblas.so.1". And then in practice not every distro will have >> such a thing, so for a project like numpy that wants to make things >> easy for a wide variety of users, we'll still only be able to take >> advantage of external dependencies for libraries that are effectively >> universally available and compatible anyway and end up vendoring the >> rest... so in the end basically we'd be distributing exactly the same >> wheels under either of these proposals, just the latter requires a >> much much more complicated scheme for metadata and installation. >> >> And in practice I think the main alternative possibility if we don't >> come up with some solid guidance for how packages can build >> works-everywhere-wheels is that we'll see wheels for >> latest-version-of-Ubuntu-only, plus the occasional smattering of other >> distros, varying randomly on a project-by-project basis. Which would >> suck. >> >> > - we don't need to do things like uploading wheels containing >> > non-Python shared libraries, nor upload statically linked modules >> > >> > >> > In fact, I think uploading regular .so files is just a huge heartache >> > waiting to happen, so I'm almost inclined to add: >> > >> > - we don't support uploading external non-Python libraries [ without >> > prejudice for changing our minds in the future] >> >> Windows and OS X don't (reliably) have any package manager. So PyPI >> *is* inevitably going to contain non-Python shared libraries or >> statically linked modules or something like that. (And in fact it >> already contains such things today.) I'm not sure what the alternative >> would even be. >> >> This also means that projects like numpy are already forced to accept >> that we're on the hook for security updates in our dependencies etc., >> so doing it on Linux too is not really that scary. >> >> Oh, I just thought of another issue: an extremely important >> requirement for numpy/scipy/etc. wheels is that they be reliably >> installable without root access.
People *really* care about this: >> missing your grant deadline b/c you can't upgrade some package to fix >> some showstopper bug b/c university IT support is not answering calls >> at midnight on Sunday = rather poor UX. >> >> Given that, the only situation I can see where we would ever >> distribute wheels that require system blas on Linux, is if we were >> able to do it alongside wheels that do not require system blas, and >> pip were clever enough to reliably always pick the latter except in >> cases where the system blas was actually present and working. >> >> > There was a post that referenced a numpy ABI, dunno if it was in this >> > thread - I need to drill down into that, because I don't understand >> > why that's not a regular version resolution problem, unlike the Python >> > ABI, which pip can't install [and shouldn't be able to!] >> >> The problem is that numpy is very unusual among Python packages in >> that it exposes a large and widely-used *C* API/ABI: >> >> http://docs.scipy.org/doc/numpy/reference/c-api.html >> >> This means that when you build, e.g., scipy, then you get a binary >> that depends on things like the in-memory layout of numpy's internal >> objects. We'd like it to be the case that when we release a new >> version of numpy, pip could realize "hey, this new version says it has >> an incompatible ABI that will break your currently installed version >> of scipy -- I'd better fetch a new version of scipy as well, or at >> least rebuild the same version I already have". Notice that at the >> time scipy is built, it is not known which future version of numpy >> will require a rebuild. There are a lot of ways this might work on >> both the numpy and pip sides -- definitely fodder for a separate >> thread -- but that's the basic problem. >> >> -n >> >> -- >> Nathaniel J. Smith -- http://vorpus.org >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Aug 20 21:14:01 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 20 Aug 2015 21:14:01 +0200 Subject: [Distutils] Working toward Linux wheel support References: Message-ID: <20150820211401.649801c0@fsol> On Thu, 20 Aug 2015 14:26:44 -0400 Nate Coraor wrote: > > So I need a bit of guidance here. I've arbitrarily chosen some tags - > `rhel` for example - and wonder if, like PEP 425's mapping of Python > implementations to tags, a defined mapping of Linux distributions to > shorthand tags is necessary (of course this would be difficult to keep up > to date, but binary-compatibility.cfg would make it less relevant in the > long run). > > Alternatively, I could simply trust and normalize > platform.linux_distribution()[0], In practice, the `platform` module does not really keep up to date with evolution in the universe of Linux distributions. Regards Antoine. From donald at stufft.io Thu Aug 20 21:19:46 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 20 Aug 2015 15:19:46 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On August 20, 2015 at 2:40:04 PM, Daniel Holth (dholth at gmail.com) wrote: > We've also considered using truncated SHA-256 hashes. Then the tag would > not be readable, but it would always be the same length.
I think it'd be a problem where we can't reverse the SHA-256 operation, so you could no longer inspect a wheel to determine what platforms it supports based on the filename, you would only ever be able to determine if it matches a particular platform. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Thu Aug 20 21:22:58 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 20 Aug 2015 19:22:58 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run". On Thu, Aug 20, 2015 at 3:19 PM Donald Stufft wrote: > On August 20, 2015 at 2:40:04 PM, Daniel Holth (dholth at gmail.com) wrote: > > We've also considered using truncated SHA-256 hashes. Then the tag would > > not be readable, but it would always be the same length. > > > > I think it'd be a problem where we can't reverse the SHA-256 operation, so > you could no longer inspect a wheel to determine what platforms it supports > based on the filename, you would only ever be able to determine if it > matches a particular platform. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Aug 20 21:22:58 2015 From: dholth at gmail.com (Daniel Holth) Date: Thu, 20 Aug 2015 19:22:58 +0000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: If you need that for some reason just put the longer information in the metadata, inside the WHEEL file for example. Surely "does it work on my system" dominates, as opposed to "I have a wheel with this mnemonic tag, now let me install debian 5 so I can get it to run". On Thu, Aug 20, 2015 at 3:19 PM Donald Stufft wrote: > On August 20, 2015 at 2:40:04 PM, Daniel Holth (dholth at gmail.com) wrote: > > We've also considered using truncated SHA-256 hashes. Then the tag would > > not be readable, but it would always be the same length. > > > > I think it'd be a problem where we can't reverse the SHA-256 operation, so > you could no longer inspect a wheel to determine what platforms it supports > based on the filename, you would only ever be able to determine if it > matches a particular platform. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Aug 20 21:25:15 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 20 Aug 2015 15:25:15 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth at gmail.com) wrote: > If you need that for some reason just put the longer information in the > metadata, inside the WHEEL file for example. Surely "does it work on my > system" dominates, as opposed to "I have a wheel with this mnemonic tag, > now let me install debian 5 so I can get it to run". > > It's less about "now let me install Debian 5" and more like tooling that doesn't run *on* the platform but which needs to make decisions based on what platform a wheel is built for. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From nate at bx.psu.edu Thu Aug 20 21:39:09 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Thu, 20 Aug 2015 15:39:09 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On Thu, Aug 20, 2015 at 3:25 PM, Donald Stufft wrote: > > On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth at gmail.com) wrote: > > If you need that for some reason just put the longer information in the > > metadata, inside the WHEEL file for example. Surely "does it work on my > > system" dominates, as opposed to "I have a wheel with this mnemonic tag, > > now let me install debian 5 so I can get it to run". > > > > > > It's less about "now let me install Debian 5" and more like tooling that > doesn't run *on* the platform but which needs to make decisions based on > what platform a wheel is built for. > This makes binary-compatibility.cfg much more difficult, however. There'd still have to be a maintained list of "platform" to hash.
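(Daniel's fixed-length idea and Donald's objection, in a short sketch: the tag is trivial to derive, but one-way -- you can test a candidate platform against it, not read the platform back out of it. The linux_ prefix and the 8-character truncation are arbitrary choices for illustration.)

import hashlib

def hashed_platform_tag(readable_tag, length=8):
    # Same length for every platform, regardless of distro name length.
    digest = hashlib.sha256(readable_tag.encode('utf-8')).hexdigest()
    return 'linux_' + digest[:length]

print(hashed_platform_tag('linux_x86_64_red_hat_enterprise_linux_server_6_5'))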
--nate > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nate at bx.psu.edu Thu Aug 20 21:40:57 2015 From: nate at bx.psu.edu (Nate Coraor) Date: Thu, 20 Aug 2015 15:40:57 -0400 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: <20150820211401.649801c0@fsol> References: <20150820211401.649801c0@fsol> Message-ID: On Thu, Aug 20, 2015 at 3:14 PM, Antoine Pitrou wrote: > On Thu, 20 Aug 2015 14:26:44 -0400 > Nate Coraor wrote: > > > > So I need a bit of guidance here. I've arbitrarily chosen some tags - > > `rhel` for example - and wonder if, like PEP 425's mapping of Python > > implementations to tags, a defined mapping of Linux distributions to > > shorthand tags is necessary (of course this would be difficult to keep up > > to date, but binary-compatibility.cfg would make it less relevant in the > > long run). > > > > Alternatively, I could simply trust and normalize > > platform.linux_distribution()[0], > > In practice, the `platform` module does not really keep up to date with > evolution in the universe of Linux distributions. > Understandable, although so far it's doing a pretty good job: ('Red Hat Enterprise Linux Server', '6.5', 'Santiago') ('CentOS', '6.7', 'Final') ('CentOS Linux', '7.1.1503', 'Core') ('Scientific Linux', '6.2', 'Carbon') ('debian', '6.0.10', '') ('debian', '7.8', '') ('debian', '8.1', '') ('debian', 'stretch/sid', '') ('Ubuntu', '12.04', 'precise') ('Ubuntu', '14.04', 'trusty') ('Fedora', '21', 'Twenty One') ('SUSE Linux Enterprise Server ', '11', 'x86_64') ('Gentoo Base System', '2.2', '') platform.linux_distribution(full_distribution_name=False) might be nice but it made some bad assumptions, e.g. on Scientific Linux it returned the platform as 'redhat'. --nate > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Aug 20 21:58:07 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 21 Aug 2015 07:58:07 +1200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 21 August 2015 at 07:25, Donald Stufft wrote: > > On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth at gmail.com) wrote: >> If you need that for some reason just put the longer information in the >> metadata, inside the WHEEL file for example. Surely "does it work on my >> system" dominates, as opposed to "I have a wheel with this mnemonic tag, >> now let me install debian 5 so I can get it to run". >> >> > > It?s less about ?now let me install Debian 5? and more like tooling that doesn?t run *on* the platform but which needs to make decisions based on what platform a wheel is built for. Cramming that into the file name is a mistake IMO. Make it declarative data, make it indexable, and index it. We can do that locally as much as via the REST API. 
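(For reference, the normalisation Nate describes earlier can be sketched in a few lines; this mirrors the tags quoted above but is an illustration, not the actual pep425tags code:)

import platform
import re

def distro_platform_tag(arch='x86_64', major_only=False):
    # Collapse platform.linux_distribution() output into a PEP 425-ish tag.
    name, version, _ = platform.linux_distribution()
    if major_only:
        version = version.split('.')[0]
    tag = 'linux_%s_%s_%s' % (arch, name, version)
    return re.sub(r'[^a-z0-9_]+', '_', tag.lower()).strip('_')

# e.g. 'linux_x86_64_debian_7_8', or 'linux_x86_64_debian_7' with major_only=True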
That btw is why the draft for referencing external dependencies specifies file names (because file names give an ABI in the context of a platform) - but we do need to identify the platform, and platform.distribution should be good enough for that (or perhaps we start depending on lsb-release for detection). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From solipsis at pitrou.net Thu Aug 20 21:51:16 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 20 Aug 2015 21:51:16 +0200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: <20150820211401.649801c0@fsol> Message-ID: <20150820215116.350775ef@fsol> On Thu, 20 Aug 2015 15:40:57 -0400 Nate Coraor wrote: > > > > In practice, the `platform` module does not really keep up to date with > > evolution in the universe of Linux distributions. > > > > Understandable, although so far it's doing a pretty good job: Hmm, perhaps that one just parses /etc/lsb-release, then :-) Regards Antoine. From ncoghlan at gmail.com Fri Aug 21 08:51:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 21 Aug 2015 16:51:14 +1000 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: On 21 August 2015 at 05:58, Robert Collins wrote: > On 21 August 2015 at 07:25, Donald Stufft wrote: >> >> On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth at gmail.com) wrote: >>> If you need that for some reason just put the longer information in the >>> metadata, inside the WHEEL file for example. Surely "does it work on my >>> system" dominates, as opposed to "I have a wheel with this mnemonic tag, >>> now let me install debian 5 so I can get it to run". >>> >>> >> >> It's less about "now let me install Debian 5" and more like tooling that doesn't run *on* the platform but which needs to make decisions based on what platform a wheel is built for. > > Cramming that into the file name is a mistake IMO. > > Make it declarative data, make it indexable, and index it. We can do > that locally as much as via the REST API. > > That btw is why the draft for referencing external dependencies > specifies file names (because file names give an ABI in the context of > a platform) - but we do need to identify the platform, and > platform.distribution should be good enough for that (or perhaps we > start depending on lsb-release for detection). LSB has too much stuff in it, so most distros aren't LSB compliant out of the box - you have to install extra packages. /etc/os-release is a better option: http://www.freedesktop.org/software/systemd/man/os-release.html My original concern with using that was that it *over*specifies the distro (e.g. not only do CentOS and RHEL releases show up as different platforms, but so do X.Y releases within a series), but the binary-compatibility.txt idea resolves that issue, since a derived distro can explicitly identify itself as binary compatible with its upstream and be able to use the corresponding wheel files. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mal at egenix.com Fri Aug 21 10:03:47 2015 From: mal at egenix.com (M.-A.
Lemburg) Date: Fri, 21 Aug 2015 10:03:47 +0200 Subject: [Distutils] Working toward Linux wheel support In-Reply-To: References: Message-ID: <55D6DB63.5080507@egenix.com> On 21.08.2015 08:51, Nick Coghlan wrote: > On 21 August 2015 at 05:58, Robert Collins wrote: >> On 21 August 2015 at 07:25, Donald Stufft wrote: >>> >>> On August 20, 2015 at 3:23:09 PM, Daniel Holth (dholth at gmail.com) wrote: >>>> If you need that for some reason just put the longer information in the >>>> metadata, inside the WHEEL file for example. Surely "does it work on my >>>> system" dominates, as opposed to "I have a wheel with this mnemonic tag, >>>> now let me install debian 5 so I can get it to run". >>>> >>>> >>> >>> It's less about "now let me install Debian 5" and more like tooling that doesn't run *on* the platform but which needs to make decisions based on what platform a wheel is built for. >> >> Cramming that into the file name is a mistake IMO. Agreed. IMO, the file name should really just be a hint to what's in the box and otherwise just serve the main purpose of being unique for whatever the platform needs are. You might be interested in the approach we've chosen for our prebuilt packages when used with our Python package web installer: Instead of parsing file names, we use a tag file for each package, which maps a set of tags to the URLs of the distribution files. The web installer takes care of determining the right distribution file to download by looking at those tags, not by looking at the file name. Since tags are very flexible, and, most importantly, extensible, this approach has allowed us to add new differentiations to the system without changing the basic architecture. Here's a talk on the installer architecture I gave at PyCon UK 2014: http://www.egenix.com/library/presentations/PyCon-UK-2014-Python-Web-Installer/ This architecture was born out of the need to support more platforms than eggs, wheels, etc. currently support. We had previously tried to use the file name approach and get setuptools to play along, but this failed. The prebuilt distribution files still use a variant of this to make the file names unique, but we've stopped putting more energy into getting those to work with setuptools, since the tags allow for a much more flexible approach than file names. We currently support Windows, Linux, FreeBSD and Mac OS X. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 21 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2015-08-19: Released mxODBC 3.3.5 ... http://egenix.com/go82 2015-08-22: FrOSCon 2015 ... tomorrow ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math.
Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From brett at python.org Fri Aug 21 19:41:31 2015
From: brett at python.org (Brett Cannon)
Date: Fri, 21 Aug 2015 17:41:31 +0000
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

On Thu, 20 Aug 2015 at 10:16 Wes Turner wrote:
>
> On Aug 20, 2015 5:05 AM, "Nick Coghlan" wrote:
> >
> > [Catching up on distutils-sig after travel]
> >
> > On 13 August 2015 at 16:08, Nathaniel Smith wrote:
> > > It seems like a reasonable effort at solving this problem, and I guess
> > > there are probably some people somewhere that have this problem, but
> > > my concern is that I don't actually know any of those people. The
> > > developers I know instead have the problem of, they want to be able to
> > > provide a small finite number of binaries (ideally six binaries per
> > > Python version: {32 bit, 64 bit} * {windows, osx, linux}) that
> > > together will Just Work on 99% of end-user systems. And that's the
> > > problem that Enthought, Continuum, etc., have been solving for years,
> > > and which wheels already mostly solve on windows and osx, so it seems
> > > like a reasonable goal to aim for. But I don't see how this PEP gets
> > > us any closer to that.
> >
> > The key benefit from my perspective is that tools like pyp2rpm, conda
> > skeleton, the Debian Python packaging tools, etc, will be able to
> > automatically generate full dependency sets automatically from
> > upstream Python metadata.
> >
> > At the moment that's manual work which needs to be handled
> > independently for each binary ecosystem, but there's no reason it has
> > to be that way - we can do a better job of defining the source
> > dependencies, and then hook into release-monitoring.org to
> > automatically rebuild the downstream binaries (including adding new
> > external dependencies if needed) whenever new upstream releases are
> > published.
>
> JSON (JSON-LD) would likely be most platform compatible (and designed
> for interoperable graph nodes and edges with attributes).
>
> JSON-LD does not require a specific library iff the @context is not
> necessary.
>
> Notes about JSON-LD and interoperable software package metadata:
> https://mail.python.org/pipermail/distutils-sig/2015-April/026108.html

What does JSON-LD have to do with this conversation, Wes? No discussion of
implementation has even begun, let alone worrying about data formats. This
is purely a discussion of problem space and what needs to be solved and is
not grounded in concrete design yet.

From wes.turner at gmail.com Fri Aug 21 20:30:31 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 21 Aug 2015 13:30:31 -0500
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

On Aug 21, 2015 12:41 PM, "Brett Cannon" wrote:
> What does JSON-LD have to do with this conversation, Wes? No discussion
> of implementation has even begun, let alone worrying about data formats.
> This is purely a discussion of problem space and what needs to be solved
> and is not grounded in concrete design yet.

Really?
Why would a language designed for graphs be appropriate for expressing
graphs and constraints?

The problem is: setuptools packages cannot declare dependency edges to
things that are not Python packages; and, basically portage ebuild USE
flags for attributes.

Now, as ever, would be a good time to learn to write a JSONLD @context
(for the existing fragmented packaging system standards (that often
require code execution to read/generate JSON metadata (because this is
decidable))).

From wes.turner at gmail.com Fri Aug 21 21:27:42 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 21 Aug 2015 14:27:42 -0500
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

On Fri, Aug 21, 2015 at 1:30 PM, Wes Turner wrote:
> The problem is: setuptools packages cannot declare dependency edges to
> things that are not Python packages; and, basically portage ebuild USE
> flags for attributes.

I guess what I'm trying to say is:

* "why is this packaging metadata split?"
  * shouldn't this all be in setup.py

* couldn't we generate a proper RDF graph
  from each of the latest JSONLD serializations
  (e.g. https://pypi.python.org/pypi//jsonld)

* what is missing?
  * install_requires_pkgs_apt
    install_requires_pkgs = { "apt": [...], "yum": [...], "dnf": [...], "aur": [....] }
  * extras_require_pkgs = {
        "extraname": { "apt": [...], "yum": [...], "dnf": [...], "aur": [....] },
        "otherextraname": { "apt": [...], "yum": [...], "dnf": [...], "aur": [....] } }
  * build flag schema
    buildflags_schema={"numpy17":"bool"}
    buildflags=[{"numpy17"}]

Because, from a maintainer/reproducibility standpoint,
how do I know that all of these packages (1) do (2) could exist?

* (3) SHOULD exist: tox, Docker builds

So, is JSON-LD necessary to add extra attributes to setup.py? Nope.

Would it be a good time to norm with a JSON-LD @context and determine how
to specify package groupings like [salt grains] without salt, in
setuptools? Or just be declarative (e.g. with Portage and Conda (and
Pip/Peep, inevitably)).
From wes.turner at gmail.com Fri Aug 21 21:34:51 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 21 Aug 2015 14:34:51 -0500
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

Something like this for packages that work with a default install would be
helpful:

From metadata:

* https://python3wos.appspot.com/
* http://djangowos.com/
* http://pythonwheels.com/

Additional package build scripts (and managed e.g. BLAS, numpy ABI
compatibility):

* http://docs.continuum.io/anaconda/pkg-docs
* https://www.enthought.com/products/canopy/package-index/

Additional attributes that would be useful in setup.py and JSON:

* [ ] Add conda and canopy as pkgmgrstrs: { "apt": [...], ..., "conda": [ ], "canopy": [ ] }

From wes.turner at gmail.com Fri Aug 21 22:28:13 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Fri, 21 Aug 2015 15:28:13 -0500
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

On Fri, Aug 21, 2015 at 2:34 PM, Wes Turner wrote:
> Additional attributes that would be useful in setup.py and JSON:
>
> * [ ] Add conda and canopy as pkgmgrstrs: { "apt": [...], ..., "conda": [ ], "canopy": [ ] }

* [ ] Add ``packages`` and ``namespace_packages`` attributes from setup.py
      to pydist.json / metadata.json
* [ ] BUG,UBY: detect collisions between installed packages ((n))
  * how is this relevant to [BLAS] package metadata?
  * this is relevant to a broader PEP for Python package metadata ("3.0?"),
    apologies

From ncoghlan at gmail.com Sat Aug 22 12:33:24 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 22 Aug 2015 20:33:24 +1000
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
In-Reply-To: References: Message-ID:

On 22 August 2015 at 05:27, Wes Turner wrote:
> I guess what I'm trying to say is:
>
> * "why is this packaging metadata split?"
>   * shouldn't this all be in setup.py
>
> * couldn't we generate a proper RDF graph
>   from each of the latest JSONLD serializations
>   (e.g. https://pypi.python.org/pypi//jsonld)

We *could* do just about anything, but there's a reason I decided it was
reasonable to put work on the overall metadata 2.0 spec on indefinite
hiatus: given the difficulty of getting folks to upgrade their build and
deployment pipelines, we're currently still vastly better off finding
smaller, more incremental steps that move the ecosystem forward, rather
than engaging in a wholesale speculative redesign before we have any idea
where folks would find the time to implement those changes not only in the
core components (pip, PyPI, setuptools), but also throughout the wider
ecosystem.

As a result, at least until Warehouse is deployed to production, and we
have TUF-based package signing available, it makes far more sense for us
to pursue smaller tactical fixes that effectively address well-known
problems with a minimum of development effort.

Patience is one of the hardest development skills to learn (I regularly
struggle with it myself), but when we don't apply ourselves to that task,
not only do we significantly increase the likelihood of burning ourselves
out, we're also likely to seriously annoy the folks around us. Personal
prioritisation of effort is already difficult, and adding more people in a
collaborative prioritisation process certainly doesn't make that any
easier :)

Regards,
Nick.
--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From solipsis at pitrou.net Sat Aug 22 21:22:41 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 22 Aug 2015 21:22:41 +0200
Subject: [Distutils] PEP for dependencies on libraries like BLAS (was: Re: Working toward Linux wheel support)
References: Message-ID: <20150822212241.7d974027@fsol>

On Sat, 22 Aug 2015 20:33:24 +1000
Nick Coghlan wrote:
>
> Patience is one of the hardest development skills to learn (I
> regularly struggle with it myself), but when we don't apply ourselves
> to that task, not only do we significantly increase the likelihood of
> burning ourselves out, we're also likely to seriously annoy the folks
> around us.

Note the etymology of "patience" ;-)

Regards

Antoine.

From ncoghlan at gmail.com Mon Aug 24 02:19:18 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 24 Aug 2015 10:19:18 +1000
Subject: [Distutils] Docker Content Trust and PyPI package signing
Message-ID:

Hi folks,

The recent Docker 1.8 release was the first one to include their new
content signing system, which is described well in this post:
https://blog.docker.com/2015/08/content-trust-docker-1-8/

The reason I bring that up here is because the Docker Content Trust
system is based on The Update Framework, which is the same system
we've been exploring for PyPI package signing in PEPs 458 and 480.

The part I particularly like is the way they have handled the trust
establishment process for content signing: they use a "trust on first
use" model by default, similar to that used in SSH. This means there
is still a reliance on HTTPS and the CA system, but only for the task
of bootstrapping TUF in a way that allows new clients to obtain the
public signing certificate of the repo publisher transparently. Once
the initial trust relationship with a public repo like PyPI or a
private repo within a company or other organisation has been
established, later compromises of the CA system don't provide the
ability to forge package signatures.

Also of potential interest is the TUF-based signing infrastructure
that Docker built, Notary: https://github.com/docker/notary

While I don't have a strong personal preference one way or the other,
finding a way to reuse that does seem like it could be an interesting
architectural alternative to building signing capabilities directly
into Warehouse itself.

Regards,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
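As a rough illustration of the trust-on-first-use model described above,
here is a conceptual sketch only - this is not the TUF or Notary API, and
the pin-file path and function name are invented for the example:

    import json
    import os

    PIN_FILE = os.path.expanduser('~/.pkgtrust/pins.json')  # invented path

    def get_or_pin_root_key(repo_url, fetched_root_key):
        """Trust a repo's signing key on first contact; require a match after."""
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as fh:
                pins = json.load(fh)
        if repo_url not in pins:
            # First use: record the key obtained over HTTPS and trust it.
            pins[repo_url] = fetched_root_key
            os.makedirs(os.path.dirname(PIN_FILE), exist_ok=True)
            with open(PIN_FILE, 'w') as fh:
                json.dump(pins, fh)
        elif pins[repo_url] != fetched_root_key:
            # A later CA compromise can't silently swap the publisher's key.
            raise ValueError('signing key changed for %s' % repo_url)
        return pins[repo_url]

The real systems pin a whole hierarchy of signed TUF metadata rather than
a single key, but the bootstrapping idea is the same.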
From wes.turner at gmail.com Mon Aug 24 08:07:31 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 24 Aug 2015 01:07:31 -0500
Subject: [Distutils] Docker Content Trust and PyPI package signing
In-Reply-To: References: Message-ID:

The Update Framework
| Homepage: http://theupdateframework.com/
| Src: https://github.com/theupdateframework/tuf
| PyPI: https://pypi.python.org/pypi/tuf

Esky does transactional upgrades: https://github.com/cloudmatrix/esky/

Pants freezes apps into single file dists:
https://pantsbuild.github.io/python-readme.html

... probably relevant to python package signing:

pypa/pip: Implement "hook" support for package signature verification. #1035
https://github.com/pypa/pip/issues/1035#issuecomment-20656810

*westurner* commented on Jul 9, 2013:

A syntax like the following would be convenient:

    pip install --verify- -e git+https://github.com/pypa/pip#egg=pip

...

- http://www.pip-installer.org/en/latest/usage.html#pip-install
- http://www.pip-installer.org/en/latest/usage.html#pip-verify ?
- https://python-packaging-user-guide.readthedocs.org/en/latest/packaging_tutorial.html#create-your-first-release

These may be helpful for creating documentation on this feature and how it
relates to other components of a secure python packaging process:

*Source Repository GPG*

- https://en.wikipedia.org/wiki/GNU_Privacy_Guard (PGP)
- http://stackoverflow.com/questions/10077996/sign-git-commits-with-gpg
- http://stackoverflow.com/questions/11556184/whats-the-purpose-of-signing-changesets-in-mercurial

*Python Package GPG (./.asc)*

- http://pythonhosted.org/distlib/tutorial.html#signing-a-distribution
- http://pythonhosted.org/distlib/tutorial.html#verifying-signatures

  For any archive downloaded from an index, you can retrieve any signature
  by just appending *.asc* to the path portion of the download URL for the
  archive, and downloading that.

- https://pypi.python.org/packages/source/p/pip/pip-1.3.1.tar.gz.asc#md5=cbb27a191cebc58997c4da8513863153

*Python Wheel JWS S/MIME (PEP 427)*

- http://www.python.org/dev/peps/pep-0427/#signed-wheel-files
- https://tools.ietf.org/html/draft-ietf-jose-json-web-signature-11 (html)
- https://tools.ietf.org/html/draft-ietf-jose-json-web-key-11 (html)
- https://en.wikipedia.org/wiki/X.509
- https://bitbucket.org/dholth/wheel/src/tip/wheel/signatures/__init__.py
- https://distlib.readthedocs.org/en/latest/internals.html#the-wheel-api
- https://payswarm.com/specs/source/web-keys/ (http://json-ld.org)

*Index Mirror DSA (PEP 381)*

- http://www.python.org/dev/peps/pep-0381/#mirror-authenticity
- https://en.wikipedia.org/wiki/Digital_Signature_Algorithm

*Package Signatures for .deb, .rpm, ...*

- https://en.wikipedia.org/wiki/List_of_software_package_management_systems
- http://man.he.net/man8/apt-key
- http://wiki.debian.org/SecureApt
- http://linux.die.net/man/5/yum.conf # gpgcheck, localpkg_gpgcheck, repo_gpgcheck
- http://iuscommunity.org/pages/CreatingAGPGKeyandSigningRPMs.html

*Python Package Configuration Management Systems*

- https://github.com/puppetlabs/puppet/blob/master/lib/puppet/provider/package/pip.rb
- https://github.com/opscode/chef/blob/master/lib/chef/provider/package/easy_install.rb
- https://github.com/saltstack/salt/blob/develop/salt/modules/pip.py
- https://github.com/ansible/ansible/blob/devel/library/packaging/pip
- http://docs.bcfg2.org/server/plugins/generators/packages.html#handling-gpg-keys

*[Cryptographic] Hash Functions*

- https://en.wikipedia.org/wiki/Cryptographic_hash_function#Verifying_the_integrity_of_files_or_messages
- https://en.wikipedia.org/wiki/Cryptographic_hash_function#Hash_functions_based_on_block_ciphers
- https://en.wikipedia.org/wiki/Category:Cryptographic_hash_functions
- https://en.wikipedia.org/wiki/Hash_function_security_summary
- https://en.wikipedia.org/wiki/Category:Broken_hash_functions
- https://en.wikipedia.org/wiki/MD5
- https://en.wikipedia.org/wiki/SHA-1
- http://pythonhosted.org/passlib/lib/passlib.hash.html#unix-modular-crypt-hashes
- http://pythonhosted.org/passlib/modular_crypt_format.html

On Sun, Aug 23, 2015 at 7:19 PM, Nick Coghlan wrote:
> Also of potential interest is the TUF-based signing infrastructure
> that Docker built, Notary: https://github.com/docker/notary

From vladimir.v.diaz at gmail.com Mon Aug 24 15:57:31 2015
From: vladimir.v.diaz at gmail.com (Vladimir Diaz)
Date: Mon, 24 Aug 2015 09:57:31 -0400
Subject: [Distutils] Docker Content Trust and PyPI package signing
In-Reply-To: References: Message-ID:

Hello,

The folks who worked on Docker Content Trust also recently presented
Notary at the DockerCon 2015 keynote, which you may view here:
http://www.ustream.tv/recorded/64499822#to01:54:00

Thanks,
Vlad

--
vladimir.v.diaz at gmail.com
PGP fingerprint = ACCF 9DCA 73B9 862F 93C5 6608 63F8 90AA 1D25 3935

From nate at bx.psu.edu Mon Aug 24 17:03:11 2015
From: nate at bx.psu.edu (Nate Coraor)
Date: Mon, 24 Aug 2015 11:03:11 -0400
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To: References: Message-ID:

On Fri, Aug 21, 2015 at 2:51 AM, Nick Coghlan wrote:
> LSB has too much stuff in it, so most distros aren't LSB compliant out
> of the box - you have to install extra packages.
>
> /etc/os-release is a better option:
> http://www.freedesktop.org/software/systemd/man/os-release.html

As per this discussion, and because I've discovered that the
platform.dist() and platform.linux_distribution() functions are deprecated
in 3.5 (and other amusements, like an Ubuntu-modified version of platform
that ships on Ubuntu - platform as shipped with CPython detects Ubuntu as
debian), I'm switching to os-release, but even that is unreliable - the
file does not exist in CentOS/RHEL 6, for example. On Debian testing/sid
installs, VERSION and VERSION_ID are unset (which is not wrong - there is
no release of testing, but it does make identifying the platform more
complicated since even the codename is not provided other than at the end
of PRETTY_NAME). Regardless of whether a hash or a human-identifiable
string is used to identify the platform, there still needs to be a way to
reliably detect it.

Unless someone tells me not to, I'm going to default to using os-release
and then fall back to other methods in the event that os-release isn't
available, and this will be in some sort of library alongside pep425tags
in wheel/pip.

FWIW, os-release's `ID_LIKE` gives us some ability to make assumptions
without explicit need for a binary-compatibility.cfg (although not blindly
- for example, CentOS sets this to "rhel fedora", but of course RHEL/CentOS
and Fedora versions are not congruent).

--nate

> My original concern with using that was that it *over*specifies the
> distro (e.g. not only do CentOS and RHEL releases show up as different
> platforms, but so do X.Y releases within a series), but the
> binary-compatibility.txt idea resolves that issue, since a derived
> distro can explicitly identify itself as binary compatible with its
> upstream and be able to use the corresponding wheel files.
>
> Regards,
> Nick.
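To make that detection order concrete, here is a minimal sketch of an
os-release-first lookup. This is illustrative only - Nate's actual
implementation is the linux.py linked later in this thread - and the
quoting rules of os-release are simplified:

    import os

    def linux_distribution():
        """Return (ID, VERSION_ID) from /etc/os-release, else (None, None)."""
        info = {}
        if os.path.exists('/etc/os-release'):
            with open('/etc/os-release') as fh:
                for line in fh:
                    line = line.strip()
                    if line and not line.startswith('#') and '=' in line:
                        key, _, value = line.partition('=')
                        info[key] = value.strip('"\'')
        # VERSION_ID may legitimately be absent (e.g. Debian testing/sid),
        # and the file itself is missing on CentOS/RHEL 6, so callers must
        # be prepared to fall back to other detection methods.
        return info.get('ID'), info.get('VERSION_ID')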
From wes.turner at gmail.com Mon Aug 24 19:51:22 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 24 Aug 2015 12:51:22 -0500
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To: References: Message-ID:

On Mon, Aug 24, 2015 at 10:03 AM, Nate Coraor wrote:
> Unless someone tells me not to, I'm going to default to using os-release
> and then fall back to other methods in the event that os-release isn't
> available, and this will be in some sort of library alongside pep425tags
> in wheel/pip.
>
> FWIW, os-release's `ID_LIKE` gives us some ability to make assumptions
> without explicit need for a binary-compatibility.cfg (although not
> blindly - for example, CentOS sets this to "rhel fedora", but of course
> RHEL/CentOS and Fedora versions are not congruent).

IIUC, then the value of os-release
will be used to generalize
the compatible versions of *.so deps
of a given distribution at a point in time?

This works for distros that don't change [libc] much during a release,
but for rolling release models (e.g. arch, gentoo),
IDK how this simplification will work.
(This is a graph with nodes and edges (with attributes), and rules.)

* Keying/namespacing is a simplification which may work.
* *conda preprocessing selectors* (and ~LSB-Python-Conda)
  ~'prune' large parts of the graph
* Someone mentioned LSB[-Python-Base] (again as a simplification)
* [[package, [version<=>verstr]]]

Salt

* __salt__['grains']['os'] = "Fedora" || "Ubuntu"
* __salt__['grains']['os_family'] = "RedHat" || "Debian"
* __salt__['grains']['osrelease'] = "22" || "14.04"
* __salt__['grains']['oscodename'] = "Twenty Two" || "trusty"
* Docs: http://docs.saltstack.com/en/latest/topics/targeting/grains.html
* Docs: http://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.grains.html#salt.modules.grains.get
* Src: https://github.com/saltstack/salt/blob/develop/salt/grains/core.py#L1018 ("def os_data()")

    $ sudo salt-call --local grains.item os_family os osrelease oscodename
    local:
        ----------
        os:
            Fedora
        os_family:
            RedHat
        oscodename:
            Twenty Two
        osrelease:
            22
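For comparison with the grains above, the closest stdlib equivalent at the
time is shown below; note that, as mentioned earlier in the thread, this
function is deprecated as of Python 3.5 and misidentifies some distros:

    import platform

    # Returns e.g. ('Ubuntu', '14.04', 'trusty') or ('Fedora', '22', 'Twenty Two');
    # deprecated in Python 3.5, and Ubuntu may be reported as 'debian'.
    print(platform.linux_distribution())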
From nate at bx.psu.edu Mon Aug 24 20:17:19 2015
From: nate at bx.psu.edu (Nate Coraor)
Date: Mon, 24 Aug 2015 14:17:19 -0400
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To: References: Message-ID:

On Mon, Aug 24, 2015 at 1:51 PM, Wes Turner wrote:
> IIUC, then the value of os-release
> will be used to generalize
> the compatible versions of *.so deps
> of a given distribution at a point in time?
>
> This works for distros that don't change [libc] much during a release,
> but for rolling release models (e.g. arch, gentoo),
> IDK how this simplification will work.
> (This is a graph with nodes and edges (with attributes), and rules.)

Arch, Gentoo, and other rolling release distributions don't have a stable
ABI, so by definition I don't think we can support redistributable wheels
on them. I'm adding platform detection support for them regardless, but I
don't think there's any way to allow wheels built for these platforms in
PyPI.

From nate at bx.psu.edu Tue Aug 25 19:54:40 2015
From: nate at bx.psu.edu (Nate Coraor)
Date: Tue, 25 Aug 2015 13:54:40 -0400
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To: References: Message-ID:

I've started down this road of Linux platform detection, here's the work
so far:

https://bitbucket.org/natefoo/wheel/src/tip/wheel/platform/linux.py

I'm collecting distribution details here:

https://gist.github.com/natefoo/814c5bf936922dad97ff

One thing to note, although it's not used, I'm attempting to label a
particular ABI as stable or unstable, so for example, Debian testing is
unstable, whereas full releases are stable. Arch and Gentoo are always
unstable, Ubuntu is always stable, etc. Hopefully this would be useful in
making a decision about what wheels to allow into PyPI.

--nate

From steve.dower at python.org Tue Aug 25 20:17:43 2015
From: steve.dower at python.org (Steve Dower)
Date: Tue, 25 Aug 2015 11:17:43 -0700
Subject: [Distutils] Building Extensions for Python 3.5 on Windows
Message-ID: <55DCB147.9020604@python.org>

I've written up a long technical blog post about the compiler and CRT
changes in Python 3.5, which will be of interest to those who build and
distribute native extensions for Windows.

http://stevedower.id.au/blog/building-for-python-3-5/

Hopefully it puts some of the changes we've made into a context where they
don't just look like unnecessary pain. Feedback and discussion welcome,
either on these lists or on the post itself.

Cheers,
Steve

From wes.turner at gmail.com Wed Aug 26 03:42:54 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Tue, 25 Aug 2015 20:42:54 -0500
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To: References: Message-ID:

On Tue, Aug 25, 2015 at 12:54 PM, Nate Coraor wrote:
> I've started down this road of Linux platform detection, here's the work
> so far:
>
> https://bitbucket.org/natefoo/wheel/src/tip/wheel/platform/linux.py

IDK whether codecs.open(file, 'r', encoding='utf8') is necessary or not?
There are probably distros with Unicode characters in their e.g.
lsb-release files.

> I'm collecting distribution details here:
>
> https://gist.github.com/natefoo/814c5bf936922dad97ff

Oh wow; thanks!
> One thing to note, although it's not used, I'm attempting to label a
> particular ABI as stable or unstable, so for example, Debian testing is
> unstable, whereas full releases are stable. Arch and Gentoo are always
> unstable, Ubuntu is always stable, etc. Hopefully this would be useful
> in making a decision about what wheels to allow into PyPI.

Is it possible to enumerate the set into a table?
e.g. [((distro,ver), {'ABI': 'stable'}), (...)]
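One possible shape for such a table, sketched in Python; the entries and
stable/unstable labels here are illustrative guesses, not a vetted
compatibility list:

    # Hypothetical enumeration: (distro id, release) -> ABI stability.
    # A None release acts as the distro-wide default.
    ABI_STABILITY = {
        ('ubuntu', '14.04'): {'ABI': 'stable'},
        ('debian', '8'):     {'ABI': 'stable'},
        ('debian', None):    {'ABI': 'unstable'},  # testing/sid has no release
        ('arch', None):      {'ABI': 'unstable'},  # rolling release
        ('gentoo', None):    {'ABI': 'unstable'},  # rolling release
    }

    def abi_is_stable(distro, version):
        # Prefer an exact (distro, version) entry, then the distro default.
        entry = (ABI_STABILITY.get((distro, version))
                 or ABI_STABILITY.get((distro, None)))
        return entry is not None and entry['ABI'] == 'stable'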
From p at 2015.forums.dobrogost.net Wed Aug 26 17:10:19 2015
From: p at 2015.forums.dobrogost.net (Piotr Dobrogost)
Date: Wed, 26 Aug 2015 17:10:19 +0200
Subject: [Distutils] Normalized version confuses external tool (PyCharm)
Message-ID:

Hi!
After `pip install js.deform==2.0a2-3` I get the message "Successfully installed js.deform-2.0a2.post3" where version "2.0a2-3" was normalized (I guess) to "2.0a2.post3" and that's the version pip list reports as well. Due to this PyCharm shows a warning saying this requirement "js.deform==2.0a2-3" is not satisfied.

What is the preferred way of solving this? Should the package version be changed by the author to 2.0a2.post3?

Regards,
Piotr Dobrogost

From donald at stufft.io Wed Aug 26 17:21:10 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 26 Aug 2015 11:21:10 -0400
Subject: [Distutils] Normalized version confuses external tool (PyCharm)
In-Reply-To: References: Message-ID:

On August 26, 2015 at 11:19:55 AM, Piotr Dobrogost (p at 2015.forums.dobrogost.net) wrote:
> Hi! > > After `pip install js.deform==2.0a2-3` I get the message "Successfully > installed js.deform-2.0a2.post3" where version "2.0a2-3" was > normalized (I guess) to "2.0a2.post3" and that's the version pip list > reports as well. Due to this PyCharm shows a warning saying this > requirement "js.deform==2.0a2-3" is not satisfied. > > What is the preferred way of solving this? Should the package version be > changed by the author to 2.0a2.post3? > >

Well, I don't know what PyCharm is doing, but ideally, if they're somehow interacting with the environment in a way where they care about Python version numbers, they should support PEP 440.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From p.f.moore at gmail.com Wed Aug 26 17:22:49 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 26 Aug 2015 16:22:49 +0100
Subject: [Distutils] Normalized version confuses external tool (PyCharm)
In-Reply-To: References: Message-ID:

On 26 August 2015 at 16:10, Piotr Dobrogost

wrote:
> After `pip install js.deform==2.0a2-3` I get the message "Successfully > installed js.deform-2.0a2.post3" where version "2.0a2-3" was > normalized (I guess) to "2.0a2.post3" and that's the version pip list > reports as well. Due to this PyCharm shows a warning saying this > requirement "js.deform==2.0a2-3" is not satisfied. > > What is the preferred way of solving this? Should the package version be > changed by the author to 2.0a2.post3?

My immediate thought is that PyCharm's warning is wrong (the requirement *is* satisfied, after all) and you should report a bug to them. I suspect their code for checking requirements isn't following the latest spec.

Paul

From p at 2015.forums.dobrogost.net Wed Aug 26 17:29:03 2015
From: p at 2015.forums.dobrogost.net (Piotr Dobrogost)
Date: Wed, 26 Aug 2015 17:29:03 +0200
Subject: [Distutils] Normalized version confuses external tool (PyCharm)
In-Reply-To: References: Message-ID:

On Wed, Aug 26, 2015 at 5:21 PM, Donald Stufft wrote:
> > Well, I don't know what PyCharm is doing,

Me neither, but I suspect they simply compare the version reported by pip (or something else) with the version from requirements.txt

> but ideally, if they're somehow interacting with the environment in a way where they care about Python version numbers, they should support PEP 440.

I agree they should and this would solve this problem. However, I'm wondering if the general advice for authors shouldn't be to prefer normalized versions so as to avoid problems like this?

Regards,
Piotr

From donald at stufft.io Wed Aug 26 17:39:04 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 26 Aug 2015 11:39:04 -0400
Subject: [Distutils] Normalized version confuses external tool (PyCharm)
In-Reply-To: References: Message-ID:

On August 26, 2015 at 11:29:49 AM, Piotr Dobrogost (p at 2015.forums.dobrogost.net) wrote:
> On Wed, Aug 26, 2015 at 5:21 PM, Donald Stufft wrote: > > > > Well, I don't know what PyCharm is doing, > > Me neither, but I suspect they simply compare the version reported by pip > (or something else) with the version from requirements.txt > > > but ideally, if they're somehow interacting with the environment in a way where they care > about Python version numbers, they should support PEP 440. > > I agree they should and this would solve this problem. However, I'm > wondering if the general advice for authors shouldn't be to prefer > normalized versions so as to avoid problems like this? > >

Yea, the advice is to prefer the normalized forms (and if I recall, I think setuptools will normalize them for you, so if you're using a newer version of setuptools it'll do that for you and solve the problem anyways). The problem comes with older versions of setuptools and things already uploaded.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From dlwubi at gmail.com Wed Aug 26 17:36:51 2015
From: dlwubi at gmail.com (Desta Wubishet)
Date: Wed, 26 Aug 2015 18:36:51 +0300
Subject: [Distutils] Distributing .c and .pyd modules
Message-ID:

Dear Sir,

I got your email from the manual with the title "Distributing Python modules". After writing my modules in Python, I used Cython to convert them to .c and .pyd files. But I faced a problem creating a distribution package containing only the .c and .pyd files. Please let me know if there is any material that I can use as a reference for this purpose, or I would appreciate if you can give me some hint, especially in preparing the MANIFEST.in file.
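For reference, a minimal sketch of one way to arrange this, with hypothetical package and module names (the generated .c sources go into the sdist via MANIFEST.in, while the compiled .pyd files are platform-specific builds and are normally produced at install time or shipped separately as built distributions):

    # setup.py -- build the extension from the Cython-generated .c file
    from setuptools import setup, Extension

    setup(
        name="mypackage",
        version="1.0",
        packages=["mypackage"],
        ext_modules=[Extension("mypackage.fast", ["mypackage/fast.c"])],
    )

    # MANIFEST.in -- a single line ensures the .c source ships in the sdist:
    #   include mypackage/*.c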
Best,
Desta
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Thu Aug 27 03:24:05 2015
From: donald at stufft.io (Donald Stufft)
Date: Wed, 26 Aug 2015 21:24:05 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
Message-ID:

While developing Warehouse, one of the things I wanted to get done was a final ruling on PEP 470. With that in mind I'd like to bring it back up for discussion and hopefully ultimately a ruling.

There are two major differences in this version of PEP 470, and I'd like to point them out explicitly.

Removal of the "External Repository Discovery" feature. I've been thinking about this for a while, and I finally removed it. I've always been uncomfortable with this feature and I finally realized why. Essentially, the major use case for not hosting things on PyPI that I think PyPI can reasonably be expected to accommodate is people who cannot publish their software to the US for various reasons. At the time I came up with the solution I did, it was an attempt to placate the folks who were against PEP 470 while assuming very few people would ever actually use it, essentially a junk feature to push the PEP through. I think that the feature itself is a bad feature and I think it presents a poor experience for people who want to use it, so I've removed it from the PEP and instead focused the PEP on explicitly recommending that all installers should implement the ability to specify multiple repositories, and deprecating and removing the ability for finding anything but files hosted by the repository itself on /simple/.

I recognize this is a regression for anyone who *does* have concerns with uploading their projects to a server hosted in the US. If there is someone that has this concern, and is also willing to put in the effort and legwork required, I will happily collaborate with them to design a solution that both follows whatever legal requirements they might have, as well as provides a good experience for people using PyPI and pip. I have some rough ideas on what this could look like, but I think it's really a separate discussion since I believe externally hosted files as we had them are an overall bad experience for people and largely a historic accident from how PyPI and Python packaging have evolved. I don't want to derail this thread or PEP exploring these ideas (some of which I don't even know if they would satisfy the requirements since it's all dealing with legal jurisdictions other than my own), but I wanted to make explicit that someone who knows the legalities and is willing to put in the work can reach out to me.

The other major difference is that I've shortened the time schedule from 6 months to 3 months. Given that authors are either going to upload their projects to PyPI or not, and there is no longer a need to set up an external index, I think a shorter time schedule is fine, especially since they will be given a script they can run that will spider their projects for any installable files and upload them to PyPI for them in a quick one-shot deal that would require very little effort from them.

Everything else in the PEP is basically the same except for rewordings.

I do need a BDFL Delegate for this PEP. Richard does not have the time to do it, and the other logical candidate for a PyPI-centric PEP is myself, but I don't feel it's appropriate to BDFL Delegate my own PEP.
You can see the PEP online at https://www.python.org/dev/peps/pep-0470/ (make sure it's updated and you see the one that has Aug 26 2015 in its Post History). The PEP has also been inlined below.

-----------------

Abstract
========

This PEP proposes the deprecation and removal of support for hosting files externally to PyPI as well as the deprecation and removal of the functionality added by PEP 438, particularly rel information to classify different types of links and the meta-tag to indicate API version.

Rationale
=========

Historically PyPI did not have any method of hosting files nor any method of automatically retrieving installables; it was instead focused on providing a central registry of names, to prevent naming collisions, and as a means of discovery for finding projects to use. In the course of time setuptools began to scrape these human facing pages, as well as pages linked from those pages, looking for things it could automatically download and install. Eventually this became the "Simple" API which used a similar URL structure; however, it eliminated any of the extraneous links and information to make the API more efficient. Additionally PyPI grew the ability for a project to upload release files directly to PyPI enabling PyPI to act as a repository in addition to an index.

This gives PyPI two equally important roles that it plays in the Python ecosystem, that of index to enable easy discovery of Python projects and central repository to enable easy hosting, download, and installation of Python projects. Due to the history behind PyPI and the very organic growth it has experienced, the lines between these two roles are blurry, and this blurring has caused confusion for the end users of both of these roles and this has in turn caused ire between people attempting to use PyPI in different capacities, most often when end users want to use PyPI as a repository but the author wants to use PyPI solely as an index.

This confusion comes down to end users of projects not realizing if a project is hosted on PyPI or if it relies on an external service. This often manifests itself when the external service is down but PyPI is not. People will see that PyPI works, and other projects work, but this one specific one does not. They often times do not realize who they need to contact in order to get this fixed or what their remediation steps are.

PEP 438 attempted to solve this issue by allowing projects to explicitly declare if they were using the repository features or not, and if they were not, it had the installers classify the links it found as either "internal", "verifiable external" or "unverifiable external". PEP 438 was accepted and implemented in pip 1.4 (released on Jul 23, 2013) with the final transition implemented in pip 1.5 (released on Jan 2, 2014).

PEP 438 was successful in bringing about more people to utilize PyPI's repository features, an altogether good thing given the global CDN powering PyPI providing speed ups for a lot of people, however it did so by introducing a new point of confusion and pain for both the end users and the authors.

By moving to using explicit multiple repositories we can make the lines between these two roles much more explicit and remove the "hidden" surprises caused by the current implementation of handling people who do not want to use PyPI as a repository.

Key User Experience Expectations
--------------------------------

#. Easily allow external hosting to "just work" when appropriately configured
   at the system, user or virtual environment level.
#. Eliminate any and all references to the confusing "verifiable external" and "unverifiable external" distinction from the user experience (both when installing and when releasing packages).
#. The repository aspects of PyPI should become *just* the default package hosting location (i.e. the only one that is treated as opt-out rather than opt-in by most client tools in their default configuration). Aside from that aspect, hosting on PyPI should not otherwise provide an enhanced user experience over hosting your own package repository.
#. Do all of the above while providing default behaviour that is secure against most attackers below the nation state adversary level.

Why Additional Repositories?
----------------------------

The two common installer tools, pip and easy_install/setuptools, both support the concept of additional locations to search for files to satisfy the installation requirements and have done so for many years. This means that there is no need to "phase" in a new flag or concept and the solution to installing a project from a repository other than PyPI will function regardless of how old (within reason) the end user's installer is. Not only has this concept existed in the Python tooling for some time, but it is a concept that exists across languages and even extends to the OS level, with OS package tools almost universally using multiple repository support, making it extremely likely that someone is already familiar with the concept.

Additionally, the multiple repository approach is a concept that is useful outside of the narrow scope of allowing projects that wish to be included on the index portion of PyPI but do not wish to utilize the repository portion of PyPI. This includes places where a company may wish to host a repository that contains their internal packages or where a project may wish to have multiple "channels" of releases, such as alpha, beta, release candidate, and final release. This could also be used for projects wishing to host files which cannot be uploaded to PyPI, such as multi-gigabyte data files or, currently at least, Linux Wheels.

Why Not PEP 438 or Similar?
---------------------------

While the additional search location support has existed in pip and setuptools for quite some time, support for PEP 438 has only existed in pip since the 1.4 version, and still has yet to be implemented in setuptools. The design of PEP 438 did mean that users still benefited for projects which did not require external files even with older installers, however for projects which *did* require external files, users are still silently being given either potentially unreliable or, even worse, unsafe files to download. This system is also unique to Python as it arises out of the history of PyPI; this means that it is almost certain that this concept will be foreign to most, if not all users, until they encounter it while attempting to use the Python toolchain.

Additionally, the classification system proposed by PEP 438 has, in practice, turned out to be extremely confusing to end users, so much so that it is a position of this PEP that the situation as it stands is completely untenable.
The common pattern for a user with this system is to attempt to install a project, possibly get an error message (or maybe not, if the project ever uploaded something to PyPI but later switched without removing old files), see that the error message suggests ``--allow-external``, reissue the command adding that flag, most likely get another error message, see that this time the error message suggests also adding ``--allow-unverified``, and again issue the command a third time, this time finally getting the thing they wish to install.

This UX failure exists for several reasons.

#. If pip can locate files at all for a project on the Simple API it will simply use that instead of attempting to locate more. This is generally the right thing to do as attempting to locate more would erase a large part of the benefit of PEP 438. This means that if a project *ever* uploaded a file that matches what the user has requested for install, that will be used regardless of how old it is.
#. PEP 438 makes an implicit assumption that most projects would either upload themselves to PyPI or would update themselves to directly linking to release files. While a large number of projects did ultimately decide to upload to PyPI, some of them did so only because the UX around PEP 438 was so bad that they felt forced to do so. More concerning, however, is the fact that very few projects have opted to directly and safely link to files and instead they still simply link to pages which must be scraped in order to find the actual files, thus rendering the safe variant (``--allow-external``) largely useless.
#. Even if an author wishes to directly link to their files, doing so safely is non-obvious. It requires the inclusion of an MD5 hash (for historical reasons) in the hash of the URL. If they do not include this then their files will be considered "unverified".
#. PEP 438 takes a security-centric view and disallows any form of a global opt-in for unverified projects. While this is generally a good thing, it creates extremely verbose and repetitive command invocations such as::

      $ pip install --allow-external myproject --allow-unverified myproject myproject
      $ pip install --allow-all-external --allow-unverified myproject myproject

Multiple Repository/Index Support
=================================

Installers SHOULD implement, or continue to offer, the ability to point the installer at multiple URL locations. The exact mechanism for a user to indicate they wish to use an additional location is left up to each individual implementation. Additionally the mechanism for discovering an installation candidate when multiple repositories are being used is also up to each individual implementation, however once configured an implementation should not discourage, warn, or otherwise cast a negative light upon the use of a repository simply because it is not the default repository.

Currently both pip and setuptools implement multiple repository support by using the best installation candidate it can find from either repository, essentially treating it as if it were one large repository.

Installers SHOULD also implement some mechanism for removing or otherwise disabling use of the default repository. The exact specifics of how that is achieved is up to each individual implementation.

Installers SHOULD also implement some mechanism for whitelisting and blacklisting which projects a user wishes to install from a particular repository.
The exact specifics of how that is achieved is up to each individual implementation.

Deprecation and Removal of Link Spidering
=========================================

A new hosting mode will be added to PyPI. This hosting mode will be called ``pypi-only`` and will be in addition to the three that PEP 438 has already given us, which are ``pypi-explicit``, ``pypi-scrape``, and ``pypi-scrape-crawl``. This new hosting mode will modify a project's simple API page so that it only lists the files which are directly hosted on PyPI and will not link to anything else.

Upon acceptance of this PEP and the addition of the ``pypi-only`` mode, all new projects will be defaulted to the PyPI only mode and they will be locked to this mode and unable to change this particular setting.

An email will then be sent out to all of the projects which are hosted only on PyPI informing them that in one month their project will be automatically converted to the ``pypi-only`` mode. A month after these emails have been sent, any of those projects which were emailed and which are still hosted only on PyPI will have their mode set permanently to ``pypi-only``.

At the same time, an email will be sent to projects which rely on hosting external to PyPI. This email will warn these projects that externally hosted files have been deprecated on PyPI and that 3 months from the time of that email all external links will be removed from the installer APIs. This email **MUST** include instructions for converting their projects to be hosted on PyPI and **MUST** include links to a script or package that will enable them to enter their PyPI credentials and package name and have it automatically download and re-host all of their files on PyPI. This email **MUST** also include instructions for setting up their own index page. This email must also contain a link to the Terms of Service for PyPI as many users may have signed up a long time ago and may not recall what those terms are. Finally this email must also contain a list of the links registered with PyPI where we were able to detect an installable file was located.

Two months after the initial email, another email must be sent to any projects still relying on external hosting. This email will include all of the same information that the first email contained, except that the removal date will be one month away instead of three.

Finally a month later all projects will be switched to the ``pypi-only`` mode and PyPI will be modified to remove the externally linked files functionality.

Summary of Changes
==================

Repository side
---------------

#. Deprecate and remove the hosting modes as defined by PEP 438.
#. Restrict the simple API to only list the files that are contained within the repository.

Client side
-----------

#. Implement multiple repository support.
#. Implement some mechanism for removing/disabling the default repository.
#. Deprecate / Remove PEP 438

Impact
======

To determine impact, we've looked at all projects using a method of searching PyPI which is similar to what pip and setuptools use, and searched for all files available on PyPI, safely linked from PyPI, unsafely linked from PyPI, and finally unsafely available outside of PyPI. When the same file was found in multiple locations it was deduplicated and only counted in one location based on the following preferences: PyPI > Safely Off PyPI > Unsafely Off PyPI.
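A small sketch of that deduplication rule, with an assumed data layout, purely for illustration:

    # Preference order used when the same file appears in several places.
    PREFERENCE = ["pypi", "safe-external", "unsafe-external"]

    def count_location(locations):
        """Count a file only under the most preferred location it appears in."""
        return min(locations, key=PREFERENCE.index)

    # A file on PyPI that is also unsafely mirrored elsewhere counts as PyPI:
    assert count_location({"pypi", "unsafe-external"}) == "pypi"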
This gives us the broadest possible definition of impact; it means that any single file for this project may no longer be visible by default, however that file could be years old, or it could be a binary file while there is an sdist available on PyPI. This means that the *real* impact will likely be much smaller, but in an attempt not to miscount we take the broadest possible definition.

At the time of this writing there are 65,232 projects hosted on PyPI and of those, 59 of them rely on external files that are safely hosted outside of PyPI and 931 of them rely on external files which are unsafely hosted outside of PyPI. This shows us that 1.5% of projects will be affected in some way by this change while 98.5% will continue to function as they always have. In addition, only 5% of the projects affected are using the features provided by PEP 438 to safely host outside of PyPI while 95% of them are exposing their users to Remote Code Execution via a Man In The Middle attack.

Data Sovereignty
================

In the discussions around previous versions of this PEP, one of the key use cases for wanting to host files externally to PyPI was due to data sovereignty requirements for people living in jurisdictions outside of the USA, where PyPI is currently hosted.

The author of this PEP is not blind to these concerns and realizes that this PEP represents a regression for the people that have these concerns, however the current situation is presenting an extremely poor user experience and the feature is only being used by a small percentage of projects. In addition, the data sovereignty problem requires familiarity with the laws outside of the home jurisdiction of the author of this PEP, who is also the principal developer and operator of PyPI. For these reasons, a solution for the problem of data sovereignty has been deferred and is considered outside of the scope for this PEP.

If someone for whom the issue of data sovereignty matters to them wishes to put forth the effort, then at that time a system can be designed, implemented, and ultimately deployed and operated that would satisfy both the needs of non-US users that cannot upload their projects to a system on US soil and the quality of user experience that is attempted to be created on PyPI.

Rejected Proposals
==================

Allow easier discovery of externally hosted indexes
---------------------------------------------------

A previous version of this PEP included a new feature added to both PyPI and installers that would allow project authors to enter into PyPI a list of URLs that would instruct installers to ignore any files uploaded to PyPI and instead return an error telling the end user about these extra URLs that they can add to their installer to make the installation work.

This idea is rejected because it provides a similarly painful end user experience where people will first attempt to install something, get an error, then have to re-run the installation with the correct options.

Keep the current classification system but adjust the options
-------------------------------------------------------------

This PEP rejects several related proposals which attempt to fix some of the usability problems with the current system while still keeping the general gist of PEP 438. This includes:

* Default to allowing safely externally hosted files, but disallow unsafely hosted.
* Default to disallowing safely externally hosted files with only a global flag to enable them, but disallow unsafely hosted.
* Continue on the suggested path of PEP 438 and remove the option to unsafely host externally but continue to allow the option to safely host externally.

These proposals are rejected because:

* The classification system introduced in PEP 438 is an entirely unique concept to PyPI which is not generically applicable even in the context of Python packaging. Adding additional concepts comes at a cost.
* The classification system itself is non-obvious to explain, and to pre-determine what classification of link a project will require entails inspecting the project's ``/simple/<project>/`` page, and possibly any URLs linked from that page.
* The ability to host externally while still being linked for automatic discovery is mostly a historic relic which causes a fair amount of pain and complexity for little reward.
* The installer's ability to optimize or clean up the user interface is limited due to the nature of the implicit link scraping which would need to be done. This extends to the ``--allow-*`` options as well as the inability to determine if a link is expected to fail or not.
* The mechanism paints a very broad brush when enabling an option, while PEP 438 attempts to limit this with per-package options. However a project that has existed for an extended period of time may often times have several different URLs listed in their simple index. It is not unusual for at least one of these to no longer be under control of the project. While an unregistered domain will sit there relatively harmless most of the time, pip will continue to attempt to install from it on every discovery phase. This means that an attacker simply needs to look at projects which rely on unsafe external URLs and register expired domains to attack users.

Implement this PEP, but Do Not Remove the Existing Links
--------------------------------------------------------

This is essentially the backwards-compatible version of this PEP. It attempts to allow people using older clients, or clients which do not implement this PEP, to continue on as if nothing had changed.

This proposal is rejected because the vast bulk of those scenarios are unsafe uses of the deprecated features. It is the opinion of this PEP that silently allowing unsafe actions to take place on behalf of end users is simply not an acceptable solution.

Copyright
=========

This document has been placed in the public domain.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ben+python at benfinney.id.au Thu Aug 27 04:05:55 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Thu, 27 Aug 2015 12:05:55 +1000
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
References: Message-ID: <85bndteb98.fsf@benfinney.id.au>

Donald Stufft writes:

> Removal of the "External Repository Discovery" feature. [...] I think > that the feature itself is a bad feature and I think it presents a > poor experience for people who want to use it, so I've removed it from > the PEP and instead focused the PEP on explicitly recommending that > all installers should implement the ability to specify multiple > repositories and deprecating and removing the ability for finding > anything but files hosted by the repository itself on /simple/.

+1, thank you for this improvement.

-- \ "Human reason is snatching everything to itself, leaving | `\ nothing for faith."
-Bernard of Clairvaux, 1090-1153 CE | _o__) | Ben Finney

From olivier.grisel at ensta.org Thu Aug 27 09:48:05 2015
From: olivier.grisel at ensta.org (Olivier Grisel)
Date: Thu, 27 Aug 2015 09:48:05 +0200
Subject: [Distutils] Distributing .c and .pyd modules
In-Reply-To: References: Message-ID:

You can use the wheel package format to generate platform-specific packages for Windows. Have a look at the Python packaging documentation:

https://packaging.python.org

In particular:

https://packaging.python.org/en/latest/distributing.html#wheels

--
Olivier

From solipsis at pitrou.net Thu Aug 27 10:25:57 2015
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Thu, 27 Aug 2015 10:25:57 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
References: Message-ID: <20150827102557.4bfb39c4@fsol>

On Wed, 26 Aug 2015 21:24:05 -0400 Donald Stufft wrote:
> > At the time of this writing there are 65,232 projects hosted on PyPI and of > those, 59 of them rely on external files that are safely hosted outside of PyPI > and 931 of them rely on external files which are unsafely hosted outside of > PyPI. This shows us that 1.5% of projects will be affected in some way by this > change while 98.5% will continue to function as they always have. In addition, > only 5% of the projects affected are using the features provided by PEP 438 to > safely host outside of PyPI while 95% of them are exposing their users to > Remote Code Execution via a Man In The Middle attack.

Out of curiosity, have you tried to determine if those Unsafely Off PyPI projects were either still active or "popular" ?

The PEP looks fine anyway, good job :)

Regards

Antoine.

From p.f.moore at gmail.com Thu Aug 27 10:28:32 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 27 Aug 2015 09:28:32 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: Message-ID:

On 27 August 2015 at 02:24, Donald Stufft wrote:
> While developing Warehouse, one of the things I wanted to get done was a final ruling on PEP 470. With that in mind I'd like to bring it back up for discussion and hopefully ultimately a ruling. > > There are two major differences in this version of PEP 470, and I'd like to point them out explicitly. > > Removal of the "External Repository Discovery" feature. I've been thinking about this for a while, and I finally removed it. I've always been uncomfortable with this feature and I finally realized why. Essentially, the major use case for not hosting things on PyPI that I think PyPI can reasonably be expected to accommodate is people who cannot publish their software to the US for various reasons. At the time I came up with the solution I did, it was an attempt to placate the folks who were against PEP 470 while assuming very few people would ever actually use it, essentially a junk feature to push the PEP through. I think that the feature itself is a bad feature and I think it presents a poor experience for people who want to use it, so I've removed it from the PEP and instead focused the PEP on explicitly recommending that all installers should implement the ability to specify multiple repositories and deprecating and removing the ability for finding anything but files hosted by the repository itself on /simple/.

+1 on the proposal. Agreed that while the removal of the external hosting/discovery feature is a regression, it's one we need to make in order to provide a clean baseline and a good user experience.
Encouraging people to set up an external index if they have off-PyPI hosting requirements seems entirely reasonable. But I would say (having tried to do this for testing and personal use in the past) it's not easy to find good documentation on how to set up an external index (when you're starting from a simple web host). Having a "how to set up a PyPI-style simple index" document, linked from the PEP (and ultimately from the docs), would be a useful resource for people in general, and a good starting point for any discussion with people who have requirements for not hosting on PyPI. Probably putting such a document in https://packaging.python.org/en/latest/distributing.html would make sense.

Paul

From mal at egenix.com Thu Aug 27 12:57:06 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 27 Aug 2015 12:57:06 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: Message-ID: <55DEED02.5080109@egenix.com>

On 27.08.2015 03:24, Donald Stufft wrote:
> While developing Warehouse, one of the things I wanted to get done was a final ruling on PEP 470. With that in mind I'd like to bring it back up for discussion and hopefully ultimately a ruling. > > There are two major differences in this version of PEP 470, and I'd like to point them out explicitly. > > Removal of the "External Repository Discovery" feature. I've been thinking about this for a while, and I finally removed it. I've always been uncomfortable with this feature and I finally realized why. Essentially, the major use case for not hosting things on PyPI that I think PyPI can reasonably be expected to accommodate is people who cannot publish their software to the US for various reasons. At the time I came up with the solution I did, it was an attempt to placate the folks who were against PEP 470 while assuming very few people would ever actually use it, essentially a junk feature to push the PEP through. I think that the feature itself is a bad feature and I think it presents a poor experience for people who want to use it, so I've removed it from the PEP and instead focused the PEP on explicitly recommending that all installers should implement the ability to specify multiple repositories and deprecating and removing the ability for finding anything but files hosted by the repository itself on /simple/.

This feature was part of a compromise to reach consensus on the removal of external hosting. While I don't think the details of the repository discovery need to be part of PEP 470, I do believe that the PEP should continue to support the idea of having a way for package managers to easily find external indexes for a particular package and not outright reject it.

Instead, the PEP should highlight this compromise and defer it to a separate PEP.

More comments:

* The user experience argument you give in PEP 470 for rejecting the idea is not really sound: the purpose of the discovery feature is to provide a *better user experience* than an error message saying that a package cannot be found and requiring the user to do research on the web to find the right URLs. Package managers can use the information about the other indexes they receive from PyPI to either present them to the user to use/install them or to even directly go there to find the packages.

* The section on data sovereignty should really be removed or reworded.
  PEPs should be neutral and not imply political views, in particular not make it look like the needs of US users of PyPI are more important than those of non-US users. Using "poor user experience" as an argument here is really not appropriate. PyPI is a central part of the Python community infrastructure and should be regarded as a resource for the world-wide community. There is no reason to assume that we cannot have several PyPI installations around the world to address any such issues.

* It is rather unusual to have a PEP switch from a compromise solution to a rejection of the compromise this late in the process.

I will soon be starting a PSF working group to address some of the reasons why people cannot upload packages to PyPI. The intent is to work on the PyPI terms to make them more package author friendly. Anyone interested to join?

> I recognize this is a regression for anyone who *does* have concerns with uploading their projects to a server hosted in the US. If there is someone that has this concern, and is also willing to put in the effort and legwork required, I will happily collaborate with them to design a solution that both follows whatever legal requirements they might have, as well as provides a good experience for people using PyPI and pip. I have some rough ideas on what this could look like, but I think it's really a separate discussion since I believe externally hosted files as we had them are an overall bad experience for people and largely a historic accident from how PyPI and Python packaging have evolved. I don't want to derail this thread or PEP exploring these ideas (some of which I don't even know if they would satisfy the requirements since it's all dealing with legal jurisdictions other than my own), but I wanted to make explicit that someone who knows the legalities and is willing to put in the work can reach out to me.

We can start a separate thread about discovery, using a separate PEP to formalize it. This could be as simple as having a flag saying "use the download URL as index and offer this to the user trying to find the package distribution files".

> The other major difference is that I've shortened the time schedule from 6 months to 3 months. Given that authors are either going to upload their projects to PyPI or not, and there is no longer a need to set up an external index, I think a shorter time schedule is fine, especially since they will be given a script they can run that will spider their projects for any installable files and upload them to PyPI for them in a quick one-shot deal that would require very little effort from them.

It would be good to have both PEP 470 and the discovery PEP available at the same time.

> Everything else in the PEP is basically the same except for rewordings.
> > I do need a BDFL Delegate for this PEP. Richard does not have the time to do it, and the other logical candidate for a PyPI-centric PEP is myself, but I don't feel it's appropriate to BDFL Delegate my own PEP.
> > You can see the PEP online at https://www.python.org/dev/peps/pep-0470/ (make sure it's updated and you see the one that has Aug 26 2015 in its Post History).

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Aug 27 2015)
>>> Python Projects, Coaching and Consulting ... http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...       http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...
http://python.egenix.com/
________________________________________________________________________

2015-08-19: Released mxODBC 3.3.5 ... http://egenix.com/go82

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From donald at stufft.io Thu Aug 27 13:51:23 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 27 Aug 2015 07:51:23 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <55DEED02.5080109@egenix.com>
References: <55DEED02.5080109@egenix.com>
Message-ID:

On August 27, 2015 at 6:57:17 AM, M.-A. Lemburg (mal at egenix.com) wrote:
> This feature was part of a compromise to reach consensus on the removal > of external hosting. While I don't think the details of the repository discovery > need to be part of PEP 470, I do believe that the PEP should continue > to support the idea of having a way for package managers to easily > find external indexes for a particular package and not outright reject it. > > Instead, the PEP should highlight this compromise and defer it to a > separate PEP.

I've never thought that particular API was actually a good idea; I think it's a poor end user experience because it invokes the same sort of "well if you knew what I needed to type to make it work, why didn't you just do it for me" reaction as PEP 438 does. The user experience will be something like:

    $ pip install something-not-hosted-on-pypi
    ...
    ERROR: Cannot find something-not-hosted-on-pypi, it is not hosted on PyPI,
    its author has indicated that it can be found at:

    * https://pypi.example.com/all/ : All Platforms
    * https://pypi.example.com/ubuntu-trusty/ : Ubuntu Trusty

    To enable, please invoke pip by adding --extra-index-url

    $ pip install --extra-index-url https://pypi.example.com/all/ something-not-hosted-on-pypi

This leaves the user feeling annoyed that we didn't just search those locations by default. I truly think it is a bad experience and I only ever added it because I wanted the discussion to be over with and I was trying to placate people by giving them a bad feature.

Instead, I think that we can design a solution that works by default and will work without the end user needing to do anything at all. However, I'm not an expert in the laws of the US (the country I live in and have lived in all my life) and I'm especially not an expert in the laws of countries other than the US. This means I don't fully understand the issue that needs to be solved. In addition to that, I only have so many hours in the day and I need to find a way to prioritize what I'm working on. The data sovereignty issue may only affect people who do not live in the US, however it does not affect everyone who is outside of the US. Most projects from authors outside of the US are perfectly fine with hosting their projects within the US and it is a minority of projects who cannot or will not for one reason or another.

I am happy to work with someone impacted by the removal of offsite hosting to design and implement a solution to these issues that provides an experience to those people that matches the experience for people willing or able to host in the US. If the PSF wants to hire someone to do this instead of relying on someone affected to volunteer, I'm also happy to work with them.
However, I do not think it's fair to everyone else, inside and outside of the US, to continue to confuse and infuriate them while we wait for someone to step forward. I'm one person and I'm the only person who gets paid dedicated time to work on Python's packaging, but I'm spread thin and I have a backlog a mile long. If I don't prioritize the things that affect most people over the things that affect a small number of people, and leave edge case features up to the people who need them, things that are already blocked on me are going to languish even further.

Finally, I wasn't sure if this should be a new PEP or if it should continue as PEP 470. I talked to Nick and he suggested it would be fine to just continue on using PEP 470 for this.

> > More comments:
> > * The user experience argument you give in PEP 470 for rejecting the > idea is not really sound: the purpose of the discovery feature is to > provide a *better user experience* than an error message saying that a package > cannot be found and requiring the user to do research on the web to find > the right URLs. Package managers can use the information about the > other indexes they receive from PyPI to either present them to the user > to use/install them or to even directly go there to find the packages.

It's a slightly better user experience than a flat out error, yes, but it's a much worse user experience than what people using PyPI deserve. If we're going to solve a problem, I'd much rather do it correctly in a way that doesn't frustrate end users and gives them a great experience, rather than something that is going to annoy them and be a "death of a thousand cuts" to people hosting off of PyPI.

I think that the compromise feature I added in PEP 470 will be the same sort of compromise feature we had in PEP 438: something that on the tin looks like a compromise, enough to get the folks who need/want it to think the PEP is supporting their use case, but in reality is just another cut in a "death of a thousand cuts" to hosting outside of the US (or off of PyPI). I don't want to continue to implement half solutions that I know are going to be painful to end users, with the idea in mind that I know the pain is going to drive people away from those solutions and towards the one good option we have currently. I'd rather be honest with everyone involved about what is truly supported and focus on making that a great experience for everyone.

> > * The section on data sovereignty should really be removed or reworded.
> PEPs should be neutral and not imply political views, in particular not > make it look like the needs of US users of PyPI are more important than > those of non-US users. Using "poor user experience" as an argument here is > really not appropriate.

I'm perhaps not great at wording things. I don't think it's US users vs non-US users, since plenty of people outside of the US are perfectly happy and able to upload their projects to PyPI. I think it's more of "the needs of the many outweigh the needs of the few", but with an explicit callout that if one of those few wants to come forward and work with me, we can get something in place that really solves that problem in a user friendly way.

Perhaps you could suggest a rewording that you think says the above? I don't see a political view being implied, nor do I see the needs of US users being prioritized over the needs of non-US users.
> There is no reason to assume that we cannot have several PyPI > installations around the world to address any such issues.

I don't assume that we can't do something like that; one of my ideas for solving this issue looks something like that in fact. However, without someone who cares willing to step forward and bring to the table an expertise in what will or will not satisfy those legalities, and with a willingness to pitch in and contribute to help make such a solution a reality, I don't feel comfortable spending any time on a solution that may not even actually solve the problem at hand.

I don't think the solution that was in the PEP is that solution though. I think it was a poison pill that I fully expected to have a terrible experience, which would just force people to host on PyPI or have their project suffer. Repositories hosted by random people end up making people's installs extremely unreliable. We've made great strides in making it so that ``pip install <project>`` rarely fails because something is down, and I think blessing a feature, or continuing to support one, that doesn't aid in making that more the case is harmful to the community as a whole.

Honestly, if someone comes forward now-ish or the PSF tells me now-ish they will hire someone to figure out the intricacies of this matter, I'll even put this PEP back on hold while we design this feature. I have no particular animosity towards hosting outside of the US, I just don't have the expertise or time to design and implement it on my own.

> > * It is rather unusual to have a PEP switch from a compromise solution > to a rejection of the compromise this late in the process.
> > I will soon be starting a PSF working group to address some of > the reasons why people cannot upload packages to PyPI. The intent > is to work on the PyPI terms to make them more package author > friendly. Anyone interested to join?

I don't have any particular insight into the ToS nor do I really care what it says as long as it grants PyPI everything it needs to function. I should probably be a part of the WG though since it involves PyPI and I'm really the only person working on PyPI. I wouldn't want to adopt a ToS for PyPI without Van's approval though.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Thu Aug 27 13:53:34 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 27 Aug 2015 07:53:34 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: Message-ID:

On August 27, 2015 at 4:28:34 AM, Paul Moore (p.f.moore at gmail.com) wrote:
> > Encouraging people to set up an external index if they have off-PyPI > hosting requirements seems entirely reasonable. But I would say > (having tried to do this for testing and personal use in the past) > it's not easy to find good documentation on how to set up an external > index (when you're starting from a simple web host). Having a "how to > set up a PyPI-style simple index" document, linked from the PEP (and > ultimately from the docs) would be a useful resource for people in > general, and a good starting point for any discussion with people who > have requirements for not hosting on PyPI. Probably putting such a > document in https://packaging.python.org/en/latest/distributing.html > would make sense.

We can add documentation; it's basically "stick some files in a directory, run python -m http.server", adjusted for someone's need for performance and "production" readiness.
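A minimal sketch of that approach, with a hypothetical directory path:

    # Serve a directory of sdists/wheels as a bare-bones package index,
    # the same thing `python -m http.server` does from the command line.
    import http.server
    import os
    import socketserver

    os.chdir("/srv/packages")  # a directory full of sdists and wheels
    server = socketserver.TCPServer(
        ("", 8000), http.server.SimpleHTTPRequestHandler)
    # Clients can then install from it with, e.g.:
    #   pip install --find-links http://localhost:8000/ myproject
    server.serve_forever()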
We make it pretty easy to make one with --find-links; any old web server with an auto-index and a directory full of files will do it.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Thu Aug 27 14:00:46 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 27 Aug 2015 08:00:46 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <20150827102557.4bfb39c4@fsol>
References: <20150827102557.4bfb39c4@fsol>
Message-ID:

On August 27, 2015 at 4:26:28 AM, Antoine Pitrou (solipsis at pitrou.net) wrote:
> > Out of curiosity, have you tried to determine if those Unsafely Off > PyPI projects were either still active or "popular" ? > >

10 months ago I attempted to figure out how popular or active those projects were. I didn't redo those numbers, but I can if people think it should be in the PEP. I felt the previous versions of the PEP were a bit too much of "well if you look at the data this way you get X but if you look at it this way you get Y" and I tried to narrow it down to just a single measure of impact that took the broadest interpretation of what impact would mean.

Anyways, 10 months ago I parsed the log files for a single day and I looked at how often the /simple/<project>/ pages got hit for every project on PyPI. This isn't a great metric since people running an install on multiple machines, or using something like tox to run it multiple times on the same machine, will be counted multiple times. However, I got data that looked like this:

Top Externally Hosted Projects by Requests
------------------------------------------

This is determined by looking at the number of requests the ``/simple/<project>/`` page had gotten in a single day. The total number of requests during that day was 10,623,831.

============================== ========
Project                        Requests
============================== ========
PIL                            63869
Pygame                         2681
mysql-connector-python         1562
pyodbc                         724
elementtree                    635
salesforce-python-toolkit      316
wxPython                       295
PyXML                          251
RBTools                        235
python-graph-core              123
cElementTree                   121
============================== ========

Top Externally Hosted Projects by Unique IPs
--------------------------------------------

This is determined by looking at the IP addresses of requests the ``/simple/<project>/`` page had gotten in a single day. The total number of unique IP addresses during that day was 124,604.

============================== ==========
Project                        Unique IPs
============================== ==========
PIL                            4553
mysql-connector-python         462
Pygame                         202
pyodbc                         181
elementtree                    166
wxPython                       126
RBTools                        114
PyXML                          87
salesforce-python-toolkit      76
pyDes                          76
============================== ==========

I don't know what those numbers look like today, but I suspect that they'd be even more weighted towards PIL being the primary project that actual users are pulling down that isn't hosted on PyPI.
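A rough sketch of the kind of tally described above (the log format and file name are assumptions, common-log-style lines with the client IP first):

    # Count requests and unique client IPs per /simple/<project>/ page.
    import collections
    import re

    requests = collections.Counter()
    unique_ips = collections.defaultdict(set)
    line_re = re.compile(r'^(\S+) .*"GET /simple/([^/" ]+)/')

    with open("access.log") as log:
        for line in log:
            match = line_re.match(line)
            if match:
                ip, project = match.groups()
                requests[project] += 1
                unique_ips[project].add(ip)

    for project, count in requests.most_common(10):
        print(project, count, len(unique_ips[project]))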
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From p.f.moore at gmail.com Thu Aug 27 14:19:22 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 27 Aug 2015 13:19:22 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: <55DEED02.5080109@egenix.com>
Message-ID:

On 27 August 2015 at 12:51, Donald Stufft wrote:
> Perhaps you could suggest a rewording that you think says the above? I don't see a political view being implied, nor do I see the needs of US users being prioritized over the needs of non-US users.

Just for perspective, I didn't read that section as prioritising US users or authors in any way. It does reflect the reality that PyPI is hosted in the US, but I think it's both fair and necessary to point that out and explain the implications.

It may be that the comment "If someone for whom the issue of data sovereignty matters to them wishes to put forth the effort..." reflects a little too much your frustration with not being able to get anywhere with a solution to this issue. Maybe rephrase that as something along the lines of "The data sovereignty issue will need to be addressed by someone with an understanding of the restrictions and constraints involved. As the author of this PEP does not have that expertise, it should be addressed in a separate PEP"?

Paul

From donald at stufft.io Thu Aug 27 14:47:54 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 27 Aug 2015 08:47:54 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: <55DEED02.5080109@egenix.com>
Message-ID:

On August 27, 2015 at 8:19:25 AM, Paul Moore (p.f.moore at gmail.com) wrote:
> > Just for perspective, I didn't read that section as prioritising US > users or authors in any way. It does reflect the reality that PyPI is > hosted in the US, but I think it's both fair and necessary to point > that out and explain the implications.

Right, and the reality is also that the only people who are currently working on it consistently (ever since Richard has stepped back to pursue things more interesting to him) are also based in the US. Basically just me with occasional contributions from other people, but primarily me. I don't have metrics on legacy PyPI because it's in Mercurial, but you can explore Stackalytics [1][2] for Warehouse.

> > It may be that the comment "If someone for whom the issue of data > sovereignty matters to them wishes to put forth the effort..." > reflects a little too much your frustration with not being able to get > anywhere with a solution to this issue. Maybe rephrase that as > something along the lines of "The data sovereignty issue will need to > be addressed by someone with an understanding of the restrictions and > constraints involved. As the author of this PEP does not have that > expertise, it should be addressed in a separate PEP"?

Thanks, I've switched to your suggested wording [3].
[1] http://stackalytics.com/?release=all&project_type=pypa-group&metric=loc&module=warehouse
[2] http://stackalytics.com/?release=all&project_type=pypa-group&metric=commits&module=warehouse
[3] https://hg.python.org/peps/rev/6c8a8f29a798

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From tseaver at palladion.com Thu Aug 27 16:32:32 2015
From: tseaver at palladion.com (Tres Seaver)
Date: Thu, 27 Aug 2015 10:32:32 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: References: <55DEED02.5080109@egenix.com>
Message-ID:

On 08/27/2015 07:51 AM, Donald Stufft wrote:

> This leaves the user feeling annoyed that we didn't just search those > locations by default. I truly think it is a bad experience and I only > ever added it because I wanted the discussion to be over with and I > was trying to placate people by giving them a bad feature

I don't understand the sensibility here: an error message which tells me "not hosted on PyPI, try 'pip...' instead" seems like a *good* UX to me: having a tool which respects its default policy ("trust only PyPI") while giving me the information I need to off-road when needed is a good balance.

Tres.

--
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
Given my experience dealing with pip's users and the fallout of PEP 438, the very next question we'd get after implementing that UI will either be "If pip knows what I need to do, why can't it just do it for me instead of making me type it out again?" OR "Give me a flag so I can just automatically accept every externally hosted index". Both of these asks are completely logical from an end user who doesn't understand why the situation is the way it is, but are also essentially "let me be insecure all the time implicitly" flags.

On the other hand, if we just remove it then we can explain that we used to support an insecure method of finding links, but that we no longer support it. The difference here is that there is no bread crumb of "here's some information that pip obviously knows, because it's telling it to you" to lead people to ask for something to opt into a "global insecure" flag. We have a clear answer that doesn't leave room for argument: "We no longer get that information from PyPI so we cannot use it".

I think it's a bad API because I think it's going to cause frustration with people, particularly that pip is making them do extra leg work to approve or type in a repository URL. In addition, the discovery mechanism will only be in new versions of pip; however, only about a third of our users upgrade quickly after a release (see: https://caremad.io/2015/04/a-year-of-pypi-downloads/), so the error case is going to be happening with the vast bulk of users anyway (the "Unknown" in that graph is older than 1.4).

I also think it's a bad experience because you're mandating that they are lowering the "uptime" of any particular installation set that includes an external repository unless every repository has 100% uptime. This is because you're adding new single points of failure into a system. PyPI has had 99.94% uptime over the last year, which corresponds with 5 1/2 hours of downtime. Let's assume that's a rough average for what we can expect. If someone adds a single additional repository then the uptime of the system as a whole becomes 99.88% (or X hours of downtime), a third repository brings it to 99.82% (X hours), a fourth brings it to 99.76% (X hours). I think this is a conservative estimate of what the effects of the downtime would be.
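(To make that arithmetic concrete, here is a rough illustration in Python, assuming failures are independent and that every added repository matches PyPI's 99.94% figure; an install that needs every repository to be reachable succeeds only when all of them are up, so the availabilities multiply:)

HOURS_PER_YEAR = 365.25 * 24

def combined_availability(*availabilities):
    # a system that requires every repository to be up is only as
    # available as the product of the individual availabilities
    result = 1.0
    for a in availabilities:
        result *= a
    return result

for n in range(1, 5):
    avail = combined_availability(*([0.9994] * n))
    print("%d repositories: %.2f%% available, ~%.1f hours down/year"
          % (n, avail * 100, (1 - avail) * HOURS_PER_YEAR))

# 1 repositories: 99.94% available, ~5.3 hours down/year
# 2 repositories: 99.88% available, ~10.5 hours down/year
# 3 repositories: 99.82% available, ~15.8 hours down/year
# 4 repositories: 99.76% available, ~21.0 hours down/year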
On the other hand, here's what I consider a good experience, which is possible if my assumptions about what is acceptable for data sovereignty are correct: Project "Foo" doesn't want to host their projects in the US for $REASONS; they go to https://pypi.python.org/ and register their project, and when registering they select to have their uploads hosted in the EU. Anytime they upload their files to https://pypi.python.org, instead of storing them in a bucket in us-west-2, PyPI checks for their preferences, sees they have selected the EU, and instead stores their files in eu-west-1 (Ireland). User "Jill" wants to install Project "Foo" and she is using pip 1.5.6 from her Debian operating system. When she types in ``pip install Foo``, pip goes to https://pypi.python.org/simple/foo/ and gets a list of files which have been hosted in the EU. Without any updates or changes required on her end, pip downloads these files and installs them.

Here's the thing though, which I've been saying: I don't know the laws and I don't think it's reasonable to expect me to learn the laws for all these other countries. There are open questions on how to actually implement this. For example, what exactly are we trying to achieve?

If we're trying to protect against the US government compelling the hosting company to do something, then you're pretty much boned, because if the files were hosted in the EU you still have the fact that it'd be a service controlled by a US non-profit, run by volunteers who live in the US, developed by someone who lives in the US who is employed by someone who lives in the US. If we're trying to comply with some sort of data locality laws like https://en.wikipedia.org/wiki/Data_Protection_Directive, does OSS even count as "personal data"? If it does, then does uploading it to https://pypi.python.org/, which is located in the US, but storing and hosting it from the EU satisfy the requirements? What about putting it behind Fastly (another US company): when a US user requests those files, can it route them and cache them in a US datacenter? Is it OK to have it linked from https://pypi.python.org/ (again, hosted in the US) or do we need a whole separate repository to handle these files?

I think we can make this a great experience, but it is its own discussion and it needs to include stakeholders who actually know what the requirements are. I need someone who can put forth some effort into making it a reality instead of expecting me to do it all. If nobody wants to put in any effort to make it happen, maybe it's not actually that important to them?

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ntoll at ntoll.org Thu Aug 27 16:00:45 2015
From: ntoll at ntoll.org (Nicholas H.Tollervey)
Date: Thu, 27 Aug 2015 15:00:45 +0100
Subject: [Distutils] 403 response when registering a new package in PyPI via setup.py
Message-ID: <55DF180D.5000207@ntoll.org>

Hi,

I get a 403 when I try to register a package on PyPI under the name "training" using setup.py (and via the form too). A cursory search tells me that the name is free. However, a buddy tells me that perhaps the name isn't free - but the associated package has no releases yet. If this is the case it sounds like the presumed existing "training" module is in a bad/wrong state.

Would it be possible to give PyPI a kick with a proverbial spanner so I can, in fact, register my package and/or remove the existing erroneous entry under the name "training"? Put simply, what steps can I take to fix this?

Thanks for your help,

Nicholas.

From nate at bx.psu.edu Thu Aug 27 21:21:40 2015
From: nate at bx.psu.edu (Nate Coraor)
Date: Thu, 27 Aug 2015 15:21:40 -0400
Subject: [Distutils] Working toward Linux wheel support
In-Reply-To:
References:
Message-ID:

On Tue, Aug 25, 2015 at 1:54 PM, Nate Coraor wrote:
> I've started down this road of Linux platform detection, here's the work so far:
>
> https://bitbucket.org/natefoo/wheel/src/tip/wheel/platform/linux.py
>
> I'm collecting distribution details here:
>
> https://gist.github.com/natefoo/814c5bf936922dad97ff
>
> One thing to note, although it's not used, I'm attempting to label a particular ABI as stable or unstable, so for example, Debian testing is unstable, whereas full releases are stable. Arch and Gentoo are always unstable, Ubuntu is always stable, etc. Hopefully this would be useful in making a decision about what wheels to allow into PyPI.
> --nate

Hi all,

Platform detection and binary-compatibility.cfg support is now available in my branch of pip [1]. I've also built a large number of psycopg2 wheels for testing [2]. Here's what happens when you try to install one of them on CentOS 7 using my pip:

# pip install --index https://wheels.galaxyproject.org/ --no-cache-dir psycopg2
Collecting psycopg2
  Could not find a version that satisfies the requirement psycopg2 (from versions: )
No matching distribution found for psycopg2

Then create /etc/python/binary-compatibility.cfg:

# cat /etc/python/binary-compatibility.cfg
{
    "linux_x86_64_centos_7": {
        "install": ["linux_x86_64_rhel_6"]
    }
}

# pip install --index https://wheels.galaxyproject.org/ --no-cache-dir psycopg2
Collecting psycopg2
  Downloading https://wheels.galaxyproject.org/packages/psycopg2-2.6.1-cp27-cp27mu-linux_x86_64_rhel_6.whl (307kB)
    100% |################################| 307kB 75.7MB/s
Installing collected packages: psycopg2
Successfully installed psycopg2-2.6.1

Of course, I have not attempted to solve the external dependency problem:

# python -c 'import psycopg2; print psycopg2'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/psycopg2/__init__.py", line 50, in <module>
    from psycopg2._psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: libpq.so.5: cannot open shared object file: No such file or directory

But after installing postgresql-libs, everything works as expected:

# python -c 'import psycopg2; print psycopg2'

This is an improvement over the current situation of an sdist in PyPI, however, since only one non-default package (postgresql-libs) needs to be installed, as opposed to postgresql-devel and the build tools (gcc, make, etc.). In addition, a user installing psycopg2 is likely to already have postgresql-libs installed.

I'd really appreciate it if this work could be given a look, and some discussion could take place on where to go from here.

Thanks,
--nate

[1]: https://github.com/natefoo/pip/tree/linux-wheels
[2]: https://wheels.galaxyproject.org/simple/psycopg2
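(For readers following along: the binary-compatibility.cfg shown above is simply a JSON mapping from the platform tag detected on the running system to additional tags whose wheels the installer may accept. A minimal sketch of that lookup - the helper name is hypothetical, not the actual code in Nate's branch:)

import json

def acceptable_platform_tags(detected_tag,
                             cfg_path="/etc/python/binary-compatibility.cfg"):
    # The exact tag always matches; the cfg may whitelist extra tags
    # (e.g. rhel_6 wheels on a centos_7 system, as in the transcript).
    tags = [detected_tag]
    try:
        with open(cfg_path) as f:
            cfg = json.load(f)
    except (IOError, ValueError):
        return tags  # no cfg, or unparseable: exact match only
    tags.extend(cfg.get(detected_tag, {}).get("install", []))
    return tags

# acceptable_platform_tags("linux_x86_64_centos_7")
# -> ["linux_x86_64_centos_7", "linux_x86_64_rhel_6"]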
From mal at egenix.com Thu Aug 27 23:00:13 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 27 Aug 2015 23:00:13 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com>
Message-ID: <55DF7A5D.8090804@egenix.com>

On 27.08.2015 13:51, Donald Stufft wrote:
> On August 27, 2015 at 6:57:17 AM, M.-A. Lemburg (mal at egenix.com) wrote:
>> This feature was part of a compromise to reach consensus on the removal
>> of external hosting. While I don't think the details of the repository discovery
>> need to be part of PEP 470, I do believe that the PEP should continue
>> to support the idea of having a way for package managers to easily
>> find external indexes for a particular package and not outright reject it.
>>
>> Instead, the PEP should highlight this compromise and defer it to a
>> separate PEP.
>
> I've never thought that particular API was actually a good idea, I think it's a poor end user experience because it invokes the same sort of "well if you knew what I needed to type to make it work, why didn't you just do it for me" reaction as PEP 438 does. The user experience will be something like:
>
> $ pip install something-not-hosted-on-pypi
> ...
> ERROR: Can not find something-not-hosted-on-pypi, it is not hosted on PyPI, its author has indicated that it can be found at:
>
>     * https://pypi.example.com/all/ : All Platforms
>     * https://pypi.example.com/ubuntu-trusty/ : Ubuntu Trusty
>
> To enable, please invoke pip by adding --extra-index-url

Uhm, no :-) This would be a more user friendly way of doing it:

$ pip install something-not-hosted-on-pypi
...
I'm sorry, but we cannot find something-not-hosted-on-pypi on the available configured trusted indexes:

* https://pypi.python.org/

However, the author has indicated that it can be found at:

* https://pypi.example.com/

Should we add this PyPI index to the list of trusted indexes ? (y/n)
>>> y

Thank you. We added https://pypi.example.com/ to the list of indexes. You are currently using these indexes as trusted indexes:

* https://pypi.python.org/
* https://pypi.example.com/

We will now retry the installation.
...
something-not-hosted-on-pypi installed successfully.
$

> $ pip install --extra-index-url https://pypi.example.com/all/ something-not-hosted-on-pypi
>
> This leaves the user feeling annoyed that we didn't just search those locations by default. I truly think it is a bad experience and I only ever added it because I wanted the discussion to be over with and I was trying to placate people by giving them a bad feature.

There's nothing bad in the above UI. A user will go through the discovery process once and the next time around, everything will just work.

> Instead, I think that we can design a solution that works by default and will work without the end user needing to do anything at all. However, I'm not an expert in the US laws (the country I live in and have lived in all my life) and I'm especially not an expert in the laws of countries other than the US. This means I don't fully understand the issue that needs to be solved. In addition to that, I only have so many hours in the day and I need to find a way to prioritize what I'm working on; the data sovereignty issue may only affect people who do not live in the US, however it does not affect everyone who is outside of the US. Most projects from authors outside of the US are perfectly fine with hosting their projects within the US and it is a minority of projects who cannot or will not for one reason or another.

All Linux distros I know and use have repositories distributed all over the planet, and many also provide official and less official ones for the users to choose from, so there is more than enough evidence that a federated system for software distribution works better than a centralized one.

I wonder why we can't agree on this ?

> I am happy to work with someone impacted by the removal of offsite to design and implement a solution to these issues that provides an experience to those people that matches the experience for people willing or able to host in the US. If the PSF wants to hire someone to do this instead of relying on someone affected to volunteer, I'm also happy to work with them. However, I do not think it's fair to everyone else, inside and outside of the US, to continue to confuse and infuriate them while we wait for someone to step forward.
> I'm one person and I'm the only person who gets paid dedicated time to work on Python's packaging, but I'm spread thin and I have a backlog a mile long. If I don't prioritize the things that affect most people over the things that affect a small number of people, and leave it up to the people who need an edge case feature, things that are already blocked on me are going to languish even further.

I'm happy to help write a PEP for the discovery feature and I'd also love to help with the implementation. My problem is that no one is paying me to work on this and so my time investment into this has to stay way behind what I'd like to invest.

> Finally, I wasn't sure if this should be a new PEP or if it should continue as PEP 470. I talked to Nick and he suggested it would be fine to just continue on using PEP 470 for this.

If you don't feel comfortable with the discovery feature, I think it's better to split it off into a separate PEP.

>> More comments:
>>
>> * The user experience argument you give in the PEP 470 for rejecting the
>> idea is not really sound: the purpose of the discovery feature is to
>> provide a *better user experience* than an error message saying that a package
>> cannot be found and requiring the user to do research on the web to find
>> the right URLs. Package managers can use the information about the
>> other indexes they receive from PyPI to either present them to the user
>> to use/install them or to even directly go there to find the packages.
>
> It's a slightly better user experience than a flat out error, yes, but it's a much worse user experience than what people using PyPI deserve. If we're going to solve a problem, I'd much rather do it correctly in a way that doesn't frustrate end users and gives them a great experience rather than something that is going to annoy them and be a "death of a thousand cuts" to people hosting off of PyPI. I think that the compromise feature I added in PEP 470 will be the same sort of compromise feature we had in PEP 438: something that on the tin looks like a compromise, enough to get the folks who need/want it to think the PEP is supporting their use case, but in reality is just another cut in a "death of a thousand cuts" to hosting outside of the US (or off of PyPI).

I think you are forgetting that the worst user experience is one where users are left without installable Python packages :-)

No matter how much you try to get people to host everything on pypi.python.org, there are always going to be some who don't want to do this and rather stick with their own PyPI index server for whatever reason.

> I don't want to continue to implement half solutions that I know are going to be painful to end users with the idea in mind that I know the pain is going to drive people away from those solutions and towards the one good option we have currently. I'd rather be honest to everyone involved about what is truly supported and focus on making that a great experience for everyone.
>
>> * The section on data sovereignty should really be removed or reworded.
>> PEPs should be neutral and not imply political views, in particular not
>> make it look like the needs of US users of PyPI are more important than
>> those of non-US users. Using "poor user experience" as argument here is
>> really not appropriate.
>
> I'm perhaps not great at wording things. I don't think it's US users vs non-US users, since plenty of people outside of the US are perfectly happy and able to upload their projects to PyPI.
> I think it's more of "the needs of the many outweigh the needs of the few", but with an explicit callout that if one of those few wants to come forward and work with me, we can get something in place that really solves that problem in a user friendly way.
>
> Perhaps you could suggest a rewording that you think says the above? I don't see a political view being implied nor do I see the needs of US users being prioritized over the needs of non-US users.

I'd just remove the whole section. Splitting the user base into US and non-US users, even if just to explain that you cannot cover all non-US views or requirements, is not something we should put into an official Python document.

>> PyPI is a central part of the Python community infrastructure and
>> should be regarded as a resource for the world-wide community.
>> There is no reason to assume that we cannot have several PyPI
>> installations around the world to address any such issues.
>
> I don't assume that we can't do something like that; one of my ideas for solving this issue looks something like that in fact. However, without someone who cares willing to step forward and bring to the table an expertise in what will satisfy those legalities or not, and with a willingness to pitch in and contribute to help make such a solution a reality, I don't feel comfortable spending any time on a solution that may not even actually solve the problem at hand.
>
> I don't think the solution that was in the PEP is that solution though. I think it was a poison pill that I fully expected to have a terrible experience which would just force people to host on PyPI or have their project suffer. Repositories hosted by random people end up making people's installs extremely unreliable. We've made great strides in making it so that ``pip install`` rarely fails because something is down, and I think blessing a feature or continuing to support one that doesn't aid in making that more the case is harmful to the community as a whole.
>
> Honestly, if someone comes forward now-ish or the PSF tells me now-ish they will hire someone to figure out the intricacies of this matter, I'll even put this PEP back on hold while we design this feature. I have no particular animosity towards hosting outside of the US, I just don't have the expertise or time to design and implement it on my own.

Fully understood; alas, I don't have more cycles to spare to help with designing a full blown distributed PyPI system in my free time.

As for the PSF: the situation is somewhat similar. The money is available for such projects, but we don't have enough people to provide management and oversight :-( While we're slowly working on changing this, it won't happen overnight.

>> * It is rather unusual to have a PEP switch from a compromise solution
>> to a rejection of the compromise this late in the process.
>>
>> I will soon be starting a PSF working group to address some of
>> the reasons why people cannot upload packages to PyPI. The intent
>> is to work on the PyPI terms to make them more package author
>> friendly. Anyone interested to join ?
>
> I don't have any particular insight into the ToS nor do I really care what it says as long as it grants PyPI everything it needs to function. I should probably be a part of the WG though, since it involves PyPI and I'm really the only person working on PyPI. I wouldn't want to adopt a ToS for PyPI without Van's approval though.

Since PyPI is legally run by the PSF, the PSF board will have to approve the new terms.
Having you on board for the WG would certainly be very useful, since there may well be technical details that come into play.

Thanks,
--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Aug 27 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...         http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...            http://python.egenix.com/
________________________________________________________________________
2015-08-27: Released eGenix mx Base 3.2.9 ...     http://egenix.com/go83
2015-08-19: Released mxODBC 3.3.5 ...             http://egenix.com/go82

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From robertc at robertcollins.net Thu Aug 27 23:23:36 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 28 Aug 2015 09:23:36 +1200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <55DF7A5D.8090804@egenix.com>
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com>
Message-ID:

On 28 Aug 2015 9:00 am, "M.-A. Lemburg" wrote:
>
> All Linux distros I know and use have repositories distributed all
> over the planet, and many also provide official and less official
> ones, for the users to choose from, so there is more than enough evidence
> that a federated system for software distribution works better than a
> centralized one.

None of them provide cross repository discovery except Conary, TTBOMK. And it is inherited, so a different UX.

So that's a difference.

Rob

From donald at stufft.io Fri Aug 28 01:26:32 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 27 Aug 2015 19:26:32 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <55DF7A5D.8090804@egenix.com>
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com>
Message-ID:

On August 27, 2015 at 5:00:27 PM, M.-A. Lemburg (mal at egenix.com) wrote:
> Uhm, no :-) This would be a more user friendly way of doing it:

Well, except we'd have to throw away all of the work we've done for discovering things up until that point, so we'd need to essentially restart the entire process anyway; I don't think that there has ever been any effort to make setuptools or pip re-entrant in that regard. The mechanism in the older PEP 470 also supported more than one index, to support people who wanted to host binary builds of their project in cases where the compatibility information in wheels isn't enough to adequately differentiate when wheels are compatible or not. If we're going to support a discoverable index like that then we should support the other major reason people might not host on PyPI too, but that means we can't automagically add things to the list of repositories, because we don't know which list of repositories is accurate.

The other problem is that end users should really be adding the configuration to their requirements.txt or other files in the situations where they are using those, so that things work in situations where they don't have an interactive console to prompt them (for example, deploying to Heroku).
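(Concretely, that means the extra index travels with the project instead of living in an interactive session. A minimal sketch of such a requirements file, with a hypothetical index URL and project name:)

# requirements.txt
--extra-index-url https://pypi.example.com/simple/
somepackage>=1.0

Running "pip install -r requirements.txt" against that file then behaves the same in an interactive shell and on a Heroku build, with no prompting involved.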
If we're automagically adding it to the list on a prompt then we make it less obvious they need to do anything else, and we just push the pain off until they attempt to deploy their project.

Finally, additional repositories are similar to additional CAs in the browser ecosystem: you want to carefully consider which ones you trust, because they get the ability to essentially execute whatever arbitrary code they want on your system. There *should* be some level of deliberation and thought behind a user adding a new repository. Allowing a new one with a simple prompt is as dangerous as a browser running into an HTTPS certificate it doesn't trust and going "Well I see you don't trust this CA, do you want to add it to your list and reload?", a UI that most (or all) browsers are moving away from and hiding as much as possible.

> All Linux distros I know and use have repositories distributed all
> over the planet, and many also provide official and less official
> ones, for the users to choose from, so there is more than enough evidence
> that a federated system for software distribution works better than a
> centralized one.
>
> I wonder why we can't agree on this ?

Sure, and people are more than welcome to not host on PyPI, and all of the tools support a federated system. However those tools also don't have any sort of meta links between repositories that will automatically suggest that you add additional repositories to your list. What you have configured is what you have configured. The latest update to PEP 470 represents moving to the exact same system that all the Linux distros you know and use have.

> I'm happy to help write a PEP for the discovery feature and I'd also
> love to help with the implementation. My problem is that no one is
> paying me to work on this and so my time investment into this has
> to stay way behind what I'd like to invest.

Sure, I mean I don't expect other people to have near the amount of time I do, since my entire job is working on Python packaging. Part of why I'm bringing this up now, instead of closer to when Warehouse is ready to launch, is to give plenty of time for discussion, implementation, and migration.

> No matter how much you try to get people to host everything
> on pypi.python.org, there are always going to be some which
> don't want to do this and rather stick with their own PyPI
> index server for whatever reason.

I don't care if people host things off of PyPI, I just don't think we need to (or should) complicate the API to try and provide a seamless experience for people hosting things outside of PyPI. If you're going off on your own you should expect there to be some level of "not on by default"-ness.

Honestly, if someone wanted to set up an additional repositor(y|ies) I wouldn't even be personally opposed to adding it to the default list of repositories in pip, assuming some basic guidelines/rules were followed. I don't speak for all of the other maintainers so they might be opposed to it, but I'd think something like:

* Is being operated by a known and trusted entity (e.g. Joe Nobody doesn't get to do this).
* Agrees to consider PyPI the central authority for who owns a particular name (e.g. just because you host a repository doesn't mean you get to make Django releases).
* Some plan for how they plan to operate it in regards to how they'll keep the uptime high.

> I'd just remove the whole section.
> Splitting the user base into US and non-US users, even if just to explain
> that you cannot cover all non-US views or requirements, is not something
> we should put into an official Python document.

Okay.

> Since PyPI is legally run by the PSF, the PSF board will have to
> approve the new terms.
>
> Having you on board for the WG would certainly be very useful,
> since there may well be technical details that come into play.

Ok, sure, sign me up.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From vinay_sajip at yahoo.co.uk Fri Aug 28 22:06:40 2015
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 28 Aug 2015 20:06:40 +0000 (UTC)
Subject: [Distutils] PyPI search RPC not working as expected
Message-ID:

The PyPI XML-RPC API seems not to be working as expected. The following simple script:

from pprint import pprint
import sys

# ServerProxy moved between Python 2 and 3, so try the old name first
try:
    from xmlrpclib import ServerProxy
except ImportError:
    from xmlrpc.client import ServerProxy

rpc_proxy = ServerProxy('https://pypi.python.org/pypi')
# search for the name given on the command line (default: 'sarge');
# the 'name' field matches substrings of project names
if len(sys.argv) < 2:
    pkg = 'sarge'
else:
    pkg = sys.argv[1]
pprint(rpc_proxy.search({'name': pkg}))

returns an empty list when passed the command line argument "tatterdema" (it should match my project "tatterdemalion"), whereas it does return a non-empty list when passed "sarg" (matching my project "sarge"), or when passed "jobswo" (matching my project "jobsworth"). My project "ragamuffin" also fails to show up if passed to the script.

Can anyone shed any light on this? Have there been any changes to how the search is supposed to work?

Regards,

Vinay Sajip

From ncoghlan at gmail.com Sat Aug 29 15:57:40 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 29 Aug 2015 23:57:40 +1000
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com>
Message-ID:

On 28 Aug 2015 07:31, "Robert Collins" wrote:
> On 28 Aug 2015 9:00 am, "M.-A. Lemburg" wrote:
>> All Linux distros I know and use have repositories distributed all
>> over the planet, and many also provide official and less official
>> ones, for the users to choose from, so there is more than enough evidence
>> that a federated system for software distribution works better than a
>> centralized one.
>
> None of them provide cross repository discovery except Conary, TTBOMK.
> And it is inherited, so a different UX.
>
> So that's a difference.

Right, the distro model is essentially the one Donald is proposing - centrally controlled default repos, ability to enable additional repos on client systems. Geographically distributed mirrors are different, as those are just redistributing signed content from the main repos.

Hosting in multiple regions and/or avoiding selected regions would definitely be a nice service to offer, and it would be good to have a straightforward way to deploy and run an external repo (e.g. a devpi Docker image), but the proposed core model is itself a tried and tested one. Reducing back to that, and restarting the exploration of multi-index support from there with a clear statement of objectives would be a good way to go.

If we need to manually whitelist some external repos for transition management purposes, then that's likely a better option than settling for a nominally general purpose feature we'd prefer people didn't actually use.

Regards,
Nick.
> Rob

From p.f.moore at gmail.com Sat Aug 29 20:46:38 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 29 Aug 2015 19:46:38 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On 27 August 2015 at 02:24, Donald Stufft wrote:
> I do need a BDFL Delegate for this PEP, Richard does not have the time to do it and the other logical candidate for a PyPI centric PEP is myself, but I don't feel it's appropriate to BDFL Delegate my own PEP.

I've agreed to be BDFL-Delegate for this PEP.

Overall, I believe the PEP is looking good. I've reviewed the latest version of the PEP, and as much as I can of the discussion - but the history of the whole issue is pretty messy and I may well have missed something. If so, please speak up!

I do have some specific points I'd like to see addressed by the PEP:

1. While the migration process talks about how we will assist existing users of the off-PyPI hosting option to migrate, the PEP doesn't include any provision for *new* projects who prefer not to host on PyPI. In the spirit of ensuring that off-PyPI hosting is seen as a fully supported option, I'd like to see the PEP include a statement that the process for setting up an external index will be added to the packaging user guide (that documentation is already being included in the emails being sent out, let's just also make sure it's published somewhere permanent as well).

2. The PEP says very little on how users should find out when an external index is required, and how to configure it. I'd suggest a couple of explicit points get included here - in "Multiple Repository/Index Support" a note should be added saying that projects that need an external index SHOULD document the index location prominently in their PyPI index page (i.e. near the top of their long_description), and installers SHOULD provide an example of configuring an external index in their documentation.

3. In the section "Allow easier discovery of externally hosted indexes", maybe replace the paragraph "This idea is rejected because it provides a similar painful end user experience where people will first attempt to install something, get an error, then have to re-run the installation with the correct options." with "This feature has been removed from the scope of the PEP because it proved too difficult to develop a solution that avoided UX issues similar to those that caused so many problems with the PEP 438 solution. If needed, a future PEP could revisit this idea."

4. The "Opposition" section still feels unsatisfying to me. It seems to me that the point of this section is to try to address specific points that came up in the discussion, so why not make that explicit and turn it into a "Frequently Asked Questions" section? Something along the lines of the following:

* I can't host my project on PyPI because of <reason>, what should I do? (Data sovereignty would be one question in this category). The answer would be to host externally and instruct users to add your index to their config.

* But this provides a worse user experience for my users than the current situation - how do I explain that to my users? There are two aspects here. On the one hand, you have to explain to your users why you don't host on PyPI.
That has to be a project-specific issue, and the PEP can't offer any help with that. On the other hand, the existing transparent use of external links has been removed for security, reliability and user friendliness reasons that have been covered elsewhere in the PEP.

* Switching my current hosting to an index-style structure breaks my workflow/doesn't fit my hosting provider's rules/... I believe the answer here was to host an index on pythonhosted.org pointing to the existing files. But it's a fair question, and one the PEP should probably cover in a bit more detail.

* Why don't you provide <X>? Generally, the answer here is that the PEP authors don't have sufficient experience with the subject matter behind X. This PEP is intended to be a straightforward, easily understood baseline, similar to existing models such as Linux distribution repositories. Additional PEPs covering extra functionality to address further specialised requirements are welcomed, but would require someone with a good understanding of the underlying issue to develop.

If anyone has any further points to raise, now is the time to do so! I don't see any need for another extended debate; hopefully most of the issues have already been discussed, and I'm going to assume unless told otherwise that people are happy they are covered properly in the PEP. In particular, if anyone wants to vote an explicit -1 on the current proposal, then please do so now.

Paul

From donald at stufft.io Sat Aug 29 20:53:56 2015
From: donald at stufft.io (Donald Stufft)
Date: Sat, 29 Aug 2015 14:53:56 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID: <199EB019-639C-4830-8243-14E552023694@stufft.io>

These changes all seem reasonable to me and I can push an updated PEP later today.

Sent from my iPhone

> On Aug 29, 2015, at 2:46 PM, Paul Moore wrote:
>
> I do have some specific points I'd like to see addressed by the PEP

From p.f.moore at gmail.com Sat Aug 29 21:41:46 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 29 Aug 2015 20:41:46 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On 27 August 2015 at 12:53, Donald Stufft wrote:
> We can add documentation, it's basically "stick some files in a directory, run python -m http.server", adjusted for someone's need for performance and "production" readiness. We make it pretty easy to make one with --find-links, any old web server with an auto index and a directory full of files will do it.

The devil's in the details, though.

* Do I need to use canonical names for packages in my index? Assuming so, what *are* the rules for a "canonical name"?

* I need a main page with package name -> package page links on it. The link text is what needs to be the package name, yes?

* All that matters on the package page is the targets of all the links - these should point to downloadable files, yes?

It shouldn't be hard to write these up (and as I said, the PEP proposes to do so in the email sent to package owners, all I'm suggesting is store that information somewhere permanent as well).

And one other point - the way the PEP talks, I believe we're suggesting people set something up that works for --extra-index-url, *not* --find-links. Pip has two different discovery mechanisms here, and I think we need to be careful over that. The PEP talks about an external *index*, which I interpret as --extra-index-url.
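(To make those details concrete: the two pages being discussed are small enough to generate with a few lines of Python. A minimal sketch - the project name, file names and paths are made up, and the normalization shown, lowercasing with runs of "-", "_" and "." collapsed to a single dash, is a safe convention for the link text and directory name:)

import re

def normalize(name):
    # a canonical form installers will match: lowercase, with runs
    # of "-", "_" and "." collapsed to a single dash
    return re.sub(r"[-_.]+", "-", name).lower()

name = normalize("My_Package")  # -> "my-package"
files = ["My_Package-1.0.tar.gz",
         "My_Package-1.0-py2.py3-none-any.whl"]

# the root page: one link per project, link text is the project name
root_page = '<a href="/simple/%s/">%s</a>' % (name, name)

# the project page: links whose targets are the downloadable files
project_page = "\n".join(
    '<a href="/packages/%s">%s</a>' % (f, f) for f in files)

Serve root_page at /simple/ and project_page at /simple/my-package/, and any static web host will then work with --extra-index-url.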
Maybe there's another point here - the PEP should say "installers may have additional discovery methods, but they *MUST* clearly state which one corresponds to the index specification method described in this PEP".

Paul

From donald at stufft.io Sun Aug 30 01:12:42 2015
From: donald at stufft.io (Donald Stufft)
Date: Sat, 29 Aug 2015 19:12:42 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On August 29, 2015 at 2:46:40 PM, Paul Moore (p.f.moore at gmail.com) wrote:
> I do have some specific points I'd like to see addressed by the PEP:

Ok, I've gone ahead and addressed (I think) everything you've pointed out; you can see the diff at https://hg.python.org/peps/rev/8ddbde2dfd45 or see the entire PEP at https://www.python.org/dev/peps/pep-0470/ once it updates. If there are any additional changes that need to be made, let me know!

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ncoghlan at gmail.com Sun Aug 30 06:48:12 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 30 Aug 2015 14:48:12 +1000
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On 30 August 2015 at 05:41, Paul Moore wrote:
> On 27 August 2015 at 12:53, Donald Stufft wrote:
>> We can add documentation, it's basically "stick some files in a directory, run python -m http.server", adjusted for someone's need for performance and "production" readiness. We make it pretty easy to make one with --find-links, any old web server with an auto index and a directory full of files will do it.
>
> The devil's in the details, though.
>
> * Do I need to use canonical names for packages in my index? Assuming
> so, what *are* the rules for a "canonical name"?

This is a good point - even if folks are hosting externally, we still want them to claim at least a top-level name on the global index. If they'd like to just claim a single name, and not worry about PyPI beyond that, then a zc style usage of namespace packages likely makes sense, and only claim additional names on PyPI if they want to promote a package out of their custom namespace and into the global one.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From p.f.moore at gmail.com Sun Aug 30 13:48:44 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 30 Aug 2015 12:48:44 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On 30 August 2015 at 05:48, Nick Coghlan wrote:
>> The devil's in the details, though.
>>
>> * Do I need to use canonical names for packages in my index? Assuming
>> so, what *are* the rules for a "canonical name"?
>
> This is a good point - even if folks are hosting externally, we still
> want them to claim at least a top-level name on the global index.

Although what you say is *also* a good point, my original question was whether I need to make my links use all lowercase and whichever of dash or underscore it is that pip treats as the canonical form (I can never remember).

The point I was making is to make sure that people with an existing non-PyPI file structure can set up an index easily. They'll quite likely need to build some sort of static webpage for that (consider a project hosted on something like SourceForge, setting up an index to replace their current external links on a cheap provider that only offers static webpage hosting).
If it's an older project, it's quite possible they will *not* use the canonical form of the project name in their filenames, so they'll need to fix the name up or they'll get obscure "pip can't find your project" errors.

Regarding your point, though, looking at the wider picture there are *three* classes of project to consider:

1. Projects registered and hosted on PyPI
2. Projects registered on PyPI but hosted elsewhere
3. Projects neither hosted nor registered on PyPI

(There's also projects in category 1, but with some (presumably historical) files hosted elsewhere, but I'll ignore those for now, as for most purposes they can be treated as category 1.)

Category 3 could quite easily be massive (private indexes used by companies, for example) but is irrelevant for the purposes of this PEP. Category 1 is straightforward - the PEP is a 100% clear win there, as the overhead of unneeded link scraping is removed.

The problem is existing category 2 projects, and new projects that want to join that category. We need to ensure that hosting off PyPI remains a viable option for such projects, which is why it's important to document how to create an index. But as you point out, we *also* need to make sure people don't think "what's the point in registering on PyPI if I'm setting up my own index anyway?" (and hence choose category 3 rather than category 2).

Maybe there should be a further FAQ in the PEP - "If I'm setting up my own index for my files, why should I bother registering my project on PyPI at all?" I suspect this is the real question at the root of a lot of the objections to the PEP. For people hosting off PyPI, the current arrangement (ignoring the UX issues) means that "pip install foo" works for them. We're now proposing to remove that benefit, and while it's not the *only* benefit of being registered on PyPI, maybe a reminder of what the other benefits are would put this into perspective.

Paul

From donald at stufft.io Sun Aug 30 15:52:35 2015
From: donald at stufft.io (Donald Stufft)
Date: Sun, 30 Aug 2015 09:52:35 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References:
Message-ID:

On August 30, 2015 at 7:48:46 AM, Paul Moore (p.f.moore at gmail.com) wrote:
> Maybe there should be a further FAQ in the PEP - "If I'm setting up my
> own index for my files, why should I bother registering my project on
> PyPI at all?" I suspect this is the real question at the root of a lot
> of the objections to the PEP. For people hosting off PyPI, the current
> arrangement (ignoring the UX issues) means that "pip install foo"
> works for them. We're now proposing to remove that benefit, and while
> it's not the *only* benefit of being registered on PyPI, maybe a
> reminder of what the other benefits are would put this into
> perspective.

Added to PEP 470 - https://hg.python.org/peps/rev/2794fe98567d

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From mal at egenix.com Mon Aug 31 10:44:32 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 31 Aug 2015 10:44:32 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com>
Message-ID: <55E413F0.6010002@egenix.com>

On 29.08.2015 15:57, Nick Coghlan wrote:
> On 28 Aug 2015 07:31, "Robert Collins" wrote:
>> On 28 Aug 2015 9:00 am, "M.-A. Lemburg" wrote:
Lemburg" wrote: >>> >> >>> All Linux distros I know and use have repositories distributed all >>> over the planet, and many also provide official and less official >>> ones, for the users to choose from, so there is more than enough > evidence >>> that a federated system for software distribution works better than a >>> centralized one. >> >> None of them provide cross repository discovery except Conary ttbomk. And > its is inherited so a different ux. >> >> So that's a difference. > > Right, the distro model is essentially the one Donald is proposing - > centrally controlled default repos, ability to enable additional repos on > client systems. Geographically distributed mirrors are different, as those > are just redistributing signed content from the main repos. > > Hosting in multiple regions and/or avoiding selected regions would > definitely be a nice service to offer, and it would be good to have a > straightforward way to deploy and run an external repo (e.g. a devpi Docker > image), but the proposed core model is itself a tried and tested one. > Reducing back to that, and restarting the exploration of multi-index > support from there with a clear statement of objectives would be a good way > to go. > > If we need to manually whitelist some external repos for transition > management purposes, then that's likely a better option than settling for a > nominally general purpose feature we'd prefer people didn't actually use. There are quite a few systems out there that let you search for repos with the packages you need, but they are usually web based and not integrated into the package managers, e.g. rpmfind, various PPA search tools (e.g. Launchpad or open build service), etc. There's also another difference: Linux repos are usually managed by a single entity owning the packages, very much unlike PyPI which is merely a hosting platform and index to point to packages owned by the authors. So it's natural for PyPI to let package manager users know about where to find packages which are not hosted on PyPI and the user experience (which people always bring up as the number one argument for all sorts of things on this list ;-)), is much better when providing this information to the user directly, rather than saying "I couldn't find any distribution files for you - go look on PyPI for instructions where to find them...". -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 31 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2015-08-27: Released eGenix mx Base 3.2.9 ... http://egenix.com/go83 2015-08-19: Released mxODBC 3.3.5 ... http://egenix.com/go82 ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From wichert at wiggy.net Mon Aug 31 11:05:35 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 31 Aug 2015 11:05:35 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <55E413F0.6010002@egenix.com>
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com>
Message-ID: <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net>

On 31 Aug 2015, at 10:44, M.-A. Lemburg wrote:
> There's also another difference: Linux repos are usually managed
> by a single entity owning the packages, very much unlike PyPI which
> is merely a hosting platform and index to point to packages owned
> by the authors.

That is probably true for public repositories. However, there are also a huge number of organisations who have internal repositories for deb/rpm packages, and many of those contain third party packages. I have a couple, and most of them contain a combination of our own packages as well as a collection of backports and custom packages for software that hasn't been packaged by anyone else.

Wichert.

From mal at egenix.com Mon Aug 31 11:35:59 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 31 Aug 2015 11:35:59 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net>
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net>
Message-ID: <55E41FFF.3010407@egenix.com>

On 31.08.2015 11:05, Wichert Akkerman wrote:
> That is probably true for public repositories. However, there are also a
> huge number of organisations who have internal repositories for deb/rpm
> packages, and many of those contain third party packages. I have a couple,
> and most of them contain a combination of our own packages as well as a
> collection of backports and custom packages for software that hasn't been
> packaged by anyone else.

True, but for those, I think explicitly adding the index URL to the package installer search path is the better approach.

Or perhaps I misunderstood and you meant something like:

"If the package is not in my internal repo, I don't want pip to look it up on PyPI or anywhere else."

That's a valid use case, but it seems orthogonal to the question of making public repositories for specific packages more easily configurable for package manager users.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source  (#1, Aug 31 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> mxODBC Plone/Zope Database Adapter ...         http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...            http://python.egenix.com/
________________________________________________________________________
2015-08-27: Released eGenix mx Base 3.2.9 ...     http://egenix.com/go83
2015-08-19: Released mxODBC 3.3.5 ...             http://egenix.com/go82

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From wichert at wiggy.net Mon Aug 31 11:43:28 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 31 Aug 2015 11:43:28 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To: <55E41FFF.3010407@egenix.com>
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

> On 31 Aug 2015, at 11:35, M.-A. Lemburg wrote:
>
> True, but for those, I think explicitly adding the index URL to
> the package installer search path is the better approach.

Sure.

> Or perhaps I misunderstood and you meant something like:
>
> "If the package is not in my internal repo, I don't want
> pip to look it up on PyPI or anywhere else."

I just wanted to add a bit of context, since your statement seemed to reflect a slightly different reality than mine. You do bring up a good point though. I really like the apt-preferences approach. That allows you to define some rules to set which repository should be used. You can do things like always prefer a specific repository, or do that only for specific packages, with a default rule to use whichever repository has the latest version. Very, very useful.

Wichert.

From p.f.moore at gmail.com Mon Aug 31 12:36:20 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 31 Aug 2015 11:36:20 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On 31 August 2015 at 10:43, Wichert Akkerman wrote:
> I just wanted to add a bit of context, since your statement seemed to
> reflect a slightly different reality than mine. You do bring up a good
> point though. I really like the apt-preferences approach. That allows you
> to define some rules to set which repository should be used. You can do
> things like always prefer a specific repository, or do that only for
> specific packages, with a default rule to use whichever repository has the
> latest version. Very, very useful.

There's been a few posts now about how the new system is or is not like Linux package management systems. As a Windows user, my view of Linux package management is pretty limited. To me it seems like the basic approach is, if a package is in the official repo, you can just do apt-get install (or yum install) and it works.
If the package is elsewhere, you need to find out where (usually manually, as far as I can see) and then do a bit of config, and then the package is available just like standard ones. That's pretty much the same as the proposed solution for PyPI/pip.

If there's any additional functionality that Linux systems provide, could someone summarise it from an end user POV for me? (And maybe also point out why I'd never noticed it as a naive Linux user!) To me, that's a key to whether this PEP is missing something important relative to those systems.

Paul

From wichert at wiggy.net Mon Aug 31 12:59:37 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 31 Aug 2015 12:59:37 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

> On 31 Aug 2015, at 12:36, Paul Moore wrote:
>
> If there's any additional functionality that Linux systems provide,
> could someone summarise it from an end user POV for me? (And maybe
> also point out why I'd never noticed it as a naive Linux user!) To me,
> that's a key to whether this PEP is missing something important
> relative to those systems.

Sure. My knowledge of rpm is 20 years out of date, so I am going to focus on the deb/dpkg/apt world only. The whole packaging system is built around archives. The packaging tools themselves do not have anything hardcoded there; they pick up the archives from a configuration file (/etc/apt/sources.list). That file lists all archives. For example:

# Hetzner APT-Mirror
deb http://mirror.hetzner.de/ubuntu/packages trusty main restricted universe multiverse
deb-src http://mirror.hetzner.de/ubuntu/packages trusty main restricted universe multiverse
deb http://de.archive.ubuntu.com/ubuntu/ trusty multiverse
deb http://security.ubuntu.com/ubuntu trusty-security main restricted
deb-src http://security.ubuntu.com/ubuntu trusty-security main restricted

There are two things of note here: you can have multiple archive types ("deb" and "deb-src" in this case), and different URL types. Besides the standard http there is also a cdrom scheme which can mount cdroms, there is a tor scheme now, etc. These are pluggable and handled by binaries in /usr/lib/apt/methods/.
When you do an install of a new computer the installer will add some default repositories there, generally a local mirror and the security archive. https://wiki.debian.org/SourcesList has some more detailed information (which is probably not relevant here).

There are some convenient tools available to register extra archives. For example add-apt-repository, which allows you to do this:

# add-apt-repository ppa:saltstack/salt
# add-apt-repository "deb http://nl.archive.ubuntu.com/ubuntu trusty universe"

This will try to download and configure the correct GPG key to check archive signatures as well.

Each archive has an index which lists all its packages with metadata. You download these to your system (using "apt-get update" or a GUI), and the local copy is used by all other tools. That results in fast searches, and solvers having fast access to the complete database of available packages.

When installing a package you normally get the latest version, independent of which archive has that version. You can ask the system which versions are available:

# apt-cache policy salt-minion
salt-minion:
  Installed: 2015.5.3+ds-1trusty1
  Candidate: 2015.5.3+ds-1trusty1
  Version table:
 *** 2015.5.3+ds-1trusty1 0
        500 http://ppa.launchpad.net/saltstack/salt/ubuntu/ trusty/main amd64 Packages
        100 /var/lib/dpkg/status
     0.17.5+ds-1 0
        500 http://mirror.hetzner.de/ubuntu/packages/ trusty/universe amd64 Packages
        500 http://de.archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages

In this case there are two versions available: 2015.5.3+ds-1trusty1, which is currently installed and came from a ppa, and 0.17.5+ds-1, which is available from two mirrors.

This can result in somewhat interesting behaviour when multiple archives have the same packages and are updated independently. To handle that you can define preferences to tell the system how you want to handle that. For example if you want salt to always be installed from the ppa you can define this preference:

Package: salt*
Pin: origin "LP-PPA-saltstack-salt"
Pin-Priority: 900

This makes sure packages from the salt ppa have a higher priority, so they are always preferred. You can also use this to make an archive of backports available on your system, but only track a few specific packages from there. There are more options available there; see https://wiki.debian.org/AptPreferences and https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_tweaking_candidate_version for more information.

Wichert.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
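For comparison with the sources.list and preferences files above: the closest pip-side equivalent at the time of this thread is pip's own config file plus a couple of command-line flags. A minimal sketch -- the internal index URL is invented for illustration:

    # ~/.pip/pip.conf (pip.ini under %APPDATA%\pip on Windows)
    [global]
    index-url = https://pypi.python.org/simple/
    extra-index-url = https://pypi.internal.example.com/simple/

or per invocation:

    pip install --extra-index-url https://pypi.internal.example.com/simple/ SomePackage

Note that, unlike apt, pip has no preferences/pinning layer between indexes: every configured index is consulted as an equal, which is exactly the gap discussed below.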
From donald at stufft.io Mon Aug 31 13:21:36 2015
From: donald at stufft.io (Donald Stufft)
Date: Mon, 31 Aug 2015 07:21:36 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On August 31, 2015 at 6:36:42 AM, Paul Moore (p.f.moore at gmail.com) wrote:
> If there's any additional functionality that Linux systems provide,
> could someone summarise it from an end user POV for me? (And maybe
> also point out why I'd never noticed it as a naive Linux user!) To me,
> that's a key to whether this PEP is missing something important
> relative to those systems.

Wichert provided a much more in-depth summary, but I'd just say that the primary differences from an end user POV are that the defaults aren't baked into the tool itself, but rather laid down in a config file by the system installer, and that they have more features for controlling how multiple repositories are combined into a list of packages that the installer can choose from (e.g. only get package X from Y repository or such), but that their default behavior is roughly equivalent to what pip and setuptools do.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From wes.turner at gmail.com Mon Aug 31 14:52:38 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 31 Aug 2015 07:52:38 -0500
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

Thank you for this description of APT!

* debtorrent adds a bittorrent APT transport (to download packages from *more than one index*)

I, myself, have always wished that APT had a configuration file schema (JSON, YAML) and was written in Python. Dnf 1.1.0 just fixed package download caching... More packaging notes: https://westurner.org/tools/#packages

I see that you would remove the rel= links, because of external traversal and allow_hosts. -- A somewhat relevant suggestion would be to add schema.org RDFa to the pages (so search engines can offload the search part) (and include the install_requires edges in the ./JSON-LD, as well).

On Aug 31, 2015 6:07 AM, "Wichert Akkerman" wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ncoghlan at gmail.com Mon Aug 31 15:32:50 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 31 Aug 2015 23:32:50 +1000
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On 31 August 2015 at 20:59, Wichert Akkerman wrote:
> Sure. My knowledge of rpm is 20 years out of date, so I am going to focus on
> the deb/dpkg/apt world only.

For the purposes of this discussion, we can consider the two ecosystems essentially equivalent. There are differences around what's builtin and what's a plugin, but the general principles are the same:

* default repos are defined by each distro, not by the tool developers
* source repos and binary repos are separate from each other
* users can add additional repos of both kinds via config files
* config files can also set relative priorities between repos

That last one is the main defence against malicious repos - if you set your distro repos to a high priority, then third party repos can't override distro packages, but if you set a particular third party repo to a high priority then it can override *anything*, not just the packages you're expecting it to replace.

The key *differences* that are relevant to the current discussion are:

* PyPA are the maintainers of both the default repo *and* the installation tools
* we don't currently have any kind of repo priority mechanism

This is why I think it's important to be clear that we *want* to improve the off-PyPI hosting experience, as that's something we consider reasonable for people to want to do, but it's going to take time and development effort. The question is thus whether it makes sense to delay significant improvements for the common case (i.e. hosting on PyPI), while we work on handling the minority case, and I don't believe it does.

It *may* be worth hacking in special case handling for the packages already hosted externally, but we can do that as exactly that (i.e. a special case providing a legacy bridging mechanism until a better solution is available), rather than as an indefinitely supported feature.

Regards, Nick.

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
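To make the priority point concrete, here is a small illustrative Python sketch -- not pip or apt internals, and the repo names, priorities, and versions are made up (it also simplifies apt's actual pin-priority threshold rules):

    from pkg_resources import parse_version

    # (repo, priority, version) -- hypothetical candidates for one package
    candidates = [
        ("distro-repo",     900, "0.17.5"),
        ("third-party-ppa", 500, "2015.5.3"),
    ]

    def pick_by_version(cands):
        # No priority mechanism (pip today): the newest version wins,
        # no matter which repo offers it.
        return max(cands, key=lambda c: parse_version(c[2]))

    def pick_with_priorities(cands):
        # apt-preferences style: priority first, version as tie-breaker.
        return max(cands, key=lambda c: (c[1], parse_version(c[2])))

    print(pick_by_version(candidates))       # ('third-party-ppa', 500, '2015.5.3')
    print(pick_with_priorities(candidates))  # ('distro-repo', 900, '0.17.5')

The second rule is what lets a high-priority repo shield its packages from being shadowed by a newer upload elsewhere; pip currently only has the first rule.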
From p.f.moore at gmail.com Mon Aug 31 15:33:34 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 31 Aug 2015 14:33:34 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On 31 August 2015 at 11:59, Wichert Akkerman wrote:
> Sure. My knowledge of rpm is 20 years out of date, so I am going to focus on
> the deb/dpkg/apt world only. The whole packaging system is built around
> archives. The packaging tools themselves do not have anything hardcoded
> there; they pick up the archives from a configuration file
> (/etc/apt/sources.list). That file lists all archives.

[... Useful explanation omitted...]

Thanks, that was very helpful. From that I understand that the key differences are:

1. deb doesn't have a hard-coded "official" repository, everything is in the config files.
2. There are tools to manage the config, rather than editing the config files by hand.
3. There is pluggable support for archive and URL types.

In the context of the PEP I don't think these are significant differences, so it seems to me that the PyPI solution proposed in the PEP matches pretty closely to the deb approach. Which is what I thought, but it's nice to have it confirmed.

Thanks, Paul

From p.f.moore at gmail.com Mon Aug 31 16:17:33 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 31 Aug 2015 15:17:33 +0100
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On 31 August 2015 at 14:32, Nick Coghlan wrote:
> This is why I think it's important to be clear that we *want* to
> improve the off-PyPI hosting experience, as that's something we
> consider reasonable for people to want to do, but it's going to take
> time and development effort.

+1 It's important also that we get input from people who need off-PyPI hosting, as they are the people with the knowledge of what's required.

> The question is thus whether it makes
> sense to delay significant improvements for the common case (i.e.
> hosting on PyPI), while we work on handling the minority case, and I
> don't believe it does.

Agreed again. IMO, our initial responsibility is to get a solid, stable baseline functionality. The principle behind the PEP is that getting packages from anywhere other than PyPI must be an opt-in process by the user. That's a basic consequence of the idea that users should know who provides the services they use, combined with the fact that PyPI is the official Python repository. We've provided an option based on those principles for people who host off-PyPI, and there's no reason we couldn't improve that solution, but the principle should remain - users choose which package providers to use and trust.

> It *may* be worth hacking in special case handling for the packages
> already hosted externally, but we can do that as exactly that (i.e. a
> special case providing a legacy bridging mechanism until a better
> solution is available), rather than as an indefinitely supported
> feature.

Hmm, I'm not aware of any concrete suggestions along those lines. According to the PEP, 3 months after it is implemented all projects on PyPI will be in pypi-only mode, and all of the other legacy modes will be unavailable, and unused. No external links will be visible, no link-scraping will occur, and the relevant code could be removed from installer tools such as pip.

Are you suggesting that this shouldn't occur?

Paul

From wes.turner at gmail.com Mon Aug 31 16:20:43 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 31 Aug 2015 09:20:43 -0500
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

devpi may be the packaging infrastructure component to integrate with / document? search("docker devpi") http://doc.devpi.net/latest/

> *index inheritance:* Each index can inherit packages from another index, including the pypi cache root/pypi. This allows to have development indexes that also contain all releases from a production index. All privately uploaded packages will by default inhibit lookups from pypi, allowing to stay safe from an attacker who could otherwise upload malicious release files to the public PyPI index.
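The devpi workflow being pointed at looks roughly like this. This is a sketch based on the devpi documentation of this era; the port and index names are devpi's defaults, but treat the exact commands as illustrative rather than authoritative:

    # run a local devpi server with a private index that inherits
    # from (and caches) the public PyPI
    pip install devpi-server devpi-client
    devpi-server --start                  # serves http://localhost:3141 by default
    devpi use http://localhost:3141
    devpi login root --password=''        # default root password is empty
    devpi index -c dev bases=root/pypi    # 'dev' inherits from the pypi-mirror index
    devpi use root/dev
    devpi upload                          # run from your package's source directory

    # point pip at the private index only:
    pip install --index-url http://localhost:3141/root/dev/+simple/ SomePackage

Because a privately uploaded name shadows inherited lookups for that name, a release of the same name on public PyPI cannot silently replace it, which is the protection mentioned in the quote above.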
On Aug 31, 2015 8:33 AM, "Nick Coghlan" wrote:
> On 31 August 2015 at 20:59, Wichert Akkerman wrote:
> > Sure. My knowledge of rpm is 20 years out of date, so I am going to focus on
> > the deb/dpkg/apt world only.

Status Quo: * Default: [pypi, ]
Proposed in the PEP (IIUC): * Default: []
Suggested here:

> For the purposes of this discussion, we can consider the two
> ecosystems essentially equivalent. There are differences around what's
> builtin and what's a plugin, but the general principles are the same:
>
> * default repos are defined by each distro, not by the tool developers
> * source repos and binary repos are separate from each other
> * users can add additional repos of both kinds via config files
> * config files can also set relative priorities between repos
>
> That last one is the main defence against malicious repos - if you set
> your distro repos to a high priority, then third party repos can't
> override distro packages, but if you set a particular third party repo
> to a high priority then it can override *anything*, not just the
> packages you're expecting it to replace.

How do I diff these (ordered) graph solutions (w/ versions, extras_requires, and potentially _ABI_feat_x_)? There are defined graph algorithms for JSON-LD (RDFa), which would make it much easier to correlate a version+[sdist,bdist,wheel-<...>] of a package with a URI, with a package catalog with a URI, served by a repo with a URI.

> The key *differences* that are relevant to the current discussion are:
>
> * PyPA are the maintainers of both the default repo *and* the installation
> tools
> * we don't currently have any kind of repo priority mechanism

Is it traversed in a list, or does the config parser keep an OrderedDict?

> This is why I think it's important to be clear that we *want* to
> improve the off-PyPI hosting experience, as that's something we
> consider reasonable for people to want to do, but it's going to take
> time and development effort. The question is thus whether it makes
> sense to delay significant improvements for the common case (i.e.
> hosting on PyPI), while we work on handling the minority case, and I
> don't believe it does.
>
> It *may* be worth hacking in special case handling for the packages
> already hosted externally, but we can do that as exactly that (i.e. a
> special case providing a legacy bridging mechanism until a better
> solution is available), rather than as an indefinitely supported
> feature.

Is there like a bigquery githubarchive of these, for large queries?

> Regards,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From p.f.moore at gmail.com Mon Aug 31 16:25:36 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 31 Aug 2015 15:25:36 +0100 Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI In-Reply-To: References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com> Message-ID: On 31 August 2015 at 15:21, Nick Coghlan wrote: >> Are you suggesting that this shouldn't occur? > > I'm saying we can look at the numbers at the end of the grace period > and decide what to do then :) Sounds reasonable to me :-) Paul From ncoghlan at gmail.com Mon Aug 31 16:21:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 Sep 2015 00:21:27 +1000 Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI In-Reply-To: References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com> Message-ID: On 1 September 2015 at 00:17, Paul Moore wrote: > On 31 August 2015 at 14:32, Nick Coghlan wrote: >> It *may* be worth hacking in special case handling for the packages >> already hosted externally, but we can do that as exactly that (i.e. a >> special case providing a legacy bridging mechanism until a better >> solution is available), rather than as an indefinitely supported >> feature. > > Hmm, I'm not aware of any concrete suggestions along those lines. > According to the PEP, 3 months after it is implemented all projects on > PyPI will be in pypi-only mode, and all of the other legacy modes will > be unavailable, and unused. No external links will be visible, no > link-scraping will occur, and the relevant code could be removed from > installer tools such as pip. > > Are you suggesting that this shouldn't occur? I'm saying we can look at the numbers at the end of the grace period and decide what to do then :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Mon Aug 31 16:28:02 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 31 Aug 2015 15:28:02 +0100 Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI In-Reply-To: References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com> Message-ID: On 31 August 2015 at 15:20, Wes Turner wrote: > Status Quo: > * Default: [pypi, ] > > Proposed in the PEP (IIUC): > * Default: [] No. The PEP doesn't propose any change here - pip looks only at PyPI by default at the moment (but users can set config or options to include other indexes) and that will remain the case. I didn't understand the rest of your email, I'm afraid, so I can't comment. Sorry. Paul From donald at stufft.io Mon Aug 31 16:31:35 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 31 Aug 2015 10:31:35 -0400 Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI In-Reply-To: References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com> Message-ID: On August 31, 2015 at 10:26:04 AM, Paul Moore (p.f.moore at gmail.com) wrote: > On 31 August 2015 at 15:21, Nick Coghlan wrote: > >> Are you suggesting that this shouldn't occur? 
> > > I'm saying we can look at the numbers at the end of the grace period
> > > and decide what to do then :)
>
> Sounds reasonable to me :-)

FWIW, 10 months ago PIL was an order of magnitude the biggest user of this feature (based on hits to /simple/<project>/), both by sheer number of requests AND by unique IP addresses. I removed these numbers from the updated PEP because I felt the PEP was getting heavy on the "weasel" factor by trying to be "well if you look at it X way, the impact is A, if you look at it Y way the impact is B". I've included the numbers from 10 months ago below, but if we think these numbers are useful I can redo them now again.

Top Externally Hosted Projects by Requests
------------------------------------------

This is determined by looking at the number of requests the ``/simple/<project>/`` page had gotten in a single day. The total number of requests during that day was 10,623,831.

============================== ========
Project                        Requests
============================== ========
PIL                            63869
Pygame                         2681
mysql-connector-python         1562
pyodbc                         724
elementtree                    635
salesforce-python-toolkit      316
wxPython                       295
PyXML                          251
RBTools                        235
python-graph-core              123
cElementTree                   121
============================== ========

Top Externally Hosted Projects by Unique IPs
--------------------------------------------

This is determined by looking at the IP addresses of requests the ``/simple/<project>/`` page had gotten in a single day. The total number of unique IP addresses during that day was 124,604.

============================== ==========
Project                        Unique IPs
============================== ==========
PIL                            4553
mysql-connector-python         462
Pygame                         202
pyodbc                         181
elementtree                    166
wxPython                       126
RBTools                        114
PyXML                          87
salesforce-python-toolkit      76
pyDes                          76
============================== ==========

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Mon Aug 31 16:34:15 2015
From: donald at stufft.io (Donald Stufft)
Date: Mon, 31 Aug 2015 10:34:15 -0400
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On August 31, 2015 at 10:31:35 AM, Donald Stufft (donald at stufft.io) wrote:
> FWIW, 10 months ago PIL was an order of magnitude the biggest user of this feature
> (based on hits to /simple/<project>/), both by sheer number of requests AND by unique IP addresses.

Oh, and also you can see that, at least 10 months ago, the actual use of this feature drops sharply the further away from PIL you get.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
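For anyone who wants to reproduce this kind of measurement, a rough sketch of the log analysis in Python (the log file name and format are assumptions -- the real PyPI infrastructure may well differ):

    import re
    from collections import Counter, defaultdict

    SIMPLE_RE = re.compile(r'"GET /simple/([^/ ]+)/? HTTP')
    STATUS_RE = re.compile(r'" (?:200|304) ')

    requests = Counter()
    unique_ips = defaultdict(set)

    with open("access.log") as log:              # hypothetical log file
        for line in log:
            m = SIMPLE_RE.search(line)
            if m and STATUS_RE.search(line):     # count only 200/304 responses
                ip = line.split()[0]             # assumes the IP is the first field
                requests[m.group(1)] += 1
                unique_ips[m.group(1)].add(ip)

    for project, count in requests.most_common(10):
        print(project, count, len(unique_ips[project]))

Cross-checking against the set of externally hosted projects (which needs PyPI metadata) is left out; this only produces the raw per-project request and unique-IP counts.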
From wichert at wiggy.net Mon Aug 31 16:37:25 2015
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 31 Aug 2015 16:37:25 +0200
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID: <80E81566-4FDC-42C0-9FF5-D019BC17FAFF@wiggy.net>

On 31 Aug 2015, at 16:31, Donald Stufft wrote:
> Top Externally Hosted Projects by Requests
> [...]

Looking very briefly at that list, all of those are obsolete/replaced and have not seen a release in years, or they are now hosted on PyPI.

Wichert.

From wes.turner at gmail.com Mon Aug 31 16:37:58 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Mon, 31 Aug 2015 09:37:58 -0500
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On Mon, Aug 31, 2015 at 9:31 AM, Donald Stufft wrote:
> [...]
> FWIW, 10 months ago PIL was an order of magnitude the biggest user of this
> feature (based on hits to /simple/<project>/), both by sheer number of requests
> AND by unique IP addresses. [...] I've included the numbers from 10 months ago
> below, but if we think these numbers are useful I can redo them now again.
> > ============================== ======== > Project Requests > ============================== ======== > PIL 63869 > Pygame 2681 > mysql-connector-python 1562 > pyodbc 724 > elementtree 635 > salesforce-python-toolkit 316 > wxPython 295 > PyXML 251 > RBTools 235 > python-graph-core 123 > cElementTree 121 > ============================== ======== > > > Top Externally Hosted Projects by Unique IPs > -------------------------------------------- > > This is determined by looking at the IP addresses of requests the > ``/simple//`` page had gotten in a single day. The total number of > unique IP addresses during that day was 124,604. > > ============================== ========== > Project Unique IPs > ============================== ========== > PIL 4553 > mysql-connector-python 462 > Pygame 202 > pyodbc 181 > elementtree 166 > wxPython 126 > RBTools 114 > PyXML 87 > salesforce-python-toolkit 76 > pyDes 76 > ============================== ========== > > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Aug 31 19:13:17 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 31 Aug 2015 19:13:17 +0200 Subject: [Distutils] Bogus search result Message-ID: <20150831191317.71581836@fsol> Hello, I don't if that's just me, but I go to https://pypi.python.org/pypi and type "llvmlite" into the search box, clicking the "search" button leads me to the page for the Numba package. It should either lead me to the page for the *llvmlite* package or show me a list of search results. Regards Antoine. From donald at stufft.io Mon Aug 31 19:18:02 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 31 Aug 2015 13:18:02 -0400 Subject: [Distutils] Bogus search result In-Reply-To: <20150831191317.71581836@fsol> References: <20150831191317.71581836@fsol> Message-ID: On August 31, 2015 at 1:13:42 PM, Antoine Pitrou (solipsis at pitrou.net) wrote: > > Hello, > > I don't if that's just me, but I go to https://pypi.python.org/pypi and > type "llvmlite" into the search box, clicking the "search" button leads > me to the page for the Numba package. It should either lead me to the > page for the *llvmlite* package or show me a list of search results. > > No, there?s something going on with the search results. I just haven?t gotten around to looking into that yet. From donald at stufft.io Mon Aug 31 21:53:00 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 31 Aug 2015 15:53:00 -0400 Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI In-Reply-To: References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com> Message-ID: On August 31, 2015 at 10:31:35 AM, Donald Stufft (donald at stufft.io) wrote: > > I can redo them now again. So, I went ahead and ran all of the numbers using the data from 2015-08-18 (chosen because when I looked at the last couple weeks of log files, it was the? largest log file). I think the difference in just ~10 months supports the idea that use of this feature is declining and the model in the PEP will be a cleaner and easier to understand model. 
On this day, there were 20,398,771 total requests to /simple/<project>/ which resulted in either a 200 or a 304 response code. Out of these ~20 million requests, 80,622 went to projects which do not have any files hosted on PyPI but do have files hosted off of PyPI. This represents ~0.4% of the total traffic for that particular day.

The top packages look a bit different than they did 10 months ago: surprisingly to me, PIL has severely dropped off, from ~63k requests 10 months ago to 5.5k now, while pygame has risen from 2.6k 10 months ago to ~32k. The total number of requests has doubled between now and 10 months ago and it appears that the numbers of the top packages have more or less done the same, with the exception of the very top package, which has been cut in half. Similar to 10 months ago, we see the numbers rapidly drop by orders of magnitude. Overall, the top 10 in this list together represented 70,691 requests 10 months ago, and now they represent 41,703. That's roughly 60% of what they were 10 months ago while the total number of requests increased by 100%, so it's really more like 30% of what they were previously when adjusted for the traffic increase.

============================== ========
Project                        Requests
============================== ========
Pygame                         32238
PIL                            5548
mysql-connector-python         5152
RBTools                        3723
python-apt                     3028
meliae                         1679
elementtree                    1576
which                          457
salesforce-python-toolkit      454
pywbem                         400
wxPython                       359
pyDes                          301
PyXML                          300
robotframework-seleniumlibrary 282
basemap                        255
============================== ========

Is any of this information useful for the PEP? I removed it because I thought it was too much, but I'm happy to add it back in if it'd be useful.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From njs at vorpus.org Mon Aug 31 18:30:09 2015
From: njs at vorpus.org (Nathaniel Smith)
Date: Mon, 31 Aug 2015 09:30:09 -0700
Subject: [Distutils] Reviving PEP 470 - Removing External Hosting support on PyPI
In-Reply-To:
References: <55DEED02.5080109@egenix.com> <55DF7A5D.8090804@egenix.com> <55E413F0.6010002@egenix.com> <36A1712F-417D-42FA-AADD-0C52EE103DCB@wiggy.net> <55E41FFF.3010407@egenix.com>
Message-ID:

On Aug 31, 2015 7:31 AM, "Donald Stufft" wrote:
> [...]
> FWIW, 10 months ago PIL was an order of magnitude the biggest user of this feature (based on hits to /simple/<project>/), both by sheer number of requests AND by unique IP addresses. [...]
> Top Externally Hosted Projects by Requests
> ------------------------------------------
>
> ============================== ========
> Project                        Requests
> ============================== ========
> PIL                            63869

Maybe what we need is a special case rule that's similar to the ones discussed already, except that if people try to install PIL, then instead of printing an external repository URL, it prints the URL of an explanation of why they want Pillow instead...

> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:
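A footnote on the PIL case for readers of the archive: Pillow is the maintained fork, it is hosted on PyPI, and it deliberately keeps the old import namespace, so the migration is roughly:

    pip install Pillow
    python -c "from PIL import Image; print('ok')"

Existing "from PIL import ..." code keeps working; only the name you install changes.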