From r1chardj0n3s at gmail.com Thu Jul 3 03:07:33 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 3 Jul 2014 11:07:33 +1000 Subject: [Distutils] Let's summit at EuroPython Message-ID: Hi folks, I'd like to get interested folks together at EuroPython to get up to date, talk through current issues and generally catch up on all things Python Packaging. Specific things we'll be talking about: - current tools: where they're at and what their plans are - the state of PEPs 426, 440, 459, 470 - related to those: how linux distros are doing and feel about PEPs 426 and 459 - the state of the docs and user experience If you're interested in coming along (and I've not already contacted you) then please let me know and once we've sorted out the details of when and where I'll get back in touch with you. Please let me know what you'd like to have us discuss! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 3 03:25:57 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 2 Jul 2014 21:25:57 -0400 Subject: [Distutils] Let's summit at EuroPython In-Reply-To: References: Message-ID: <8B2E9043-8080-4205-A63C-84A1C08F8214@stufft.io> On Jul 2, 2014, at 9:07 PM, Richard Jones wrote: > Hi folks, > > I'd like to get interested folks together at EuroPython to get up to date, talk through current issues and generally catch up on all things Python Packaging. > > Specific things we'll be talking about: > > - current tools: where they're at and what their plans are > - the state of PEPs 426, 440, 459, 470 > - related to those: how linux distros are doing and feel about PEPs 426 and 459 > - the state of the docs and user experience > > If you're interested in coming along (and I've not already contacted you) then please let me know and once we've sorted out the details of when and where I'll get back in touch with you. Please let me know what you'd like to have us discuss! 
> > > Richard > I totally wish I could be there :( ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Thu Jul 3 09:03:44 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 3 Jul 2014 07:03:44 +0000 Subject: [Distutils] Let's summit at EuroPython In-Reply-To: <8B2E9043-8080-4205-A63C-84A1C08F8214@stufft.io> References: <8B2E9043-8080-4205-A63C-84A1C08F8214@stufft.io> Message-ID: <20140703070344.GL7481@merlinux.eu> Hi Richard, On Wed, Jul 02, 2014 at 21:25 -0400, Donald Stufft wrote: > On Jul 2, 2014, at 9:07 PM, Richard Jones wrote: > > > Hi folks, > > > > I'd like to get interested folks together at EuroPython to get up to date, talk through current issues and generally catch up on all things Python Packaging. > > > > Specific things we'll be talking about: > > > > - current tools: where they're at and what their plans are > > - the state of PEPs 426, 440, 459, 470 > > - related to those: how linux distros are doing and feel about PEPs 426 and 459 > > - the state of the docs and user experience Planning a whole day of workshops? :) Let's see what the interests of people are at the venue to arrange the final topic menu :) > > If you're interested in coming along (and I've not already contacted you) then please let me know and once we've sorted out the details of when and where I'll get back in touch with you. Please let me know what you'd like to have us discuss! I suggest to arrange a time in an afternoon so that we can evolve it into a dinner. 
best, holger > > > > > Richard > > > > I totally wish I could be there :( > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From r1chardj0n3s at gmail.com Thu Jul 3 09:11:44 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 3 Jul 2014 17:11:44 +1000 Subject: [Distutils] Let's summit at EuroPython In-Reply-To: <20140703070344.GL7481@merlinux.eu> References: <8B2E9043-8080-4205-A63C-84A1C08F8214@stufft.io> <20140703070344.GL7481@merlinux.eu> Message-ID: On 3 July 2014 17:03, holger krekel wrote: > Hi Richard, > > On Wed, Jul 02, 2014 at 21:25 -0400, Donald Stufft wrote: > > On Jul 2, 2014, at 9:07 PM, Richard Jones > wrote: > > > > > Hi folks, > > > > > > I'd like to get interested folks together at EuroPython to get up to > date, talk through current issues and generally catch up on all things > Python Packaging. > > > > > > Specific things we'll be talking about: > > > > > > - current tools: where they're at and what their plans are > > > - the state of PEPs 426, 440, 459, 470 > > > - related to those: how linux distros are doing and feel about PEPs > 426 and 459 > > > - the state of the docs and user experience > > Planning a whole day of workshops? :) > Let's see what the interests of people are at the venue to arrange the > final > topic menu :) > Yes, absolutely! I should have phrased it "specific things we could be talking about". > > > If you're interested in coming along (and I've not already contacted > you) then please let me know and once we've sorted out the details of when > and where I'll get back in touch with you. Please let me know what you'd > like to have us discuss! > > I suggest to arrange a time in an afternoon so that we can > evolve it into a dinner. 
I had an extraordinarily similar thought - though I had skipped the afternoon part and dinner was just "the pub". I'm flexible though :) Richard From reinout at vanrees.org Thu Jul 3 12:24:32 2014 From: reinout at vanrees.org (Reinout van Rees) Date: Thu, 03 Jul 2014 12:24:32 +0200 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: On 30-06-14 17:56, Nick Coghlan wrote: > Yeah, it's the "you still need a way to define what goes into the image" > part that intrigues me with respect to combining tools like zc.buildout > with Docker. Buildout, to me, solves all there is to solve regarding python packages and a bit of configuration. Including calling bower to go grab the necessary css/js :-) That css/js is quite an important part of "what goes into the image". Bower with its dependency mechanism solves that (and it can be called from buildout). A third important one is system packages: "what do I apt-get install". **Question/idea**: what about some mechanism to get this apt-get information out of a python package? If a site or package absolutely requires gdal or redis or memcache, it feels natural to me to have that knowledge somewhere in the python package. Does anyone do something like this? I was thinking along the lines of a simple 'debian_dependencies.txt' that I could use as input for ansible/fabric/fpm/whatever. Looking for ideas :-) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From ganong at gmail.com Wed Jul 2 15:04:39 2014 From: ganong at gmail.com (Peter Ganong) Date: Wed, 2 Jul 2014 09:04:39 -0400 Subject: [Distutils] Installing setuptools on an un-networked computer Message-ID: Hi, I am doing research with confidential data on a Windows 7 computer not connected to the Internet. 
I can bring files in on a thumb drive. How can I install setuptools? To be clear, the issue is not the one answered in the help file here, but is how to install setuptools itself on an unnetworked drive. Thanks, Peter -- Peter Ganong PhD Candidate in Economics at Harvard scholar.harvard.edu/ganong/ From fungi at yuggoth.org Thu Jul 3 14:14:01 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 3 Jul 2014 12:14:01 +0000 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: <20140703121401.GU1251@yuggoth.org> On 2014-07-03 12:24:32 +0200 (+0200), Reinout van Rees wrote: [...] > **Question/idea**: what about some mechanism to get this apt-get > information out of a python package? [...] Does anyone do > something like this? [...] Robert Collins wrote something called "bindep" as a proposal for solving that problem: http://git.openstack.org/cgit/stackforge/bindep/tree/README.rst -- Jeremy Stanley From p.f.moore at gmail.com Thu Jul 3 14:59:58 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 3 Jul 2014 13:59:58 +0100 Subject: [Distutils] Installing setuptools on an un-networked computer In-Reply-To: References: Message-ID: On 2 July 2014 14:04, Peter Ganong wrote: > Hi, > > I am doing research with confidential data on a Windows 7 computer not > connected to the Internet. I can bring files in on a thumb drive. How can I > install setuptools? To be clear, the issue is not the one answered in the > help file here, but is how to install setuptools itself on an unnetworked > drive. You can probably just unpack the sdist and run python setup.py install. 
Paul From ncoghlan at gmail.com Thu Jul 3 16:28:51 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 Jul 2014 07:28:51 -0700 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: On 3 July 2014 03:24, Reinout van Rees wrote: > On 30-06-14 17:56, Nick Coghlan wrote: >> >> Yeah, it's the "you still need a way to define what goes into the image" >> part that intrigues me with respect to combining tools like zc.buildout >> with Docker. > > > Buildout, to me, solves all there is to solve regarding python packages and > a bit of configuration. Including calling bower to go grab the necessary > css/js :-) > > That css/js is quite an important part of "what goes into the image". Bower > with it's dependency mechanism solves that (and it can be called from > buildout). > > A third important one is system packages: "what do I apt-get install". > > **Question/idea**: what about some mechanism to get this apt-get information > out of a python package? If a site or package absolutely requires gdal or > redis or memcache, it feels natural to me to have that knowledge somewhere > in the python package. > > Does anyone do something like this? I was thinking along the lines of a > simple 'debian_dependencies.txt' that I could use as input for > ansible/fabric/fpm/whatever. > > Looking for ideas :-) Allowing external dependency info to be captured in the upstream Python packages is one of the goals behind the metadata extension system for metadata 2.0. The current draft of the extension system is at http://www.python.org/dev/peps/pep-0426/#metadata-extensions A preliminary set of "standard extensions" is at http://www.python.org/dev/peps/pep-0459/ The idea is that the "core metadata" focuses on what is needed to support the essential dependency resolution process, while extensions represent optional extras. 
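To make the shape concrete: under the PEP 426 draft, distro dependency info like Reinout's could ride along as a metadata extension next to the core fields. A minimal sketch, assuming a hypothetical extension name and keys ("exampleproject.system_requires" and its contents are invented for illustration, not one of PEP 459's standard extensions):

```python
import json

# Core metadata plus a hypothetical extension carrying distro package
# names; the extension name and its keys are invented for illustration,
# not defined by PEP 426/459.
metadata = {
    "metadata_version": "2.0",
    "name": "example-site",
    "version": "1.0",
    "extensions": {
        "exampleproject.system_requires": {
            "installer_must_handle": False,
            "debian": ["gdal-bin", "redis-server", "memcached"],
        },
    },
}

# A tool that understands the extension reads it; other tools may
# safely ignore it because installer_must_handle is false.
ext = metadata["extensions"]["exampleproject.system_requires"]
print(json.dumps(ext["debian"]))
```

Tools like ansible/fabric/fpm could then consume the "debian" list instead of a separate debian_dependencies.txt file.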
The "installer_must_handle" field is an escape clause allowing a distribution to say "if you don't handle this extension, you can't install this package properly, so fail rather than silently doing the wrong thing". That approach will allow us to add post-install hooks later as an extension, and have earlier versions of tools fall back to installing from source rather than silently failing to run the post-install hooks. Cheers, Nick. From michael at merickel.org Thu Jul 3 17:36:54 2014 From: michael at merickel.org (Michael Merickel) Date: Thu, 3 Jul 2014 10:36:54 -0500 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: I don't normally like shameless plugs but I haven't seen many people use docker in the following way yet and I happen to be using it with zc.buildout. I have built a small tool called marina[1] that I'm using to build binary slugs that can be installed in production. It uses throwaway (clean-room) containers to compile the application (using buildout or whatever else), and then dumps out an archive (or tags a new docker image) of the built assets which can be deployed. Marina is not using Dockerfiles explicitly; rather, you make a parent image (with buildpacks or whatever else) however you like. The image is then executed with your scripts (such as buildout) and the assets are archived. The approach keeps ssh keys and credentials out of the images. A separate data-only container is also mounted to allow caching of assets (eggs folder?) between runs which speeds up the build process dramatically. I'm using it on OS X to build binary tarballs for Ubuntu. The tarballs themselves can then be deployed as a docker image or just directly on a VM. It's all very alpha but I'm using it successfully with ansible to distribute tarballs to VMs in production. 
[1] https://github.com/mmerickel/marina Michael On Thu, Jul 3, 2014 at 9:28 AM, Nick Coghlan wrote: > On 3 July 2014 03:24, Reinout van Rees wrote: > > On 30-06-14 17:56, Nick Coghlan wrote: > >> > >> Yeah, it's the "you still need a way to define what goes into the image" > >> part that intrigues me with respect to combining tools like zc.buildout > >> with Docker. > > > > > > Buildout, to me, solves all there is to solve regarding python packages > and > > a bit of configuration. Including calling bower to go grab the necessary > > css/js :-) > > > > That css/js is quite an important part of "what goes into the image". > Bower > > with it's dependency mechanism solves that (and it can be called from > > buildout). > > > > A third important one is system packages: "what do I apt-get install". > > > > **Question/idea**: what about some mechanism to get this apt-get > information > > out of a python package? If a site or package absolutely requires gdal or > > redis or memcache, it feels natural to me to have that knowledge > somewhere > > in the python package. > > > > Does anyone do something like this? I was thinking along the lines of a > > simple 'debian_dependencies.txt' that I could use as input for > > ansible/fabric/fpm/whatever. > > > > Looking for ideas :-) > > Allowing external dependency info to be captured in the upstream > Python packages is one of the goals behind the metadata extension > system for metadata 2.0. > > The current draft of the extension system is at > http://www.python.org/dev/peps/pep-0426/#metadata-extensions > > A preliminary set of "standard extensions" is at > http://www.python.org/dev/peps/pep-0459/ > > The idea is that the "core metadata" focuses on what is needed to > support the essential dependency resolution process, while extensions > represent optional extras. 
The "installer_must_handle" field is an > escape clause allowing a distribution to say "if you don't handle this > extension, you can't install this package properly, so fail rather > than silently doing the wrong thing". That approach will allow us to > add post-install hooks later as an extension, and have earlier > versions of tools fall back to installing from source rather than > silently failing to run the post-install hooks. > > Cheers, > Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at zope.com Thu Jul 3 19:01:33 2014 From: jim at zope.com (Jim Fulton) Date: Thu, 3 Jul 2014 13:01:33 -0400 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: On Thu, Jul 3, 2014 at 6:24 AM, Reinout van Rees wrote: > On 30-06-14 17:56, Nick Coghlan wrote: >> >> Yeah, it's the "you still need a way to define what goes into the image" >> part that intrigues me with respect to combining tools like zc.buildout >> with Docker. > > > Buildout, to me, solves all there is to solve regarding python packages and > a bit of configuration. Including calling bower to go grab the necessary > css/js :-) > > That css/js is quite an important part of "what goes into the image". Bower > with it's dependency mechanism solves that (and it can be called from > buildout). > > A third important one is system packages: "what do I apt-get install". If you're building system packages, then you can use their dependency system. If you're using docker images, you can put this in a base layer. > **Question/idea**: what about some mechanism to get this apt-get information > out of a python package? If a site or package absolutely requires gdal or > redis or memcache, it feels natural to me to have that knowledge somewhere > in the python package. 
> > Does anyone do something like this? I was thinking along the lines of a > simple 'debian_dependencies.txt' that I could use as input for > ansible/fabric/fpm/whatever. Depending exactly on what "this" is, Fred Drake's package grinder, for building RPMs, might come close: https://bitbucket.org/zc/packagegrinder I'm really hoping that Docker will eventually allow us to stop building system packages though. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From jim at zope.com Thu Jul 3 19:05:04 2014 From: jim at zope.com (Jim Fulton) Date: Thu, 3 Jul 2014 13:05:04 -0400 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: On Thu, Jul 3, 2014 at 11:36 AM, Michael Merickel wrote: > I don't normally like shameless plugs but I haven't seen many people use > docker in the following way yet and I happen to be using it with > zc.buildout. > > I have built a small tool called marina[1] that I'm using to build binary > slugs that can be installed in production. It uses throwaway (clean-room) > containers to compile the application (using buildout or whatever else), and > then dumps out an archive (or tags a new docker image) of the built assets > which can be deployed. Marina is not using dockerfile's explicitly but > rather you make a parent image (with buildpacks or whatever else) however > you like. The image is then executed with your scripts (such as buildout) > and the assets are archived. The approach keeps ssh keys and credentials out > of the images. A separate data-only container is also mounted to allow > caching of assets (eggs folder?) between runs which speeds up the build > process dramatically. I'm using it on OS X to build binary tarballs for > Ubuntu. The tarballs themselves can then be deployed as a docker image or > just directly on a VM. It's all very alpha but I'm using it successfully > with ansible to distribute tarballs to VMs in production. 
> > [1] https://github.com/mmerickel/marina Not sure I follow all the above, but it sounds interesting. I look forward to studying it. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From reinout at vanrees.org Fri Jul 4 10:51:56 2014 From: reinout at vanrees.org (Reinout van Rees) Date: Fri, 04 Jul 2014 10:51:56 +0200 Subject: [Distutils] zc.buildout & Docker container images In-Reply-To: References: Message-ID: On 03-07-14 19:01, Jim Fulton wrote: > > I'm really hoping that Docker will eventually allow us to stop > building system packages though. Yes. Sounds like you can use whatever method you want to build up a docker image and be done with it. Hopefully you'll do it in a nicely ordered repeatable way with some combination of fabric/ansible/buildout/puppet/whatever. Things like adding nginx to the docker and symlinking a buildout-generated nginx config file into the /etc/nginx directory suddenly don't run afoul of I-need-to-be-root anymore. I wonder what this means for buildout :-) Like you said, a docker instance means there's less need for the isolation of buildout (and virtualenv). On the other hand, buildout could install/adjust things globally in /etc directly, for instance! The "compose a complete application" promise of buildout gets an extra boost this way. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From bcannon at gmail.com Sat Jul 5 00:51:44 2014 From: bcannon at gmail.com (Brett Cannon) Date: Fri, 04 Jul 2014 22:51:44 +0000 Subject: [Distutils] Specifying Python version as part of requirements Message-ID: I just checked PEP 440 and pip doesn't seem to have anything specific, so I thought I would ask if there is any way now or in the future to specify that a dependency is only needed for certain versions of Python? 
My current use case is I want to use unittest2, but pip errors out during installation under Python 3 of it since unittest2 imports itself and triggers a syntax error. Instead of having to hack around this by making it a conditional include in my setup.py I would like to declare that I only need it in Python 2.x in a requirements.txt file or something. From donald at stufft.io Sat Jul 5 01:48:09 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 4 Jul 2014 19:48:09 -0400 Subject: [Distutils] Specifying Python version as part of requirements In-Reply-To: References: Message-ID: So this is in PEP uh, 426 I think. It's not final yet but it kinda works right now for Wheels. Basically you do a conditional include to install_requires in your setup.py, and then if you're creating wheels you do something like -> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 That will overwrite install_requires, but only for Wheels. On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: > I just checked PEP 440 and pip doesn't seem to have anything specific, so I thought I would ask if there is any way now or in the future to specify that a dependency is only needed for certain versions of Python? > > My current use case is I want to use unittest2, but pip errors out during installation under Python 3 of it since unittest2 imports itself and triggers a syntax error. Instead of having to hack around this by making it a conditional include in my setup.py I would like to declare that I only need it in Python 2.x in a requirements.txt file or something. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Sat Jul 5 04:38:58 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 4 Jul 2014 22:38:58 -0400 Subject: [Distutils] Specifying Python version as part of requirements In-Reply-To: References: Message-ID: The next wheel release and the wheel code in revision control will support the setuptools conditional requirements syntax. See wheel's own setup.py in mercurial tip. On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: > So this is in PEP uh, 426 I think. It's not final yet but it kinda works > right now for Wheels. > > Basically you do a conditional include to install_requires in your > setup.py, and then if > you're creating wheels you do something like -> > https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 > > That will overwrite install_requires, but only for Wheels. > > On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: > > > I just checked PEP 440 and pip doesn't seem to have anything specific, > so I thought I would ask if there is any way now or in the future to > specify that a dependency is only needed for certain versions of Python? > > > > My current use case is I want to use unittest2, but pip errors out > during installation under Python 3 of it since unittest2 imports itself and > triggers a syntax error. Instead of having to hack around this by making it > a conditional include in my setup.py I would like to declare that I only > need it in Python 2.x in a requirements.txt file or something. 
> > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Sat Jul 5 15:25:35 2014 From: bcannon at gmail.com (Brett Cannon) Date: Sat, 05 Jul 2014 13:25:35 +0000 Subject: [Distutils] Specifying Python version as part of requirements References: Message-ID: On Fri Jul 04 2014 at 10:38:58 PM, Daniel Holth wrote: > The next wheel release and the wheel code in revision control will support > the setuptools conditional requirements syntax. See wheel's own setup.py in > mercurial tip. ? > Glad this will work eventually, but unfortunately not all of my dependencies have wheels yet, so I will have to stick with my setup.py 'if' statements. -Brett > On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: > >> So this is in PEP uh, 426 I think. It?s not final yet but it kinda works >> right now for Wheels. >> >> Basically you do a conditional include to install_requires in your >> setup.py, and then if >> you?re creating wheels you do something like -> >> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 >> >> That will overwrite install_requires, but only for Wheels. >> >> On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: >> >> > I just checked PEP 440 and pip doesn't seem to have anything specific, >> so I thought I would ask if there is any way now or in the future to >> specify that a dependency is only needed for certain versions of Python? 
>> > >> > My current use case is I want to use unittest2, but pip errors out >> during installation under Python 3 of it since unittest2 imports itself and >> triggers a syntax error. Instead of having to hack around this by making it >> a conditional include in my setup.py I would like to declare that I only >> need it in Python 2.x in a requirements.txt file or something. >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >> DCFA >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sat Jul 5 15:26:26 2014 From: dholth at gmail.com (Daniel Holth) Date: Sat, 5 Jul 2014 09:26:26 -0400 Subject: [Distutils] Specifying Python version as part of requirements In-Reply-To: References: Message-ID: Setuptools supports it even without wheel. On Jul 5, 2014 9:25 AM, "Brett Cannon" wrote: > > > On Fri Jul 04 2014 at 10:38:58 PM, Daniel Holth wrote: > >> The next wheel release and the wheel code in revision control will >> support the setuptools conditional requirements syntax. See wheel's own >> setup.py in mercurial tip. ? >> > Glad this will work eventually, but unfortunately not all of my > dependencies have wheels yet, so I will have to stick with my setup.py 'if' > statements. > > -Brett > > >> On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: >> >>> So this is in PEP uh, 426 I think. It?s not final yet but it kinda works >>> right now for Wheels. 
>>> >>> Basically you do a conditional include to install_requires in your >>> setup.py, and then if >>> you're creating wheels you do something like -> >>> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 >>> >>> That will overwrite install_requires, but only for Wheels. >>> >>> On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: >>> >>> > I just checked PEP 440 and pip doesn't seem to have anything specific, >>> so I thought I would ask if there is any way now or in the future to >>> specify that a dependency is only needed for certain versions of Python? >>> > >>> > My current use case is I want to use unittest2, but pip errors out >>> during installation under Python 3 of it since unittest2 imports itself and >>> triggers a syntax error. Instead of having to hack around this by making it >>> a conditional include in my setup.py I would like to declare that I only >>> need it in Python 2.x in a requirements.txt file or something. >>> > _______________________________________________ >>> > Distutils-SIG maillist - Distutils-SIG at python.org >>> > https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >>> ----------------- >>> Donald Stufft >>> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >>> DCFA >>> >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> From bcannon at gmail.com Sun Jul 6 18:18:31 2014 From: bcannon at gmail.com (Brett Cannon) Date: Sun, 06 Jul 2014 16:18:31 +0000 Subject: [Distutils] Specifying Python version as part of requirements References: Message-ID: On Sat Jul 05 2014 at 9:26:26 AM, Daniel Holth wrote: > Setuptools supports it even without wheel. > How do you specify that, since PEP 426 is oriented towards JSON metadata? Would it be: test_requires = ["unittest2;python_version < '3.0'"] ? 
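In PEP 426's JSON form, such a conditional dependency would be a requires entry whose "environment" field carries the marker. A sketch following the draft's general shape (the "test_requires" field name and structure are taken from the PEP 426 draft, which is not final):

```python
import json

# PEP 426 draft shape: conditional dependencies are expressed as a
# list of {"requires": [...], "environment": "..."} mappings, where
# the environment marker gates the requirement.
meta = {
    "metadata_version": "2.0",
    "name": "example-dist",
    "version": "1.0",
    "test_requires": [
        {
            "requires": ["unittest2"],
            "environment": "python_version < '3.0'",
        }
    ],
}

dep = meta["test_requires"][0]
print(json.dumps(dep, sort_keys=True))
```

An installer would evaluate the environment marker against the running interpreter and pull in unittest2 only when it is true.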
> On Jul 5, 2014 9:25 AM, "Brett Cannon" wrote: > >> >> >> On Fri Jul 04 2014 at 10:38:58 PM, Daniel Holth wrote: >> >>> The next wheel release and the wheel code in revision control will >>> support the setuptools conditional requirements syntax. See wheel's own >>> setup.py in mercurial tip. ? >>> >> Glad this will work eventually, but unfortunately not all of my >> dependencies have wheels yet, so I will have to stick with my setup.py 'if' >> statements. >> >> -Brett >> >> >>> On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: >>> >>>> So this is in PEP uh, 426 I think. It?s not final yet but it kinda >>>> works right now for Wheels. >>>> >>>> Basically you do a conditional include to install_requires in your >>>> setup.py, and then if >>>> you?re creating wheels you do something like -> >>>> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 >>>> >>>> That will overwrite install_requires, but only for Wheels. >>>> >>>> On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: >>>> >>>> > I just checked PEP 440 and pip doesn't seem to have anything >>>> specific, so I thought I would ask if there is any way now or in the future >>>> to specify that a dependency is only needed for certain versions of Python? >>>> > >>>> > My current use case is I want to use unittest2, but pip errors out >>>> during installation under Python 3 of it since unittest2 imports itself and >>>> triggers a syntax error. Instead of having to hack around this by making it >>>> a conditional include in my setup.py I would like to declare that I only >>>> need it in Python 2.x in a requirements.txt file or something. 
>>>> > _______________________________________________ >>>> > Distutils-SIG maillist - Distutils-SIG at python.org >>>> > https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>>> >>>> ----------------- >>>> Donald Stufft >>>> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >>>> DCFA >>>> >>>> >>>> _______________________________________________ >>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Jul 6 18:19:51 2014 From: dholth at gmail.com (Daniel Holth) Date: Sun, 6 Jul 2014 12:19:51 -0400 Subject: [Distutils] Specifying Python version as part of requirements In-Reply-To: References: Message-ID: Sorry it only works for install requires via the extras syntax. On Jul 6, 2014 12:18 PM, "Brett Cannon" wrote: > > > On Sat Jul 05 2014 at 9:26:26 AM, Daniel Holth wrote: > >> Setuptools supports it even without wheel. >> > How do you specify that since PEP 426 is oriented towards JSON metadata. > Would it be: > > test_requires = ["unittest2;python_version < '3.0'"] > > ? > > > >> On Jul 5, 2014 9:25 AM, "Brett Cannon" wrote: >> >>> >>> >>> On Fri Jul 04 2014 at 10:38:58 PM, Daniel Holth >>> wrote: >>> >>>> The next wheel release and the wheel code in revision control will >>>> support the setuptools conditional requirements syntax. See wheel's own >>>> setup.py in mercurial tip. ? >>>> >>> Glad this will work eventually, but unfortunately not all of my >>> dependencies have wheels yet, so I will have to stick with my setup.py 'if' >>> statements. >>> >>> -Brett >>> >>> >>>> On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: >>>> >>>>> So this is in PEP uh, 426 I think. It?s not final yet but it kinda >>>>> works right now for Wheels. 
>>>>> >>>>> Basically you do a conditional include to install_requires in your >>>>> setup.py, and then if >>>>> you're creating wheels you do something like -> >>>>> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 >>>>> >>>>> That will overwrite install_requires, but only for Wheels. >>>>> >>>>> On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: >>>>> >>>>> > I just checked PEP 440 and pip doesn't seem to have anything >>>>> specific, so I thought I would ask if there is any way now or in the future >>>>> to specify that a dependency is only needed for certain versions of Python? >>>>> > >>>>> > My current use case is I want to use unittest2, but pip errors out >>>>> during installation under Python 3 of it since unittest2 imports itself and >>>>> triggers a syntax error. Instead of having to hack around this by making it >>>>> a conditional include in my setup.py I would like to declare that I only >>>>> need it in Python 2.x in a requirements.txt file or something. >>>>> > _______________________________________________ >>>>> > Distutils-SIG maillist - Distutils-SIG at python.org >>>>> > https://mail.python.org/mailman/listinfo/distutils-sig >>>>> >>>>> >>>>> ----------------- >>>>> Donald Stufft >>>>> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 >>>>> 3372 DCFA >>>>> >>>>> >>>>> _______________________________________________ >>>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dholth at gmail.com Sun Jul 6 18:23:32 2014 From: dholth at gmail.com (Daniel Holth) Date: Sun, 6 Jul 2014 12:23:32 -0400 Subject: [Distutils] Specifying Python version as part of requirements In-Reply-To: References: Message-ID: Something like

    extras_require = {
        ':python_version=="2.6"': ['argparse'],
        'example_extra_name:another == "4"': ['keyring'],
    }

See if it works to define a test extra in extras_require, and make your tests_require reference mypackagename[test]:

    extras_require = {
        'test:python_version=="2.6"': ['argparse'],
        'test': ['other', 'dependencies'],
    },
    tests_require = [
        'mypackagename[test]'
    ]

On Sun, Jul 6, 2014 at 12:19 PM, Daniel Holth wrote: > Sorry it only works for install requires via the extras syntax. > > On Jul 6, 2014 12:18 PM, "Brett Cannon" wrote: >> >> >> >> On Sat Jul 05 2014 at 9:26:26 AM, Daniel Holth wrote: >>> >>> Setuptools supports it even without wheel. >> >> How do you specify that, since PEP 426 is oriented towards JSON metadata? >> Would it be: >> >> test_requires = ["unittest2;python_version < '3.0'"] >> >> ? >> >> >>> >>> On Jul 5, 2014 9:25 AM, "Brett Cannon" wrote: >>>> >>>> >>>> >>>> On Fri Jul 04 2014 at 10:38:58 PM, Daniel Holth >>>> wrote: >>>>> >>>>> The next wheel release and the wheel code in revision control will >>>>> support the setuptools conditional requirements syntax. See wheel's own >>>>> setup.py in mercurial tip. >>>> >>>> Glad this will work eventually, but unfortunately not all of my >>>> dependencies have wheels yet, so I will have to stick with my setup.py 'if' >>>> statements. >>>> >>>> -Brett >>>> >>>>> >>>>> On Jul 4, 2014 7:48 PM, "Donald Stufft" wrote: >>>>>> >>>>>> So this is in PEP uh, 426 I think. It's not final yet but it kinda >>>>>> works right now for Wheels.
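[Editor's note] To make the conditional extras_require convention above concrete, here is a small sketch of how an installer could resolve such keys. `resolve_requires` is a hypothetical helper, not setuptools/pip/wheel code, and the marker handling is cut down to the exact `python_version=="X.Y"` form in Daniel's example (the empty-string extra standing in for install_requires):

```python
# Hedged sketch of the setuptools/wheel conditional extras_require
# convention: keys look like "extra_name:marker" (an empty extra name
# means unconditional install requirements), and an installer keeps the
# requirements whose extra was requested AND whose marker is true.
# Marker evaluation here is simplified to python_version=="X.Y" only.


def resolve_requires(extras_require, requested_extras=(), python_version="2.7"):
    """Collect requirements from conditional extras_require keys."""
    wanted = set(requested_extras) | {""}   # "" == install_requires
    out = []
    for key, reqs in extras_require.items():
        extra, _, marker = key.partition(":")
        if extra not in wanted:
            continue
        if marker:
            # only the python_version=="X.Y" form from the example above
            field, _, literal = marker.partition("==")
            if field.strip() != "python_version":
                raise ValueError("only python_version markers supported here")
            if literal.strip().strip('"\'') != python_version:
                continue
        out.extend(reqs)
    return sorted(out)


extras = {
    'test:python_version=="2.6"': ['argparse'],
    'test': ['other', 'dependencies'],
}
# On 2.6 the [test] extra pulls in argparse as well:
resolve_requires(extras, ['test'], python_version="2.6")
# -> ['argparse', 'dependencies', 'other']
```

On 2.7 the same call would drop argparse, since the conditional key's marker no longer matches.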
>>>>>> >>>>>> Basically you do a conditional include to install_requires in your >>>>>> setup.py, and then if >>>>>> you're creating wheels you do something like -> >>>>>> https://github.com/dstufft/twine/blob/master/setup.cfg#L9-L13 >>>>>> >>>>>> That will overwrite install_requires, but only for Wheels. >>>>>> >>>>>> On Jul 4, 2014, at 6:51 PM, Brett Cannon wrote: >>>>>> >>>>>> > I just checked PEP 440 and pip doesn't seem to have anything >>>>>> > specific, so I thought I would ask if there is any way now or in the future >>>>>> > to specify that a dependency is only needed for certain versions of Python? >>>>>> > >>>>>> > My current use case is I want to use unittest2, but pip errors out >>>>>> > during installation under Python 3 of it since unittest2 imports itself and >>>>>> > triggers a syntax error. Instead of having to hack around this by making it >>>>>> > a conditional include in my setup.py I would like to declare that I only >>>>>> > need it in Python 2.x in a requirements.txt file or something. >>>>>> > _______________________________________________ >>>>>> > Distutils-SIG maillist - Distutils-SIG at python.org >>>>>> > https://mail.python.org/mailman/listinfo/distutils-sig >>>>>> >>>>>> >>>>>> ----------------- >>>>>> Donald Stufft >>>>>> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 >>>>>> 3372 DCFA >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>>>> > From dholth at gmail.com Sun Jul 6 19:39:08 2014 From: dholth at gmail.com (Daniel Holth) Date: Sun, 6 Jul 2014 13:39:08 -0400 Subject: [Distutils] wheel 0.24.0 released Message-ID: The much-anticipated 0.24.0 release of wheel has occurred. - The python tag used for pure-python packages is now .pyN (major version only).
This change actually occurred in 0.23.0 when the --python-tag option was added, but was not explicitly mentioned in the changelog then. - wininst2wheel and egg2wheel removed. Use "wheel convert [archive]" instead. - Wheel now supports setuptools-style conditional requirements via the extras_require={} syntax. Separate 'extra' names from conditions using the : character. Wheel's own setup.py does this. (The empty-string extra is the same as install_requires.) These conditional requirements should work the same whether the package is installed by wheel or by setup.py. So we support and convert setuptools' conditional requirements to METADATA. However, although pkg_resources.working_set.by_key['wheel']._dep_map is correct, pip install wheel[extraname] doesn't seem to work at all; this bug may be fixed in an unreleased pip? Daniel Holth From mal at egenix.com Mon Jul 7 15:02:42 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 07 Jul 2014 15:02:42 +0200 Subject: [Distutils] pip, virtualenv, setuptools on Windows Message-ID: <53BA9A72.4060204@egenix.com> I wonder what our story is for the pip, virtualenv, setuptools packaging setup on Windows. At the moment, the only way to get this setup seems to be following guides such as these: http://www.tylerbutler.com/2012/05/how-to-install-python-pip-and-virtualenv-on-windows-with-powershell/ But that's not really how Windows users expect things to work. They want to click on an installer, preferably an MSI one, since that integrates well with managed corporate Windows environments, and then expect everything to just work. Much in the same way they install Python itself. Is there anything planned in this area? Hint: If someone were to send in a grant request to build such an installer for Windows, I'm sure the PSF board would love to fund such work. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 07 2014) >>> Python Projects, Consulting and Support ...
http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 14 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From vinay_sajip at yahoo.co.uk Mon Jul 7 15:20:57 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 7 Jul 2014 14:20:57 +0100 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: <53BA9A72.4060204@egenix.com> References: <53BA9A72.4060204@egenix.com> Message-ID: <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> ISTM the click-on-an-MSI-and-everything-works may be OK for a "system-wide" installation for a given version of Python, but I don't see how that can work with venvs (it's the same problem as for bdist_wininst installers). How would one pass the venv (to install into) to a point-and-click installer, without using a command line? Drag-and-drop onto a venv folder is a possibility, but that's not exactly the conventional usage idiom for MSIs. Regards, Vinay Sajip ________________________________ From: M.-A. Lemburg To: Python Distutils SIG Sent: Monday, 7 July 2014, 14:02 Subject: [Distutils] pip, virtualenv, setuptools on Windows I wonder what our story is for the pip, virtualenv, setuptools packaging setup on Windows. At the moment, the only way to get this setup seems to be following guides such as these: http://www.tylerbutler.com/2012/05/how-to-install-python-pip-and-virtualenv-on-windows-with-powershell/ But that's not really how Windows users expect things to work. 
They want to click on an installer, preferably an MSI one, since that integrates well with managed corporate Windows environments, and then expect everything to just work. Much in the same way they install Python itself. Is there anything planned in this area? Hint: If someone were to send in a grant request to build such an installer for Windows, I'm sure the PSF board would love to fund such work. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 07 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 14 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From mal at egenix.com Mon Jul 7 15:25:10 2014 From: mal at egenix.com (M.-A.
Lemburg) Date: Mon, 07 Jul 2014 15:25:10 +0200 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> Message-ID: <53BA9FB6.5010102@egenix.com> On 07.07.2014 15:20, Vinay Sajip wrote: > ISTM the click-on-an-MSI-and-everything-works may be OK for a "system-wide" installation for a given version of Python, but I don't see how that can work with venvs (it's the same problem as for bdist_wininst installers). How would one pass the venv (to install into) to a point-and-click installer, without using a command line? Drag-and-drop onto a venv folder is a possibility, but that's not exactly the conventional usage idiom for MSIs. Perhaps I wasn't clear enough: I am talking about bootstrapping a Windows system Python installation with pip, setuptools and virtualenv. Once this is done, virtualenvs can be setup as usual from the command line. The problem is getting to that point easily :-) > Regards, > > Vinay Sajip > > > ________________________________ > From: M.-A. Lemburg > To: Python Distutils SIG > Sent: Monday, 7 July 2014, 14:02 > Subject: [Distutils] pip, virtualenv, setuptools on Windows > > > I wonder what our story is for the pip, virtualenv, setuptools packaging > setup on Windows. > > At the moment, the only way to get this setup seems to be following > guides such as these: > > http://www.tylerbutler.com/2012/05/how-to-install-python-pip-and-virtualenv-on-windows-with-powershell/ > > But that's not really how Windows users expect things to work. They > want to click on an installer, preferably a MSI one, since that > integrates well with managed corporate Windows environments, and then > expect everything to just work. Much in the same way they install Python > itself. > > Is there anything planned in this area ? 
> > Hint: If someone were to send in a grant request to build such an > installer for Windows, I'm sure the PSF board would love to fund work > such work. > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 07 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 14 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From p.f.moore at gmail.com Mon Jul 7 15:29:23 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 Jul 2014 14:29:23 +0100 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: <53BA9FB6.5010102@egenix.com> References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> Message-ID: On 7 July 2014 14:25, M.-A. Lemburg wrote: > On 07.07.2014 15:20, Vinay Sajip wrote: >> ISTM the click-on-an-MSI-and-everything-works may be OK for a "system-wide" installation for a given version of Python, but I don't see how that can work with venvs (it's the same problem as for bdist_wininst installers). How would one pass the venv (to install into) to a point-and-click installer, without using a command line? Drag-and-drop onto a venv folder is a possibility, but that's not exactly the conventional usage idiom for MSIs. > > Perhaps I wasn't clear enough: I am talking about bootstrapping > a Windows system Python installation with pip, setuptools and > virtualenv. 
> > Once this is done, virtualenvs can be setup as usual from the > command line. The problem is getting to that point easily :-) The MSI for Python 3.4 includes both pip and venv by default, which pretty much covers this scenario. Setuptools is also included but that's officially an implementation detail (as the core devs don't want to "bless" setuptools to the extent that distributing it with Python would imply). Regardless, "pip install setuptools" does the job, and you have the environment you're describing. If you want virtualenv rather than venv, "pip install virtualenv" gets that. Am I missing something? Paul From mal at egenix.com Mon Jul 7 16:25:48 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 07 Jul 2014 16:25:48 +0200 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> Message-ID: <53BAADEC.40800@egenix.com> On 07.07.2014 15:29, Paul Moore wrote: > On 7 July 2014 14:25, M.-A. Lemburg wrote: >> On 07.07.2014 15:20, Vinay Sajip wrote: >>> ISTM the click-on-an-MSI-and-everything-works may be OK for a "system-wide" installation for a given version of Python, but I don't see how that can work with venvs (it's the same problem as for bdist_wininst installers). How would one pass the venv (to install into) to a point-and-click installer, without using a command line? Drag-and-drop onto a venv folder is a possibility, but that's not exactly the conventional usage idiom for MSIs. >> >> Perhaps I wasn't clear enough: I am talking about bootstrapping >> a Windows system Python installation with pip, setuptools and >> virtualenv. >> >> Once this is done, virtualenvs can be setup as usual from the >> command line. The problem is getting to that point easily :-) > > The MSI for Python 3.4 includes both pip and venv by default, which > pretty much covers this scenario. 
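[Editor's note] As a side note to the "included by default" point, whether a given interpreter already has these pieces can be probed from Python itself. This is a made-up diagnostic, not part of pip or CPython, and it needs Python 3.4+ for `importlib.util.find_spec`:

```python
# Made-up diagnostic, not part of any tool: report which of the
# packaging-related modules discussed in this thread the running
# (Python 3) interpreter can import. On a stock 3.4+ installer build,
# ensurepip (PEP 453) and venv (PEP 405) ship with the interpreter.
import importlib.util


def packaging_status():
    mods = ("pip", "venv", "ensurepip", "setuptools")
    return {m: importlib.util.find_spec(m) is not None for m in mods}
```

On 2.6/3.3, where none of this is bundled, the thread's suggested route is bootstrapping with get-pip.py instead.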
Setuptools is also included but > that's officially an implementation detail (as the core devs don't > want to "bless" setuptools to the extent that distributing it with > Python would imply). Regardless, "pip install setuptools" does the > job, and you have the environment you're describing. > > If you want virtualenv rather than venv, "pip install virtualenv" gets that. > > Am I missing something? Yes: Installers for Python 2.6, 2.7 and 3.3 :-) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 07 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 14 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From p.f.moore at gmail.com Mon Jul 7 16:51:23 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 Jul 2014 15:51:23 +0100 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: <53BAADEC.40800@egenix.com> References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> <53BAADEC.40800@egenix.com> Message-ID: On 7 July 2014 15:25, M.-A. Lemburg wrote: > Yes: Installers for Python 2.6, 2.7 and 3.3 :-) Ha. That one, I'll leave to someone who cares about 2.x... 
;-) Paul From donald at stufft.io Mon Jul 7 17:39:38 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 7 Jul 2014 11:39:38 -0400 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> <53BAADEC.40800@egenix.com> Message-ID: <71C8565A-AA1E-4BFC-876F-E6CB50FC9070@stufft.io> I think there may have been discussion about this for 2.7 at PyCon. 2.6 and 3.3 I'm guessing would require someone else to do them. I tried awhile back but never got it finished. On Jul 7, 2014, at 10:51 AM, Paul Moore wrote: > On 7 July 2014 15:25, M.-A. Lemburg wrote: >> Yes: Installers for Python 2.6, 2.7 and 3.3 :-) > > Ha. That one, I'll leave to someone who cares about 2.x... ;-) > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Steve.Dower at microsoft.com Mon Jul 7 18:42:17 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Mon, 7 Jul 2014 16:42:17 +0000 Subject: [Distutils] pip, virtualenv, setuptools on Windows In-Reply-To: References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> <53BAADEC.40800@egenix.com> Message-ID: <7aec470fc5174985b2ea690a71aec0e4@BLUPR03MB389.namprd03.prod.outlook.com> Paul Moore wrote: > On 7 July 2014 15:25, M.-A. Lemburg wrote: >> Yes: Installers for Python 2.6, 2.7 and 3.3 :-) > > Ha. That one, I'll leave to someone who cares about 2.x...
;-) > > Paul That "someone" is Nick Coghlan, and as long as the python-dev discussion doesn't take too long, it should happen for 2.7.x (where x may be 10+, at the rate OpenSSL bugs keep forcing us to rerelease...) There's also https://bootstrap.pypa.io/get-pip.py, which is better than an MSI in almost every way IMHO. Cheers, Steve From nad at acm.org Mon Jul 7 20:30:04 2014 From: nad at acm.org (Ned Deily) Date: Mon, 07 Jul 2014 11:30:04 -0700 Subject: [Distutils] pip, virtualenv, setuptools on Windows References: <53BA9A72.4060204@egenix.com> <1404739257.42590.YahooMailNeo@web172402.mail.ir2.yahoo.com> <53BA9FB6.5010102@egenix.com> <53BAADEC.40800@egenix.com> <7aec470fc5174985b2ea690a71aec0e4@BLUPR03MB389.namprd03.prod.outlook.com> Message-ID: In article <7aec470fc5174985b2ea690a71aec0e4 at BLUPR03MB389.namprd03.prod.outlook.com >, Steve Dower wrote: > Paul Moore wrote: > > On 7 July 2014 15:25, M.-A. Lemburg wrote: > >> Yes: Installers for Python 2.6, 2.7 and 3.3 :-) > > > > Ha. That one, I'll leave to someone who cares about 2.x... ;-) > That "someone" is Nick Coghlan, and as long as the python-dev discussion > doesn't take too long, it should happen for 2.7.x (where x may be 10+, at the > rate OpenSSL bugs keep forcing us to rerelease...) I think Nick has broadly hinted that he would welcome someone else picking up the task of extending PEP 453 to Python 2.7. I don't have the time right at the moment to do it myself but I certainly am willing to backport to 2.7 the pieces of the implementation that I did. http://permalink.gmane.org/gmane.comp.python.devel/147836 > There's also https://bootstrap.pypa.io/get-pip.py, which is better than an > MSI in almost every way IMHO. Definitely and that should work for 2.6 and 3.3 as well. -- Ned Deily, nad at acm.org From mal at egenix.com Tue Jul 8 16:32:49 2014 From: mal at egenix.com (M.-A. 
Lemburg) Date: Tue, 08 Jul 2014 16:32:49 +0200 Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6 Message-ID: <53BC0111.8000909@egenix.com> I just tried the documented interfaces for PyPI (https://wiki.python.org/moin/PyPIXmlRpc) with Python 2.6 and it fails with an error:

> python pypirpc.py
Traceback (most recent call last):
  File "pypirpc.py", line 7, in <module>
    pprint.pprint(client.package_releases('egenix-web-installer-test'))
  File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1390, in _parse_response
    p.close()
  File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 604, in close
    self._parser.Parse("", 1) # end of data
xml.parsers.expat.ExpatError: no element found: line 1, column 0

The call works with Python 2.7. It appears that xmlrpclib is not receiving any body data from the server.
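[Editor's note] For reference, the failing call can be written against Python 3's stdlib, where xmlrpclib became xmlrpc.client, pointed at https://. The endpoint below is the one from the time of this thread (PyPI's XML-RPC API has since been curtailed); constructing the proxy performs no network I/O, only invoking a method does:

```python
# The report's client, rewritten for Python 3's xmlrpc.client (the
# renamed xmlrpclib) and HTTPS. ServerProxy is lazy: creating it and
# looking up a method build a call proxy without touching the network;
# only calling the method issues the XML-RPC POST.
import xmlrpc.client

client = xmlrpc.client.ServerProxy("https://pypi.python.org/pypi")
method = client.package_releases   # lazy call proxy, no request yet
# method('egenix-web-installer-test') would perform the actual call.
```

This sidesteps the plain-HTTP behaviour discussed later in the thread, at the cost of requiring an interpreter whose xmlrpclib/xmlrpc.client speaks HTTPS.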
The returned data in 2.7 looks completely harmless: send: "POST /pypi HTTP/1.1\r\nHost: pypi.python.org\r\nAccept-Encoding: gzip\r\nUser-Agent: xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: 185\r\n\r\n\n\npackage_releases\n\n\negenix-web-installer-test\n\n\n\n" reply: 'HTTP/1.1 200 OK\r\n' header: Server: nginx/1.6.0 header: Content-Type: text/xml header: charset: UTF-8 header: Strict-Transport-Security: max-age=31536000; includeSubDomains header: Transfer-Encoding: chunked header: Accept-Ranges: bytes header: Date: Tue, 08 Jul 2014 14:19:41 GMT header: Via: 1.1 varnish header: Connection: keep-alive header: X-Served-By: cache-fra1229-FRA header: X-Cache: MISS header: X-Cache-Hits: 0 header: X-Timer: S1404829181.210045,VS0,VE325 body: "\n\n\n\n\n0.2.0\n\n\n\n\n" Could this be a network error rather than a program one ? The code in 2.7 does a retry in case of a connection reset or abort, which code in 2.6 and earlier does not apply. Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 08 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 13 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From mal at egenix.com Tue Jul 8 16:49:12 2014 From: mal at egenix.com (M.-A. 
Lemburg) Date: Tue, 08 Jul 2014 16:49:12 +0200 Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6 In-Reply-To: <53BC0111.8000909@egenix.com> References: <53BC0111.8000909@egenix.com> Message-ID: <53BC04E8.1030405@egenix.com> I opened an issue for this: https://bitbucket.org/pypa/pypi/issue/157/pypi-xmlrpc-interface-no-longer-works-with On 08.07.2014 16:32, M.-A. Lemburg wrote: > I just tried the documented interfaces for PyPI > (https://wiki.python.org/moin/PyPIXmlRpc) with Python 2.6 and > it fails with an error: > >> python pypirpc.py > Traceback (most recent call last): > File "pypirpc.py", line 7, in > pprint.pprint(client.package_releases('egenix-web-installer-test')) > File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1199, in __call__ > return self.__send(self.__name, args) > File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1489, in __request > verbose=self.__verbose > File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1253, in request > return self._parse_response(h.getfile(), sock) > File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1390, in _parse_response > p.close() > File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 604, in close > self._parser.Parse("", 1) # end of data > xml.parsers.expat.ExpatError: no element found: line 1, column 0 > > The call works with Python 2.7. It appears that xmlrpclib > is not receiving any body data from the server. 
> > The returned data in 2.7 looks completely harmless: > > send: "POST /pypi HTTP/1.1\r\nHost: pypi.python.org\r\nAccept-Encoding: gzip\r\nUser-Agent: > xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: > 185\r\n\r\n version='1.0'?>\n\npackage_releases\n\n\negenix-web-installer-test\n\n\n\n" > reply: 'HTTP/1.1 200 OK\r\n' > header: Server: nginx/1.6.0 > header: Content-Type: text/xml > header: charset: UTF-8 > header: Strict-Transport-Security: max-age=31536000; includeSubDomains > header: Transfer-Encoding: chunked > header: Accept-Ranges: bytes > header: Date: Tue, 08 Jul 2014 14:19:41 GMT > header: Via: 1.1 varnish > header: Connection: keep-alive > header: X-Served-By: cache-fra1229-FRA > header: X-Cache: MISS > header: X-Cache-Hits: 0 > header: X-Timer: S1404829181.210045,VS0,VE325 > body: " version='1.0'?>\n\n\n\n\n0.2.0\n\n\n\n\n" > > Could this be a network error rather than a program one ? > > The code in 2.7 does a retry in case of a connection reset or abort, > which code in 2.6 and earlier does not apply. > > Thanks, > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 08 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-07-21: EuroPython 2014, Berlin, Germany ... 13 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From donald at stufft.io Tue Jul 8 17:09:16 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 8 Jul 2014 11:09:16 -0400 Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6 In-Reply-To: <53BC04E8.1030405@egenix.com> References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> Message-ID: <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> You're using http:// rather than https:// yea? On Jul 8, 2014, at 10:49 AM, M.-A. Lemburg wrote: > I opened an issue for this: > https://bitbucket.org/pypa/pypi/issue/157/pypi-xmlrpc-interface-no-longer-works-with > > On 08.07.2014 16:32, M.-A. Lemburg wrote: >> I just tried the documented interfaces for PyPI >> (https://wiki.python.org/moin/PyPIXmlRpc) with Python 2.6 and >> it fails with an error: >> >>> python pypirpc.py >> Traceback (most recent call last): >> File "pypirpc.py", line 7, in >> pprint.pprint(client.package_releases('egenix-web-installer-test')) >> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1199, in __call__ >> return self.__send(self.__name, args) >> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1489, in __request >> verbose=self.__verbose >> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1253, in request >> return self._parse_response(h.getfile(), sock) >> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1390, in _parse_response >> p.close() >> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 604, in close >> self._parser.Parse("", 1) # end of data >> xml.parsers.expat.ExpatError: no element found: line 1, column 0 >> >> The call works with Python 2.7. It appears that xmlrpclib >> is not receiving any body data from the server.
>> >> The returned data in 2.7 looks completely harmless: >> >> send: "POST /pypi HTTP/1.1\r\nHost: pypi.python.org\r\nAccept-Encoding: gzip\r\nUser-Agent: >> xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: >> 185\r\n\r\n> version='1.0'?>\n\npackage_releases\n\n\negenix-web-installer-test\n\n\n\n" >> reply: 'HTTP/1.1 200 OK\r\n' >> header: Server: nginx/1.6.0 >> header: Content-Type: text/xml >> header: charset: UTF-8 >> header: Strict-Transport-Security: max-age=31536000; includeSubDomains >> header: Transfer-Encoding: chunked >> header: Accept-Ranges: bytes >> header: Date: Tue, 08 Jul 2014 14:19:41 GMT >> header: Via: 1.1 varnish >> header: Connection: keep-alive >> header: X-Served-By: cache-fra1229-FRA >> header: X-Cache: MISS >> header: X-Cache-Hits: 0 >> header: X-Timer: S1404829181.210045,VS0,VE325 >> body: "> version='1.0'?>\n\n\n\n\n0.2.0\n\n\n\n\n" >> >> Could this be a network error rather than a program one ? >> >> The code in 2.7 does a retry in case of a connection reset or abort, >> which code in 2.6 and earlier does not apply. >> >> Thanks, >> > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Jul 08 2014) >>>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2014-07-21: EuroPython 2014, Berlin, Germany ... 13 days to go > > ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mal at egenix.com Tue Jul 8 17:13:22 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 08 Jul 2014 17:13:22 +0200 Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6 In-Reply-To: <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> Message-ID: <53BC0A92.8010901@egenix.com> On 08.07.2014 17:09, Donald Stufft wrote: > You're using http:// rather than https:// yea? Yes. xmlrpclib in Python 2.6 doesn't understand HTTPS. Support for it was added in Python 2.7. > On Jul 8, 2014, at 10:49 AM, M.-A. Lemburg wrote: > >> I opened an issue for this: >> https://bitbucket.org/pypa/pypi/issue/157/pypi-xmlrpc-interface-no-longer-works-with >> >> On 08.07.2014 16:32, M.-A.
Lemburg wrote: >>> I just tried the documented interfaces for PyPI >>> (https://wiki.python.org/moin/PyPIXmlRpc) with Python 2.6 and >>> it fails with an error: >>> >>>> python pypirpc.py >>> Traceback (most recent call last): >>> File "pypirpc.py", line 7, in >>> pprint.pprint(client.package_releases('egenix-web-installer-test')) >>> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1199, in __call__ >>> return self.__send(self.__name, args) >>> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1489, in __request >>> verbose=self.__verbose >>> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1253, in request >>> return self._parse_response(h.getfile(), sock) >>> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 1390, in _parse_response >>> p.close() >>> File "/usr/local/python-2.6-ucs2/lib/python2.6/xmlrpclib.py", line 604, in close >>> self._parser.Parse("", 1) # end of data >>> xml.parsers.expat.ExpatError: no element found: line 1, column 0 >>> >>> The call works with Python 2.7. It appears that xmlrpclib >>> is not receiving any body data from the server. 
>>>
>>> The returned data in 2.7 looks completely harmless:
>>>
>>> send: "POST /pypi HTTP/1.1\r\nHost: pypi.python.org\r\nAccept-Encoding: gzip\r\nUser-Agent:
>>> xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length:
>>> 185\r\n\r\n<?xml version='1.0'?>\n<methodCall>\n<methodName>package_releases</methodName>\n<params>\n<param>\n<value><string>egenix-web-installer-test</string></value>\n</param>\n</params>\n</methodCall>\n"
>>> reply: 'HTTP/1.1 200 OK\r\n'
>>> header: Server: nginx/1.6.0
>>> header: Content-Type: text/xml
>>> header: charset: UTF-8
>>> header: Strict-Transport-Security: max-age=31536000; includeSubDomains
>>> header: Transfer-Encoding: chunked
>>> header: Accept-Ranges: bytes
>>> header: Date: Tue, 08 Jul 2014 14:19:41 GMT
>>> header: Via: 1.1 varnish
>>> header: Connection: keep-alive
>>> header: X-Served-By: cache-fra1229-FRA
>>> header: X-Cache: MISS
>>> header: X-Cache-Hits: 0
>>> header: X-Timer: S1404829181.210045,VS0,VE325
>>> body: "<?xml version='1.0'?>\n<methodResponse>\n<params>\n<param>\n<value><array><data>\n<value><string>0.2.0</string></value>\n</data></array></value>\n</param>\n</params>\n</methodResponse>\n"
>>>
>>> Could this be a network error rather than a program one ?
>>>
>>> The code in 2.7 does a retry in case of a connection reset or abort,
>>> which the code in 2.6 and earlier does not do.
>>>
>>> Thanks,

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Jul 08 2014)
>>> Python Projects, Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________
2014-07-21: EuroPython 2014, Berlin, Germany ... 13 days to go

::::: Try our mxODBC.Connect Python Database Interface for free ! ::::::

eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/

From donald at stufft.io Tue Jul 8 17:15:11 2014
From: donald at stufft.io (Donald Stufft)
Date: Tue, 8 Jul 2014 11:15:11 -0400
Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6
In-Reply-To: <53BC0A92.8010901@egenix.com>
References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> <53BC0A92.8010901@egenix.com>
Message-ID: <80645F17-C9F7-41CD-B511-D71AD8D9E125@stufft.io>

Uh what? HTTPS works fine on my copy of 2.6? If I recall this problem
only happens if you use http:// on 2.6.

On Jul 8, 2014, at 11:13 AM, M.-A. Lemburg wrote:
> Yes. xmlrpclib in Python 2.6 doesn't understand HTTPS. Support for
> it was added in Python 2.7.
[...]

From mal at egenix.com Tue Jul 8 17:17:05 2014
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 08 Jul 2014 17:17:05 +0200
Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6
In-Reply-To: <80645F17-C9F7-41CD-B511-D71AD8D9E125@stufft.io>
References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> <53BC0A92.8010901@egenix.com> <80645F17-C9F7-41CD-B511-D71AD8D9E125@stufft.io>
Message-ID: <53BC0B71.8060902@egenix.com>

On 08.07.2014 17:15, Donald Stufft wrote:
> Uh what? HTTPS works fine on my copy of 2.6? If I recall this problem
> only happens if you use http:// on 2.6.

Ah, sorry. You're right. I had looked at a diff of the module
between 2.6 and 2.7 and saw lots of HTTPS related changes.

With https:// it does work in 2.6 as well.

Thanks. I'll close the bug report and fix the wiki page.
[...]

From donald at stufft.io Tue Jul 8 17:18:18 2014
From: donald at stufft.io (Donald Stufft)
Date: Tue, 8 Jul 2014 11:18:18 -0400
Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6
In-Reply-To: <53BC0B71.8060902@egenix.com>
References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> <53BC0A92.8010901@egenix.com> <80645F17-C9F7-41CD-B511-D71AD8D9E125@stufft.io> <53BC0B71.8060902@egenix.com>
Message-ID: <4F759B92-99EC-46A7-822E-E98E24A29F0B@stufft.io>

Oh Good. I thought I was going crazy there for a minute :) I'm pretty sure
it happens because of the redirect from HTTP to HTTPS.

On Jul 8, 2014, at 11:17 AM, M.-A. Lemburg wrote:
> Ah, sorry. You're right. I had looked at a diff of the module
> between 2.6 and 2.7 and saw lots of HTTPS related changes.
>
> With https:// it does work in 2.6 as well.
[...]

From mal at egenix.com Tue Jul 8 17:29:11 2014
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 08 Jul 2014 17:29:11 +0200
Subject: [Distutils] PyPI XMLRPC interface no longer works with Python 2.6
In-Reply-To: <4F759B92-99EC-46A7-822E-E98E24A29F0B@stufft.io>
References: <53BC0111.8000909@egenix.com> <53BC04E8.1030405@egenix.com> <04CAF73D-AC62-40DE-97AB-FEE77ADA9ABC@stufft.io> <53BC0A92.8010901@egenix.com> <80645F17-C9F7-41CD-B511-D71AD8D9E125@stufft.io> <53BC0B71.8060902@egenix.com>
Message-ID: <53BC0E47.20204@egenix.com>

On 08.07.2014 17:18, Donald Stufft wrote:
> Oh Good. I thought I was going crazy there for a minute :) I'm pretty sure
> it happens because of the redirect from HTTP to HTTPS.

That's probably the cause, even though I don't get a redirect header
in the XMLRPC reply. The server appears to return a simple HTTP/1.1 200 OK
and then stops talking to Python 2.6. Here's a telnet transcript:

> telnet pypi.python.org 80
Trying 185.31.17.175...
Connected to pypi.python.org.
Escape character is '^]'.
POST /pypi HTTP/1.0
Host: pypi.python.org
User-Agent: xmlrpclib.py/1.0.1 (by www.pythonware.com)
Content-Type: text/xml
Content-Length: 185

<?xml version='1.0'?>
<methodCall>
<methodName>package_releases</methodName>
<params>
<param>
<value><string>egenix-web-installer-test</string></value>
</param>
</params>
</methodCall>

HTTP/1.1 200 OK
Server: nginx/1.6.0
Content-Type: text/xml
charset: UTF-8
Strict-Transport-Security: max-age=31536000; includeSubDomains
Accept-Ranges: bytes
Date: Tue, 08 Jul 2014 15:24:44 GMT
Via: 1.1 varnish
Connection: close
X-Served-By: cache-fra1224-FRA
X-Cache: MISS
X-Cache-Hits: 0
X-Timer: S1404833078.851717,VS0,VE5364

Connection closed by foreign host.
>

I guess the HTTP/1.0 in the POST request causes Varnish to close the
connection without a redirect. A bit weird. Python 2.7 sends an HTTP/1.1.

> On Jul 8, 2014, at 11:17 AM, M.-A. Lemburg wrote:
[...]
>> Thanks. I'll close the bug report and fix the wiki page.

Looks like closing the ticket has to be done by someone else.

[...]
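The exchange above can be checked offline: xmlrpclib's own request encoder produces exactly the 185-byte body shown in the debug dump, and the fix the thread converges on is simply to point ServerProxy at the https:// endpoint so the problematic HTTP-to-HTTPS redirect never happens. A minimal sketch (the Python 3 import fallback is mine, not from the thread; the network call is left commented out):

```python
try:
    import xmlrpclib                   # Python 2, as used in the thread
except ImportError:
    import xmlrpc.client as xmlrpclib  # same module, renamed in Python 3

# Re-create the request body xmlrpclib sends for the failing call; its
# length matches the "Content-Length: 185" header in the debug dump.
payload = xmlrpclib.dumps(('egenix-web-installer-test',), 'package_releases')
print(len(payload))                    # 185
print('package_releases' in payload)   # True

# The workaround: talk to PyPI over https:// so the old client never
# sees the HTTP -> HTTPS redirect that Varnish handles oddly.
client = xmlrpclib.ServerProxy('https://pypi.python.org/pypi')
# client.package_releases('egenix-web-installer-test')  # network call
```

Using https:// sidesteps the HTTP/1.0-vs-Varnish behaviour entirely, since no redirect is ever issued on the encrypted endpoint.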
[...]

From mario at include-once.org Fri Jul 11 23:24:32 2014
From: mario at include-once.org (mario at include-once.org)
Date: Fri, 11 Jul 2014 23:24:32 +0200
Subject: [Distutils] PyPI changelog support, releases.json / common NEWS.rst format?
Message-ID:

I'm currently tinkering on a freshmeat substitute, and for automating
release announcements I've been looking around for common package
metadata schemes.

PyPI's /pypi/pkgname/json (or the xmlrpc interface) looks quite
interesting. It obviously mostly targets dependency management and
systemic categorization, and from googling around this never seems to
have come up before: would it be feasible to include a version
changelog / release summary via du.register?

Of course, I'm referring to a human-readable "This version adds and
fixes..." changelog, not the (name, version, timestamp) journal tuple.

The releases{} per-URL `comment_text` seems widely unused.
Was that its purpose?

I hope this isn't getting too off-topic, but this is - just for
comparison and context - what I'm intending to eventually map PyPI
release streams onto:
http://fossil.include-once.org/freshcode/wiki/releases.json

There are probably other priorities for distutils / warehouse
currently. So alternatively, is there a semi-standard for NEWS.txt or
CHANGELOG etc. files within Python packages?

Cheeseshop project homepages which also include a release notes list
via `long_description` seem few and far between. I actually found just
one:
https://pypi.python.org/pypi/py-translate
which seems to share a reStructuredText source for documentation and
pypi homepage:
https://raw.githubusercontent.com/jjangsangy/py-translate/master/HISTORY.rst

(I'd have presumed Markdown-style release notes to be favoured.)
Anyway, is there an estimate on how many packages include release
notes at all?

From wichert at wiggy.net  Sun Jul 13 18:04:03 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Sun, 13 Jul 2014 18:04:03 +0200
Subject: [Distutils] PyPI not rendering ReST long_description
Message-ID: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>

I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at
the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy
PyPI is not rendering ReST. I can't figure out why, though: according
to both "python setup.py check" and "python setup.py --long-description
| rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my
description. Is there any way to see why PyPI is not rendering my ReST?

Wichert.
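mario's question above about the /pypi/pkgname/json interface can be poked at with a few lines of stdlib Python. The payload below is a fabricated sample that only mirrors the endpoint's {info, releases} shape; the package name, filenames, and comment strings are assumptions for illustration:

```python
import json

# Fabricated sample mirroring the shape of PyPI's /pypi/<pkg>/json payload;
# the package name, filenames and comments are made up for illustration.
payload = json.loads("""
{
  "info": {"name": "example-pkg", "version": "1.0.1"},
  "releases": {
    "1.0.0": [{"filename": "example-pkg-1.0.0.tar.gz", "comment_text": ""}],
    "1.0.1": [{"filename": "example-pkg-1.0.1.tar.gz",
               "comment_text": "Fixes README packaging."}]
  }
}
""")

# Collect whatever per-file comment_text is present for each release.
notes = {
    version: [f["comment_text"] for f in files if f.get("comment_text")]
    for version, files in payload["releases"].items()
}
for version in sorted(notes):
    print(version, notes[version] or "(no release notes)")
```

As the thread suggests, `comment_text` is almost always empty in practice, so any consumer of this feed has to tolerate the fallback case.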
From chris.jerdonek at gmail.com  Sun Jul 13 21:36:18 2014
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Sun, 13 Jul 2014 12:36:18 -0700
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>
Message-ID:

On Sun, Jul 13, 2014 at 9:04 AM, Wichert Akkerman wrote:
> I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description. Is there any way to see why PyPI is not rendering my ReST?

I filed a PyPI issue about this some time before July 2012.

It bothers me that I can't seem to find the issue listed on the
current PyPI issues page: https://bitbucket.org/pypa/pypi/issues .
It also disturbs me that I can't seem to find the issue on the old
SourceForge page either: http://sourceforge.net/projects/pypi/ .
The only thing I have to go on is this link, which no longer goes
anywhere:
http://sourceforge.net/tracker/?func=detail&aid=3539253&group_id=66150&atid=513503

I had included it in a related issue I filed on the main Python bug
tracker: http://bugs.python.org/issue15266

If I recall, my suggestion in the PyPI issue I filed was three-fold:

1) When uploading a reST long_description to PyPI via the
command-line, PyPI should provide an error message saying why the
conversion to HTML failed (so the user knows what to fix).

2) When setting a reST long_description on PyPI via the web UI, PyPI
should provide an error message saying why the conversion to HTML
failed (so the user knows what to fix).
3) The user should have some way of running the conversion locally (as
a check), using the same rules as PyPI, so the user can be assured in
advance that the conversion will be successful when interacting with
the real PyPI.

The third suggestion was the subject of the Distutils issue I filed on
the main Python bug tracker.

On a related note, in the new packaging docs about registering and
uploading packages to PyPI:
https://packaging.python.org/en/latest/tutorial.html#uploading-your-project-to-pypi
I don't see the troubleshooting tip that the old packaging docs
contained, namely the advice to try running:

$ python setup.py --long-description | rst2html.py > output.html

The old location is here:
https://docs.python.org/2.7/distutils/packageindex.html#pypi-package-display .
It would be good if someone went through that page in its entirety and
made sure that any useful information is copied over or otherwise
reflected in the new packaging docs.

--Chris

From wichert at wiggy.net  Sun Jul 13 22:35:30 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Sun, 13 Jul 2014 22:35:30 +0200
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To:
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>
Message-ID: <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net>

> On 13 Jul 2014, at 21:36, Chris Jerdonek wrote:
> On Sun, Jul 13, 2014 at 9:04 AM, Wichert Akkerman wrote:
>> I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description.
>> Is there any way to see why PyPI is not rendering my ReST?
>
> I filed a PyPI issue about this some time before July 2012.
>
> It bothers me that I can't seem to find the issue listed on the current PyPI issues page: https://bitbucket.org/pypa/pypi/issues . It also disturbs me that I can't seem to find the issue on the old SourceForge page either: http://sourceforge.net/projects/pypi/ .

That is somewhat worrying. I created a new ticket for this:
https://bitbucket.org/pypa/pypi/issue/161/rest-formatting-fails-and-there-is-no-way

> 3) The user should have some way of running the conversion locally (as a check), using the same rules as PyPI, so the user can be assured in advance that the conversion will be successful when interacting with the real PyPI.

I don't believe that is a viable approach. I suspect there is a
reasonable risk of differences between PyPI and your local system (for
example environment settings or available locales), and using
different versions of packages such as docutils will lead to incorrect
results. There really is no substitute for PyPI itself reporting
errors it finds.

Wichert.

From donald at stufft.io  Sun Jul 13 23:06:10 2014
From: donald at stufft.io (Donald Stufft)
Date: Sun, 13 Jul 2014 17:06:10 -0400
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net>
Message-ID: <58EA2537-76EA-4280-BC20-D9FE3E72A408@stufft.io>

On Jul 13, 2014, at 4:35 PM, Wichert Akkerman wrote:
>> On 13 Jul 2014, at 21:36, Chris Jerdonek wrote:
>> On Sun, Jul 13, 2014 at 9:04 AM, Wichert Akkerman wrote:
>>> I just uploaded a new pyramid_sqlalchemy package to PyPI.
>>> Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description. Is there any way to see why PyPI is not rendering my ReST?
>>
>> I filed a PyPI issue about this some time before July 2012.
>>
>> It bothers me that I can't seem to find the issue listed on the current PyPI issues page: https://bitbucket.org/pypa/pypi/issues . It also disturbs me that I can't seem to find the issue on the old SourceForge page either: http://sourceforge.net/projects/pypi/ .
>
> That is somewhat worrying. I created a new ticket for this: https://bitbucket.org/pypa/pypi/issue/161/rest-formatting-fails-and-there-is-no-way
>
>> 3) The user should have some way of running the conversion locally (as a check), using the same rules as PyPI, so the user can be assured in advance that the conversion will be successful when interacting with the real PyPI.
>
> I don't believe that is a viable approach. I suspect there is a reasonable risk of differences between PyPI and your local system (for example environment settings or available locales), and using different versions of packages such as docutils will lead to incorrect results. There really is no substitute for PyPI itself reporting errors it finds.
>
> Wichert.

Currently PyPI has no method to report that it has failed except to
fail the upload with a message. This isn't acceptable because we don't
mandate ReST, and some people want to just have plain-text uploads.
With PyPI/Metadata 2.0 this is getting resolved in a way that'll make
all of this possible, as well as additional renderers.
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From chris.jerdonek at gmail.com  Sun Jul 13 23:24:08 2014
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Sun, 13 Jul 2014 14:24:08 -0700
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net>
Message-ID:

On Sun, Jul 13, 2014 at 1:35 PM, Wichert Akkerman wrote:
>> On 13 Jul 2014, at 21:36, Chris Jerdonek wrote:
>>
>> On Sun, Jul 13, 2014 at 9:04 AM, Wichert Akkerman wrote:
>>> I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description. Is there any way to see why PyPI is not rendering my ReST?
>>
>> I filed a PyPI issue about this some time before July 2012.
>>
>> It bothers me that I can't seem to find the issue listed on the current PyPI issues page: https://bitbucket.org/pypa/pypi/issues . It also disturbs me that I can't seem to find the issue on the old SourceForge page either: http://sourceforge.net/projects/pypi/ .
>
> That is somewhat worrying.
> I created a new ticket for this:
> https://bitbucket.org/pypa/pypi/issue/161/rest-formatting-fails-and-there-is-no-way
>
>> 3) The user should have some way of running the conversion locally (as a check), using the same rules as PyPI, so the user can be assured in advance that the conversion will be successful when interacting with the real PyPI.
>
> I don't believe that is a viable approach. I suspect there is a reasonable risk of differences between PyPI and your local system (for example environment settings or available locales), and using different versions of packages such as docutils will lead to incorrect results. There really is no substitute for PyPI itself reporting errors it finds.

I was suggesting this in addition to PyPI reporting errors, and not as
a substitute. It shouldn't be necessary to interact with a remote
service when developing locally to validate and check for errors.

--Chris

From wichert at wiggy.net  Mon Jul 14 08:04:07 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 14 Jul 2014 08:04:07 +0200
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <58EA2537-76EA-4280-BC20-D9FE3E72A408@stufft.io>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <34C61247-C9A4-4FA4-9E84-752339B18C8D@wiggy.net> <58EA2537-76EA-4280-BC20-D9FE3E72A408@stufft.io>
Message-ID: <0A0C9FB2-DF32-4D48-9108-CF4D19D0BABD@wiggy.net>

> On 13 Jul 2014, at 23:06, Donald Stufft wrote:
> Currently PyPI has no method to report that it has failed except to fail the upload with a message. This isn't acceptable because we don't mandate ReST, and some people want to just have plain-text uploads. With PyPI/Metadata 2.0 this is getting resolved in a way that'll make all of this possible, as well as additional renderers.

Doesn't that still leave the other two options as possible right now?

Wichert.
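When the only signal available is pass/fail (render or don't), the offending construct can still be localized by bisecting the text, which is the manual procedure described elsewhere in this thread. A sketch under stated assumptions: `is_valid` is a hypothetical stand-in for whatever black-box check is available, and the failure is assumed to come from a single paragraph:

```python
def find_failing_paragraph(paragraphs, is_valid):
    """Locate one offending paragraph using only a pass/fail check.

    Assumes a single bad paragraph, and that any chunk containing it
    fails is_valid() while chunks without it pass, i.e. the failure
    does not depend on cross-paragraph context.
    """
    lo, hi = 0, len(paragraphs)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if not is_valid("\n\n".join(paragraphs[lo:mid])):
            hi = mid  # culprit is in the lower half
        else:
            lo = mid  # lower half is clean; culprit is above
    return paragraphs[lo]

# Hypothetical predicate standing in for "does PyPI render this?";
# here it rejects any scheme-less www. link target.
doc = ["Intro paragraph.",
       "See `Pyramid <www.pylonsproject.org>`_ for details.",
       "Outro paragraph."]
culprit = find_failing_paragraph(doc, lambda text: "<www." not in text)
print(culprit)
```

Each round of the loop costs one check, so a long description of n paragraphs needs about log2(n) submissions instead of one per paragraph.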
From wichert at wiggy.net  Mon Jul 14 09:30:59 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 14 Jul 2014 09:30:59 +0200
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net>
Message-ID: <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net>

On 13 Jul 2014, at 18:04, Wichert Akkerman wrote:
> I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description. Is there any way to see why PyPI is not rendering my ReST?

I ended up debugging this by manually bisecting the long description
through the PyPI web-interface.

From erik.m.bray at gmail.com  Mon Jul 14 23:11:27 2014
From: erik.m.bray at gmail.com (Erik Bray)
Date: Mon, 14 Jul 2014 17:11:27 -0400
Subject: [Distutils] PyPI changelog support, releases.json / common NEWS.rst format?
In-Reply-To:
References:
Message-ID:

On Fri, Jul 11, 2014 at 5:24 PM, wrote:
> I'm currently tinkering on a freshmeat substitute, and for automating release announcements I've been looking around for common package metadata schemes. PyPI's /pypi/pkgname/json (or the xmlrpc interface) looks quite interesting. It obviously mostly targets dependency management and systematic categorization.
>
> From googling around, this never came up: would it be feasible to include a version changelog / release summary via du.register?
> Of course, I'm referring to a human-readable "This version adds and fixes..." changelog, not the (name, version, timestamp) journal tuple.
>
> The releases{} per-URL `comment_text` seems widely unused. Was that its purpose?
>
> I hope this isn't getting too off-topic, but this is - just for comparison and context - what I'm intending to eventually map PyPI release streams onto:
> http://fossil.include-once.org/freshcode/wiki/releases.json
>
> There are probably other priorities for distutils / warehouse currently. So alternatively, is there a semi-standard for NEWS.txt or CHANGELOG etc. files within Python packages?
>
> Anyway, is there an estimate on how many packages include release notes at all?

Interesting question--I've long struggled with how best to maintain a
human-readable changelog / release notes file. Part of it is social:
getting all contributors to enter the correct changelog entry for any
changes they submit. I've mostly gotten that hammered out for my main
projects. But managing a changelog for a project with multiple release
branches is also a bit tricky, and something I've been wanting to find
a better way to automate.

As for actual formats, I long ago adopted the format used by
zest.releaser [1], even though I'm not using it as much any more to
make my releases. I think the format it recognizes by default is all
valid ReST, and I've preferred to stick with that. I also try to link
every changelog entry to a bugtracker issue number in square brackets.
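Square-bracket issue markers of this kind are easy to post-process mechanically; this thread mentions a Sphinx plugin that does it for the Astropy changelog. A minimal stand-alone sketch, where the tracker URL pattern and the link markup are assumptions rather than that plugin's actual implementation:

```python
import re

# Hypothetical tracker URL pattern; substitute the project's real tracker.
ISSUE_URL = "https://github.com/example/project/issues/{num}"

def link_issue_markers(text):
    """Rewrite "[#1234]" changelog markers as inline ReST hyperlinks."""
    def repl(match):
        num = match.group(1)
        return "[`#{0} <{1}>`_]".format(num, ISSUE_URL.format(num=num))
    return re.sub(r"\[#(\d+)\]", repl, text)

entry = "- fix issue84: wheel names with underscores now work. [#84]"
print(link_issue_markers(entry))
```

Keeping the raw changelog in the plain "[#nnnn]" form means the file stays readable as text while remaining machine-parseable for renderers like this.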
So for the most part the format has been kept machine-parseable (in
fact we have a Sphinx plugin that automatically converts the "[#nnnn]"
issue number markers to links). See for example the Astropy changelog
[2].

Still, if anyone else has further thoughts on this topic I'd be
interested.

Erik

[1] https://github.com/zestsoftware/zest.releaser/blob/master/CHANGES.rst
[2] https://github.com/astropy/astropy/blob/master/CHANGES.rst

From richard at python.org  Tue Jul 15 11:22:04 2014
From: richard at python.org (Richard Jones)
Date: Tue, 15 Jul 2014 19:22:04 +1000
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net>
Message-ID:

Hi, I just want to note that I'm aware of this issue and I have "do
something about it" in my long TODO.

That link is malformed in any case - docutils just passes it on
through, and you're just lucky that browsers will guess that it is
supposed to have a "http://" scheme on the front. PyPI does indeed
check URL schemes in links for security reasons. I guess I never
thought that I'd need to allow a blank one - I'd have to think about
the implications of allowing it.

Richard

On 14 July 2014 17:30, Wichert Akkerman wrote:
> On 13 Jul 2014, at 18:04, Wichert Akkerman wrote:
>> I just uploaded a new pyramid_sqlalchemy package to PyPI. Looking at the distribution page at https://pypi.python.org/pypi/pyramid_sqlalchemy PyPI is not rendering ReST. I can't figure out why, though: according to both "python setup.py check" and "python setup.py --long-description | rst2html-2.7.py > /dev/null" there are no ReST syntax errors in my description. Is there any way to see why PyPI is not rendering my ReST?
>
> I ended up debugging this by manually bisecting the long description through the PyPI web-interface.
> The culprit turned out to be this bit:
>
>     `Pyramid `_
>
> This is perfectly valid ReST, but I am guessing PyPI somehow forbids you from using URLs without a scheme.
>
> Wichert.

From wichert at wiggy.net  Tue Jul 15 11:24:59 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Tue, 15 Jul 2014 11:24:59 +0200
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To:
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net>
Message-ID: <376BBB8A-A647-46D5-B073-20BE2E8C2CBA@wiggy.net>

> On 15 Jul 2014, at 11:22, Richard Jones wrote:
>
> Hi, I just want to note that I'm aware of this issue and I have "do something about it" in my long TODO.
>
> That link is malformed in any case - docutils just passes it on through, and you're just lucky that browsers will guess that it is supposed to have a "http://" scheme on the front. PyPI does indeed check URL schemes in links for security reasons. I guess I never thought that I'd need to allow a blank one - I'd have to think about the implications of allowing it.

I don't particularly mind PyPI refusing a URL without a scheme - that
is a perfectly reasonable precaution. The problem is that there is no
way at all to see why PyPI was refusing to render my ReST without
manually bisecting the long description through its web-interface.

Wichert.
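Since the failure mode identified in this thread is known (an embedded link target with no URL scheme), a local pre-flight can at least catch this one case before upload. A hedged sketch: the regex and the internal-target heuristic below are assumptions, not PyPI's actual sanitizer rules:

```python
import re
from urllib.parse import urlparse

# Matches embedded ReST link targets of the form `text <target>`_
LINK_TARGET = re.compile(r"`[^`<]*<([^>]+)>`_")

def schemeless_targets(rest_text):
    """Return embedded link targets that lack a URL scheme.

    A pre-flight for the failure mode in this thread: PyPI rejects
    link URLs without a scheme, while browsers silently guess http://.
    Skipping targets with no dot or slash (likely internal references)
    is a heuristic, not PyPI's actual rule set.
    """
    bad = []
    for target in LINK_TARGET.findall(rest_text):
        if urlparse(target).scheme:
            continue  # has an explicit scheme such as http/https
        if "." in target or "/" in target:
            bad.append(target)
    return bad

desc = ("See `Pyramid <www.pylonsproject.org>`_ and "
        "`the docs <https://example.org/docs>`_.")
print(schemeless_targets(desc))
```

Run over a package's long_description, anything this reports is worth giving an explicit http:// or https:// scheme before uploading.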
From richard at python.org  Tue Jul 15 11:55:56 2014
From: richard at python.org (Richard Jones)
Date: Tue, 15 Jul 2014 19:55:56 +1000
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <376BBB8A-A647-46D5-B073-20BE2E8C2CBA@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net> <376BBB8A-A647-46D5-B073-20BE2E8C2CBA@wiggy.net>
Message-ID:

Yep, I understand and sympathise with the frustration.

On 15 July 2014 19:24, Wichert Akkerman wrote:
>> On 15 Jul 2014, at 11:22, Richard Jones wrote:
>>
>> Hi, I just want to note that I'm aware of this issue and I have "do something about it" in my long TODO.
>>
>> That link is malformed in any case - docutils just passes it on through, and you're just lucky that browsers will guess that it is supposed to have a "http://" scheme on the front. PyPI does indeed check URL schemes in links for security reasons. I guess I never thought that I'd need to allow a blank one - I'd have to think about the implications of allowing it.
>
> I don't particularly mind PyPI refusing a URL without a scheme - that is a perfectly reasonable precaution. The problem is that there is no way at all to see why PyPI was refusing to render my ReST without manually bisecting the long description through its web-interface.
>
> Wichert.

From ncoghlan at gmail.com  Tue Jul 15 13:35:55 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 15 Jul 2014 06:35:55 -0500
Subject: [Distutils] PyPI changelog support, releases.json / common NEWS.rst format?
In-Reply-To:
References:
Message-ID:

On 14 Jul 2014 17:11, "Erik Bray" wrote:
>
> Still, if anyone else has further thoughts on this topic I'd be interested.
Twisted still has the most sophisticated approach to NEWS files I've
seen: https://twistedmatrix.com/trac/wiki/ReviewProcess#Newsfiles

The upcoming PEP 459 python.details extension includes the ability to
specifically identify the changelog file:
http://www.python.org/dev/peps/pep-0459/#document-names

Cheers,
Nick.

From holger at merlinux.eu  Tue Jul 15 15:12:19 2014
From: holger at merlinux.eu (holger krekel)
Date: Tue, 15 Jul 2014 13:12:19 +0000
Subject: [Distutils] devpi-2.0.0: web/search UI, replication, fixes
Message-ID: <20140715131219.GW7481@merlinux.eu>

devpi-2.0.0: web, search, replication for PyPI indexes
======================================================

The devpi system in version 2.0 brings tons of fixes and new features
for the private github-style pypi caching server, most notably:

- a new web interface featuring search of metadata and documentation,
  as well as easier navigation and showing of test results per
  release file.

- a new transactional storage system (based on sqlite) supporting
  real-time replication over http.

- a new (experimental) hook system for extending server-side
  functionality.

- ported to a solid web framework and wsgi-server: pyramid and
  waitress.

Upgrade note: devpi-server-2.0 requires you to ``--export`` your 1.2
server state and then ``--import`` it with the new version before you
can serve your private packages through devpi-server-2.0.0.
Also, please check out the web plugin if you want to have a web
interface::

    http://doc.devpi.net/2.0/web.html

Here is a quickstart tutorial for efficient pypi-mirroring on your
laptop::

    http://doc.devpi.net/2.0/quickstart-pypimirror.html

And if you want to manage your releases or implement staging as an
individual or within an organisation::

    http://doc.devpi.net/2.0/quickstart-releaseprocess.html

If you want to host a devpi-server installation with nginx/supervisor
and access it from clients from different hosts::

    http://doc.devpi.net/2.0/quickstart-server.html

More documentation here::

    http://doc.devpi.net/2.0/

many many thanks to Florian Schulze who implemented the new
``devpi-web`` package and helped with many other improvements.

have fun,
Holger Krekel, merlinux GmbH

2.0.0
-----

devpi-server:

- major revamp of the internal core of devpi to support replication
  (both master and server code) and a plugin architecture, with the
  new devpi-web plugin providing a new web interface. Mostly done by
  Florian Schulze and Holger Krekel.

- moved all html views except for files and the simple index to the
  new devpi-web package. Thanks to Florian Schulze for the PR.

- implement issue103: by default, if you register a package in an
  index, no lookup on pypi is made for that package anymore. You have
  to add the package to the pypi_whitelist of the index to let pypi
  releases be mixed in. This is to prevent malicious uploads on pypi
  from overwriting private packages.

- change the json api to get rid of the different meaning of URLs
  with and without a trailing slash. "/{user}/" is now the same as
  "/{user}" and always lists indices. "/{user}/{index}" and
  "/{user}/{index}/" now always list the index config and the
  contained per-stage projects (not inherited ones).

- switch the wsgi app to use Pyramid and waitress for WSGI serving.

- don't refresh releaselinks from the mirroring thread but rather
  rely on the next access to do it.
- fix issue98: deleting a project config or a project version now
  accepts names which map to the canonical name of a project.

- fix issue82 and fix issue81: root/pypi now provides the same
  attributes as normal indexes and results in a 409 MethodNotAllowed
  http code when trying to change the config.

- fix issue91: make serverport available as well. Thanks David Bonner.

- fix issue100: support large file uploads. As we switched away from
  bottle to pyramid, the body-size limit is gone.

- fix issue99: make "devpi-server --start" etc. work when devpi-server
  is not itself on PATH (by using sys.argv[0] for finding the binary).

- fix issue84: uploading of wheels where the registered package name
  has an underscore works despite a wheel's metadata carrying hyphens
  instead. At submit-file time we now look up the registered name and
  use that instead of assuming the one coming with the wheel is the
  correct one.

- add a refresh button on root/pypi project simple index pages which
  clears the internal cache to force a refetch from PyPI.

- implement issue75: we use the custom X-Devpi-Auth header for
  authentication now, instead of overwriting the Authentication
  header.

- added experimental support for using client certificates when
  running as a replica of a server running behind a proxy.

devpi-client:

- compatibility with devpi-server >= 2.0.0

- introduce the "patchjson PATH JSONFILE" command, which allows
  sending a request containing a json data structure to a specified
  path.

- fix issue85: "devpi list -v" now shows package names with latest
  versions.

- implement issue75: we use the custom X-Devpi-Auth header for
  authentication now, instead of overwriting the Authentication
  header.

- added experimental support for basic authentication by parsing user
  and password from the url given to the "devpi use" command.
- issue74: added experimental support for client-side certificates
  via "devpi use --client-cert".

devpi-web:

- initial release

From holger at merlinux.eu  Wed Jul 16 13:08:37 2014
From: holger at merlinux.eu (holger krekel)
Date: Wed, 16 Jul 2014 11:08:37 +0000
Subject: [Distutils] devpi-server 2.0.1 bugfixes
In-Reply-To: <20140715131219.GW7481@merlinux.eu>
References: <20140715131219.GW7481@merlinux.eu>
Message-ID: <20140716110837.GA7481@merlinux.eu>

Hi again,

a small follow-up release of devpi-server is out to fix the "setup.py"
register/upload commands with basic auth.

best,
holger

devpi-server-2.0.1
------------------

- fix a regression which caused basic authentication for the
  setuptools upload/register commands to fail. Thanks Florian Schulze.

- fix issue106: better error messages on upload failures, and better
  allow auto-registration when uploading release files.

From chris.jerdonek at gmail.com  Wed Jul 16 13:33:03 2014
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Wed, 16 Jul 2014 04:33:03 -0700
Subject: [Distutils] PyPI not rendering ReST long_description
In-Reply-To: <376BBB8A-A647-46D5-B073-20BE2E8C2CBA@wiggy.net>
References: <0AED3997-0CDC-43CD-9F39-CEF9EFA67C67@wiggy.net> <837447E9-5707-4157-8400-686DDE125FDC@wiggy.net> <376BBB8A-A647-46D5-B073-20BE2E8C2CBA@wiggy.net>
Message-ID:

On Tue, Jul 15, 2014 at 2:24 AM, Wichert Akkerman wrote:
>> On 15 Jul 2014, at 11:22, Richard Jones wrote:
>>
>> Hi, I just want to note that I'm aware of this issue and I have "do something about it" in my long TODO.
>>
>> That link is malformed in any case - docutils just passes it on through, and you're just lucky that browsers will guess that it is supposed to have a "http://" scheme on the front. PyPI does indeed check URL schemes in links for security reasons. I guess I never thought that I'd need to allow a blank one - I'd have to think about the implications of allowing it.
>
> I don't particularly mind PyPI refusing a URL without a scheme -
> that is a perfectly reasonable precaution. The problem is that there is no way at all to see why PyPI was refusing to render my ReST without manually bisecting the long description through its web-interface.

One possible interim solution is to add a "check ReST" form to PyPI's
web UI. The form could simply display the error output of running
PyPI's conversion to HTML on whatever text is given.

That would address Donald's point that it's not acceptable to fail an
upload, while still giving people *some* reasonable way to
troubleshoot failures. The existence of the validation form could be
documented in the PyPI upload docs.

--Chris

From jricjr3 at gmail.com  Wed Jul 16 05:43:42 2014
From: jricjr3 at gmail.com (Joshua Richardson)
Date: Tue, 15 Jul 2014 20:43:42 -0700
Subject: [Distutils] setup.py sdist not including my README.md
Message-ID:

Sorry for the noob question, but I've been banging my head on this all
afternoon.

Following some tutorial I found somewhere on PyPI, I created a setup.py
which included a line like this:

    # Get the long description from the relevant file
    with open(path.join(here, 'README.md'), encoding='utf-8') as f:
        long_description = f.read()

and then in the setup() function, I pass:

    long_description=long_description

Somehow this worked when I installed the package on my development box
(Mac with setuptools 5.1), so I prematurely published it to PyPI.
Unfortunately, when I tried installing it on a fresh CentOS box
(setuptools 0.6c5-2.el5), I got this error:

    Downloading/unpacking Dumper
      Downloading Dumper-1.0.1.tar.gz
      Running setup.py egg_info for package Dumper
        ....
        IOError: [Errno 2] No such file or directory: '/tmp/pip-build/Dumper/README.md'

So, I checked the tarball and saw that README.md is not in there. I
read the setuptools developer guide, and decided maybe I should try:

    include_package_data=True,

I re-ran "setup.py sdist" and checked the tarball, and the README.md
is still not in there. Ok, maybe because my source-control is git.
So, I added a MANIFEST.in containing one line: "README.md". Still no
luck. So, I added:

    package_data={
        '': ['README.md'],
    },

Still a no-go. Now I'm feeling stupid. SOS.

Thanks for reading this far! --josh

From vladimir.v.diaz at gmail.com  Wed Jul 16 14:00:14 2014
From: vladimir.v.diaz at gmail.com (Vladimir Diaz)
Date: Wed, 16 Jul 2014 08:00:14 -0400
Subject: [Distutils] setup.py sdist not including my README.md
In-Reply-To:
References:
Message-ID:

Have you tried adding the line "include README.md" to MANIFEST.in?
That should do the trick.

On Tue, Jul 15, 2014 at 11:43 PM, Joshua Richardson wrote:
> Sorry for the noob question, but I've been banging my head on this all afternoon.
>
> Following some tutorial I found somewhere on PyPI, I created a setup.py which included a line like this:
>
>     # Get the long description from the relevant file
>     with open(path.join(here, 'README.md'), encoding='utf-8') as f:
>         long_description = f.read()
>
> and then in the setup() function, I pass:
>
>     long_description=long_description
>
> Somehow this worked when I installed the package on my development box (Mac with setuptools 5.1), so I prematurely published it to PyPI. Unfortunately, when I tried installing it on a fresh CentOS box (setuptools 0.6c5-2.el5), I got this error:
>
>     IOError: [Errno 2] No such file or directory: '/tmp/pip-build/Dumper/README.md'
>
> So, I checked the tarball and saw that README.md is not in there. I read the setuptools developer guide, and decided maybe I should try include_package_data=True. I re-ran "setup.py sdist" and checked the tarball, and the README.md is still not in there. Ok, maybe because my source-control is git. So, I added a MANIFEST.in containing one line: "README.md". Still no luck.
So, > I added: > > package_data={ > > '': ['README.md'], > > }, > > > Still a no-go. Now I'm feeling stupid. SOS. > > > Thanks for reading this far! --josh > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Wed Jul 16 14:29:32 2014 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 16 Jul 2014 08:29:32 -0400 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: References: Message-ID: <53C6702C.2090103@trueblade.com> And don't forget to include MANIFEST.in in MANIFEST.in. On 7/16/2014 8:00 AM, Vladimir Diaz wrote: > Have you tried adding the line "include README.md" to MANIFEST.in? That > should do the trick. > > > On Tue, Jul 15, 2014 at 11:43 PM, Joshua Richardson > wrote: > > Sorry for the Noob question, but I've been banging my head on this > all afternoon. > > Following some tutorial I found somewhere on PyPI, I created a > setup.py which included a line like this: > > # Get the long description from the relevant file > > with open(path.join(here, 'README.md'), encoding='utf-8') as f: > > long_description = f.read() > > > and then in the setup() function, I pass > > > long_description=long_description > > > Somehow this worked, when I installed the package on my development > box (Mac with setuptools 5.1), so I prematurely published it to > PyPI. Unfortunately, when I tried installing it to a fresh CentOS > box (setuptools 0.6c5-2.el5), I got this error: > > > Downloading/unpacking Dumper > > Downloading Dumper-1.0.1.tar.gz > > Running setup.py egg_info for package Dumper > > .... > > IOError: [Errno 2] No such file or directory: > '/tmp/pip-build/Dumper/README.md' > > > So, I checked the tarball and saw that README.md is not in there. 
I > read the setuptools developer guide, and decided maybe I should try: > > include_package_data=True, > > > I re-ran "setup.py sdist" and checked the tarball, and the README.md > is still not in there. Ok, maybe because my source-control is git. > So, I added a MANIFEST.in, containing one line: "README.md". Still > no luck. So, I added: > > package_data={ > > '': ['README.md'], > > }, > > > Still a no-go. Now I'm feeling stupid. SOS. > > > Thanks for reading this far! --josh > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From jricjr3 at gmail.com Wed Jul 16 22:47:52 2014 From: jricjr3 at gmail.com (Joshua Richardson) Date: Wed, 16 Jul 2014 13:47:52 -0700 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: <53C6702C.2090103@trueblade.com> References: <53C6702C.2090103@trueblade.com> Message-ID: > And don't forget to include MANIFEST.in in MANIFEST.in. > > On 7/16/2014 8:00 AM, Vladimir Diaz wrote: > > Have you tried adding the line "include README.md" to MANIFEST.in? That > > should do the trick. It did do the trick! Thanks so much! Where should I submit a suggestion for improved documentation? There was no documentation about the MANIFEST.in file format at http://guide.python-distribute.org/creation.html or at https://pythonhosted.org/setuptools/setuptools.html#including-data-files . Reading the documentation at https://docs.python.org/2/distutils/sourcedist.html#manifest , I got the impression that I should list the raw file names (or patterns) without the include directive. Thanks, --josh -------------- next part -------------- An HTML attachment was scrubbed... 
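[Editorial note: to collect the thread's resolution in one place, here is a minimal sketch of the two files involved. The MANIFEST.in directives are the fix Vladimir and Eric describe; the setup.py helper follows Josh's pattern, except that the empty-string fallback for a missing README is an illustrative addition, not part of his original code.]

```python
# MANIFEST.in needs explicit directives, not bare filenames:
#
#     include README.md
#     include MANIFEST.in
#
# setup.py -- read the long description as in Josh's example, but fall
# back to an empty string so "setup.py egg_info" cannot crash on an
# sdist that accidentally omitted the README (the failure in this thread).
import io
import os


def read_long_description(path):
    """Return the README text, or "" if the file was left out of the
    sdist -- the failure mode hit on the CentOS box."""
    if not os.path.exists(path):
        return ""
    with io.open(path, encoding="utf-8") as f:
        return f.read()

# The result would then be passed to setup(), e.g.:
#     setup(name="Dumper", version="1.0.1",
#           long_description=read_long_description(
#               os.path.join(here, "README.md")))
```

Note that include_package_data and package_data only control files inside packages; they never pull a top-level README.md into the sdist, which is why only the MANIFEST.in directive worked.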
URL: From richard at python.org Thu Jul 17 10:35:03 2014 From: richard at python.org (Richard Jones) Date: Thu, 17 Jul 2014 10:35:03 +0200 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: References: <53C6702C.2090103@trueblade.com> Message-ID: Thanks for the feedback, Josh. The Python 3 version of the distutils documentation is far improved on this topic, I believe (though please, file a bug / change if you can improve it :) https://docs.python.org/3/distutils/sourcedist.html#manifest The setuptools documentation is part of the repository at https://bitbucket.org/pypa/setuptools/src in the docs directory, so you could file a bug against that project, or even submit a pull request to it if you're comfortable with that. I believe it would be sensible to redirect from the distribute documentation to setuptools... what does distutils-sig think? I also noticed that the setuptools project front page (aka the README) still promotes easy_install - should that be updated? Richard On 16 July 2014 22:47, Joshua Richardson wrote: > > And don't forget to include MANIFEST.in in MANIFEST.in. > > > > On 7/16/2014 8:00 AM, Vladimir Diaz wrote: > > > Have you tried adding the line "include README.md" to MANIFEST.in? > That > > > should do the trick. > > It did do the trick! Thanks so much! > > Where should I submit a suggestion for improved documentation? > > There was no documentation about the MANIFEST.in file format at > http://guide.python-distribute.org/creation.html or at > https://pythonhosted.org/setuptools/setuptools.html#including-data-files . > > Reading the documentation at > https://docs.python.org/2/distutils/sourcedist.html#manifest , I got the > impression that I should list the raw file names (or patterns) without the > include directive. 
> > Thanks, --josh > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.genest at ubisoft.com Thu Jul 17 18:04:07 2014 From: david.genest at ubisoft.com (David Genest) Date: Thu, 17 Jul 2014 16:04:07 +0000 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows Message-ID: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Hi, In our environment, we are using the scripts parameter to package external binary files, which is really a practical use case: 1) it allows for bundling of executables in our package python distribution (with pertaining dlls etc) 2) the binary is available in an activated virtualenv For the install case, everything is ok, the binaries are correctly installed. But for the develop command, setuptools garbles the binary by copying the binary file opened in text mode and writing it in binary mode. This is because of the line ending translation. I implemented a naive fix, only to be surprised by the python3 unicode management, and caused a regression. (see https://bitbucket.org/pypa/setuptools/issue/210/on-windows-binary-files-specified-as) The less naive approach to the problem was implemented in https://bitbucket.org/pypa/setuptools/pull-request/60/fix-regression-in-issue-218/diff but the pull request was rejected. But the problem still stands. How can I use the develop command and not have my binaries specified as scripts be garbled. Jason R. 
Coombs argued that this change would add undocumented special casing, but I am inclined to disagree because the use case works: * on *nix platforms (because the open function does not translate line endings (the "b" mode has no effect on this platform)) * it already works when we go through the install flow (on every platform) * there is a discrepancy between install and develop semantics, for the scripts parameter I would like to know what is the take in the community on this. Note that this is the first time I contribute a change, so I may not be up to speed with all the dos and don'ts. Thanks. David. From ncoghlan at gmail.com Thu Jul 17 20:38:48 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 17 Jul 2014 14:38:48 -0400 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: References: <53C6702C.2090103@trueblade.com> Message-ID: On 17 Jul 2014 05:28, "Richard Jones" wrote: > > Thanks for the feedback, Josh. > > The Python 3 version of the distutils documentation is far improved on this topic, I believe (though please, file a bug / change if you can improve it :) > > https://docs.python.org/3/distutils/sourcedist.html#manifest > > The setuptools documentation is part of the repository at https://bitbucket.org/pypa/setuptools/src in the docs directory, so you could file a bug against that project, or even submit a pull request to it if you're comfortable with that. > > I believe it would be sensible to redirect from the distribute documentation to setuptools... what does distutils-sig think? I also noticed that the setuptools project front page (aka the README) still promotes easy_install - should that be updated? I believe Marcus Smith is trying to coordinate efforts to bring the docs of the various PyPA projects up to date and providing a consistent message. Another part of that is getting the setuptools docs to a point where they stand alone, without depending on folks also going to read parts of the distutils docs. 
It's all tricky to prioritise though - "providing a better experience for new users" tends to be a much weaker motivation for volunteer effort than "make the tooling better for me personally". Cheers, Nick. > > > Richard > > > On 16 July 2014 22:47, Joshua Richardson wrote: >> >> > And don't forget to include MANIFEST.in in MANIFEST.in. >> > >> > On 7/16/2014 8:00 AM, Vladimir Diaz wrote: >> > > Have you tried adding the line "include README.md" to MANIFEST.in? That >> > > should do the trick. >> >> It did do the trick! Thanks so much! >> >> Where should I submit a suggestion for improved documentation? >> >> There was no documentation about the MANIFEST.in file format at http://guide.python-distribute.org/creation.html or at https://pythonhosted.org/setuptools/setuptools.html#including-data-files . >> >> Reading the documentation at https://docs.python.org/2/distutils/sourcedist.html#manifest , I got the impression that I should list the raw file names (or patterns) without the include directive. >> >> Thanks, --josh >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Jul 17 21:09:58 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 17 Jul 2014 20:09:58 +0100 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows In-Reply-To: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> References: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 17 July 2014 17:04, David Genest wrote: > I would like to know what is the take in the community on this. 
My view would be: 1. The scripts argument is for *scripts* not binary files, so it is perfectly correct to open them in text mode, do line ending translation, etc. 2. What you're doing is not supported behaviour (although I appreciate the fact that it's useful to you). 3. The scripts feature is generally discouraged in favour of entry points. 4. By including binaries, you make your package non-portable, and do so in a way that setuptools/wheel/pip cannot spot, so (for example) if you were to build a wheel it would not be recognised as architecture-specific. You can probably include a project-specific hack if you're so inclined, but this shouldn't be added as a feature of setuptools. Longer term, maybe your use case is something that we could support via Metadata 2.0. I'm honestly not sure at the moment, as your description isn't very specific. What "external binary files" do you package? If they are DLLs (as you mention) how would they work on Unix? I'm a bit confused here, as you mention the script handling being OK for you on Unix, and yet you're talking about DLLs, which are for Windows... If this is intended to be a single-platform solution, how do you anticipate communicating that fact to the build and install tools (wheel, pip, etc)? Thanks for raising the issue, it sounds like you have a use case that isn't well supported at the moment. But I agree with Jason that this shouldn't be forced into the scripts functionality, which is, after all, intended for *scripts*. Paul From ncoghlan at gmail.com Thu Jul 17 21:50:03 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 17 Jul 2014 15:50:03 -0400 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows In-Reply-To: References: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 17 Jul 2014 15:10, "Paul Moore" wrote: > > Longer term, maybe your use case is something that we could support > via Metadata 2.0. 
For the record, the current draft of the python.commands extension in PEP 459 does indeed include support for reporting prebuilt commands: http://www.python.org/dev/peps/pep-0459/#the-python-commands-extension The draft metadata extensions haven't really been properly reviewed yet, though. For pip 1.6, Donald and I have been focused on hammering PEP 440 into shape. IIRC, we had one outstanding issue to resolve before bringing that back to the list, but travel has currently fried my brain, so I don't recall off the top of my head what it was. Cheers, Nick. > I'm honestly not sure at the moment, as your > description isn't very specific. What "external binary files" do you > package? If they are DLLs (as you mention) how would they work on > Unix? I'm a bit confused here, as you mention the script handling > being OK for you on Unix, and yet you're talking about DLLs, which are > for Windows... If this is intended to be a single-platform solution, > how do you anticipate communicating that fact to the build and install > tools (wheel, pip, etc)? > > Thanks for raising the issue, it sounds like you have a use case that > isn't well supported at the moment. But I agree with Jason that this > shouldn't be forced into the scripts functionality, which is, after > all, intended for *scripts*. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.genest at ubisoft.com Thu Jul 17 22:11:48 2014 From: david.genest at ubisoft.com (David Genest) Date: Thu, 17 Jul 2014 20:11:48 +0000 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows In-Reply-To: References: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: <85ecc044ea244007bc7e6a70c6a7d9d4@MSR-MAIL-EXCH01.ubisoft.org> > On 17 July 2014 17:04, David Genest wrote: > > I would like to know what is the take in the community on this. > > My view would be: > > 1. The scripts argument is for *scripts* not binary files, so it is perfectly > correct to open them in text mode, do line ending translation, etc. I see that. But on windows, beside naming, the scripts argument is used to populate the Scripts directory (which are not only scripts, but sometimes setuptools wrappers - clearly executable). Cross-platform can be achieved by bundling unix and windows variants at setup.py time. > 2. What you're doing is not supported behaviour (although I appreciate the > fact that it's useful to you). It is de-facto. We have been using this "functionality" to package our windows only binary executables in an egg (using install). Wheels are supported also (but I agree, pip or wheel cannot infer the platform). > 3. The scripts feature is generally discouraged in favour of entry points. > 4. By including binaries, you make your package non-portable, and do so in a > way that setuptools/wheel/pip cannot spot, so (for example) if you were to > build a wheel it would not be recognised as architecture-specific. I agree that they would not be portable, but in a Windows only situation, this does not pose a problem. I'm not using the feature to publish a cross-platform solution, it is used to publish a package in-house. But the current setuptools tooling is perfectly adapted to this. Also, the binary delivery mechanism is a needed feature, especially on windows. 
The current API permits its use now (on unix, or more precisely, platforms where the 'b' mode has no effect when opening files). No checks are being performed. Why should it be different on Windows? > very specific. What "external binary files" do you package? If they are DLLs In our situation, it is an executable embedding python, but it could be any binary auxiliary executable useful for the package. Dlls are only another binary example of what is needed by the package (and loadable by windows because on the path). > (as you mention) how would they work on Unix? I'm a bit confused here, as > you mention the script handling being OK for you on Unix, and yet you're > talking about DLLs, which are for Windows... If this is intended to be a single- > platform solution, how do you anticipate communicating that fact to the build > and install tools (wheel, pip, etc)? I was probably too vague. I meant that the existing functionality supports binary files on unix, albeit not in a cross-platform way. In other words, if you specify a binary file in scripts, it will be copied over without being broken. It is not the case on Windows. I am not up to date with metadata, but I agree that it should be communicated in the tools. > > Thanks for raising the issue, it sounds like you have a use case that isn't well > supported at the moment. But I agree with Jason that this shouldn't be > forced into the scripts functionality, which is, after all, intended for *scripts*. Acknowledging that the current script solution is not for cross-platform transportation of binaries, the problem is that it is difficult to package arbitrary binary executables that are available in a virtualenv. Helper executables, or any other auxiliary executables. Right now, AFAIK, on python2 (and python3) it is not possible to publish a binary executable by other means than the script parameter. Knowing that the install works right now, I would think that the develop command should at least behave the same. 
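[Editorial note: the corruption David describes is easy to reproduce outside setuptools. The sketch below is an illustration, not setuptools code: a binary-mode copy round-trips a payload containing \r\n bytes intact, while a text-mode read — roughly what the develop command's script processing performed — applies universal-newline translation and silently alters the bytes.]

```python
import io

# A fake "binary" payload containing \r\n sequences, like a real executable.
PAYLOAD = b"MZ\x90\x00\r\n\x1a\n\x00fake-binary-payload\r\n"


def copy_binary_safe(src, dst):
    # What `install` effectively does: a byte-for-byte copy.
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(fin.read())


def copy_text_mode(src, dst):
    # Roughly what `develop` did to the script: a text-mode read with
    # universal newlines collapses \r\n to \n; on Windows the text-mode
    # write side re-expands \n to \r\n, garbling the file further.
    # latin-1 is used only so every byte value decodes losslessly.
    with io.open(src, "r", encoding="latin-1") as fin:
        data = fin.read()
    with io.open(dst, "w", encoding="latin-1", newline="") as fout:
        fout.write(data)
```

Running both copies over the same file shows the binary-safe copy matching the original while the text-mode copy differs, which is exactly the install-vs-develop discrepancy discussed here.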
David. From david.genest at ubisoft.com Thu Jul 17 22:15:35 2014 From: david.genest at ubisoft.com (David Genest) Date: Thu, 17 Jul 2014 20:15:35 +0000 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows In-Reply-To: References: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: > For the record, the current draft of the python.commands extension in PEP 459 does indeed include support for reporting prebuilt > commands: http://www.python.org/dev/peps/pep-0459/#the-python-commands-extension > The draft metadata extensions haven't really been properly reviewed yet, though. For pip 1.6, Donald and I have been focused on > hammering PEP 440 into shape. I agree that this would help, and in fact solve the problem. But only in newer python releases. From ncoghlan at gmail.com Thu Jul 17 22:21:44 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 17 Jul 2014 16:21:44 -0400 Subject: [Distutils] setuptools develop command garbles binary files specified as scripts setup() parameter on windows In-Reply-To: References: <6333269e3e26426a9a76e625877279f8@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 17 Jul 2014 16:15, "David Genest" wrote: > > > For the record, the current draft of the python.commands extension in PEP 459 does indeed include support for reporting prebuilt > > commands: http://www.python.org/dev/peps/pep-0459/#the-python-commands-extension > > The draft metadata extensions haven't really been properly reviewed yet, though. For pip 1.6, Donald and I have been focused on > > hammering PEP 440 into shape. > > I agree that this would help, and in fact solve the problem. But only in newer python releases. No, the focus of packaging efforts has moved to pip and setuptools specifically to avoid that problem - when metadata 2.0 is adopted, it will be supported at least as far back as Python 2.7, and likely on 2.6 as well. 
The only requirement will be to use setuptools (or another metadata 2.0 compatible build system) and pip (or another metadata 2.0 compatible installer) rather than using distutils directly. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wichert at wiggy.net Fri Jul 18 09:06:51 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 18 Jul 2014 09:06:51 +0200 Subject: [Distutils] 503 backend read error on PyPI Message-ID: <1F7286CB-650F-43BD-A95A-C11AC2EBF705@wiggy.net> Just seen on https://pypi.python.org/pypi from a Dutch IP: Error 503 backend read error backend read error Guru Mediation: Details: cache-iad2132-IAD 1405663962 2954825964 Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wichert at wiggy.net Fri Jul 18 09:14:38 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 18 Jul 2014 09:14:38 +0200 Subject: [Distutils] 503 backend read error on PyPI In-Reply-To: <1F7286CB-650F-43BD-A95A-C11AC2EBF705@wiggy.net> References: <1F7286CB-650F-43BD-A95A-C11AC2EBF705@wiggy.net> Message-ID: <587D06DE-DB84-4649-A731-0F164D10D665@wiggy.net> On 18 Jul 2014, at 09:06, Wichert Akkerman wrote: > Just seen on https://pypi.python.org/pypi from a Dutch IP: > > Error 503 backend read error > > backend read error > > Guru Mediation: > > Details: cache-iad2132-IAD 1405663962 2954825964 > This seems to happen consistently after a 30 second timeout. The Guru Meditation error style seems to vary; I just got "XID: 1384941053?. Interestingly deleting my cookies seems to have fixed the errors. Wichert. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wichert at wiggy.net Fri Jul 18 09:16:12 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 18 Jul 2014 09:16:12 +0200 Subject: [Distutils] 503 backend read error on PyPI In-Reply-To: <587D06DE-DB84-4649-A731-0F164D10D665@wiggy.net> References: <1F7286CB-650F-43BD-A95A-C11AC2EBF705@wiggy.net> <587D06DE-DB84-4649-A731-0F164D10D665@wiggy.net> Message-ID: On 18 Jul 2014, at 09:14, Wichert Akkerman wrote: > This seems to happen consistently after a 30 second timeout. The Guru Meditation error style seems to vary; I just got "XID: 1384941053". > > Interestingly deleting my cookies seems to have fixed the errors. I managed to dig up the cookies that caused PyPI to break for me: Cookie:_gauges_unique_year=1; _gauges_unique=1; pypi=4abaf8fe55319bc68a63b5e4a97911e0 Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Mon Jul 21 14:09:26 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Mon, 21 Jul 2014 14:09:26 +0200 Subject: [Distutils] Packaging meetup @ EuroPython 2014 Message-ID: I've set the time for the packaging meetup to 11:00 this Thursday. There is currently no meeting room available for such a gathering, so we'll just meet outside in the garden. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Tue Jul 22 07:36:51 2014 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 21 Jul 2014 22:36:51 -0700 Subject: [Distutils] Installing setuptools on an un-networked computer In-Reply-To: References: Message-ID: you can put "get-pip.py" and wheels (*.whl files from pypi) for pip and setuptools on your thumb drive and then do python get-pip.py --no-index --find-links=/path/to/wheels this will install pip and setuptools. 
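[Editorial note: as a rough illustration of Marcus's recipe, here is a hypothetical helper — not part of pip or get-pip.py — that sanity-checks the thumb-drive directory for the two wheels before assembling the offline command line.]

```python
import os


def offline_bootstrap_command(wheel_dir, get_pip="get-pip.py"):
    """Build the offline install command Marcus describes, after a rough
    check that wheels for pip and setuptools are present in wheel_dir."""
    names = os.listdir(wheel_dir)
    for project in ("pip", "setuptools"):
        # Crude prefix check: wheel filenames start "<project>-<version>-".
        if not any(n.startswith(project + "-") and n.endswith(".whl")
                   for n in names):
            raise RuntimeError("no %s wheel found in %s" % (project, wheel_dir))
    return ["python", get_pip, "--no-index", "--find-links=" + wheel_dir]
```

The returned list could be handed to subprocess.call() on the un-networked box; --no-index stops pip from contacting PyPI, and --find-links points it at the local wheels instead.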
using options to get-pip.py is mentioned here in the pip docs: https://pip.pypa.io/en/latest/installing.html#install-pip On Wed, Jul 2, 2014 at 6:04 AM, Peter Ganong wrote: > Hi, > > I am doing research with confidential data on a Windows 7 computer not > connected to the Internet. I can bring files in on a thumb drive. How can I > install setuptools? To be clear, the issue is not the one answered in the > help file here > , > but is how to install setuptools itself on an unnetworked drive. > > Thanks, > > Peter > > -- > Peter Ganong > PhD Candidate in Economics at Harvard > scholar.harvard.edu/ganong/ > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Tue Jul 22 08:02:10 2014 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 21 Jul 2014 23:02:10 -0700 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: References: <53C6702C.2090103@trueblade.com> Message-ID: > > > I believe Marcus Smith is trying to coordinate efforts to bring the docs > of the various PyPA projects up to date and providing a consistent message. > Another part of that is getting the setuptools docs to a point where they > stand alone, without depending on folks also going to read parts of the > distutils docs. > I hesitate to mention it yet, but I do have a 1/2 done refactor of the setuptools docs that will include a master reference of *all* keywords and subcommands to prevent having to hop around between setuptools and distutils. Marcus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Tue Jul 22 13:30:50 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 Jul 2014 21:30:50 +1000 Subject: [Distutils] setup.py sdist not including my README.md In-Reply-To: References: <53C6702C.2090103@trueblade.com> Message-ID: On 22 July 2014 16:02, Marcus Smith wrote: >> >> I believe Marcus Smith is trying to coordinate efforts to bring the docs >> of the various PyPA projects up to date and providing a consistent message. >> Another part of that is getting the setuptools docs to a point where they >> stand alone, without depending on folks also going to read parts of the >> distutils docs. > > I hesitate to mention it yet, but I do have a 1/2 done refactor of the > setuptools docs that will include a master reference of *all* keywords and > subcommands to prevent having to hop around between setuptools and > distutils. Woohoo! Just knowing you're working on it is helpful - it's one of the big items on my wishlist that I couldn't even consider writing myself, since I don't know setuptools *or* distutils anywhere near well enough :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From doko at ubuntu.com Wed Jul 23 15:26:01 2014 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 23 Jul 2014 15:26:01 +0200 Subject: [Distutils] Packaging meetup @ EuroPython 2014 In-Reply-To: References: Message-ID: <53CFB7E9.8010204@ubuntu.com> Am 21.07.2014 14:09, schrieb Richard Jones: > I've set the time for the packaging meetup to 11:00 this Thursday. There is > currently no meeting room available for such a gathering, so we'll just > meet outside in the garden. hi, I'll be there if I can get in (not attending EuroPython myself). please could somebody send a more concrete address besides "in the garden"? 
;) thanks, Matthias From r1chardj0n3s at gmail.com Wed Jul 23 16:45:36 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 23 Jul 2014 16:45:36 +0200 Subject: [Distutils] Packaging meetup @ EuroPython 2014 In-Reply-To: <53CFB7E9.8010204@ubuntu.com> References: <53CFB7E9.8010204@ubuntu.com> Message-ID: Hi Matthias, The garden is on the inside of the venue. Unfortunately there isn't really any place outside that's even nearby that we could meet. I will discuss with the EuroPython organisers about whether we can get you in for the meeting. Richard On 23 July 2014 15:26, Matthias Klose wrote: > Am 21.07.2014 14:09, schrieb Richard Jones: > > I've set the time for the packaging meetup to 11:00 this Thursday. There > is > > currently no meeting room available for such a gathering, so we'll just > > meet outside in the garden. > > hi, I'll be there if I can get in (not attending EuroPython myself). please > could somebody send a more concrete address besides "in the garden"? ;) > > thanks, Matthias > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed Jul 23 19:08:35 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 23 Jul 2014 19:08:35 +0200 Subject: [Distutils] PEP 470 discussion, part 3 Message-ID: I have been mulling over PEP 470 for some time without having the time to truly dedicate to addressing it. I believe I'm up to date with its contents and the (quite significant, and detailed) discussion around it. To summarise my understanding, PEP 470 proposes to remove the current link spidering (pypi-scrape, pypi-scrape-crawl) while retaining explicit hosting (pypi-explicit). I believe it retains the explicit links to external hosting provided by pypi-explicit. The reason given for this change is the current bad user experience around the --allow-external and --allow-unverified options to pip install. 
That is, that users currently attempt to install a non-pypi-explicit package and the result is an obscure error message. I believe the current PEP addresses the significant usability issues around this by swapping them for other usability issues. In fact, I believe it will make matters worse with potential confusion about which index hosts what, potential masking of release files or even, in the worst scenario, potential spoofing of release files by indexes out of the control of project owners. I would like us to consider instead working on the usability of the existing workflow, by rather than throwing an error, we start a dialog with the user: $ pip install PIL Downloading/unpacking PIL PIL is hosted externally to PyPI. Do you still wish to download it? [Y/n] y PIL has no checksum. Are you sure you wish to download it? [Y/n] y Downloading/unpacking PIL Downloading PIL-1.1.7.tar.gz (506kB): 506kB downloaded ... Obviously this would require scraping the site, but given that this interaction would only happen for a very small fraction of projects (those for which no download is located), the overall performance hit is negligible. The PEP currently states that this would be a "massive performance hit" for reasons I don't understand. The two prompts could be made automatic "y" responses for tools using the existing --allow-external and --allow-unverified flags. I also note that PEP 470 says "PEP 438 proposed a system of classifying file links as either internal, external, or unsafe", whereas PEP 438 has no mention of "unsafe". This leads "unsafe" to never actually be defined anywhere that I can see. 
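[Editorial note: the interactive flow Richard sketches above could be prototyped with something like the following. This is hypothetical code, not pip's; the allow_external / allow_unverified flags stand in for the existing command-line options, and the injectable `ask` parameter exists only to make the sketch testable.]

```python
def confirm(question, assume_yes=False, ask=None):
    """Ask a [Y/n] question; an empty answer defaults to yes."""
    if assume_yes:
        return True
    if ask is None:
        try:
            ask = raw_input  # Python 2
        except NameError:
            ask = input      # Python 3
    return ask("%s [Y/n] " % question).strip().lower() in ("", "y", "yes")


def should_download(name, external, verified,
                    allow_external=False, allow_unverified=False, ask=None):
    """The two-step dialog from Richard's example: prompt for an external
    host, then for a missing checksum; the flags pre-answer with "y"."""
    if external and not confirm(
            "%s is hosted externally to PyPI. Do you still wish to "
            "download it?" % name, allow_external, ask):
        return False
    if not verified and not confirm(
            "%s has no checksum. Are you sure you wish to download "
            "it?" % name, allow_unverified, ask):
        return False
    return True
```

A package hosted on PyPI with a checksum would trigger neither prompt, so the dialog only appears for the small fraction of projects the proposal is concerned with.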
Finally, the Rejected Proposals section of the PEP appears to have a couple of justifications for rejection which have nothing whatsoever to do with the Rationale ("PyPI is fronted by a globally distributed CDN...", "PyPI supports mirroring...") As Holger has already indicated, that second one is going to have a heck of a time dealing with PEP 470 changes at least in the devpi case. "PyPI has monitoring and an on-call rotation of sysadmins..." would be solved through improving the failure message reported to the user as discussed above. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 23 20:18:58 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 23 Jul 2014 14:18:58 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On July 23, 2014 at 1:09:00 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: I have been mulling over PEP 470 for some time without having the time to truly dedicate to addressing it. I believe I'm up to date with its contents and the (quite significant, and detailed) discussion around it. To summarise my understanding, PEP 470 proposes to remove the current link spidering (pypi-scrape, pypi-scrape-crawl) while retaining explicit hosting (pypi-explicit). I believe it retains the explicit links to external hosting provided by pypi-explicit. No, it removes pypi-explicit as well, leaving only files hosted on PyPI. On top of that it adds a new functionality where project authors can indicate that their files are hosted on a non PyPI index. This allows tooling to indicate to users that they need to add additional indexes to their install commands in order to install something, as well as allowing PyPI to still act as a central authority for naming without forcing people to upload to PyPI. The reason given for this change is the current bad user experience around the --allow-external and --allow-unverified options to pip install. 
That is, that users currently attempt to install a non-pypi-explicit package and the result is an obscure error message.

That's part of the bad UX; the other part is that users are not particularly aware of the difference between an external vs an unverified link (in fact many people involved in packaging were not aware until it was explained to them by me, the difference is subtle). Part of the problem is that while it's easy for *tooling* to determine the difference between external and unverified, for a human it requires inspecting the actual HTML of the /simple/ page.

I believe the current PEP addresses the significant usability issues around this by swapping them for other usability issues. In fact, I believe it will make matters worse with potential confusion about which index hosts what, potential masking of release files or even, in the worst scenario, potential spoofing of release files by indexes out of the control of project owners.

So that's a potential problem with any multi-index scheme, yes. However I do not believe these are serious problems. It is a model that is in use by every Linux vendor ever, and anyone who has ever used Linux (or most of the various BSDs) is already familiar with it. On top of that, it is something that end users would need to be aware of if they want to use a private index, or they want to install commercial software that has a restricted index, or any number of other situations. In other words, multiple indexes don't go away; they will always be there. The effect of PEP 438 is that users need to be aware of *two* different ways of installing things not hosted on PyPI instead of just one. These two concepts instead of one are another part of the bad UX inflicted by PEP 438. The Zen states that there should be one way to do something, and I think that is a good thing to strive for.
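As a sketch of that tooling-side classification (the hostnames and hash-fragment prefixes here are assumptions for illustration, not pip's real logic):

```python
from urllib.parse import urlparse

def classify_link(url, from_simple_page=True):
    """Classify a release file link as internal, external, or unsafe.

    Hypothetical sketch: 'internal' means hosted by PyPI itself,
    'external' means linked directly from /simple/ with an acceptable
    hash fragment, and everything else (no hash, or discovered via a
    rel="homepage"/rel="download" page) counts as 'unsafe'.
    """
    parsed = urlparse(url)
    on_pypi = parsed.netloc in ("pypi.python.org", "pypi.org")
    has_hash = parsed.fragment.startswith(("md5=", "sha1=", "sha256="))
    if on_pypi:
        return "internal"
    if from_simple_page and has_hash:
        return "external"
    return "unsafe"
```

The point about humans is visible here: the distinction hinges on the link's host, hash fragment, and where it was found, none of which is obvious without reading the page source.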
I would like us to consider instead working on the usability of the existing workflow, where, rather than throwing an error, we start a dialog with the user:

$ pip install PIL
Downloading/unpacking PIL
  PIL is hosted externally to PyPI. Do you still wish to download it? [Y/n] y
  PIL has no checksum. Are you sure you wish to download it? [Y/n] y
Downloading/unpacking PIL
  Downloading PIL-1.1.7.tar.gz (506kB): 506kB downloaded
...

Obviously this would require scraping the site, but given that this interaction would only happen for a very small fraction of projects (those for which no download is located), the overall performance hit is negligible. The PEP currently states that this would be a "massive performance hit" for reasons I don't understand.

It's a big performance hit because we can't just assume that if there is a download located on PyPI that there is not a better download hosted externally. So in order to actually do this accurately we must scan any URL we locate in order to build up an entire list of all the potential files, and then ask if the person wants to download it. For a rough indication of the difference, I can scan all of PyPI looking for potential release files in about 20 minutes if I restrict myself to only things hosted directly on PyPI. If I include the additional scanning then that time jumps up to 3-4 hours. That's what, 13x slower? And that's with an incredibly aggressive timeout and a blacklist to only try bad hosts once.

The two prompts could be made automatic "y" responses for tools using the existing --allow-external and --allow-unverified flags. I also note that PEP 470 says "PEP 438 proposed a system of classifying file links as either internal, external, or unsafe", whereas PEP 438 has no mention of "unsafe". This leads "unsafe" to never actually be defined anywhere that I can see.

I can define them in the PEP, but basically:

* internal - Things hosted by PyPI itself.
* external - Things hosted off of PyPI, but linked directly from the /simple/ page with an acceptable hash.
* unsafe - Things hosted off of PyPI, either linked directly from the /simple/ page *without* an acceptable hash, or things hosted on a page which is linked from a rel="homepage" or rel="download" link.

Finally, the Rejected Proposals section of the PEP appears to have a couple of justifications for rejection which have nothing whatsoever to do with the Rationale ("PyPI is fronted by a globally distributed CDN...", "PyPI supports mirroring..."). As Holger has already indicated, that second one is going to have a heck of a time dealing with PEP 470 changes, at least in the devpi case.

PEP 381 mirroring will require zero changes to deal with the proposed change since it explicitly requires that the mirror client download the HTML of the /simple/ page and serve it unmodified. If devpi requires changes, that is because it does not follow the documented protocol. Those additional justifications are why we need a much clearer line between what is available on the PyPI repository and what is available elsewhere. They are why we can't just eliminate the ``--allow-external`` case (which is safe, but has availability and speed concerns).

"PyPI has monitoring and an on-call rotation of sysadmins..." would be solved through improving the failure message reported to the user as discussed above.

We can't have better failure messages because we don't have any idea if a particular URL is expected to be up or if it has bit-rotted to death and thus is an expected failure. Because of this, pip has to more or less silently ignore failing URLs and ends up presenting very confusing error messages.

Forgive me if these don't make sense, I'm real sick today.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Thu Jul 24 00:25:38 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Jul 2014 08:25:38 +1000 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

On 24 Jul 2014 03:09, "Richard Jones" wrote:
> I believe the current PEP addresses the significant usability issues around this by swapping them for other usability issues. In fact, I believe it will make matters worse with potential confusion about which index hosts what, potential masking of release files or even, in the worst scenario, potential spoofing of release files by indexes out of the control of project owners.

Donald covered most points I would have made in his reply, but I do have a couple of additions specifically on this point:

a) For private indexes, being able to override upstream is a feature, not a bug
b) Categorically preventing spoofing is what end-to-end signing is for

pip's own existing multiple index support is what makes devpi and its concept not only of private indexes, but also separate dev, staging and production indexes, possible. PEP 470 proposes to make some small enhancements to the multiple index support in order to allow subsequent deprecation and removal of the complicated and largely redundant link spidering system.

From a usability perspective, Ubuntu PPAs (Personal Package Archives, where users can easily host custom repos on Launchpad) have proved enormously popular, and Fedora has now adopted a similar model with its COPR RPM building and yum repo hosting service. conda also uses channel selection as a way of determining what packages are available.

Cheers, Nick.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Thu Jul 24 01:20:17 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 23 Jul 2014 19:20:17 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

On July 23, 2014 at 6:27:31 PM, Nick Coghlan (ncoghlan at gmail.com) wrote:

a) For private indexes, being able to override upstream is a feature, not a bug
b) Categorically preventing spoofing is what end-to-end signing is for

I forgot to mention that you basically need to trust the maintainers of the packages you choose to install anyway. Even if we don't use multi-index, it's trivial for a package to masquerade as another one. In metadata 2.0, even with package signing, you end up where I can have you install "django-foobar" which depends on "FakeDjango", which provides "Django", and then for all intents and purposes you have a "Django" package installed. The point being, we can't rely on the index ACLs to protect a user who has elected to install something that does something bad. The authors of a package that the user has opted to install *are not* in the threat model.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Thu Jul 24 10:55:40 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 24 Jul 2014 10:55:40 +0200 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

Thanks for responding, even from your sick bed.

This message about users having to view and understand /simple/ indexes is repeated many times. I didn't have to do that in the case of PIL. The tool told me "use --allow-external PIL to allow" and then when that failed it told me "use --allow-unverified PIL to allow". There was no needing to understand why, nor any reading of /simple/ indexes.
Currently most users (I'm thinking of people who install PIL once or twice) don't need to edit configuration files, and with a modification we could make the above process interactive. Those ~3000 packages that have internal and external packages would be slow, yes. This PEP proposes a potentially confusing break for both users and packagers. In particular, during the transition there will be packages which just disappear as far as users are concerned. In those cases users will indeed need to learn that there is a /simple/ page and they will need to view it in order to find the URL to add to their installation invocation in some manner. Even once install tools start supporting the new mechanism, users who lag (which as we all know are the vast majority) will run into this. On the devpi front: indeed it doesn't use the mirroring protocol because it is not a mirror. It is a caching proxy that uses the same protocols as the install tools to obtain, and then cache the files for install. Those files are then presented in a single index for the user to use. There is no need for multi-index support, even in the case of having multiple staging indexes. There is a need for devpi to be able to behave just like an installer without needing intervention, which I believe will be possible in this proposal as it can automatically add external indexes as it needs to. I talked to a number of people last night and I believe the package spoofing concept is also a vulnerability in the Linux multi-index model (where an external index provides an "updated release" of some core package like libssl on Linux, or perhaps requests in Python land). As I understand it, there is no protection against this. Happy to be told why I'm wrong, of course :) Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vladimir.v.diaz at gmail.com Thu Jul 24 12:40:45 2014 From: vladimir.v.diaz at gmail.com (Vladimir Diaz) Date: Thu, 24 Jul 2014 06:40:45 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

"In metadata 2.0 even with package signing you end up where I can have you install 'django-foobar' which depends on 'FakeDjango', which provides 'Django', and then for all intents and purposes you have a 'Django' package installed."

Can you go into more detail? Particularly, the part where "FakeDjango" provides Django.

Richard Jones mentions the case where an external index provides an "updated release" and tricks the updater into installing a compromised "Django." Is this the same thing?

On Thu, Jul 24, 2014 at 4:55 AM, Richard Jones wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From r1chardj0n3s at gmail.com Thu Jul 24 13:26:09 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 24 Jul 2014 13:26:09 +0200 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

Even ignoring the malicious possibility, there is probably a greater chance of accidental mistakes:

- company sets up internal index using pip's multi-index support and hosts various modules
- someone quite innocently uploads something with the same name, newer version, to PyPI
- company installs now use that unknown code

devpi avoids this (I would recommend it over multi-index for companies anyway) by having a whitelist system for packages that might be pulled from upstream that would clash with internal packages. As Nick mentioned, a signing infrastructure - tied to the index registration of a name - could solve this problem.

There still remains the usability issue of unsophisticated users running into external indexes and needing to cope with that in one of a myriad of ways as evidenced by the PEP. One solution proposed and refined at the EuroPython gathering today has PyPI caching packages from external indexes *for packages registered with PyPI*. That is: a requirement of registering your package (and external index URL) with PyPI is that you grant PyPI permission to cache packages from your index in the central index - a scenario that is ideal for users. Organisations not wishing to do that understand that they're the ones causing the pain for users.

An extension of this proposal is quite elegant: to reduce the pain of migration from the current approach to the new, we implement that caching right now, using the current simple index scraping. This ensures the packages are available to all clients throughout the transition period. The transition issue was enough for those at the meeting today to urge me to reject the PEP.
Richard

On 24 July 2014 12:40, Vladimir Diaz wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Thu Jul 24 13:28:35 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 24 Jul 2014 13:28:35 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython Message-ID:

Several great ideas came out of today's meetup. Some of those I'll leave to the proponents themselves to post about, but a couple of little nuggets for thought:

1. reject wheel uploads in the absence of an sdist in the index (the linux guys were really happy about that as a proposal ;)
2. add a system-wide configuration option to pip etc. so that there could be a system-wide override of the package index to use

Richard

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From stefan at bytereef.org Thu Jul 24 14:14:35 2014 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 24 Jul 2014 14:14:35 +0200 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: <20140724121435.GA3673@sleipnir.bytereef.org> Richard Jones wrote: > There still remains the usability issue of unsophisticated users running into > external indexes and needing to cope with that in one of a myriad of ways as > evidenced by the PEP. One solution proposed and refined at the EuroPython > gathering today has PyPI caching packages from external indexes *for packages > registered with PyPI*. That is: a requirement of registering your package (and > external index URL) with PyPI is that you grant PyPI permission to cache > packages from your index in the central index - a scenario that is ideal for > users. -1. That is unlikely to solve the draconian-terms-and-conditions problem and one reason to host externally is to get your own download statistics. > Organisations not wishing to do that understand that they're the ones > causing the pain for users. No. First, checksummed external packages could be downloaded without asking at all. Second, if international authors are required to study US export law before uploading, I wonder who is causing the pain. Finally, how can an author cause pain for users? Without him, the work would not be there in the first place. Stefan Krah From donald at stufft.io Thu Jul 24 17:23:31 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 11:23:31 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On July 24, 2014 at 4:55:42 AM, Richard Jones (r1chardj0n3s at gmail.com) wrote: Thanks for responding, even from your sick bed. This message about users having to view and understand /simple/ indexes is repeated many times. I didn't have to do that in the case of PIL. 
The tool told me "use --allow-external PIL to allow" and then when that failed it told me "use --allow-unverified PIL to allow". There was no needing to understand why, nor any reading of /simple/ indexes. Currently most users (I'm thinking of people who install PIL once or twice) don't need to edit configuration files, and with a modification we could make the above process interactive. Those ~3000 packages that have internal and external packages would be slow, yes.

They need to do it to understand if a link is internal, external, or unverified. The feedback *I've* gotten is complete confusion about the difference between them. Even making that process interactive still means that pip cannot hard fail on a failure to retrieve a URL, and thus must present confusing error messages in the case a URL is temporarily down.

This PEP proposes a potentially confusing break for both users and packagers. In particular, during the transition there will be packages which just disappear as far as users are concerned. In those cases users will indeed need to learn that there is a /simple/ page and they will need to view it in order to find the URL to add to their installation invocation in some manner. Even once install tools start supporting the new mechanism, users who lag (which as we all know are the vast majority) will run into this.

So we lengthen the transition time and gate it on an installer that has the automatic hinting becoming the dominant version. We can pretty easily see exactly what version of the tooling is being used to install stuff from PyPI.

On the devpi front: indeed it doesn't use the mirroring protocol because it is not a mirror. It is a caching proxy that uses the same protocols as the install tools to obtain, and then cache, the files for install. Those files are then presented in a single index for the user to use. There is no need for multi-index support, even in the case of having multiple staging indexes.
There is a need for devpi to be able to behave just like an installer without needing intervention, which I believe will be possible in this proposal as it can automatically add external indexes as it needs to.

Yes, devpi should be able to update itself to add the external indexes.

I talked to a number of people last night and I believe the package spoofing concept is also a vulnerability in the Linux multi-index model (where an external index provides an "updated release" of some core package like libssl on Linux, or perhaps requests in Python land). As I understand it, there is no protection against this. Happy to be told why I'm wrong, of course :)

It's not really a "vulnerability", it's something that is able to be done regardless, and thus package authors are not part of the threat model. If I'm installing a package from a malicious author I'm executing arbitrary Python from them. They can drop a .egg-info into site-packages and spoof a package that way. It is completely impossible to remove the ability of the author of a package that someone else is installing to spoof another package. The spoofing problem is a red herring; it's like saying that your browser vendor could get your bank password because you're typing it into the browser. Well yes, they could, but it's a mandatory thing. If you're installing a package I wrote, you must extend trust to me.

Richard

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Jul 24 17:24:24 2014 From: barry at python.org (Barry Warsaw) Date: Thu, 24 Jul 2014 11:24:24 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython References: Message-ID: <20140724112424.1dada057@anarchist.wooz.org>

On Jul 24, 2014, at 01:28 PM, Richard Jones wrote:
> 1.
reject wheel uploads in the absence of an sdist in the index (the linux guys were really happy about that as a proposal ;)

+1 :)

-Barry

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From donald at stufft.io Thu Jul 24 17:26:08 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 11:26:08 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

On July 24, 2014 at 6:40:47 AM, Vladimir Diaz (vladimir.v.diaz at gmail.com) wrote:

"In metadata 2.0 even with package signing you end up where I can have you install 'django-foobar' which depends on 'FakeDjango', which provides 'Django', and then for all intents and purposes you have a 'Django' package installed."

Can you go into more detail? Particularly, the part where "FakeDjango" provides Django.

Richard Jones mentions the case where an external index provides an "updated release" and tricks the updater into installing a compromised "Django." Is this the same thing?

No, it's not the same thing. Metadata 2.0 provides mechanisms for one package to claim to be another package. This only takes effect once that package has been installed, though. This functionality allows things like a package providing a compatible shim that uses different internal guts, or one package obsoleting another, or even multiple packages "providing" the same thing, allowing the user to select which one they want to use at install time.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part -------------- An HTML attachment was scrubbed...
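To illustrate the "provides" mechanism Donald describes, here is a toy check in the spirit of the draft metadata 2.0 model (the package records and the `satisfies` helper are invented for this sketch, not real metadata):

```python
# Invented metadata-2.0-style records for Donald's hypothetical example.
fake_django = {
    "name": "FakeDjango",
    "version": "1.0",
    "provides": ["Django (1.6)"],  # claims to satisfy a "Django" requirement
}
django_foobar = {
    "name": "django-foobar",
    "version": "0.1",
    "run_requires": ["FakeDjango"],
}

def satisfies(requirement, dist):
    """True if an installed dist is, or 'provides', the required name."""
    names = {dist["name"]}
    names.update(p.split(" ")[0] for p in dist.get("provides", []))
    return requirement in names
```

Under this model, once FakeDjango is installed, anything requiring "Django" is considered satisfied, which is exactly why index ACLs alone cannot protect a user who opted to install it.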
URL: From donald at stufft.io Thu Jul 24 17:41:57 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 11:41:57 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID:

On July 24, 2014 at 7:26:11 AM, Richard Jones (r1chardj0n3s at gmail.com) wrote:

Even ignoring the malicious possibility, there is probably a greater chance of accidental mistakes:

- company sets up internal index using pip's multi-index support and hosts various modules
- someone quite innocently uploads something with the same name, newer version, to PyPI
- company installs now use that unknown code

devpi avoids this (I would recommend it over multi-index for companies anyway) by having a whitelist system for packages that might be pulled from upstream that would clash with internal packages. As Nick mentioned, a signing infrastructure - tied to the index registration of a name - could solve this problem.

Yes, those are two solutions; another solution is for PyPI to allow registering a namespace, like dstufft.*, with companies simply naming all their packages within it. This isn't a problem unique to this PEP though. This problem exists anytime a company has an internal package that they do not want on PyPI. It's unlikely that any of those companies are using the external link feature if that package is internal.

There still remains the usability issue of unsophisticated users running into external indexes and needing to cope with that in one of a myriad of ways as evidenced by the PEP. One solution proposed and refined at the EuroPython gathering today has PyPI caching packages from external indexes *for packages registered with PyPI*. That is: a requirement of registering your package (and external index URL) with PyPI is that you grant PyPI permission to cache packages from your index in the central index - a scenario that is ideal for users. Organisations not wishing to do that understand that they're the ones causing the pain for users.
We can't cache the packages which aren't currently hosted on PyPI. Not in an automatic fashion, anyway. We'd need to ensure that their license allows us to do so. The PyPI ToS ensures this when they upload, but if they never upload then they've never agreed to the ToS for that artifact.

An extension of this proposal is quite elegant; to reduce the pain of migration from the current approach to the new, we implement that caching right now, using the current simple index scraping. This ensures the packages are available to all clients throughout the transition period.

As said above, we can't legally do this automatically; we'd need to ensure that there is a license that grants us distribution rights.

The transition issue was enough for those at the meeting today to urge me to reject the PEP.

To be clear, there are really three issues at play:

1) Should we continue to support scraping external URLs *at all*? This is a cause of a lot of problems in pip and it infects our architecture with things that cause confusing error messages that we cannot really get away from. It's also super slow and grossly insecure.
2) Should we continue to support direct links from a project's /simple/ page to a downloadable file which isn't hosted on PyPI?
3) If we allow direct links to a downloadable file from a project's /simple/ page, do we mandate that they include a hash (and thus are safe) or do we also allow ones without a checksum (and thus are unsafe)?

For me, 1 is absolutely not. It is terrible and it is the cause of horrible UX issues as well as performance issues. However, 1 is also the majorly useful one. Eliminating 1 eliminates PIL, and that is > 90% of the /simple/ traffic for the projects on which this will have any impact.

For me, 2 is a question of: is the relatively small benefit (in both traffic and number of packages) worth the extra cognitive overhead of users having to understand that there are *two* ways for something to be installed from outside PyPI?
Additionally is it worth the removal of ability for people to legally mirror the actual *files* without manually white listing the ones that they?ve vetted and found the license to allow them to do so (and even then in the future a project could switch to a license which doesn?t allow that). For me this is again no, it?s not worth it. Additional concepts to learn with their own quirks and causing pain for people wanting to mirror their installs is not worth keeping things working for a tiny fraction of things. For me 3 is no just because 2 is no, but assuming 2 is ?yes?, I still think 3 is no because the external vs unverified split is confusing to users. Additionally the impact of this one, if I recall correctly, is almost zero. ? ? ? Richard On 24 July 2014 12:40, Vladimir Diaz??wrote: In metadata 2.0 even with package signing you end up where I can have you install ?django-foobar? which depends on ?FakeDjango?, which provides ?Django?, and then for all intents and purposes you have a ?Django? package installed. Can you go into more detail?? Particularly, the part where "FakeDjango" provides Django. Richard Jones mentions the case where an external index provides an "updated release" and tricks the updater into installing a compromised "Django."? Is this the same thing? On Thu, Jul 24, 2014 at 4:55 AM, Richard Jones??wrote: Thanks for responding, even from your sick bed. This message about users having to view and understand /simple/ indexes is repeated many times. I didn't have to do that in the case of PIL. The tool told me "use --allow-external PIL to allow" and then when that failed it told me "use --allow-unverified PIL to allow". There was no needing to understand why, nor any reading of /simple/ indexes. Currently most users (I'm thinking of people who install PIL once or twice) don't need to edit configuration files, and with a modification we could make the above process interactive. 
Those ~3000 packages that have internal and external packages would be slow, yes. This PEP proposes a potentially confusing break for both users and packagers. In particular, during the transition there will be packages which just disappear as far as users are concerned. In those cases users will indeed need to learn that there is a /simple/ page and they will need to view it in order to find the URL to add to their installation invocation in some manner. Even once install tools start supporting the new mechanism, users who lag (which as we all know are the vast majority) will run into this. On the devpi front: indeed it doesn't use the mirroring protocol because it is not a mirror. It is a caching proxy that uses the same protocols as the install tools to obtain, and then cache the files for install. Those files are then presented in a single index for the user to use. There is no need for multi-index support, even in the case of having multiple staging indexes. There is a need for devpi to be able to behave just like an installer without needing intervention, which I believe will be possible in this proposal as it can automatically add external indexes as it needs to. I talked to a number of people last night and I believe the package spoofing concept is also a vulnerability in the Linux multi-index model (where an external index provides an "updated release" of some core package like libssl on Linux, or perhaps requests in Python land). As I understand it, there is no protection against this. Happy to be told why I'm wrong, of course :) Richard _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
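[Editor's note: the "hash" question in Donald's point 3 refers to the `#<alg>=<hexdigest>` fragment that PyPI-style /simple/ pages attach to download links so installers can verify what they fetched. A rough sketch of that check, with a hypothetical `verify_download` helper for illustration, not pip's actual code:]

```python
import hashlib
from urllib.parse import urlparse

def verify_download(link, data):
    """Check downloaded bytes against the '#<alg>=<hexdigest>' fragment
    that simple-index pages attach to verifiable links."""
    fragment = urlparse(link).fragment          # e.g. "md5=79e1ce..."
    if "=" not in fragment:
        return False                            # no hash fragment: unverifiable link
    alg, _, expected = fragment.partition("=")
    return hashlib.new(alg, data).hexdigest() == expected

# A link whose fragment matches the payload verifies; anything else fails.
payload = b"example sdist bytes"
link = "https://example.com/PIL-1.1.7.tar.gz#md5=" + hashlib.md5(payload).hexdigest()
print(verify_download(link, payload))           # True
print(verify_download(link, b"tampered bytes")) # False
```

[Links without such a fragment are exactly the "unverified" case under discussion: the installer has no way to tell whether the file was tampered with in transit.]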
URL: From donald at stufft.io Thu Jul 24 17:50:15 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 11:50:15 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: <20140724121435.GA3673@sleipnir.bytereef.org> References: <20140724121435.GA3673@sleipnir.bytereef.org> Message-ID: On July 24, 2014 at 8:23:59 AM, Stefan Krah (stefan at bytereef.org) wrote: Richard Jones wrote: > There still remains the usability issue of unsophisticated users running into > external indexes and needing to cope with that in one of a myriad of ways as > evidenced by the PEP. One solution proposed and refined at the EuroPython > gathering today has PyPI caching packages from external indexes *for packages > registered with PyPI*. That is: a requirement of registering your package (and > external index URL) with PyPI is that you grant PyPI permission to cache > packages from your index in the central index - a scenario that is ideal for > users. -1. That is unlikely to solve the draconian-terms-and-conditions problem and one reason to host externally is to get your own download statistics. The ToS is not draconian, it is a minimal ToS which allows PyPI to function. If people want/need additional stats we can add them to PyPI. This is on the TODO list anyways. > Organisations not wishing to do that understand that they're the ones > causing the pain for users. No. First, checksummed external packages could be downloaded without asking at all. Second, if international authors are required to study US export law before uploading, I wonder who is causing the pain. With PEP 470 you are not required to study anything nor upload to PyPI; if you wish to host outside of PyPI you simply host an external index, which is as simple as a plain HTML file with links to the downloadable files. Finally, how can an author cause pain for users? Without him, the work would not be there in the first place. I'm not quite sure how to answer this.
It's quite obvious that an author's choices can cause pain for a user. For example, the author could have an option where, if specified, it silently deleted the entire filesystem of the user. This would be incredibly painful for the end user (assuming they didn't want that, of course). Now a project is owned by the author, so they are allowed to choose to do things which cause pain for end users, and end users get to make a choice about whether it's worth using that project even with the pain incurred from the author's choices. The reason we don't download checksummed external packages by default any more is because they *do* represent a choice that causes pain for end users, and thus users should be aware they are making that choice. Stefan Krah _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 24 17:57:24 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 11:57:24 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: Message-ID: On July 24, 2014 at 7:28:55 AM, Richard Jones (r1chardj0n3s at gmail.com) wrote: Several great ideas came out of today's meetup. Some of those I'll leave to the proponents themselves to post about, but a couple of little nuggets for thought: 1. reject wheel uploads in the absence of an sdist in the index (the linux guys were really happy about that as a proposal ;) This is gonna make openstack sad I think? They were relying on the fact that pip prior to 1.4 didn't install Wheels, and pip 1.4+ has the pre-releases-are-excluded-by-default logic to publish pre-releases safely to PyPI. I'm not generally opposed though.
Just stating that this will prevent that "trick" from working. 2. add a system-wide configuration option to pip etc. so that there could be a system-wide override of the package index to use Yea this was already on my list of things to do when I refactor the configuration stuff to use locations which are more in line with what the OS norms are (XDG on *nix, ~/Library on OSX, %AppData% stuff on Windows). -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcappos at nyu.edu Thu Jul 24 17:57:37 2014 From: jcappos at nyu.edu (Justin Cappos) Date: Thu, 24 Jul 2014 11:57:37 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: FYI: PEP 458 provides a way to address most of the security issues with this as well. (We call these "provides-everything" attacks in some of our prior work: https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.pdf) One way of handling this is that whomever registers the name can choose what other packages can be registered that meet that dependency. Another is that PyPI could automatically manage the metadata for this. Clearly someone has to be responsible for making sure that this is 'off-by-default' so that a malicious party cannot claim to provide a popular package and get their software installed instead. What do you think makes the most sense? Even if only "the right" projects can create trusted packages for a dependency, there are security issues also with respect to which package should be trusted. Suppose you have projects zap and bar, which should be chosen to meet a dependency. Which should be used? With TUF we currently support them choosing a fixed project (zap or bar), but supporting the most recent upload is also possible.
We had an explicit tag and type of delegation in Stork for this case (the timestamp tag), but I think we can get equivalent functionality with threshold signatures in TUF. Once we understand more about how people would like to use it, we can make sure PEP 458 explains how this is supported in a clean way while minimizing the security impact. Thanks, Justin On Thu, Jul 24, 2014 at 11:41 AM, Donald Stufft wrote: > On July 24, 2014 at 7:26:11 AM, Richard Jones (r1chardj0n3s at gmail.com) > wrote: > > Even ignoring the malicious possibility there is a probably greater chance > of accidental mistakes: > > - company sets up internal index using pip's multi-index support and hosts > various modules > - someone quite innocently uploads something with the same name, newer > version, to pypi > - company installs now use that unknown code > > devpi avoids this (I would recommend it over multi-index for companies > anyway) by having a white list system for packages that might be pulled > from upstream that would clash with internal packages. > > As Nick's mentioned, a signing infrastructure - tied to the index > registration of a name - could solve this problem. > > Yes, those are two solutions, another solution is for PyPI to allow > registering a namespace, like dstufft.* and companies simply name all their > packages that. This isn't a unique problem to this PEP though. This problem > exists anytime a company has an internal package that they do not want on > PyPI. It's unlikely that any of those companies are using the external link > feature if that package is internal. > > > > There still remains the usability issue of unsophisticated users running > into external indexes and needing to cope with that in one of a myriad of > ways as evidenced by the PEP. One solution proposed and refined at the > EuroPython gathering today has PyPI caching packages from external indexes > *for packages registered with PyPI*.
That is: a requirement of registering > your package (and external index URL) with PyPI is that you grant PyPI > permission to cache packages from your index in the central index - a > scenario that is ideal for users. Organisations not wishing to do that > understand that they're the ones causing the pain for users. > > We can't cache the packages which aren't currently hosted on PyPI. Not in > an automatic fashion anyways. We'd need to ensure that their license allows > us to do so. The PyPI ToS ensures this when they upload but if they never > upload then they've never agreed to the ToS for that artifact. > > > > An extension of this proposal is quite elegant; to reduce the pain of > migration from the current approach to the new, we implement that caching > right now, using the current simple index scraping. This ensures the > packages are available to all clients throughout the transition period. > > As said above, we can't legally do this automatically, we'd need to ensure > that there is a license that grants us distribution rights. > > > > The transition issue was enough for those at the meeting today to urge me > to reject the PEP. > > To be clear, there are really three issues at play: > > 1) Should we continue to support scraping external urls *at all*. This is > a cause of a lot of problems in pip and it infects our architecture with > things that cause confusing error messages that we cannot really get away > from. It's also super slow and grossly insecure. > > 2) Should we continue to support direct links from a project's /simple/ > page to a downloadable file which isn't hosted on PyPI. > > 3) If we allow direct links to a downloadable file from a project's > /simple/ page, do we mandate that they include a hash (and thus are safe) > or do we also allow ones without a checksum (and thus are unsafe). > > For me, 1 is absolutely not. It is terrible and it is the cause of > horrible UX issues as well as performance issues.
However 1 is also the > majorly useful one. Eliminating 1 eliminates PIL and that is > 90% of the > /simple/ traffic for the projects which this will have any impact. > > For me 2 is a question of, is the relatively small (both traffic and > number of packages) worth the extra cognitive overhead of users having to > understand that there are *two* ways for something to be installed from not > PyPI. Additionally is it worth the removal of ability for people to legally > mirror the actual *files* without manually white listing the ones that > they've vetted and found the license to allow them to do so (and even then > in the future a project could switch to a license which doesn't allow > that). For me this is again no, it's not worth it. Additional concepts to > learn with their own quirks and causing pain for people wanting to mirror > their installs is not worth keeping things working for a tiny fraction of > things. > > For me 3 is no just because 2 is no, but assuming 2 is "yes", I still > think 3 is no because the external vs unverified split is confusing to > users. Additionally the impact of this one, if I recall correctly, is > almost zero. > > > > > Richard > > > On 24 July 2014 12:40, Vladimir Diaz wrote: > >> In metadata 2.0 even with package signing you end up where I can have you >> install "django-foobar" which depends on "FakeDjango", which provides >> "Django", and then for all intents and purposes you have a "Django" package >> installed. >> >> Can you go into more detail? Particularly, the part where "FakeDjango" >> provides Django. >> >> Richard Jones mentions the case where an external index provides an >> "updated release" and tricks the updater into installing a compromised >> "Django." Is this the same thing? >> >> >> On Thu, Jul 24, 2014 at 4:55 AM, Richard Jones >> wrote: >> >>> Thanks for responding, even from your sick bed. >>> >>> This message about users having to view and understand /simple/ indexes >>> is repeated many times.
I didn't have to do that in the case of PIL. The >>> tool told me "use --allow-external PIL to allow" and then when that failed >>> it told me "use --allow-unverified PIL to allow". There was no needing to >>> understand why, nor any reading of /simple/ indexes. >>> Currently most users (I'm thinking of people who install PIL once or >>> twice) don't need to edit configuration files, and with a modification we >>> could make the above process interactive. Those ~3000 packages that have >>> internal and external packages would be slow, yes. >>> >>> This PEP proposes a potentially confusing break for both users and >>> packagers. In particular, during the transition there will be packages >>> which just disappear as far as users are concerned. In those cases users >>> will indeed need to learn that there is a /simple/ page and they will need >>> to view it in order to find the URL to add to their installation invocation >>> in some manner. Even once install tools start supporting the new mechanism, >>> users who lag (which as we all know are the vast majority) will run into >>> this. >>> >>> On the devpi front: indeed it doesn't use the mirroring protocol because >>> it is not a mirror. It is a caching proxy that uses the same protocols as >>> the install tools to obtain, and then cache the files for install. Those >>> files are then presented in a single index for the user to use. There is no >>> need for multi-index support, even in the case of having multiple staging >>> indexes. There is a need for devpi to be able to behave just like an >>> installer without needing intervention, which I believe will be possible in >>> this proposal as it can automatically add external indexes as it needs to. 
>>> >>> I talked to a number of people last night and I believe the package >>> spoofing concept is also a vulnerability in the Linux multi-index model >>> (where an external index provides an "updated release" of some core package >>> like libssl on Linux, or perhaps requests in Python land). As I understand >>> it, there is no protection against this. Happy to be told why I'm wrong, of >>> course :) >>> >>> >>> Richard >>> >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >> > > > > -- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 24 18:03:53 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 24 Jul 2014 12:03:53 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On July 24, 2014 at 11:58:01 AM, Justin Cappos (jcappos at nyu.edu) wrote: FYI: PEP 458 provides a way to address most of the security issues with this as well. (We call these "provides-everything" attacks in some of our prior work: https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.pdf) One way of handling this is that whomever registers the name can choose what other packages can be registered that meet that dependency. Another is that PyPI could automatically manage the metadata for this. Clearly someone has to be responsible for making sure that this is 'off-by-default' so that a malicious party cannot claim to provide a popular package and get their software installed instead. What do you think makes the most sense?
Even if only "the right" projects can create trusted packages for a dependency, there are security issues also with respect to which package should be trusted. Suppose you have projects zap and bar, which should be chosen to meet a dependency. Which should be used? With TUF we currently support them choosing a fixed project (zap or bar), but supporting the most recent upload is also possible. We had an explicit tag and type of delegation in Stork for this case (the timestamp tag), but I think we can get equivalent functionality with threshold signatures in TUF. Once we understand more about how people would like to use it, we can make sure PEP 458 explains how this is supported in a clean way while minimizing the security impact. Thanks, Justin Sorry, I think the provides functionality is outside of the scope of what we would use TUF for. It is *only* respected if you have that project installed. In other words if there is a package "FakeDjango" which provides "Django", then ``pip install Django`` will *never* install "FakeDjango". However if you've already done ``pip install FakeDjango``, then later on when you do ``pip install Django`` it will see that it is already installed (because "FakeDjango" provides it). IOW it only matters once you've already chosen to trust that package and have installed it. This is to prevent any sort of spoofing attacks and to simplify the interface. This doesn't prevent a project which you've elected to trust by installing it from spoofing itself, but it's impossible to prevent them from doing that anyways without hobbling our package formats so much that they are useless. For instance any ability to execute code (such as setup.py!) means that FakeDjango could, once installed, spoof Django just by dropping the relevant metadata files to say it is already installed. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
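[Editor's note: the rule Donald describes, that provides-style metadata is only honoured for projects the user has already chosen to install, can be sketched as a toy resolver. Names and structure are illustrative only; this is a simplified model, not pip's real logic.]

```python
def is_satisfied(requirement, installed):
    """A requirement is met if it is installed under its own name, or if
    an *already installed* project declares it via provides metadata."""
    return any(name == requirement or requirement in provides
               for name, provides in installed.items())

def resolve_install(requirement, installed, index):
    """Never pick an index package just because it 'provides' the
    requested name; only an exact name match may be installed."""
    if is_satisfied(requirement, installed):
        return None                      # already satisfied, nothing to do
    if requirement in index:
        return requirement               # install the real project
    raise LookupError(f"no project named {requirement!r}")

index = {"Django", "FakeDjango"}         # FakeDjango claims to provide Django

# 'pip install Django' on a clean system picks the real Django,
# never FakeDjango, regardless of what FakeDjango claims to provide.
print(resolve_install("Django", installed={}, index=index))   # Django

# But once FakeDjango has been installed by explicit choice,
# 'Django' is treated as already satisfied and nothing new is installed.
print(resolve_install("Django",
                      installed={"FakeDjango": ["Django"]},
                      index=index))                           # None
```

[The design choice is the same one argued in the thread: provides metadata affects only what counts as satisfied after an explicit trust decision, never which package gets fetched from an index.]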
URL: From jcappos at nyu.edu Thu Jul 24 18:08:55 2014 From: jcappos at nyu.edu (Justin Cappos) Date: Thu, 24 Jul 2014 12:08:55 -0400 Subject: [Distutils] [tuf] Re: PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: Got it. Thanks for clearing this up. Glad to hear that virtual dependencies are not an issue. It simplifies things a lot! Justin On Thu, Jul 24, 2014 at 12:03 PM, Donald Stufft wrote: > On July 24, 2014 at 11:58:01 AM, Justin Cappos (jcappos at nyu.edu) wrote: > > FYI: PEP 458 provides a way to address most of the security issues with > this as well. (We call these "provides-everything" attacks in some of our > prior work: https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.pdf) > > > One way of handling this is that whomever registers the name can choose > what other packages can be registered that meet that dependency. Another > is that PyPI could automatically manage the metadata for this. Clearly > someone has to be responsible for making sure that this is 'off-by-default' > so that a malicious party cannot claim to provide a popular package and get > their software installed instead. What do you think makes the most sense? > > Even if only "the right" projects can create trusted packages for a > dependency, there are security issues also with respect to which package > should be trusted. Suppose you have projects zap and bar, which should be > chosen to meet a dependency. Which should be used? > > With TUF we currently support them choosing a fixed project (zap or bar), > but supporting the most recent upload is also possible. We had an > explicit tag and type of delegation in Stork for this case (the timestamp > tag), but I think we can get equivalent functionality with threshold > signatures in TUF. > > Once we understand more about how people would like to use it, we can make > sure PEP 458 explains how this is supported in a clean way while minimizing > the security impact. 
> > Thanks, > Justin > > > Sorry, I think the provides functionality is outside of the scope of what > we would use TUF for. It is *only* respected if you have that project > installed. In other words if there is a package "FakeDjango" which provides > "Django", then ``pip install Django`` will *never* install "FakeDjango". > However if you've already done ``pip install FakeDjango`` then later on you > do ``pip install Django`` it will see that it is already installed (because > "FakeDjango" provides it). > > IOW it only matters once you've already chosen to trust that package and > have installed it. This is to prevent any sort of spoofing attacks and to > simplify the interface. This doesn't prevent a project which you've elected > to trust by installing it from spoofing itself, but it's impossible to > prevent them from doing that anyways without hobbling our package formats > so much that they are useless. For instance any ability to execute code > (such as setup.py!) means that FakeDjango could, once installed, spoof > Django just by dropping the relevant metadata files to say it is already > installed. > > -- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Jul 24 18:44:02 2014 From: dholth at gmail.com (Daniel Holth) Date: Thu, 24 Jul 2014 12:44:02 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: Message-ID: Also, reject uploads that are not released under a DFSG license or lack man pages. On Jul 24, 2014 11:57 AM, "Donald Stufft" wrote: > On July 24, 2014 at 7:28:55 AM, Richard Jones (r1chardj0n3s at gmail.com) > wrote: > > Several great ideas came out of today's meetup. Some of those I'll leave > to the proponents themselves to post about, but a couple of little nuggets > for thought: > > 1.
reject wheel uploads in the absence of an sdist in the index (the linux > guys were really happy about that as a proposal ;) > > This is gonna make openstack sad I think? They were relying on the fact > that pip prior to 1.4 didn't install Wheels, and pip 1.4+ has the > pre-releases are excluded by default logic to publish pre-releases safely > to PyPI. > > I'm not generally opposed though. Just stating that this will prevent that > "trick" from working. > > > 2. add a system-wide configuration option to pip etc. so that there could > be a system-wide override of the package index to use > > Yea this was already on my list of things to do when I refactor the > configuration stuff to use locations which are more in line with what the > OS norms are (XDG on *nix, ~/Library on OSX, %AppData% stuff on Windows). > -- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 24 23:13:08 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Jul 2014 07:13:08 +1000 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On 25 Jul 2014 02:05, "Donald Stufft" wrote: > > Sorry, I think the provides functionality is outside of the scope of what we would use TUF for. It is *only* respected if you have that project installed. In other words if there is a package "FakeDjango" which provides "Django", then ``pip install Django`` will *never* install "FakeDjango". However if you've already done ``pip install FakeDjango`` then later on you do ``pip install Django`` it will see that it is already installed (because "FakeDjango" provides it).
For the record, from a system integrator perspective, this is considered a feature rather than a bug: it's designed so it's possible to swap in an alternative to the real package as a temporary measure until the real one catches up. For example, right now, getting systemd to work right inside a Docker container is a bit tricky, but you don't really need it if you're just running one or two services per container. The workaround is a substitute package called "fakesystemd" - it lets the package installation proceed, even though the systemd integration won't work. The folks actually working with systemd inside Docker then swap the fake one out for the real one. > IOW it only matters once you've already chosen to trust that package and have installed it. This is to prevent any sort of spoofing attacks and to simplify the interface. This doesn't prevent a project which you've elected to trust by installing it from spoofing itself, but it's impossible to prevent them from doing that anyways without hobbling our package formats so much that they are useless. For instance any ability to execute code (such as setup.py!) means that FakeDjango could, once installed, spoof Django just by dropping the relevant metadata files to say it is already installed. Yep. While it may sound self-serving (because it is - this is ultimately one of the services that gets me paid), a commercial relationship that helps assure them "this won't eat your machine" is one of the reasons companies pay open source redistributors and other service providers for software that is freely available directly from upstream. They're not really paying for the software directly - they're outsourcing the task of due diligence in checking whether the software is safe enough to allow it to be installed on their systems. Even the core repos of the community Linux distros provide a higher level of assurance than the "anything goes, use at your own risk" style services like PyPI, Ubuntu PPAs and Fedora's COPR.
That doesn't make the latter services bad, it just means they occupy a niche in the ecosystem that makes using them directly inherently inadvisable for users with a low tolerance for risk. Cheers, Nick. > > -- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Fri Jul 25 09:04:09 2014 From: chris at simplistix.co.uk (Chris Withers) Date: Fri, 25 Jul 2014 08:04:09 +0100 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: Message-ID: <53D20169.1060205@simplistix.co.uk> On 24/07/2014 17:44, Daniel Holth wrote: > Also, reject uploads that are not released under a DFSG license What's a DFSG license? > or lack > man pages. Are you serious? Chris > > On Jul 24, 2014 11:57 AM, "Donald Stufft" > wrote: > > On July 24, 2014 at 7:28:55 AM, Richard Jones > (r1chardj0n3s at gmail.com ) wrote: >> Several great ideas came out of today's meetup. Some of those I'll >> leave to the proponents themselves to post about, but a couple of >> little nuggets for thought: >> >> 1. reject wheel uploads in the absence of an sdist in the index >> (the linux guys were really happy about that as a proposal ;) > > This is gonna make openstack sad I think? They were relying on the > fact that pip prior to 1.4 didn't install Wheels, and pip 1.4+ has > the pre-releases are excluded by default logic to publish > pre-releases safely to PyPI. > > I'm not generally opposed though. Just stating that this will > prevent that "trick" from working. > >> >> 2. add a system-wide configuration option to pip etc.
so that >> there could be a system-wide override of the package index to use > > Yea this was already on my list of things to do when I refactor the > configuration stuff to use locations which are more in line with > what the OS norms are (XDG on *nix, ~/Library on OSX, %AppData% > stuff on Windows). > > -- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 > 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > ______________________________________________________________________ > This email has been scanned by the Symantec Email Security.cloud service. > For more information please visit http://www.symanteccloud.com > ______________________________________________________________________ > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From ncoghlan at gmail.com Fri Jul 25 10:36:15 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Jul 2014 18:36:15 +1000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <53D20169.1060205@simplistix.co.uk> References: <53D20169.1060205@simplistix.co.uk> Message-ID: On 25 Jul 2014 17:46, "Chris Withers" wrote: > > On 24/07/2014 17:44, Daniel Holth wrote: >> >> Also, reject uploads that are not released under a DFSG license > > > What's a DFSG license? > >> or lack >> man pages. > > > Are you serious?
I took it as a sarcastic comment cryptically expressing disagreement with the notion of accommodating reasonable requests from redistributors by positing a slippery slope argument where we start asking upstream to enforce evermore of our policy guidelines to make our lives easier, even when those changes aren't of any benefit to *arbitrary* redistributors (let alone folks doing their own system integration). With a Linux vendor employee responsible for approving the packaging metadata PEPs, I think it's a reasonable concern (although it could have been better expressed). However, while access to a source tarball (or the ability to create one) is indeed a gating criterion for entry to downstream build systems, I don't think *mandating* source package upload to PyPI is a necessary part of the answer. We can nudge people in that direction, and make uploading source in addition to binaries the path of least resistance, but I don't think we need to cross the line into enforcement. Packages without readily available source uploads just won't be redistributed (except in cases like OpenStack where we get the original source from somewhere else). Cheers, Nick. > > Chris > >> >> On Jul 24, 2014 11:57 AM, "Donald Stufft" > > wrote: >> >> On July 24, 2014 at 7:28:55 AM, Richard Jones >> (r1chardj0n3s at gmail.com ) wrote: >>> >>> Several great ideas came out of today's meetup. Some of those I'll >>> leave to the proponents themselves to post about, but a couple of >>> little nuggets for thought: >>> >>> 1. reject wheel uploads in the absence of an sdist in the index >>> (the linux guys were really happy about that as a proposal ;) >> >> >> This is gonna make openstack sad I think? They were relying on the >> fact that pip prior to 1.4 didn't install Wheels, and pip 1.4+ has >> the pre-releases are excluded by default logic to publish >> pre-releases safely to PyPI. >> >> I'm not generally opposed though. Just stating that this will >> prevent that "trick" from working.
>> >>> >>> 2. add a system-wide configuration option to pip etc. so that >>> there could be a system-wide override of the package index to use >> >> >> Yea this was already on my list of things to do when I refactor the >> configuration stuff to use locations which are more in line with >> what the OS norms are (XDG on *nix, ~/Library on OSX, %AppData% >> stuff on Windows). >> >> -- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 >> 3372 DCFA >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> ______________________________________________________________________ >> This email has been scanned by the Symantec Email Security.cloud service. >> For more information please visit http://www.symanteccloud.com >> ______________________________________________________________________ >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > -- > Simplistix - Content Management, Batch Processing & Python Consulting > - http://www.simplistix.co.uk > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
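[For illustration, the per-OS configuration locations Donald describes could be resolved along these lines. The function name and exact paths are assumptions for the sketch, not pip's actual layout.]

```python
# Sketch of OS-conventional per-user config directories:
# XDG on *nix, ~/Library on OSX, %AppData% on Windows.
import os
import sys

def user_config_dir(appname="pip"):
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", os.path.expanduser("~"))
    elif sys.platform == "darwin":
        base = os.path.expanduser("~/Library/Application Support")
    else:  # XDG convention on other *nix systems
        base = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
    return os.path.join(base, appname)
```

A system-wide index override would then be one more file in a similar machine-level location, read before the per-user config.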
URL: From donald at stufft.io Fri Jul 25 14:46:19 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 08:46:19 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> Message-ID: On July 25, 2014 at 4:36:17 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: On 25 Jul 2014 17:46, "Chris Withers" wrote: > > On 24/07/2014 17:44, Daniel Holth wrote: >> >> Also, reject uploads that are not released under a DFSG license > > > What's a DFSG license? > >> or lack >> man pages. > > > Are you serious? I took it as a sarcastic comment cryptically expressing disagreement with the notion of accommodating reasonable requests from redistributors by positing a slippery slope argument where we start asking upstream to enforce ever more of our policy guidelines to make our lives easier, even when those changes aren't of any benefit to *arbitrary* redistributors (let alone folks doing their own system integration). With a Linux vendor employee responsible for approving the packaging metadata PEPs, I think it's a reasonable concern (although it could have been better expressed). However, while access to a source tarball (or the ability to create one) is indeed a gating criterion for entry to downstream build systems, I don't think *mandating* source package upload to PyPI is a necessary part of the answer. We can nudge people in that direction, and make uploading source in addition to binaries the path of least resistance, but I don't think we need to cross the line into enforcement. Packages without readily available source uploads just won't be redistributed (except in cases like OpenStack where we get the original source from somewhere else). Yea, I'm not sure whether I like it or not. Probably once we get a for real build farm for PyPI setup that will be a pretty reasonable sized carrot for people to upload sources. --
Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Fri Jul 25 15:13:12 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 25 Jul 2014 15:13:12 +0200 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: [apologies for the terrible quoting, gmail's magic failed today] On 24 July 2014 17:41, Donald Stufft wrote: > On July 24, 2014 at 7:26:11 AM, Richard Jones (r1chardj0n3s at gmail.com) wrote: > > > This PEP proposes a potentially confusing break for both users and packagers. In particular, during the transition there will be packages which just disappear as far as users are concerned. In those cases users will indeed need to learn that there is a /simple/ page and they will need to view it in order to find the URL to add to their installation invocation in some manner. Even once install tools start supporting the new mechanism, users who lag (which as we all know are the vast majority) will run into this. > > So we lengthen the transition time, gate it on an installer that has the automatic hinting becoming the dominant version. We can pretty easily see exactly what version of the tooling is being used to install stuff from PyPI. I would like to see the PEP have detail added around this transition and how we will avoid packages vanishing. Perhaps we could have a versioned /simple/ to allow transition to go more smoothly with monitoring activity on the two versions? /simple-2/? /simpler/? :) Additionally, it's been pointed out to me that I've been running on assumptions about how multi-index support works. The algorithm that must be implemented by installer tools needs to be spelled out in the PEP.
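[For illustration of the unspecified algorithm Richard is asking about: the naive multi-index lookup installers effectively perform today might look like the sketch below. This is not pip's actual code; names are illustrative, and a real installer compares PEP 440 versions rather than using plain `max()`.]

```python
# Naive multi-index candidate collection: every configured index may
# supply any project, and the best version wins regardless of origin.
# This is the behaviour that makes accidental name shadowing possible.
def collect_candidates(project, indexes):
    """indexes: list of dicts mapping project name -> list of version strings."""
    candidates = []
    for idx in indexes:
        candidates.extend(idx.get(project, []))
    return candidates

def pick_best(project, indexes):
    # Plain max() stands in for a real PEP 440 version comparison.
    candidates = collect_candidates(project, indexes)
    return max(candidates) if candidates else None

internal = {"mytool": ["1.0"]}
pypi = {"mytool": ["9.9"]}          # innocent (or malicious) name clash
print(pick_best("mytool", [internal, pypi]))  # -> 9.9, the accident case below
```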
> Even ignoring the malicious possibility there is probably a greater chance of accidental mistakes: > > - company sets up internal index using pip's multi-index support and hosts various modules > - someone quite innocently uploads something with the same name, newer version, to pypi > - company installs now use that unknown code > > devpi avoids this (I would recommend it over multi-index for companies anyway) by having a white list system for packages that might be pulled from upstream that would clash with internal packages. > > As Nick's mentioned, a signing infrastructure - tied to the index registration of a name - could solve this problem. > > Yes, those are two solutions, another solution is for PyPI to allow registering a namespace, like dstufft.* and companies simply name all their packages that. This isn't a unique problem to this PEP though. This problem exists anytime a company has an internal package that they do not want on PyPI. It's unlikely that any of those companies are using the external link feature if that package is internal. As I mentioned, using devpi solves this issue for companies hosting internal indexes. Requiring companies to register names on a public index to avoid collision has been raised a few times along the lines of "I hope we don't have to register names on the public index to avoid this." :) > > There still remains the usability issue of unsophisticated users running into external indexes and needing to cope with that in one of a myriad of ways as evidenced by the PEP. One solution proposed and refined at the EuroPython gathering today has PyPI caching packages from external indexes *for packages registered with PyPI*. That is: a requirement of registering your package (and external index URL) with PyPI is that you grant PyPI permission to cache packages from your index in the central index - a scenario that is ideal for users. Organisations not wishing to do that understand that they're the ones causing the pain for users.
> > We can't cache the packages which aren't currently hosted on PyPI. Not in an automatic fashion anyways. We'd need to ensure that their license allows us to do so. The PyPI ToS ensures this when they upload but if they never upload then they've never agreed to the ToS for that artifact. I didn't state it clearly: this would be opt-in with the project granting PyPI permission to perform this caching. Their option is to not do so and simply not have a listing on PyPI. > > An extension of this proposal is quite elegant; to reduce the pain of migration from the current approach to the new, we implement that caching right now, using the current simple index scraping. This ensures the packages are available to all clients throughout the transition period. > > As said above, we can't legally do this automatically, we'd need to ensure that there is a license that grants us distribution rights. A variation on the above two ideas is to just record the *link* to the externally-hosted file from PyPI, rather than that file's content. It is more error-prone, but avoids issues of file ownership. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jul 25 15:21:53 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Jul 2014 23:21:53 +1000 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On 25 July 2014 23:13, Richard Jones wrote: >> Yes, those are two solutions, another solution is for PyPI to allow >> registering a namespace, like dstufft.* and companies simply name all their >> packages that. This isn't a unique problem to this PEP though. This problem >> exists anytime a company has an internal package that they do not want on >> PyPI. It's unlikely that any of those companies are using the external link >> feature if that package is internal. > > As I mentioned, using devpi solves this issue for companies hosting internal > indexes.
Requiring companies to register names on a public index to avoid > collision has been raised a few times along the lines of "I hope we don't > have to register names on the public index to avoid this." :) Restricting packages to come from particular indexes is (or should be) independent of the PEP 470 design. pip has multiple index support today, and if you enable it, any enabled index can currently provide any package. If that's a significant concern for anyone, changing it is just a pip RFE rather than needing to be part of a PEP. >> > There still remains the usability issue of unsophisticated users running >> > into external indexes and needing to cope with that in one of a myriad of >> > ways as evidenced by the PEP. One solution proposed and refined at the >> > EuroPython gathering today has PyPI caching packages from external indexes >> > *for packages registered with PyPI*. That is: a requirement of registering >> > your package (and external index URL) with PyPI is that you grant PyPI >> > permission to cache packages from your index in the central index - a >> > scenario that is ideal for users. Organisations not wishing to do that >> > understand that they're the ones causing the pain for users. >> >> We can't cache the packages which aren't currently hosted on PyPI. Not in >> an automatic fashion anyways. We'd need to ensure that their license allows >> us to do so. The PyPI ToS ensures this when they upload but if they never >> upload then they've never agreed to the ToS for that artifact. > > I didn't state it clearly: this would be opt-in with the project granting > PyPI permission to perform this caching. Their option is to not do so and > simply not have a listing on PyPI. This is exactly the "packages not hosted on PyPI are second class citizens" scenario we're trying to *avoid*. We can't ask a global community to comply with US export laws just to be listed on the main community index.
>> > An extension of this proposal is quite elegant; to reduce the pain of >> > migration from the current approach to the new, we implement that caching >> > right now, using the current simple index scraping. This ensures the >> > packages are available to all clients throughout the transition period. >> >> As said above, we can't legally do this automatically, we'd need to ensure >> that there is a license that grants us distribution rights. > > A variation on the above two ideas is to just record the *link* to the > externally-hosted file from PyPI, rather than that file's content. It is > more error-prone, but avoids issues of file ownership. This is essentially what PEP 470 proposes, except that the link says "this project is hosted on this external index, check there for the relevant details" rather than having individual links for every externally hosted version. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Fri Jul 25 15:27:31 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 25 Jul 2014 09:27:31 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: Maybe we should get on the namespaces bandwagon and allow organizations to register a prefix. Then you would be able to know that dependencies called "company/mysupersecretprogram" would never accidentally exist on pypi On Fri, Jul 25, 2014 at 9:21 AM, Nick Coghlan wrote: > On 25 July 2014 23:13, Richard Jones wrote: >>> Yes, those are two solutions, another solution is for PyPI to allow >>> registering a namespace, like dstufft.* and companies simply name all their >>> packages that. This isn't a unique problem to this PEP though. This problem >>> exists anytime a company has an internal package that they do not want on >>> PyPI. It's unlikely that any of those companies are using the external link >>> feature if that package is internal.
>> >> As I mentioned, using devpi solves this issue for companies hosting internal >> indexes. Requiring companies to register names on a public index to avoid >> collision has been raised a few times along the lines of "I hope we don't >> have to register names on the public index to avoid this." :) > > Restricting packages to come from particular indexes is (or should be) > independent of the PEP 470 design. pip has multiple index support > today, and if you enable it, any enabled index can currently provide > any package. > > If that's a significant concern for anyone, changing it is just a pip > RFE rather than needing to be part of a PEP. > >>> > There still remains the usability issue of unsophisticated users running >>> > into external indexes and needing to cope with that in one of a myriad of >>> > ways as evidenced by the PEP. One solution proposed and refined at the >>> > EuroPython gathering today has PyPI caching packages from external indexes >>> > *for packages registered with PyPI*. That is: a requirement of registering >>> > your package (and external index URL) with PyPI is that you grant PyPI >>> > permission to cache packages from your index in the central index - a >>> > scenario that is ideal for users. Organisations not wishing to do that >>> > understand that they're the ones causing the pain for users. >>> >>> We can't cache the packages which aren't currently hosted on PyPI. Not in >>> an automatic fashion anyways. We'd need to ensure that their license allows >>> us to do so. The PyPI ToS ensures this when they upload but if they never >>> upload then they've never agreed to the ToS for that artifact. >> >> I didn't state it clearly: this would be opt-in with the project granting >> PyPI permission to perform this caching. Their option is to not do so and >> simply not have a listing on PyPI. > > This is exactly the "packages not hosted on PyPI are second class > citizens" scenario we're trying to *avoid*.
We can't ask a global > community to comply with US export laws just to be listed on the main > community index. > >>> > An extension of this proposal is quite elegant; to reduce the pain of >>> > migration from the current approach to the new, we implement that caching >>> > right now, using the current simple index scraping. This ensures the >>> > packages are available to all clients throughout the transition period. >>> >>> As said above, we can't legally do this automatically, we'd need to ensure >>> that there is a license that grants us distribution rights. >> >> A variation on the above two ideas is to just record the *link* to the >> externally-hosted file from PyPI, rather than that file's content. It is >> more error-prone, but avoids issues of file ownership. > > This is essentially what PEP 470 proposes, except that the link says > "this project is hosted on this external index, check there for the > relevant details" rather than having individual links for every > externally hosted version. > > Cheers, > Nick. and there would be great rejoicing. IIUC conda's binstar does something like this... From r1chardj0n3s at gmail.com Fri Jul 25 15:29:12 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 25 Jul 2014 15:29:12 +0200 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On 25 July 2014 15:21, Nick Coghlan wrote: > On 25 July 2014 23:13, Richard Jones wrote: > > A variation on the above two ideas is to just record the *link* to the > > externally-hosted file from PyPI, rather than that file's content. It is > > more error-prone, but avoids issues of file ownership. > > This is essentially what PEP 470 proposes, except that the link says > "this project is hosted on this external index, check there for the > relevant details" rather than having individual links for every > externally hosted version. > Well, not quite. The PEP proposes a link to a page for an index with arbitrary contents.
The above would link only to packages for the /simple/ name in question. A very small amount of protection against accidents but some protection nonetheless. Also, an installer does not need to go to that external index to find anything - everything is listed in the one place. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 25 15:34:14 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 09:34:14 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On July 25, 2014 at 9:29:14 AM, Richard Jones (r1chardj0n3s at gmail.com) wrote: On 25 July 2014 15:21, Nick Coghlan wrote: On 25 July 2014 23:13, Richard Jones wrote: > A variation on the above two ideas is to just record the *link* to the > externally-hosted file from PyPI, rather than that file's content. It is > more error-prone, but avoids issues of file ownership. This is essentially what PEP 470 proposes, except that the link says "this project is hosted on this external index, check there for the relevant details" rather than having individual links for every externally hosted version. Well, not quite. The PEP proposes a link to a page for an index with arbitrary contents. The above would link only to packages for the /simple/ name in question. A very small amount of protection against accidents but some protection nonetheless. Also, an installer does not need to go to that external index to find anything - everything is listed in the one place. Richard This is still a second mechanism that users have to know and be aware of. The multi index support isn't going away and it is the primary way to support things not hosted on PyPI in every situation *except* the "well I have a publicly available thing, but I don't want to upload it to PyPI for whatever reason" case.
As evidenced by the numbers I really don't think that use case is a big enough use case to warrant its own special mechanisms. Especially given the fact that it forces some bad architecture on the installers. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jul 25 15:43:26 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Jul 2014 23:43:26 +1000 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On 25 July 2014 23:34, Donald Stufft wrote: > On July 25, 2014 at 9:29:14 AM, Richard Jones (r1chardj0n3s at gmail.com) > wrote: > > On 25 July 2014 15:21, Nick Coghlan wrote: >> >> On 25 July 2014 23:13, Richard Jones wrote: >> > A variation on the above two ideas is to just record the *link* to the >> > externally-hosted file from PyPI, rather than that file's content. It is >> > more error-prone, but avoids issues of file ownership. >> >> This is essentially what PEP 470 proposes, except that the link says >> "this project is hosted on this external index, check there for the >> relevant details" rather than having individual links for every >> externally hosted version. > > > Well, not quite. The PEP proposes a link to a page for an index with > arbitrary contents. The above would link only to packages for the /simple/ > name in question. A very small amount of protection against accidents but > some protection nonetheless. Also, an installer does not need to go to that > external index to find anything - everything is listed in the one place. > > This is still a second mechanism that users have to know and be aware of. > The multi index support isn't going away and it is the primary way to > support things not hosted on PyPI in every situation *except* the "well I > have a publicly available thing, but I don't want to upload it to PyPI for > whatever reason" case.
As evidenced by the numbers I really don't think that > use case is a big enough use case to warrant its own special mechanisms. > Especially given the fact that it forces some bad architecture on the > installers. The Linux distros have honed paranoia to a fine art, and even we don't think maintaining explicit package<->repo maps is worth the hassle, especially when end-to-end package signing is a better solution to handling provenance concerns. If people are especially worried about it (especially given we don't have generally usable end-to-end signing yet), then a simpler solution than package <-> repo maps is to have repo (or index, in PyPI terminology) priorities, such that packages from lower priority repos can never take precedence over those from higher priority repos. With yum-plugin-priorities, repos get a default priority of 99, and you can specify an explicit priority in each repo config. This can be used to have a company internal repo take precedence over the Red Hat or community repos, for example. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Jul 25 15:49:43 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 09:49:43 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On July 25, 2014 at 9:43:28 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: On 25 July 2014 23:34, Donald Stufft wrote: > On July 25, 2014 at 9:29:14 AM, Richard Jones (r1chardj0n3s at gmail.com) > wrote: > > On 25 July 2014 15:21, Nick Coghlan wrote: >> >> On 25 July 2014 23:13, Richard Jones wrote: >> > A variation on the above two ideas is to just record the *link* to the >> > externally-hosted file from PyPI, rather than that file's content. It is >> > more error-prone, but avoids issues of file ownership.
>> >> This is essentially what PEP 470 proposes, except that the link says >> "this project is hosted on this external index, check there for the >> relevant details" rather than having individual links for every >> externally hosted version. > > > Well, not quite. The PEP proposes a link to a page for an index with > arbitrary contents. The above would link only to packages for the /simple/ > name in question. A very small amount of protection against accidents but > some protection nonetheless. Also, an installer does not need to go to that > external index to find anything - everything is listed in the one place. > > This is still a second mechanism that users have to know and be aware of. > The multi index support isn't going away and it is the primary way to > support things not hosted on PyPI in every situation *except* the "well I > have a publicly available thing, but I don't want to upload it to PyPI for > whatever reason" case. As evidenced by the numbers I really don't think that > use case is a big enough use case to warrant its own special mechanisms. > Especially given the fact that it forces some bad architecture on the > installers. The Linux distros have honed paranoia to a fine art, and even we don't think maintaining explicit package<->repo maps is worth the hassle, especially when end-to-end package signing is a better solution to handling provenance concerns. If people are especially worried about it (especially given we don't have generally usable end-to-end signing yet), then a simpler solution than package <-> repo maps is to have repo (or index, in PyPI terminology) priorities, such that packages from lower priority repos can never take precedence over those from higher priority repos. With yum-plugin-priorities, repos get a default priority of 99, and you can specify an explicit priority in each repo config. This can be used to have a company internal repo take precedence over the Red Hat or community repos, for example.
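[The yum-plugin-priorities behaviour Nick describes can be sketched as follows; a minimal sketch, assuming each index carries a numeric priority, with names and data shapes invented for illustration.]

```python
# Sketch of priority-based index resolution: lower number = higher
# priority; a package from a lower-priority index can never shadow one
# available from a higher-priority index.
DEFAULT_PRIORITY = 99  # yum's default

def pick_source(project, indexes):
    """indexes: list of (priority, {project: [versions]}) tuples."""
    available = [(prio, idx[project]) for prio, idx in indexes if project in idx]
    if not available:
        return None
    best_prio = min(prio for prio, _ in available)
    # Only candidates from the winning priority tier are considered.
    versions = [v for prio, vs in available if prio == best_prio for v in vs]
    return max(versions)

company = (10, {"mytool": ["1.0"]})
community = (DEFAULT_PRIORITY, {"mytool": ["9.9"]})
print(pick_source("mytool", [company, community]))  # -> 1.0
```

Under this scheme the internal "1.0" wins over the community "9.9" even though the community version number is higher, which is exactly the shadowing protection being discussed.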
Not that this solves it generically, but I've toyed with the idea of a requirements 2.0 file format that included constructions that said things like "for this dependency, mandate it come from a particular index" or the like. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jul 25 15:54:09 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 25 Jul 2014 09:54:09 -0400 Subject: [Distutils] PEP 470 discussion, part 3 In-Reply-To: References: Message-ID: On Fri, Jul 25, 2014 at 9:49 AM, Donald Stufft wrote: > On July 25, 2014 at 9:43:28 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > > On 25 July 2014 23:34, Donald Stufft wrote: >> On July 25, 2014 at 9:29:14 AM, Richard Jones (r1chardj0n3s at gmail.com) >> wrote: >> >> On 25 July 2014 15:21, Nick Coghlan wrote: >>> >>> On 25 July 2014 23:13, Richard Jones wrote: >>> > A variation on the above two ideas is to just record the *link* to the >>> > externally-hosted file from PyPI, rather than that file's content. It >>> > is >>> > more error-prone, but avoids issues of file ownership. >>> >>> This is essentially what PEP 470 proposes, except that the link says >>> "this project is hosted on this external index, check there for the >>> relevant details" rather than having individual links for every >>> externally hosted version. >> >> >> Well, not quite. The PEP proposes a link to a page for an index with >> arbitrary contents. The above would link only to packages for the /simple/ >> name in question. A very small amount of protection against accidents but >> some protection nonetheless. Also, an installer does not need to go to >> that >> external index to find anything - everything is listed in the one place. >> >> This is still a second mechanism that users have to know and be aware of.
>> The multi index support isn't going away and it is the primary way to >> support things not hosted on PyPI in every situation *except* the "well I >> have a publicly available thing, but I don't want to upload it to PyPI for >> whatever reason" case. As evidenced by the numbers I really don't think >> that >> use case is a big enough use case to warrant its own special mechanisms. >> Especially given the fact that it forces some bad architecture on the >> installers. > > The Linux distros have honed paranoia to a fine art, and even we don't > think maintaining explicit package<->repo maps is worth the hassle, > especially when end-to-end package signing is a better solution to > handling provenance concerns. > > If people are especially worried about it (especially given we don't > have generally usable end-to-end signing yet), then a simpler solution > than package <-> repo maps is to have repo (or index, in PyPI > terminology) priorities, such that packages from lower priority repos > can never take precedence over those from higher priority repos. > > With yum-plugin-priorities, repos get a default priority of 99, and > you can specify an explicit priority in each repo config. This can be > used to have a company internal repo take precedence over the Red Hat > or community repos, for example. > > > Not that this solves it generically, but I've toyed with the idea of a > requirements 2.0 file format that included constructions that said things > like "for this dependency, mandate it come from a particular index" or the > like. There was a similar idea in the wheel signatures scheme, where you would include the public keys of the allowed signers alongside the dependency name. The system would have allowed you to install particular built dependencies but only if they were signed by one of a set of signers, preventing you from accidentally installing the wrong thing. At the time I'd proposed extending the array access syntax packagename[keyidentifier=xyzabc...]
used for extras. From dholth at gmail.com Fri Jul 25 16:02:28 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 25 Jul 2014 10:02:28 -0400 Subject: [Distutils] build a wheel with waf instead of setuptools In-Reply-To: References: Message-ID: On Fri, Jul 25, 2014 at 5:51 AM, Daniel Holth wrote: > Here's a little something I cooked up based on the waf (a build > system) playground/package example. It's a build script for wheel > (what else) that builds a .whl for wheel when you run "waf configure" > and then "waf package" with waf 1.8.0. I've tested it in Python 2.7. > > Waf is a build system that, unlike distutils, won't fall over > immediately when you try to extend it. One of its features is support > for building Python extensions, and it is itself written in Python. > > Right now for expedience instead of generating the METADATA or > entry_points.txt from setup() arguments it just copies them from files > at the root, and the command "waf dist" (for producing sdists) > includes too many files but that is easy to fix. In particular I liked > using ant_glob() a lot better than MANIFEST.in. The wscript does not > use MANIFEST.in. > > https://bitbucket.org/dholth/wheel/src/1bbbd010558afe215a01cfd238a26da42ad60802/wscript > > If there are any interested waf-wizards out there then we could take > the wheel building feature and pull it out of the individual wscript, > refine it a bit, and have another non-distutils way to publish Python > packages. > > Daniel Holth This kind of thing will require us to implement a flag that tells pip "setup.py cannot install; go through wheel" which is somewhere in the plans.. 
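[The signer-pinning syntax Daniel describes earlier, reusing the extras bracket form, could be parsed as in the sketch below. The field name `keyidentifier` comes from his example; the regex and function are purely illustrative, not part of any implemented tool.]

```python
# Hypothetical parse of "name[keyidentifier=abc123]" requirement specs,
# separating the project name from the pinned signer key, if any.
import re

SPEC = re.compile(r"^(?P<name>[\w.-]+)\[keyidentifier=(?P<key>[0-9a-fA-F]+)\]$")

def parse_pinned_requirement(spec):
    m = SPEC.match(spec)
    if not m:
        return spec, None          # ordinary requirement, no pinned signer
    return m.group("name"), m.group("key")

print(parse_pinned_requirement("wheel[keyidentifier=deadbeef]"))
# -> ('wheel', 'deadbeef')
```

An installer using such a scheme would then refuse any candidate for that name whose signature did not verify against the pinned key.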
From donald at stufft.io Fri Jul 25 16:20:32 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 10:20:32 -0400 Subject: [Distutils] build a wheel with waf instead of setuptools In-Reply-To: References: Message-ID: On July 25, 2014 at 10:03:01 AM, Daniel Holth (dholth at gmail.com) wrote: On Fri, Jul 25, 2014 at 5:51 AM, Daniel Holth wrote: > Here's a little something I cooked up based on the waf (a build > system) playground/package example. It's a build script for wheel > (what else) that builds a .whl for wheel when you run "waf configure" > and then "waf package" with waf 1.8.0. I've tested it in Python 2.7. > > Waf is a build system that, unlike distutils, won't fall over > immediately when you try to extend it. One of its features is support > for building Python extensions, and it is itself written in Python. > > Right now for expedience instead of generating the METADATA or > entry_points.txt from setup() arguments it just copies them from files > at the root, and the command "waf dist" (for producing sdists) > includes too many files but that is easy to fix. In particular I liked > using ant_glob() a lot better than MANIFEST.in. The wscript does not > use MANIFEST.in. > > https://bitbucket.org/dholth/wheel/src/1bbbd010558afe215a01cfd238a26da42ad60802/wscript > > If there are any interested waf-wizards out there then we could take > the wheel building feature and pull it out of the individual wscript, > refine it a bit, and have another non-distutils way to publish Python > packages. > > Daniel Holth This kind of thing will require us to implement a flag that tells pip "setup.py cannot install; go through wheel" which is somewhere in the plans.. I don't think there are any plans to tell pip *not* to use a setup.py and to use a Wheel instead. Rather I think the plans are to enable pluggable builders so that a sdist 2.0 package doesn't rely on setup.py and could use a waf builder (for instance) plugin. --
Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B?7C5D 5E2B 6356 A926 F04F 6E3C?BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jul 25 16:21:45 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 25 Jul 2014 10:21:45 -0400 Subject: [Distutils] build a wheel with waf instead of setuptools In-Reply-To: References: Message-ID: On Fri, Jul 25, 2014 at 10:20 AM, Donald Stufft wrote: > On July 25, 2014 at 10:03:01 AM, Daniel Holth (dholth at gmail.com) wrote: > > On Fri, Jul 25, 2014 at 5:51 AM, Daniel Holth wrote: >> Here's a little something I cooked up based on the waf (a build >> system) playground/package example. It's a build script for wheel >> (what else) that builds a .whl for wheel when you run "waf configure" >> and then "waf package" with waf 1.8.0. I've tested it in Python 2.7. >> >> Waf is a build system that, unlike distutils, won't fall over >> immediately when you try to extend it. One of its features is support >> for building Python extensions, and it is itself written in Python. >> >> Right now for expedience instead of generating the METADATA or >> entry_points.txt from setup() arguments it just copies them from files >> at the root, and the command "waf dist" (for producing sdists) >> includes too many files but that is easy to fix. In particular I liked >> using ant_glob() a lot better than MANIFEST.in. The wscript does not >> use MANIFEST.in. >> >> >> https://bitbucket.org/dholth/wheel/src/1bbbd010558afe215a01cfd238a26da42ad60802/wscript >> >> If there are any interested waf-wizards out there then we could take >> the wheel building feature and pull it out of the individual wscript, >> refine it a bit, and have another non-distutils way to publish Python >> packages. >> >> Daniel Holth > > This kind of thing will require us to implement a flag that tells pip > "setup.py cannot install; go through wheel" which is somewhere in the > plans.. 
> > I don?t think there is any plans to tell pip *not* to use a setup.py and to > use a Wheel instead. Rather I think the plans are to enable pluggable > builders so that a sdist 2.0 package doesn?t rely on setup.py and could use > a waf builder (for instance) plugin. Just a flag that tells pip it can't use the "install" command and has to do package -> install package on an sdist. From barry at python.org Fri Jul 25 17:10:26 2014 From: barry at python.org (Barry Warsaw) Date: Fri, 25 Jul 2014 11:10:26 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython References: <53D20169.1060205@simplistix.co.uk> Message-ID: <20140725111026.72172ff2@anarchist.wooz.org> On Jul 25, 2014, at 08:46 AM, Donald Stufft wrote: >Yea, I?m not sure whether I like it or not. Probably once we get a for real >build farm for PyPI setup that will be a pretty reasonable sized carrot for >people to upload sources. That's really the right long-term approach, IMO. I'd like to some day see source-only uploads, with wheel builds for various supported platforms appearing "automatically" via the build farm. But ultimately you're right. If we don't have source available from *somewhere* (with PyPI being the most obvious and easiest location for downstreams), then oh well, you won't get your package into some Linux distros, and your users will have to roll their own. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From sontek at gmail.com Fri Jul 25 19:52:25 2014 From: sontek at gmail.com (John M. 
Anderson) Date: Fri, 25 Jul 2014 10:52:25 -0700 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <20140725111026.72172ff2@anarchist.wooz.org> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> Message-ID: <1406310745.4264.7.camel@aladdin> On Fri, 2014-07-25 at 11:10 -0400, Barry Warsaw wrote: > On Jul 25, 2014, at 08:46 AM, Donald Stufft wrote: > > >Yea, I'm not sure whether I like it or not. Probably once we get a for real > >build farm for PyPI setup that will be a pretty reasonable sized carrot for > >people to upload sources. > > That's really the right long-term approach, IMO. I'd like to some day see > source-only uploads, with wheel builds for various supported platforms > appearing "automatically" via the build farm. > > But ultimately you're right. If we don't have source available from > *somewhere* (with PyPI being the most obvious and easiest location for > downstreams), then oh well, you won't get your package into some Linux > distros, and your users will have to roll their own. > I apologize, I have very limited knowledge of the plans for wheels, so this may have been discussed previously: Would a build server help? We use wheels internally for all our deployments for things like gevent, PIL, numpy, pandas, etc. but we have to maintain separate internal indexes and have the developers and operators choose the Ubuntu 14.04, 12.04, 10.04 etc index depending on which stack they are developing. We've solved this in our ansible scripts by detecting the OS it is running on and then generating a pip.conf that goes to the correct index. Is this the expected behavior? Would the build server just generate distro specific indexes and then users would define this in their pip conf to get the correctly built wheels?
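[The detect-the-distro-then-point-pip-at-an-index workflow John describes can be sketched in a few lines of Python. This is illustrative only: the internal index host is invented, and the os-release parsing is deliberately simplified.]

```python
import configparser

def distro_id(os_release_text):
    """Build a distro tag like 'ubuntu_14.04' from /etc/os-release-style text."""
    info = {}
    for line in os_release_text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            # values may or may not be quoted in os-release
            info[key] = value.strip().strip('"')
    return f"{info.get('ID', 'unknown')}_{info.get('VERSION_ID', 'unknown')}"

def write_pip_conf(tag, path):
    """Write a pip.conf pointing at a per-distro index (hypothetical host)."""
    conf = configparser.ConfigParser()
    conf["global"] = {"index-url": f"https://pypi.example.internal/{tag}/simple/"}
    with open(path, "w") as fh:
        conf.write(fh)

sample = 'ID=ubuntu\nVERSION_ID="14.04"\n'
tag = distro_id(sample)  # "ubuntu_14.04"
```

On a real machine you would read `/etc/os-release` instead of the `sample` string; the point is just that the ansible logic John mentions reduces to a small key=value parse plus a templated `index-url`.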
I think if that is the case the build server wouldn't be helpful because you couldn't just `pip install gevent` to get a wheel, you would also have to go discover which operating systems the build server supports and find the index URL to point to. Currently when a wheel is generated it looks like this: Cython-0.20.1-cp27-none-linux_x86_64.whl but since it says "Linux" without distro knowledge there is no way for pip to know that it was built on Ubuntu 14.04 and won't work on my Fedora 17 box. Thanks, John From donald at stufft.io Fri Jul 25 19:57:56 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 13:57:56 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <1406310745.4264.7.camel@aladdin> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> Message-ID: On July 25, 2014 at 1:52:55 PM, John M. Anderson (sontek at gmail.com) wrote: On Fri, 2014-07-25 at 11:10 -0400, Barry Warsaw wrote: > On Jul 25, 2014, at 08:46 AM, Donald Stufft wrote: > > >Yea, I'm not sure whether I like it or not. Probably once we get a for real > >build farm for PyPI setup that will be a pretty reasonable sized carrot for > >people to upload sources. > > That's really the right long-term approach, IMO. I'd like to some day see > source-only uploads, with wheel builds for various supported platforms > appearing "automatically" via the build farm. > > But ultimately you're right. If we don't have source available from > *somewhere* (with PyPI being the most obvious and easiest location for > downstreams), then oh well, you won't get your package into some Linux > distros, and your users will have to roll their own. > I apologize, I have very limited knowledge of the plans for wheels, so this may have been discussed previously: Would a build server help? We use wheels internally for all our deployments for things like gevent, PIL, numpy, pandas, etc.
but we have to maintain separate internal indexes and have the developers and operators choose the Ubuntu 14.04, 12.04, 10.04 etc index depending on which stack they are developing. We've solved this in our ansible scripts by detecting the OS it is running on and then generating a pip.conf that goes to the correct index. Is this the expected behavior? Would the build server just generate distro specific indexes and then users would define this in their pip conf to get the correctly built wheels? I think if that is the case the build server wouldn't be helpful because you couldn't just `pip install gevent` to get a wheel, you would also have to go discover which operating systems the build server supports and find the index URL to point to. Currently when a wheel is generated it looks like this: Cython-0.20.1-cp27-none-linux_x86_64.whl but since it says "Linux" without distro knowledge there is no way for pip to know that it was built on Ubuntu 14.04 and won't work on my Fedora 17 box. Thanks, John _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig No, the plan is to expand those tags so that ``pip install gevent`` would just work. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Fri Jul 25 20:37:32 2014 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 25 Jul 2014 20:37:32 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <1406310745.4264.7.camel@aladdin> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> Message-ID: Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI.
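[For readers following the filename discussion: the `Cython-0.20.1-cp27-none-linux_x86_64.whl` name John quotes is built from dash-separated compatibility tags defined by PEP 427. A rough parser, as a sketch only — real tools should use the implementations in `pip` or `wheel`:]

```python
def parse_wheel_filename(filename):
    """Split a PEP 427 wheel filename into its component tags.

    Note: a distro qualifier such as the proposed "ubuntu_14_04" could not
    simply be appended as an extra dash-separated field, because PEP 427
    fixes the field count; it would have to be folded into the platform tag.
    """
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    name, version = parts[0], parts[1]
    # the last three fields are always: python tag, abi tag, platform tag
    python_tag, abi_tag, platform_tag = parts[-3:]
    return {"name": name, "version": version, "python": python_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = parse_wheel_filename("Cython-0.20.1-cp27-none-linux_x86_64.whl")
# tags["platform"] is "linux_x86_64" -- with no distro information,
# which is exactly the problem being discussed in this thread
```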
That would also mean it's probably unlikely they'll be built there. Something related to this also came up in discussion at europython but I don't want to steal any thunder :-) Sent from my mobile - please excuse any brevity. On 25 Jul 2014 19:53, "John M. Anderson" wrote: > On Fri, 2014-07-25 at 11:10 -0400, Barry Warsaw wrote: > > On Jul 25, 2014, at 08:46 AM, Donald Stufft wrote: > > > > >Yea, I'm not sure whether I like it or not. Probably once we get a for > real > > >build farm for PyPI setup that will be a pretty reasonable sized carrot > for > > >people to upload sources. > > > > That's really the right long-term approach, IMO. I'd like to some day > see > > source-only uploads, with wheel builds for various supported platforms > > appearing "automatically" via the build farm. > > > > But ultimately you're right. If we don't have source available from > > *somewhere* (with PyPI being the most obvious and easiest location for > > downstreams), then oh well, you won't get your package into some Linux > > distros, and your users will have to roll their own. > > > > I apologize, I have very limited knowledge of the plans for wheels, so > this may have been discussed previously: > > Would a build server help? We use wheels internally for all our > deployments for things like gevent, PIL, numpy, pandas, etc. but we > have to maintain separate internal indexes and have the developers and > operators choose the Ubuntu 14.04, 12.04, 10.04 etc index depending on > which stack they are developing. We've solved this in our ansible > scripts by detecting the OS it is running on and then generating a > pip.conf that goes to the correct index. > > Is this the expected behavior? Would the build server just generate > distro specific indexes and then users would define this in their pip > conf to get the correctly built wheels?
> > I think if that is the case the build server wouldn't be helpful because > you couldn't just `pip install gevent` to get a wheel, you would also > have to go discover which operating systems the build server supports > and find the index URL to point to. > > Currently when a wheel is generated it looks like this: > > Cython-0.20.1-cp27-none-linux_x86_64.whl > > but since it says "Linux" without distro knowledge there is no way for > pip to know that it was built on Ubuntu 14.04 and won't work on my Fedora > 17 box. > > > Thanks, > John > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 25 21:06:43 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 15:06:43 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> Message-ID: On July 25, 2014 at 2:37:58 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI. That would also mean it's probably unlikely they'll be built there. Something related to this also came up in discussion at europython but I don't want to steal any thunder :-) I completely plan on making it possible to publish Linux Wheels at some point in the future and I don't believe the binary compat problem on Linux is unable to be overcome. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wichert at wiggy.net Fri Jul 25 21:42:40 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 25 Jul 2014 21:42:40 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> Message-ID: <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> > On 25 Jul 2014, at 21:06, Donald Stufft wrote: > > On July 25, 2014 at 2:37:58 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: >> Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI. That would also mean it's probably unlikely they'll be built there. >> >> Something related to this also came up in discussion at europython but I don't want to steal any thunder :-) >> > > I completely plan on making it possible to publish Linux Wheels at some point in the future and I don't believe the binary compat problem on Linux is unable to be overcome. > I have some experience with Linux distributions, and I am struggling to imagine how you can possibly overcome those problems. There are a large number of reasons why binary compatibility between various different distributions, and different versions of the same distribution, is not possible unless you integrate very tightly with the packaging system, which is something that I don't see being possible with wheels. I would love to hear how you envision solving that. Wichert. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Fri Jul 25 21:44:15 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 15:44:15 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> Message-ID: On July 25, 2014 at 3:42:48 PM, Wichert Akkerman (wichert at wiggy.net) wrote: On 25 Jul 2014, at 21:06, Donald Stufft wrote: On July 25, 2014 at 2:37:58 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI. That would also mean it's probably unlikely they'll be built there. Something related to this also came up in discussion at europython but I don't want to steal any thunder :-) I completely plan on making it possible to publish Linux Wheels at some point in the future and I don't believe the binary compat problem on Linux is unable to be overcome. I have some experience with Linux distributions, and I am struggling to imagine how you can possibly overcome those problems. There are a large number of reasons why binary compatibility between various different distributions, and different versions of the same distribution, is not possible unless you integrate very tightly with the packaging system, which is something that I don't see being possible with wheels. I would love to hear how you envision solving that. Wichert. Include the distro name and version in the compatibility tag, so something like: Cython-0.20.1-cp27-none-linux_x86_64-ubuntu_14_04.whl -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From wichert at wiggy.net Fri Jul 25 21:50:25 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Fri, 25 Jul 2014 21:50:25 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> Message-ID: <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> > On 25 Jul 2014, at 21:44, Donald Stufft wrote: > > On July 25, 2014 at 3:42:48 PM, Wichert Akkerman (wichert at wiggy.net) wrote: >> >>> On 25 Jul 2014, at 21:06, Donald Stufft wrote: >>> >>> On July 25, 2014 at 2:37:58 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: >>>> Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI. That would also mean it's probably unlikely they'll be built there. >>>> >>>> Something related to this also cane up in discussion at europython but I don't want to steal any thunder :-) >>>> >>> >>> I completely plan on making it possible to publish Linux Wheels at some point in the future and I don?t believe the binary compat problem on Linux is unable to be overcome. >>> >> >> I have some experience with Linux distributions, and I am struggling to image how you can possibly overcome those problems. There are a large number of reasons why binary compatibility between various different distributions, and different versions of the same distribution is not possible unless you integrate very tightly with packaging system, which is something that I don?t see being possible with wheels. I would love to hear how you envision solving that. >> >> Wichert. >> > > Include the distro name and version in the compatibility tag, so something like: > > Cython-0.20.1-cp27-none-linux_x86_64-ubuntu_14_04.whl Will that guarantee the OS-provided Python was used? 
Or is there still a risk someone was using a custom compiled Python on an Ubuntu 14.04 system that is not binary compatible with the Ubuntu-provided Python? Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 25 21:55:37 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 25 Jul 2014 15:55:37 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> Message-ID: On July 25, 2014 at 3:50:30 PM, Wichert Akkerman (wichert at wiggy.net) wrote: On 25 Jul 2014, at 21:44, Donald Stufft wrote: On July 25, 2014 at 3:42:48 PM, Wichert Akkerman (wichert at wiggy.net) wrote: On 25 Jul 2014, at 21:06, Donald Stufft wrote: On July 25, 2014 at 2:37:58 PM, Richard Jones (r1chardj0n3s at gmail.com) wrote: Linux wheels are generally not compatible in a non-local sense, so it's unlikely those will be distributable through PyPI. That would also mean it's probably unlikely they'll be built there. Something related to this also cane up in discussion at europython but I don't want to steal any thunder :-) I completely plan on making it possible to publish Linux Wheels at some point in the future and I don?t believe the binary compat problem on Linux is unable to be overcome. I have some experience with Linux distributions, and I am struggling to image how you can possibly overcome those problems. There are a large number of reasons why binary compatibility between various different distributions, and different versions of the same distribution is not possible unless you integrate very tightly with packaging system, which is something that I don?t see being possible with wheels. 
I would love to hear how you envision solving that. Wichert. Include the distro name and version in the compatibility tag, so something like: Cython-0.20.1-cp27-none-linux_x86_64-ubuntu_14_04.whl Will that guarantee the OS-provided Python was used? Or is there still a risk someone was using a custom compiled Python on an Ubuntu 14.04 system that is not binary compatible with the Ubuntu-provided Python? Wichert. No, it won't guarantee the OS-provided Python is used. It doesn't even guarantee that the OS provided libs are being linked to.
However at that point you've more or less reached parity with Windows and OSX where Wheels (and Eggs before them) are generally built to target the "System" Python and if you're not using the "System" Python you might end up having a bad time. As Donald says, we haven't worked out the exact technical details yet, but the general idea is that just as a Windows or Mac OS X wheel published on PyPI should be binary compatible with the corresponding python.org installer, it should be possible to get the relevant info into wheel filenames for people to be able to indicate the expected contents of "/etc/os-release". That's probably *over* specifying things, but Linux distros vary enough that anything coarser grained would likely be unreliable if used directly with the system Python. Another way to go, if softwarecollections.org (aka "virtualenv for your distro package manager") gets any traction whatsoever on the Debian/Ubuntu side of the world, would be to publish wheels based on *software collection* compatibility, rather than distro compatibility. That would be appealing to me wearing both my packaging hats - making prebuilt binaries for libraries more readily available to Python users in a way that maps to Python versions rather than distro variants, while at the same time encouraging them to get their end user applications the heck out of the system Python so they aren't tied so closely to the distro upgrade cycle :) Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jul 26 05:10:28 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 26 Jul 2014 03:10:28 +0000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: Message-ID: <20140726031028.GZ1251@yuggoth.org> On 2014-07-24 11:57:24 -0400 (-0400), Donald Stufft wrote: > This is gonna make openstack sad I think?
They were relying on the > fact that pip prior to 1.4 didn?t install Wheels, and pip 1.4+ has > the pre-releases are excluded by default logic to publish > pre-releases safely to PyPI. [...] FWIW I don't think it'll be painful for very much longer... the need to accommodate pip<1.4 will dwindle now that Ubuntu has released an LTS with a later version and RHEL 7 is out as well. However it *would* be great if there was some official means of safely uploading prerelease versions to PyPI which users of old pip won't automatically pull down in an upgrade so we don't have to rely on the current hack (which I agree is really just abusing unintended behavior differences between older and newer pip versions), until a majority of the ecosystem has stopped running with <1.4. -- Jeremy Stanley From wichert at wiggy.net Sat Jul 26 07:27:03 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Sat, 26 Jul 2014 07:27:03 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> Message-ID: <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> > On 26 Jul 2014, at 00:08, Nick Coghlan wrote: > > > On 26 Jul 2014 05:56, "Donald Stufft" wrote: > > > > On July 25, 2014 at 3:50:30 PM, Wichert Akkerman (wichert at wiggy.net) wrote: > >> Will that guarantee the OS-provided Python was used? Or is there still a risk someone was using a custom compiled Python on an Ubuntu 14.04 system that is not binary compatible with the Ubuntu-provided Python? > >> > >> Wichert. > >> > > > > No It won?t guarantee the OS-provided Python is used. It doesn?t even guarantee that the OS provided libs are being linked to. 
However at that point you've more or less reached parity with Windows and OSX where Wheels (and Eggs before them) are generally built to target the "System" Python and if you're not using the "System" Python you might end up having a bad time. > > As Donald says, we haven't worked out the exact technical details yet, but the general idea is that just as a Windows or Mac OS X wheel published on PyPI should be binary compatible with the corresponding python.org installer, it should be possible to get the relevant info into wheel filenames for people to be able to indicate the expected contents of "/etc/os-release". > I suspect that for Linux you mean "system-provided Python"? Looking at https://www.python.org/downloads/release/python-341/ there is no python.org binary installer for Linux. Even if there was I would expect only a small number of people to use that over the system-provided version, considering Linux distributions generally do an excellent job packaging Python. An /etc/os-release approach does sound like a reasonable approach. Will that also allow me to put "macports" or "homebrew" in there so you can create a Cython-0.20.1-cp27-none-macosx_x86_64-macports_10.10.whl distribution? That might be useful. It would be slightly abusing the spec though since macports does not have releases itself so we're mixing an OSX version number in, but that does not feel any worse than the other binary compatibility guarantees. For Linux why not use the existing /etc/lsb_release instead of creating a new file? > Another way to go, if softwarecollections.org (aka "virtualenv for your distro package manager") gets any traction whatsoever on the Debian/Ubuntu side of the world, would be to publish wheels based on *software collection* compatibility, rather than distro compatibility. > I had never heard of that; I'll need to take a look at it. A quick look at the website doesn't really tell me what problem it is trying to solve or how it works.
It might be in the same space as things like debootstrap, docker or PPAs. Or perhaps NixOS really is a better way to handle this. > That would be appealing to me wearing both my packaging hats - making prebuilt binaries for libraries more readily available to Python users in a way that maps to Python versions rather than distro variants, while at the same time encouraging them to get their end user applications the heck out of the system Python so they aren't tied so closely to the distro upgrade cycle :) > Isn't that already handled in the Debian/Ubuntu world with PPAs? Those do make it completely trivial to install extra things on a system, and you could easily make PPAs that provide custom builds that don't overlap the OS. Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Jul 26 08:54:52 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Jul 2014 16:54:52 +1000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> Message-ID: On 26 July 2014 15:27, Wichert Akkerman wrote: > I suspect that for Linux you mean "system-provided Python"? Looking at > https://www.python.org/downloads/release/python-341/ there is no python.org > binary installer for Linux. Even if there was I would expect only a small > number of people to use that over the system-provided version, considering > Linux distributions generally do an excellent job packaging Python. Yes, by "system Python" on Linux, I mean the distro provided one.
(Technically Apple provide one as well, but binary compatibility there is still governed by the python.org installer rather than the Apple version) > An /etc/os-release approach does sound like a reasonable approach. Will that > also allow me to put ?macports? or ?homebrew? in there you can create an > Cython-0.20.1-cp27-none-lmacosx_x86_64-macports_10.10.whl distribution? That > might be useful. It would be slightly abusing the spec though since macports > does not have releases itself so we?re mixing an OSX version number in, but > that does not feel any worse than the other binary compatibility guarantees. > > For Linux why not use the existing /etc/lsb_release instead of creating a > new file? Distros don't necessarily install their LSB support by default, and the design is really weird for looking up some basic distro info. So the freedesktop.org folks came up with os-release a couple of years ago - see http://0pointer.de/blog/projects/os-release for more details. While the format was originally motivated by systemd's needs, Debian adopted /etc/os-release long before it adopted systemd. > Another way to go, if softwarecollections.org (aka "virtualenv for your > distro package manager") gets any traction whatsoever on the Debian/Ubuntu > side of the world, would be to publish wheels based on *software collection* > compatibility, rather than distro compatibility. > > I had never heard of that; I?ll need to take a look at it. A quick look at > the website doesn?t really tell me what problem it is trying to solve or how > it works. It might be in the same space as things like debootstrap, docke or > PPSar. Or perhaps NixOS really is a better way to handle this. It's designed to provide a standard way to install independent runtimes (e.g. language interpreters, databases, web servers) in parallel with the system ones. 
In terms of level of isolation from the host OS, it's at the same level as rolling your own custom package, just with some naming conventions to help avoid conflicts, rather than the greater separation provided by things like Docker containers. The tooling is very, very Fedora/RHEL/CentOS specific at the moment, but the core concepts apply to any package manager, since it's ultimately just some conventions around installing things into /opt rather than into the normal system directories. The distro specific nature of the system is why I suspect something based on /etc/os-release is a far more likely path for wheel to take (since Debian/Ubuntu et al already support it), but the idea of greater standardisation of runtimes across distros is still a nice dream. > That would be appealing to me wearing both my packaging hats - making > prebuilt binaries for libraries more readily available to Python users in a > way that maps to Python versions rather than distro variants, while at the > same time encouraging them to get their end user applications the heck out > of the system Python so they aren't tied so closely to the distro upgrade > cycle :) > > Isn?t that already handle in the Debian/Ubuntu world with PPAs? Those do > make it completely trivial to install extra things on a system, and you > could easily make PPAs that provide custom builds that don?t overlap the OS. Yes, software collections are essentially just a more formalised way of handling parallel installs. For example, the Python 3.5 nightlies that Miro & Slavek now publish for Fedora are delivered as a software collection, so they don't interfere with the system Python 3 installation: https://lists.fedoraproject.org/pipermail/devel/2014-July/200791.html As Miro's post about that illustrates, one key benefit of standardising the "custom package" approach as SCLs is that it makes it possible to build additional tooling that assumes that particular approach to parallel installs. 
In our case, by SCL enabling other packages, we make it possible to test those packages against Python 3.5, even though it hasn't been released yet (and ditto with unreleased pip, setuptools and wheel commits). Users don't have to mess about manually figuring out how to build extensions against a different version of CPython, they can just use the SCL utilities. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ronaldoussoren at mac.com Sat Jul 26 10:28:47 2014 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sat, 26 Jul 2014 10:28:47 +0200 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> Message-ID: <933D831B-4D8D-41F2-93A3-C2128C61247A@mac.com> On 26 Jul 2014, at 08:54, Nick Coghlan wrote: > On 26 July 2014 15:27, Wichert Akkerman wrote: >> I suspect that for Linux you mean 'system-provided Python'? Looking at >> https://www.python.org/downloads/release/python-341/ there is no python.org >> binary installer for Linux. Even if there was I would expect only a small >> number of people to use that over the system-provided version, considering >> Linux distributions generally do an excellent job packaging Python. > > Yes, by "system Python" on Linux, I mean the distro provided one. (Technically Apple provide one as well, but binary compatibility there is still governed by the python.org installer rather than the Apple version) And the Apple one uses the same build flags as the python.org installer, at least for those flags that affect binary compatibility (such as the width of unicode characters on older Python versions).
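As an aside on those build flags and compatibility tags: the platform string a given interpreter build reports (which on OSX reflects the release it targets) can be inspected from the stdlib, and wheel filenames use that same string with '-' and '.' mapped to '_'. A small sketch (standard sysconfig API; the helper name is my own):

```python
import sysconfig

def wheel_platform_tag():
    # sysconfig reports e.g. 'macosx-10.9-x86_64' (reflecting the OSX release
    # the build targets) or 'linux-x86_64'; wheel platform tags are the same
    # string with '-' and '.' replaced by '_'.
    return sysconfig.get_platform().replace("-", "_").replace(".", "_")

print(wheel_platform_tag())
```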
AFAIK all other Python distributions on OSX do the same, which should make extensions portable between them. Thus, the only thing that affects the compatibility of wheels is the OSX release you target, assuming that the extensions only use system provided libraries. All bets are off when you upload wheels with extensions that dynamically link with other libraries (such as a homebrew package for MySQL). I guess the same is true for Linux releases: as long as the wheel only dynamically links with libc, and not with other libraries, linux wheels should also work cross-platform between old systems with a compatible libc. I have no idea how useful that would be though, I do use Linux a lot but have a homogeneous set of servers and don't really have a need to distribute binaries across different linux distributions (or releases of them). Being in enterprise IT does have its advantages once in a while :-) Ronald From ncoghlan at gmail.com Sat Jul 26 13:34:57 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Jul 2014 21:34:57 +1000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <933D831B-4D8D-41F2-93A3-C2128C61247A@mac.com> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> <933D831B-4D8D-41F2-93A3-C2128C61247A@mac.com> Message-ID: On 26 July 2014 18:28, Ronald Oussoren wrote: > > On 26 Jul 2014, at 08:54, Nick Coghlan wrote: >> Yes, by "system Python" on Linux, I mean the distro provided one.
>> (Technically Apple provide one as well, but binary compatibility there >> is still governed by the python.org installer rather than the Apple >> version) > > And the Apple one uses the same build flags as the python.org installer, at least > for those flags that affect binary compatibility (such as the width of unicode characters > on older Python versions). AFAIK all other Python distributions on OSX do the same, > which should make extensions portable between them. > > Thus, the only thing that affects the compatibility of wheels is the OSX release you > target, assuming that the extensions only use system provided libraries. All bets are > off when you upload wheels with extensions that dynamically link with other libraries (such > as a homebrew package for MySQL). > > I guess the same is true for Linux releases: as long as the wheel only dynamically links > with libc, and not with other libraries, linux wheels should also work cross-platform between > old systems with a compatible libc. I have no idea how useful that would be though, I do > use Linux a lot but have a homogeneous set of servers and don't really have a need to > distribute binaries across different linux distributions (or releases of them). Being in enterprise > IT does have its advantages once in a while :-) Indeed. This is actually one of the reasons I spend a lot of time translating between upstream folks and downstream folks, since the kinds of things that make them lose sleep at night are so radically different - when it comes to distribution technologies, one size emphatically does not fit all :) Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From Steve.Dower at microsoft.com Sat Jul 26 15:49:38 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 26 Jul 2014 13:49:38 +0000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> , <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> Message-ID: <12bf188246644b169bc8342d257e1bd9@DM2PR0301MB0734.namprd03.prod.outlook.com> "Will that also allow me to put 'macports' or 'homebrew' in there so you can create a Cython-0.20.1-cp27-none-macosx_x86_64-macports_10.10.whl distribution?" Just how many wheels are people going to have to publish? Who has that many dev machines? Without a build farm, I can't see this being more helpful than frustrating (and the build farm is going to need a way to automatically get non-Python dependencies, which is not our business, so that's a long way off IMO). An organisation building their own wheels in-house and configuring their machines to use an OS specific index/wheelhouse sounds like a much more feasible idea that works now and could do with more traction and encouragement. (Someone mentioned earlier that they do this - apologies for forgetting who.) Cheers, Steve Top-posted from my Windows Phone ________________________________ From: Wichert Akkerman Sent: 7/25/2014 22:27 To: Nick Coghlan Cc: DistUtils mailing list Subject: Re: [Distutils] Other ideas from today's packaging meetup at EuroPython On 26 Jul 2014, at 00:08, Nick Coghlan > wrote: On 26 Jul 2014 05:56, "Donald Stufft" > wrote: > > On July 25, 2014 at 3:50:30 PM, Wichert Akkerman (wichert at wiggy.net) wrote: >> Will that guarantee the OS-provided Python was used?
Or is there still a risk someone was using a custom compiled Python on an Ubuntu 14.04 system that is not binary compatible with the Ubuntu-provided Python? >> >> Wichert. >> > > No, it won't guarantee the OS-provided Python is used. It doesn't even guarantee that the OS provided libs are being linked to. However at that point you've more or less reached parity with Windows and OSX where Wheels (and Eggs before them) are generally built to target the 'System' Python and if you're not using the 'System' Python you might end up having a bad time. As Donald says, we haven't worked out the exact technical details yet, but the general idea is that just as a Windows or Mac OS X wheel published on PyPI should be binary compatible with the corresponding python.org installer, it should be possible to get the relevant info into wheel filenames for people to be able to indicate the expected contents of "/etc/os-release". I suspect that for Linux you mean 'system-provided Python'? Looking at https://www.python.org/downloads/release/python-341/ there is no python.org binary installer for Linux. Even if there was I would expect only a small number of people to use that over the system-provided version, considering Linux distributions generally do an excellent job packaging Python. An /etc/os-release approach does sound reasonable. Will that also allow me to put 'macports' or 'homebrew' in there so you can create a Cython-0.20.1-cp27-none-macosx_x86_64-macports_10.10.whl distribution? That might be useful. It would be slightly abusing the spec though since macports does not have releases itself so we're mixing an OSX version number in, but that does not feel any worse than the other binary compatibility guarantees. For Linux why not use the existing /etc/lsb_release instead of creating a new file?
Another way to go, if softwarecollections.org (aka "virtualenv for your distro package manager") gets any traction whatsoever on the Debian/Ubuntu side of the world, would be to publish wheels based on *software collection* compatibility, rather than distro compatibility. I had never heard of that; I'll need to take a look at it. A quick look at the website doesn't really tell me what problem it is trying to solve or how it works. It might be in the same space as things like debootstrap, docker or PPAs. Or perhaps NixOS really is a better way to handle this. That would be appealing to me wearing both my packaging hats - making prebuilt binaries for libraries more readily available to Python users in a way that maps to Python versions rather than distro variants, while at the same time encouraging them to get their end user applications the heck out of the system Python so they aren't tied so closely to the distro upgrade cycle :) Isn't that already handled in the Debian/Ubuntu world with PPAs? Those do make it completely trivial to install extra things on a system, and you could easily make PPAs that provide custom builds that don't overlap the OS. Wichert. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Sat Jul 26 17:01:30 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 26 Jul 2014 11:01:30 -0400 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <12bf188246644b169bc8342d257e1bd9@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> <12bf188246644b169bc8342d257e1bd9@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: <09B8294E-A430-478D-A6A2-1FE399167D96@stufft.io> Well you don't have to publish any, but it will be a personal decision of each project. Most likely it will be based on how annoying it is for them to build for any particular matrix item, how hard a compiler setup is for that matrix item, and how many people they have using their project on that matrix item. A PyPI build farm will ideally make the answer to the first question be "really easy" and from that point it just matters if they think it's useful to have a wheel for that build target. > On Jul 26, 2014, at 9:49 AM, Steve Dower wrote: > > Just how many wheels are people going to have to publish?
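To make the size of that matrix concrete: PEP 427 wheel filenames combine a python tag, an ABI tag and a platform tag, so the per-release count is simply the product of the lists a project chooses to support. A quick sketch (the tag lists below are illustrative, not a recommendation):

```python
import itertools

def wheel_filenames(name, version, python_tags, abi_tags, platform_tags):
    """Enumerate PEP 427 wheel filenames for a build matrix."""
    for py, abi, plat in itertools.product(python_tags, abi_tags, platform_tags):
        yield "{0}-{1}-{2}-{3}-{4}.whl".format(name, version, py, abi, plat)

names = list(wheel_filenames(
    "Cython", "0.20.1",
    ["cp27", "cp33", "cp34"],                       # CPython versions supported
    ["none"],                                       # ABI tag
    ["win32", "win_amd64", "macosx_10_9_x86_64"],   # platforms built for
))
# 3 python tags x 1 ABI tag x 3 platform tags = 9 wheels for a single release
```

Add a couple of distro-specific platform tags and the count climbs quickly, which is the build-farm argument in a nutshell.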
From ncoghlan at gmail.com Sun Jul 27 02:55:15 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 27 Jul 2014 10:55:15 +1000 Subject: [Distutils] Other ideas from today's packaging meetup at EuroPython In-Reply-To: <12bf188246644b169bc8342d257e1bd9@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <53D20169.1060205@simplistix.co.uk> <20140725111026.72172ff2@anarchist.wooz.org> <1406310745.4264.7.camel@aladdin> <0B37A424-A20C-4231-AD78-FC7C3F55C5A9@wiggy.net> <68E4E74B-3F09-4F08-BD0A-F371C1344EFE@wiggy.net> <6DC850AC-3402-48D0-A39E-81662B9ACFF3@wiggy.net> <12bf188246644b169bc8342d257e1bd9@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: On 26 July 2014 23:49, Steve Dower wrote: > "Will that also allow me to put ?macports? or ?homebrew? in there you can > create an Cython-0.20.1-cp27-none-lmacosx_x86_64-macports_10.10.whl > distribution?" > > Just how many wheels are people going to have to publish? Who has that many > dev machines? Without a build farm, I can't see this being more helpful than > frustrating (and the build farm is going to need a way to automatically get > non-Python dependencies, which is not our business, so that's a long way off > IMO). Yeah, I actually suspect feeding into a PyPI PPA (Ubuntu) and COPR (Fedora/RHEL/CentOS) may be a more sensible model for the Linux communities, with conda/binstar as a way to become *completely* independent of the distro package managers (i.e. handling the delivery of the Python runtime and other external dependencies as well, not just the Python modules). That way we'd be offloading the task of providing the appropriate external dependencies to Launchpad/COPR for the integrated libraries, and to binstar for the distro independent ones. 
Potentially enabling that kind of approach is one of the reasons I've emphasised a redistributor friendly design for metadata 2.0 - not everybody wants to do their own system integration the way SaaS developers do, and one of the advantages of having both pip and conda available as tools is that we can stake out two different positions on the "redistributor friendliness" scale at the same time: - pip can focus on managing packages *within* an existing Python installation. It's applicable regardless of how you obtained that Python installation in the first place, and hence plays nice with redistributors (who each have their own way of providing Python and external dependencies) - conda can focus on being a compelling "cross platform platform" for delivery direct to end users, bypassing the other redistributor channels So pip becomes a tool for cooperating with redistributors, while conda becomes a tool for competing with them. I actually had some serious discussions with the conda folks at SciPy about facilitating closer collaboration between them and PyPA, and came to an explicit conclusion that it made sense for conda to remain a distinct community that works-with-but-is-not-part-of PyPA - not only is conda itself a complex beast with multiple subcommunities, but it's also in far more direct competition with the redistributor channels than pip is :) (Hmm, I don't think I passed along the details of my SciPy keynote to the list. For anyone interested, the talk is at http://pyvideo.org/video/2785/python-beyond-cpython-adventures-in-software-dis and notes are at https://bitbucket.org/ncoghlan/misc/src/default/talks/2014-07-scipy/talk.md?at=default) > An organisation building their own wheels in-house and configuring their > machines to use an OS specific index/wheelhouse sounds like a much more > feasible idea that works now and could do with more traction and > encouragement. (Someone mentioned earlier that they do this - apologies for > forgetting who.) 
Yeah, this is the model we were intending to encourage when we blocked wheel uploads to PyPI for platforms other than Windows and Mac OS X. It avoids a lot of the gnarly dependency management problems. Unfortunately, there's no generally accepted technique for automating it at this point (although there's an issue open suggesting the addition of a feature along these lines to devpi: https://bitbucket.org/hpk42/devpi/issue/110/build-put-wheel-for-pypi-released-package) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rasky at develer.com Mon Jul 28 17:01:58 2014 From: rasky at develer.com (Giovanni Bajo) Date: Mon, 28 Jul 2014 17:01:58 +0200 Subject: [Distutils] PEP draft on PyPI/pip package signing Message-ID: Hello, in March 2013, on the now-closed catalog-sig mailing-list, I submitted a proposal for fixing several security problems in PyPI, pip and distutils[1]. Some of my proposals were obvious things like downloading packages through SSL, which was already in the process of being designed and implemented. Others, like GPG package signing, were discussed for several days/weeks, but ended up in discussion paralysis because of the upcoming TUF framework. 16 months later, we still don't have a deployed solution for letting people install signed packages. I see that TUF is evolving, and there is now a GitHub project with documentation, but I am very worried about the implementation timeline.
I was also pointed to PEP458, which I tried to read and found very confusing; the PEP assumes that the reader is familiar with the TUF academic paper (which I always found quite convoluted per se), and goes into an analysis of integrating TUF with PyPI; to the best of my understanding, the PEP does not provide a clear answer to practical questions like: * what a maintainer is supposed to do to submit a new signed package * how different maintainers can signal that they both maintain the same package * how the user interface of PyPI will change * what security maintenance will need to be regularly performed by the PyPI ops I'm not saying that the TUF team has no answers to these questions (in fact, I'm 100% sure of the opposite); I'm saying that the PEP doesn't clearly provide such answers. I think the PEP is very complicated to read as it goes into integration details between the TUF architecture and PyPI, and thus it is very complicated to review and accept. I would love the PEP to be updated to provide an overview of the *practical* effects of the integration of TUF within PyPI/pip, one that is fully readable to somebody with zero previous knowledge of TUF. As suggested by Richard Jones during EuroPython, I isolated the package signing sections from my original document, evolved them a little bit, and rewrote them in PEP format: https://gist.github.com/rasky/bd91cf01f72bcc931000 To the best of my recollection, in the previous review round there were no critical issues found in the design. It might well be that TUF provides more security in some of the described attack scenarios; on the other hand, my proposal: * is in line with the security of (e.g.)
existing Linux distros * is very simple to review, analyze and discuss for anybody with even a basic understanding of security * is much simpler than TUF * is a clear step forward from the current situation * covers areas not covered by PEP458 (e.g.: increasing security of account management on PyPI) * can be executed in 2-3 months (to the alpha / pre-review stage), and I volunteer for the execution. I thus solicit a second round of review of my proposal; if you want me to upload it to Google Docs for easier commenting, I can do that as well. I would love to get the PEP to its final form and then ask for a pronouncement. I apologize in advance if I made technical mistakes in the PEP format/structure; it is my first PEP. [1] See here: https://docs.google.com/a/develer.com/document/d/1DgQdDCZY5LiTY5mvfxVVE4MTWiaqIGccK3QCUI8np4k/edit# -- Giovanni Bajo :: rasky at develer.com Develer S.r.l. :: http://www.develer.com My Blog: http://giovanni.bajo.it -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4207 bytes Desc: not available URL: From p.f.moore at gmail.com Mon Jul 28 21:13:35 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 28 Jul 2014 20:13:35 +0100 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: Message-ID: On 28 July 2014 16:01, Giovanni Bajo wrote: > I thus solicit a second round of review of my proposal; if you want me to upload it to Google Docs for easier commenting, I can do that as well. > I would love to get the PEP to its final form and then ask for a pronouncement. I have only scanned the initial part of the proposal thus far, but I have some comments. First of all, the proposal is well-written - I'm not a security expert and my eyes very rapidly glaze over when reading security documents. (I know what you mean about the TUF docs!)
Your PEP was pretty easy for me to follow, so many thanks for that :-) Comments on the content: * I assume that installation of unsigned packages would not need GPG or any form of key download, and would work as now. That's crucial, and without that, the proposal is a non-starter (consider pip running in an environment not connected to the internet, installing local builds that don't need signing and haven't been). * I didn't look at how the signature metadata was supplied, but I assume it would only be served from full indexes and not from --find-links locations (relevant in the above scenario). * I am strongly against pip depending on an external GPG. Even though it may be a simple install, it may not be allowed in locked-down environments, and on virtual machines and testing services (like Travis or Appveyor) installing may be non-trivial or simply an annoying extra step. * Given that this leaves a pure-python GPG implementation, does one exist? Is it robust? I wouldn't want to rely on a low-quality implementation. * Also, would it be fast enough? Speed of building virtualenvs has always been something users care about (it was behind the development of wheels for example) and a key-checking step could slow down builds noticeably. * Other tools would need changing as well. There's distlib, and PyPI mirroring tools like devpi. * There will always need to be an option to install unsigned packages, even if it's only for local packages served up by a private index. Also, a couple of points that are more related to the general idea of "everything should be signed" - something that I don't disagree with but I do have opinions on, as someone who's never actually published anything on PyPI but probably will someday (I have an endless stream of "nearly ready" stuff...)
and who fears that the expectations that maintainers are at least minimally organised might just exclude him ;-) * I didn't see a discussion of what happens if a maintainer loses his GPG key (not compromised, just lost - say he accidentally deleted his only copy). Would he have to generate a new one and re-sign everything? How would that affect users of his packages? * Also what about a maintainer working on a different PC where he doesn't have his key available? I guess the answer there may be "tough, you can't maintain your package without your key available". (How secure would services like SkyDrive and Dropbox be considered in that regard? I normally work between 2 PCs where the only reliable shared resource I have is SkyDrive). These aren't directly related to the specifics of the proposal itself, but might warrant a section addressing them... Paul From noah at coderanger.net Mon Jul 28 21:19:44 2014 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 28 Jul 2014 12:19:44 -0700 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: Message-ID: <0E2582E2-A5F4-4160-9765-417651F14138@coderanger.net> To be clear, this adds literally no security. It adds some developer experience improvement in that you could do releases securely even while PyPI is unreachable. This has been repeatedly stated as cute but not a huge priority for PyPI. Our uptime has been good enough for the last several years that this doesn't address enough of a niche to be worth the _massive_ increase in complexity. Any solution that involves both online keys and an RBAC/trust list distributed from PyPI will share these properties. Adding offline keys puts you back in the realm of TUF, and does potentially add some security benefits, though they are way, way down the long tail. Overall strong -1.
--Noah On Jul 28, 2014, at 8:01 AM, Giovanni Bajo wrote: > [...] > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Mon Jul 28 21:26:28 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 28 Jul 2014 20:26:28 +0100 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: <0E2582E2-A5F4-4160-9765-417651F14138@coderanger.net> References: <0E2582E2-A5F4-4160-9765-417651F14138@coderanger.net> Message-ID: On 28 July 2014 20:19, Noah Kantrowitz wrote: > To be clear, this adds literally no security. Really? For my education, could you clarify? Is this because we can assume (with https) that every step between the developer uploading to PyPI and the user downloading to his local PC is secured? Paul From noah at coderanger.net Mon Jul 28 21:31:21 2014 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 28 Jul 2014 12:31:21 -0700 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <0E2582E2-A5F4-4160-9765-417651F14138@coderanger.net> Message-ID: The critical path on the current system is "you request the package index or package file itself from https://pypi.python.org and assert that it is correct because the certificate verifies". In the proposed system the critical path is "you request the trust file from https://pypi.python.org and assert that it is correct because the certificate verifies". As you might note, these are functionally equivalent. If you can break one, you can break the other. --Noah On Jul 28, 2014, at 12:26 PM, Paul Moore wrote: > On 28 July 2014 20:19, Noah Kantrowitz wrote: >> To be clear, this adds literally no security. > > Really? For my education, could you clarify? Is this because we can > assume (with https) that every step between the developer uploading to > PyPI and the user downloading to his local PC is secured? > > Paul -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mal at egenix.com Mon Jul 28 21:43:30 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 28 Jul 2014 21:43:30 +0200 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: Message-ID: <53D6A7E2.1060301@egenix.com> Hi Giovanni, this sounds like a good proposal. I only have one nit with it: GPG signing should not be made mandatory for package authors, but instead just be encouraged. Not everyone is happy with the way GPG works, or want to maintain their private keys and it's illegal to use / can cause problems in some countries: http://www.cryptolaw.org/cls-sum.htm Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 28 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ On 28.07.2014 17:01, Giovanni Bajo wrote: > Hello, > > on March 2013, on the now-closed catalog-sig mailing-list, I submitted a proposal for fixing several security problems in PyPI, pip and distutils[1]. Some of my proposals were obvious things like downloading packages through SSL, which was already in progress of being designed and implemented. Others, like GPG package signing, were discussed for several days/weeks, but ended up in discussion paralysis because of the upcoming TUF framework. 
> > 16 months later, we still don?t have a deployed solution for letting people install signed packages. I see that TUF is evolving, and there is now a GitHub project with documentation, but I am very worried about the implementation timeline. > > I was also pointed to PEP458, which I tried to read and found it very confusing; the PEP assumes that the reader must be familiar with the TUF academic paper (which I always found quite convoluted per-se), and goes with an analysis of integration of TUF with PyPI; to the best of my understanding, the PEP does not provide a clear answer to practical questions like: > > * what a maintainer is supposed to do to submit a new signed package > * how can differ maintainers signal that they both maintain the same package > * how the user interface of PyPI will change > * what are the required security maintenance that will need to be regularly performed by the PyPI ops > > I?m not saying that the TUF team has no answers to these questions (in fact, I?m 100% sure of the opposite); I?m saying that the PEP doesn?t clearly provide such answers. I think the PEP is very complicated to read as it goes into integration details between the TUF architecture and PyPI, and thus it is very complicated to review and accept. I would love the PEP to be updated to provide an overview on the *practical* effects of the integration of TUF within PyPI/pip, that must be fully readable to somebody with zero previous knowledge of TUF. > > As suggested by Richard Jones during EuroPython, I isolated the package signing sections from my original document, evolved them a little bit, and rewritten them in PEP format: > https://gist.github.com/rasky/bd91cf01f72bcc931000 > > To the best of my recollection, in the previous review round, there were no critical issues found in the design. It might well be that TUF provides more security in some of the described attack scenarios; on the other hand, my proposal: > > * is in line with the security of (e.g..) 
existing Linux distros
> * is very simple to review, analyze and discuss for anybody with even a basic understanding of security
> * is much simpler than TUF
> * is a clear step forward from the current situation
> * covers areas not covered by PEP458 (e.g.: increasing the security of account management on PyPI)
> * can be executed in 2-3 months (to the alpha / pre-review stage), and I volunteer for the execution.
>
> I thus solicit a second round of review of my proposal; if you want me to upload it to Google Docs for ease of commenting, I can do that as well. I would love to get the PEP to its final form and then ask for a pronouncement.
>
> I apologize in advance if I made technical mistakes in the PEP format/structure; it is my first PEP.
>
> [1] See here: https://docs.google.com/a/develer.com/document/d/1DgQdDCZY5LiTY5mvfxVVE4MTWiaqIGccK3QCUI8np4k/edit#
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

From donald at stufft.io Mon Jul 28 22:26:41 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 16:26:41 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID:

On July 28, 2014 at 1:42:54 PM, Giovanni Bajo (rasky at develer.com) wrote:
> I thus solicit a second round of review of my proposal; if you want me to upload it to Google
> Docs for ease of commenting, I can do that as well. I would love to get the PEP to its final
> form and then ask for a pronouncement.

In my eyes your proposal is actually three proposals:

* Use GPG to have authors sign packages and publish a trust file to use in
  verifying those signatures.
* Implement 2FA authentication.
* Implement better notifications when security-sensitive events occur.

I'm going to split up my feedback based on those three ideas, as well as give some general overall feedback.
To be clear, I'm strongly -1 on the package signing portions of this PEP; on the other portions I'm strongly +1, but they don't require a PEP.

General
-------

Overall the main benefit that would be achieved in this proposal comes from the 2FA and increased notification/auditability. The package signing exists mostly as an attractive nuisance which doesn't really win us much for the increased complexity.

It's true that the TUF PEP, and TUF in general, is not particularly approachable if you're not familiar with it already, and especially so if you're not familiar with implementing secure systems. It is also true that it comes with a fair bit of complexity; however, that complexity mostly exists for good reason. Establishing trust securely is a complex topic, especially when you're trying to do it in an open system where anyone can join at any time and should be able to be immediately trusted.

For the record, the TUF PEP is mostly stalled on me. I'm one of the few people who are qualified to review it, who care enough to review it, and who have the time to review it.

Use GPG for Package Signatures
------------------------------

I have to say, I'm not entirely thrilled by this part of the proposal. I do not think it gives us much above what we currently have in terms of security, and I think it will muddy the waters and potentially prevent implementing a more comprehensive solution in the future.

All of the trust in this proposal flows from a simple trust file. Generating and fetching this file is the weak link in this proposal. Generating this file relies on PyPI not being compromised and also having a proper state of the world. The ways in which this falls down are:

* Anyone able to defeat our TLS and is in a position to MITM a regular user
  can simply supply their own trust file.
* Anyone able to defeat our TLS and is in a position to MITM a package author
  can simply register their own GPG key.
* Anyone who has compromised an author's PyPI account can simply register their
  own GPG key.
* Fastly (which is effectively a persistent MITM) could simply modify traffic
  to register their own GPG key.
* Anyone who has compromised PyPI can simply generate their own trust file
  with their own keys.

Now it is true that if we sign the trust file with a key that is stored on PyPI, then the first of those cases is no longer possible. However, the remaining three are still there. This means we are still effectively relying on the security of TLS, and thus we've not really gained anything except additional complexity.

An additional problem is that, as an append-only file, this file will end up growing unbounded forever. It's not particularly hard to imagine that this can cause a few problems, especially for new users, who would need to download a multi-gigabyte file before they can download anything. It's also trivial to DoS this by simply generating a flood of events which need to get written to this file, causing it to quickly balloon to massive proportions; and because the end clients must treat this file as append-only, there is zero remediation available to undo the damage caused by this kind of DoS.

Implement 2FA/Better Authentication
-----------------------------------

Absolutely, we don't need a PEP to do this, we just need to do it. It's on my personal TODO list but other things have had higher priority thus far.

Implement Better Notifications
------------------------------

Again, absolutely yes; this also doesn't need a PEP and is also on my personal TODO list.
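The "append-only means zero remediation" point above can be illustrated with a toy hash-chained log. This is a hypothetical model, not code from the proposal: each entry commits to the hash of the previous entry, so clients can detect any rewrite or deletion of history, which is exactly why junk entries, once accepted, can never be removed.

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every hash; any rewrite or deletion breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "author alice registered key AAAA")
append_entry(log, "author bob registered key BBBB")
assert verify_chain(log)

# Tampering with (or dropping) an old entry is detectable by clients...
log[0]["event"] = "author mallory registered key MMMM"
assert not verify_chain(log)
# ...so spam entries written during a DoS can never be retracted either.
```

Note that this sketch only models the append-only property; it says nothing about *who* may append, which is the part of the critique about trusting PyPI's online key.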
--
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Mon Jul 28 23:17:32 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 17:17:32 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID:

On July 28, 2014 at 4:26:42 PM, Donald Stufft (donald at stufft.io) wrote:
> On July 28, 2014 at 1:42:54 PM, Giovanni Bajo (rasky at develer.com) wrote:
> > I thus solicit a second round of review of my proposal; if you want me to upload it to Google
> > Docs for ease of commenting, I can do that as well. I would love to get the PEP to its final
> > form and then ask for a pronouncement.

Oh, I forgot to mention this about the package signing... Actually *discovering* the packages which are to be installed is still completely dependent on the security of TLS. This means that if the TLS connection was compromised, someone could present an incomplete list of packages showing only older, insecure releases that are missing important security updates, tricking users into installing known-vulnerable software.

--
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From rasky at develer.com Mon Jul 28 23:57:45 2014
From: rasky at develer.com (Giovanni Bajo)
Date: Mon, 28 Jul 2014 23:57:45 +0200
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID: <83B1B29A-B268-49EF-9890-6EFDBD141834@develer.com>

On 28 Jul 2014, at 22:26, Donald Stufft wrote:

> Use GPG for Package Signatures
> ------------------------------

Many thanks for reviewing my proposal.

> I have to say, I'm not entirely thrilled by this part of the proposal.
> I do not
> think it gives us much above what we currently have in terms of security and
> I think it will muddy the waters and potentially prevent implementing a more
> comprehensive solution in the future.
>
> All of the trust in this proposal flows from a simple trust file. Generating
> and fetching this file is the weak link in this proposal. Generating this file
> relies on PyPI not being compromised and also having a proper state of the
> world. The ways in which this falls down are:
>
> * Anyone able to defeat our TLS and is in a position to MITM a regular user
> can simply supply their own trust file.

Yes, though as you say, this is fixed by having the trust file signed. It can also be mitigated with certificate pinning and online signing of the trust file. Google+Chrome has shown that pinning is very effective for this specific use-case (a specific client connecting to a specific server).

> * Anyone able to defeat our TLS and is in a position to MITM a package author
> can simply register their own GPG key.

The same happens with PEP458, for all packages but those in the "claimed" role (which requires manual vetting; this can be added to my proposal as well).

This is fixed by 2FA. Notice that implementing 2FA but *not* package signing is not enough to fix this attack; the attacker would in fact still be able to simply modify a package release in transit, and the user would then 2FA-authorize a modified package without realizing it.

This is one of many examples in which 2FA collaborates with package signing to increase security, and this is why I merged the two proposals; of course they can be split, but together they achieve more.

> * Anyone who has compromised an author's PyPI account can simply register their
> own GPG key.

The same happens with PEP458, for all packages but those in the "claimed" role.

This is fixed again by 2FA. With notifications, there is a partial mitigation; delaying 72 hours for popular packages is also a good mitigation.
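The certificate pinning Giovanni mentions can be sketched with nothing but the standard library: fetch the server's leaf certificate in DER form and compare its SHA-256 fingerprint against a value baked into the client. The host and fingerprint below are placeholders, not real PyPI values.

```python
import hashlib
import socket
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def fetch_leaf_cert(host: str, port: int = 443) -> bytes:
    """Open a TLS connection and return the server's certificate in DER form."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

def connection_is_pinned(host: str, pinned_fp: str) -> bool:
    """True only if the live certificate matches the pinned fingerprint."""
    return cert_fingerprint(fetch_leaf_cert(host)) == pinned_fp

# Hypothetical usage -- the fingerprint would ship with the client:
# connection_is_pinned("pypi.example.org", "ab12...")
```

In practice pinning is usually done on the public-key (SPKI) hash rather than the full certificate, so certificates can be reissued without changing the pin; this sketch pins the whole certificate for simplicity.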
For this specific scenario (an attacker who has compromised the PyPI account), I agree that package signing is not specifically useful; but that is also expected, as a PyPI account is the only way we associate an author with a package, so it makes sense that end-to-end security is broken.

> * Fastly (which is effectively a persistent MITM) could simply modify traffic
> to register their own GPG key.

True. This is also fixed by PyPI online signing of the trust file.

> * Anyone who has compromised PyPI can simply generate their own trust file
> with their own keys.

Yes, this scenario is well analyzed in my PEP, including mitigations and likely attack vectors. I don't see PEP458 solving this, but this is where my understanding of it gets fuzzier; compromising PyPI lets you compromise all online roles, as far as I can tell, and at that point you can compromise the timestamp, consistent-snapshot and recently-claimed roles at the same time (which the PEP states must be online), and this allows serving malicious updates, freeze attacks and whatnot.

The only real additional security of PEP458 seems to be the manual step of vetting maintainers' keys, thus freezing them. If this is a wanted improvement, it can easily be added to my proposal as well with a very slight modification. The hardest part here is how the vetting process should work; it's a VERY complicated process to pull off in the real world without being subject to e.g. social engineering; if the process is worked out, it's trivial to modify my proposal to handle this as well.

> Now it is true that if we sign the trust file with a key that is stored on
> PyPI, then the first one of those cases is no longer possible. However the
> remaining three are still there. This means we are still effectively relying
> on the security of TLS and thus we've not really gained anything except
> additional complexity.

I think you're oversimplifying the conclusion.
There are many possible attack scenarios, and my proposal (as a whole) fixes many of them, as described in my threat model. It also looks like you're ignoring the additional layer of manually handling a local copy of the trust file. This is very useful for some scenarios like unattended deployments and company indexes, and protects against additional attacks (e.g.: shadowing of private packages on PyPI). I've just updated the PEP to describe these attacks. Can you please explain which of your proposed attacks is better handled by PEP458?

> An additional problem is that, as an append-only file, this file will end up
> growing unbounded forever. It's not particularly hard to imagine that this can
> cause a few problems, especially for new users, who would need to download a
> multi-gigabyte file before they can download anything. It's also trivial to
> DoS this by simply generating a flood of events which need to get written to
> this file, causing it to quickly balloon to massive proportions; and because
> the end clients must treat this file as append-only, there is zero remediation
> available to undo the damage caused by this kind of DoS.

With a conservative average of 60 bytes per package, the trust file would measure around 3 MB today. Assuming all authors rotate GPG keys every year, and package numbers that grow 50% every year, the trust file would reach 170 MB in 10 years. Introducing a way to reset ("rotate") the file every 5 years seems more than sufficient.

You have a solid point on DoS; this is something that must be handled through throttling, I guess. I don't see TUF fixing this either; isn't there a single ever-growing targets.txt?

> Implement 2FA/Better Authentication
> -----------------------------------
>
> Absolutely, we don't need a PEP to do this, we just need to do it. It's on my
> personal TODO list but other things have had higher priority thus far.

Maybe not a PEP, but some discussion is needed.
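As a cross-check, the size estimate above can be reproduced from the stated assumptions (60 bytes per entry, a package count implied by the "around 3 MB today" figure, 50% growth per year). Note this counts only one current entry per package; if every yearly key rotation appends a fresh entry that is never removed, the cumulative file would be several times larger, which strengthens Donald's growth concern:

```python
# Back-of-the-envelope check of the trust-file size estimate from the email.
BYTES_PER_ENTRY = 60
packages_today = 3_000_000 // BYTES_PER_ENTRY   # ~50,000 packages ("~3 MB today")

packages_in_10y = packages_today * 1.5 ** 10    # 50% package growth per year
size_in_10y = packages_in_10y * BYTES_PER_ENTRY # one current entry per package
print(f"{size_in_10y / 1e6:.0f} MB")            # ~173 MB, i.e. the quoted ~170 MB
```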
Would you prefer patches against PyPI or Warehouse? Would you evaluate a simpler solution using a third-party 2FA provider (e.g.: Authy or Duo Security) that could be talked into a PSF sponsorship, or would you prefer an in-house solution? If in-house, is it OK to go with standard OATH TOTP with a QR code for provisioning, or would you prefer to also have alternatives like SMS? How do you propose to handle recovery for a lost token (e.g.: a stolen smartphone with no backup)? Would you ask for the token for all web logins? What about package uploads: should distutils be modified to also ask for 2FA in interactive mode on the terminal, to confirm a package release?

I have my own preferred answers to the above questions, but you might have a strong opinion already.
--
Giovanni Bajo :: rasky at develer.com
Develer S.r.l. :: http://www.develer.com

My Blog: http://giovanni.bajo.it

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4207 bytes Desc: not available URL:

From vladimir.v.diaz at gmail.com Tue Jul 29 00:13:30 2014
From: vladimir.v.diaz at gmail.com (Vladimir Diaz)
Date: Mon, 28 Jul 2014 18:13:30 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID:

Hi, I'm Vladimir Diaz and have been leading the development of the TUF project.

> 16 months later, we still don't have a deployed solution for letting people install signed packages. I see that TUF is evolving, and there is now a GitHub project with documentation, but I am very worried about the implementation timeline.

The implementation of TUF is not really evolving, unless you mean that it has been updated to improve test coverage and add a minor feature. The code is available and ready for use. In fact, it is about to be deployed by the LEAP project https://leap.se/en.
We've largely heard that the integration of TUF (with any necessary changes by Donald) will happen once Warehouse is further along. I have helped a bit with the Warehouse migration (unrelated to TUF) and will put in more time in the next few months. We are ready to integrate TUF into Warehouse once we have the green light from Donald.

On Mon, Jul 28, 2014 at 11:01 AM, Giovanni Bajo wrote:
> Hello,
>
> in March 2013, on the now-closed catalog-sig mailing-list, I submitted a
> proposal for fixing several security problems in PyPI, pip and
> distutils[1]. [...]

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jcappos at nyu.edu Tue Jul 29 00:22:55 2014
From: jcappos at nyu.edu (Justin Cappos)
Date: Mon, 28 Jul 2014 18:22:55 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID:

So, I think Vlad covered the status of the implementation side well. We've also done some work on the writing / doc side, but haven't pushed fixes to the PEP. We can (and should) do so. We have an academic writeup that speaks in more detail about many of the issues you mention, along with other items. We will make the revised documents easier to find publicly, but let me address your specific concerns here.

* what a maintainer is supposed to do to submit a new signed package

A maintainer will upload a public key when creating a project. When uploading a package, metadata is signed and uploaded that indicates trust. Our developer tools guide (https://github.com/theupdateframework/tuf/blob/develop/tuf/README-developer-tools.md) is meant to be a first draft of the document that answers such questions. There will also be a quick start guide which is just a few steps: generate and upload a key; sign metadata and upload it with your project.

* how different maintainers can signal that they both maintain the same package

A project can delegate trust to multiple developers. Depending on how this is done, either developer may be trusted for the package. The developer tools guide shows this.

* how the user interface of PyPI will change

We're open to suggestions here. There is flexibility from our side for how this works.
* what security maintenance will need to be performed regularly by the PyPI ops

Essentially, the developers need to check a list of 'revoked claimed keys' and ensure that this list matches what they will sign with their offline claimed key. This is also detailed in the writeup.

Giovanni: TUF retains security even when PyPI is compromised (including online keys). I didn't have time to read the latest version of your proposal, but from what I understand, what is proposed will have problems in this scenario.

Justin

On Mon, Jul 28, 2014 at 6:13 PM, Vladimir Diaz wrote:
> Hi, I'm Vladimir Diaz and have been leading the development of the TUF
> project. [...]

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From rasky at develer.com Tue Jul 29 00:23:40 2014
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 29 Jul 2014 00:23:40 +0200
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: References: Message-ID: <0BB55369-2AEC-49B4-B40C-B3ACE2F7078E@develer.com>

On 28 Jul 2014, at 21:13, Paul Moore wrote:

> On 28 July 2014 16:01, Giovanni Bajo wrote:
>> I thus solicit a second round of review of my proposal; if you want me to upload it to Google Docs for ease of commenting, I can do that as well.
>> I would love to get the PEP to its final form and then ask for a pronouncement.
>
> I have only scanned the initial part of the proposal thus far, but I
> have some comments.

Thanks for reviewing my proposal.

> First of all, the proposal is well-written - I'm not a security expert
> and my eyes very rapidly glaze over when reading security documents.
> (I know what you mean about the TUF docs!) Your PEP was pretty easy
> for me to follow, so many thanks for that :-)

Thanks. I personally think that there is value in the simplicity of a security architecture, but many would disagree.

> * I assume that installation of unsigned packages would not need GPG
> or any form of key download, and would work as now. That's crucial,
> and without that, the proposal is a non-starter (consider pip running
> in an environment not connected to the internet, installing local
> builds that don't need signing and haven't been).

Yes. If you ran pip with the option --allow-unsigned-packages, it would not require GPG at all.

> * I didn't look at how the signature metadata was supplied, but I
> assume it would only be served from full indexes and not from
> --find-links locations (relevant in the above scenario).

Yes, that was my idea, but I'm a little bit fuzzy on the real-world --find-links use case; in fact, this is one of the cases where I need some guidance to complete my PEP.

> * I am strongly against pip depending on an external GPG.
> Even though
> it may be a simple install, it may not be allowed in locked-down
> environments, and on virtual machines and testing services (like
> Travis or Appveyor) installing may be non-trivial or simply an
> annoying extra step.
> * Given that this leaves a pure-python GPG implementation, does one
> exist? Is it robust? I wouldn't want to rely on a low-quality
> implementation.

My opinion is that depending on an external GPG is the best option, but I think that ultimately this is a decision for the pip maintainers and is pretty orthogonal to my PEP. Notice that most (all?) Linux machines come with gpg preinstalled, as it is a dependency of both rpm and apt, so I don't think that a Linux-based CI system would be in trouble because of this.

> * Also, would it be fast enough? Speed of building virtualenvs has
> always been something users care about (it was behind the development
> of wheels for example) and a key-checking step could slow down builds
> noticeably.

Assuming an empty environment:

1) I/O time required to download the trust file. See my discussion with Donald for some size estimates. Notice that you can ship it (or a subset of it, for increased security) to your build environment together with the project you're building. Being a global append-only file, it might eventually come pre-shipped in Python build environments.

2) I/O time to download the keys. Keys are quite small and need to be downloaded from a keyserver; assuming we can't find a fast enough keyserver to use, we might consider implementing a keyserver in PyPI itself, with Fastly as a frontend (I'm not sure about the details of the hkp protocol, but the "h" stands for http, so I hope that we can use the CDN with it as well).

> * Other tools would need changing as well. There's distlib, and PyPI
> mirroring tools like devpi.

Yes.

> * There will always need to be an option to install unsigned packages,
> even if it's only for local packages served up by a private index.

Yes.
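The "external GPG" option Giovanni prefers would look roughly like the sketch below: shell out to the gpg binary and parse its machine-readable status output. This is an illustration of the approach, not pip code; it assumes a detached signature file sitting next to the package file.

```python
import shutil
import subprocess

def status_indicates_valid(status_output: str) -> bool:
    """Parse gpg --status-fd output; a VALIDSIG line means a good signature
    from a key present in the local keyring."""
    return any(line.startswith("[GNUPG:] VALIDSIG")
               for line in status_output.splitlines())

def verify_signature(sig_path: str, file_path: str) -> bool:
    """Verify a detached signature by shelling out to an external gpg binary."""
    gpg = shutil.which("gpg") or shutil.which("gpg2")
    if gpg is None:
        # The case Paul worries about: no gpg on locked-down or CI machines.
        raise RuntimeError("no gpg binary found; cannot verify signatures")
    proc = subprocess.run(
        [gpg, "--batch", "--status-fd", "1", "--verify", sig_path, file_path],
        capture_output=True, text=True,
    )
    return status_indicates_valid(proc.stdout)
```

Parsing the `--status-fd` lines rather than gpg's human-readable output is what makes the result machine-checkable; the exit code alone does not distinguish all the cases a client would care about.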
> * I didn't see a discussion of what happens if a maintainer loses his
> GPG key (not compromised, just lost - say he accidentally deleted his
> only copy). Would he have to generate a new one and re-sign
> everything? How would that affect users of his packages?

I haven't fully fleshed out the details yet. Technically, when you revoke a key, you declare the date from which the key was compromised; everything signed before that date is still considered valid, so there is no need to re-sign releases that predate the compromise.

My idea would be that PyPI would have a background job which routinely checks for revoked keys; when it finds one, it automatically disassociates the key from the maintainer's account, and removes (hides?) any package whose signature is no longer valid. The maintainer would then have to log in to PyPI, register a new GPG fingerprint, and re-sign the releases which were disabled.

> * Also what about a maintainer working on a different PC where he
> doesn't have his key available? I guess the answer there may be
> "tough, you can't maintain your package without your key available".
> (How secure would services like SkyDrive and Dropbox be considered in
> that regard? I normally work between 2 PCs where the only reliable
> shared resource I have is SkyDrive).

This enters the realm of GPG best practices, which would eventually end up in documentation for package users (such as the pypa docs). Historically, people tend to have one GPG private key that they synchronize among all their computers. Since you just need to copy the key once, you don't really need cloud syncing, as there's nothing to continually sync.
--
Giovanni Bajo :: rasky at develer.com
Develer S.r.l. :: http://www.develer.com

My Blog: http://giovanni.bajo.it

-------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4207 bytes
Desc: not available
URL: 

From donald at stufft.io  Tue Jul 29 00:26:21 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 18:26:21 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: <83B1B29A-B268-49EF-9890-6EFDBD141834@develer.com>
References: <83B1B29A-B268-49EF-9890-6EFDBD141834@develer.com>
Message-ID:

On July 28, 2014 at 5:57:54 PM, Giovanni Bajo (rasky at develer.com) wrote:
> On 28 Jul 2014, at 22:26, Donald Stufft wrote:
>
>> * Anyone able to defeat our TLS and is in a position to MITM a regular user
>> can simply supply their own trust file.
>
> Yes, though as you say, this is fixed by having the trust file signed.

If we're relying on an online key to sign this file, then we can trust that online key to just sign everything and we don't need the authors to do anything and we don't need a trust file.

> It can also be mitigated with certificate pinning and online signing of the
> trust file. Google+Chrome has shown that pinning is very effective for this
> specific use-case (specific client connecting to specific server).

If we're relying on pinned certificates in order to fetch the trust file securely, then we can rely on them to securely transmit the package files.

>> * Anyone able to defeat our TLS and is in a position to MITM a package
>> author can simply register their own GPG key.
>
> The same happens with PEP 458, for all packages but those in the "claimed"
> role (which requires manual vetting; this can be added to my proposal as
> well).

Of course. I never said it didn't. The difference being that PEP 458 had a plan for this, and the current proposal doesn't. This is a flaw with anything that doesn't use some offline key or verification. However, as this proposal copies more things from PEP 458, I struggle to see the reason for having it over just doing PEP 458 (which I'm still not entirely sure is worth it).
> This is fixed by 2FA. Notice that implementing 2FA but *not* package signing
> is not enough to fix this attack; the attacker in fact would still be able
> to simply modify a package release in transit, and the user would then
> 2FA-authorize a modified package without realizing it.
>
> This is one of many examples in which 2FA collaborates with package signing
> to increase security, and this is why I merged the two proposals; of course
> they can be split, but together they achieve more.

No, 2FA does nothing for this. If you're in a position to MITM an exploited TLS stream, you can just steal the session cookie, which does not have a second factor.

>> * Anyone who has compromised an author's PyPI account can simply register
>> their own GPG key.
>
> The same happens with PEP 458, for all packages but those in the "claimed"
> role.
>
> This is fixed again by 2FA. With notifications, there is a partial
> mitigation; delaying 72 hours for popular packages is also a good mitigation.
>
> For this specific scenario (an attacker who has compromised the PyPI
> account), I agree that package signing is not specifically useful; but it is
> also expected, as a PyPI account is the only way we associate an author with
> a package, so it makes sense that end-to-end security is broken.

To be clear, the point of the claimed role is to be the target security level for all people to be in. The other roles exist primarily to limit the trust in TLS and to enable moving from the unclaimed to the claimed role. If PEP 458 didn't have the claimed role you'd see me being strongly -1 on it as well.

>> * Fastly (which is effectively a persistent MITM) could simply modify
>> traffic to register their own GPG key.
>
> True. This is also fixed by PyPI online signing of the trust file.

At which point, why don't we just trust that key completely and let it sign all the packages?

>> * Anyone who has compromised PyPI can simply generate their own trust file
>> with their own keys.
> Yes, this scenario is well analyzed in my PEP, including mitigations and
> likely attack vectors.
>
> I don't see PEP 458 solving this, but this is where my understanding of it
> gets fuzzier; compromising PyPI lets you compromise all online roles, as far
> as I can tell; and at that point, you can compromise the timestamp,
> consistent-snapshot and recently-claimed roles at the same time (which the
> PEP states must be online), and this allows you to serve malicious updates,
> perform freeze attacks and whatnot.

Correct, any of the online keys can be compromised; the real benefit of TUF (as far as end-to-end security is concerned) is in the claimed role.

> The only real additional security of PEP 458 seems to be the manual step of
> vetting maintainers' keys, thus freezing them. If this is a wanted
> improvement, it can be easily added to my proposal as well with a very
> slight modification. The hardest part here is how the vetting process should
> work; it's a VERY complicated process to pull off in the real world without
> being subject to e.g. social engineering; if the process is worked out, it's
> trivial to modify my proposal to handle this as well.

Again, as this proposal copies more from TUF I fail to see its benefit over TUF.

>> Now it is true that if we sign the trust file with a key that is stored on
>> PyPI, then the first one of those cases is no longer possible. However the
>> remaining three are still there. This means we are still effectively relying
>> on the security of TLS and thus we've not really gained anything except
>> additional complexity.
>
> I think you're oversimplifying the conclusion. There are many possible
> attack scenarios, and my proposal (as a whole) fixes many of them, as
> described in my threat model.

I'm not oversimplifying it.

> Also it looks like you're also ignoring the additional layer of manually
> handling a local copy of the trust file.
> This is very useful for some scenarios like unattended deployments and
> company indexes, and protects against additional attacks (e.g. shadowing of
> private packages in PyPI). I've just updated the PEP to describe these
> attacks.
>
> Can you please explain which of your proposed attacks is better handled by
> PEP 458?

All of them? They are more or less completely mitigated once someone is safely in the claimed role.

> With a conservative average of 60 bytes per package, the trust file would
> measure around 3 MB today. Assuming all authors rotate GPG keys every year,
> and package numbers that grow 50% every year, the trust file would reach
> 170 MB in 10 years. Introducing a way to reset ("rotate") the file every
> 5 years seems more than sufficient.
>
> You have a solid point on DoS; this is something that must be handled
> through throttling, I guess. I don't see TUF fixing this either; isn't there
> a single ever-growing targets.txt?

The targets.txt can be broken up fairly trivially if I recall. In fact I am pretty sure they do this for the unclaimed role and they stick them in different bins.

>> Implement 2FA/Better Authentication
>> -----------------------------------
>>
>> Absolutely, we don't need a PEP to do this, we just need to do it. It's on
>> my personal TODO list but other things have had higher priority thus far.
>
> Maybe not a PEP, but some discussion is needed.
>
> Would you prefer patches against PyPI or Warehouse? Would you evaluate a
> simpler solution using a third-party 2FA provider (e.g. Authy or Duo
> Security) that could be talked into a PSF sponsorship, or would you prefer
> an in-house solution? If in-house, is it OK to go with a standard OATH TOTP
> with a QR code for provisioning, or would you prefer to also have
> alternatives like SMS? How do you propose to handle recovery for a lost
> token (e.g. a stolen smartphone with no backup)? Would you ask for the token
> for all web logins?
> What about package uploads: should distutils be modified to also ask for 2FA
> in interactive mode on the terminal, to confirm a package release?

1. PyPI v Warehouse: if you want to implement it now, PyPI and Warehouse. I
   haven't done it yet because I don't want to implement it on PyPI-Legacy
   and I'm waiting until Warehouse is deployed.

2. I've looked at Duo Security. I'm not completely opposed to it; I know
   there are some who are. It has nice feature sets, especially with
   alternative means of 2FA.

3. If we implement our own it'll be TOTP, probably with a QR code, yes.

4. Backup codes should exist, yes.

5. If a person has 2FA enabled on their account it should be required for any
   web login. Packages should also be able to mandate that a person has 2FA
   for manipulating that package.

6. distutils is the hard part here. We can't modify distutils for 2.6, and
   for 2.7, 3.2, 3.3, and 3.4 it'll be hard to do.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io  Tue Jul 29 00:34:54 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 18:34:54 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To: <0BB55369-2AEC-49B4-B40C-B3ACE2F7078E@develer.com>
References: <0BB55369-2AEC-49B4-B40C-B3ACE2F7078E@develer.com>
Message-ID:

On July 28, 2014 at 6:24:06 PM, Giovanni Bajo (rasky at develer.com) wrote:
> I haven't fully fleshed out the details yet. Technically, when you revoke a
> key, you declare the date from which the key was compromised; everything
> signed before that date is still considered valid, so there is no need to
> re-sign releases that predate the compromise.
>
> My idea would be that PyPI would have a background job which routinely
> checks for revoked keys; when it finds one, it automatically disassociates
> it from the maintainer's account, and removes (hides?) any package whose
> signature is no longer valid.
> The maintainer would then have to log in to PyPI, register a new GPG
> fingerprint, and re-sign the releases which were disabled.

I didn't read everything else yet, but no. That's not how revocation works. Expiration != Revocation. If a key is revoked it is no longer trusted, for anything, ever.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ncoghlan at gmail.com  Tue Jul 29 01:36:49 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Jul 2014 09:36:49 +1000
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References:
Message-ID:

On 29 Jul 2014 03:43, "Giovanni Bajo" wrote:
> Hello,
>
> In March 2013, on the now-closed catalog-sig mailing list, I submitted a
> proposal for fixing several security problems in PyPI, pip and
> distutils [1]. Some of my proposals were obvious things like downloading
> packages through SSL, which was already in the process of being designed
> and implemented. Others, like GPG package signing, were discussed for
> several days/weeks, but ended up in discussion paralysis because of the
> upcoming TUF framework.

It stalled because end-to-end signing is a hard security problem and "just add GPG!" isn't an answer. If you add a threat model to the draft PEP, then we can have a useful discussion, since we need to know who we're trying to defend against, and what security guarantees people are after.

1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure admins) are part of the threat model for PEP 458. How does your PEP defend against full compromise of those accounts?

2. What level of damage mitigation are we aiming to attain in the event of a full PyPI compromise? (i.e. the attacker has full control over absolutely everything published from PyPI)

3. Assuming an attacker has fully compromised DNS and SSL (and hence can manipulate or replace *all* data purportedly being received from PyPI by a given target), what additional level of integrity is the "end-to-end" signing process adding?

4. What level of guarantee will be associated with the signing keys, and are package authors prepared to offer those guarantees? (The only dev community I've really talked to about that is a few of the Django core devs, and their reaction was "Hell, no, protecting and managing keys is too hard")

5. How do these guarantees compare to the much simpler SSH-inspired "trust on first use" model already offered by "peep"?

These are the critical points, as they're the aspects of the status quo that we're not currently defending against:

- peep already makes it possible to ensure you get the same package you got last time, even if downloading directly from PyPI
- the pervasive use of SSL protects against attacks other than a PyPI or SSL cert compromise
- the wheel format already supports signature transport for private indexes

Folks that want to outsource their *due diligence* are still going to have to go to a vendor, since "pip install python-nation" is always going to be a terrible idea, regardless of how the transport from developer to end user is secured.

Regards,
Nick.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From donald at stufft.io  Tue Jul 29 01:47:49 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 19:47:49 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References:
Message-ID:

On July 28, 2014 at 7:37:14 PM, Nick Coghlan (ncoghlan at gmail.com) wrote:
> 1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure
> admins) are part of the threat model for PEP 458. How does your PEP defend
> against full compromise of those accounts?
Quick clarification: PEP 458 trusts *someone*, most likely the PyPI admins, who hold the ultimate trust root and who do the signing of the claimed roles periodically. However, if I recall, it has provisions that allow N-of-M requirements, so we can opt to require more than one of those key holders to do the actual signing.

-- 
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From rasky at develer.com  Tue Jul 29 02:01:16 2014
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 29 Jul 2014 02:01:16 +0200
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References:
Message-ID: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com>

On 29 Jul 2014, at 01:36, Nick Coghlan wrote:
> On 29 Jul 2014 03:43, "Giovanni Bajo" wrote:
>> Hello,
>>
>> In March 2013, on the now-closed catalog-sig mailing list, I submitted a
>> proposal for fixing several security problems in PyPI, pip and
>> distutils [1]. Some of my proposals were obvious things like downloading
>> packages through SSL, which was already in the process of being designed
>> and implemented. Others, like GPG package signing, were discussed for
>> several days/weeks, but ended up in discussion paralysis because of the
>> upcoming TUF framework.
>
> It stalled because end-to-end signing is a hard security problem and "just
> add GPG!" isn't an answer.

I don't find it very fair that you collapse a 700-line document to "just add GPG". And even in March, I wrote and proposed an extensive document. I'm not jumping into discussions handwaving a couple of buzzwords.

> If you add a threat model to the draft PEP, then we can have a useful
> discussion,

There is a whole section called "threat model". If you would like to see the threat model extended to cover different attacks, I reckon there's a less aggressive way to phrase your opinion.

> 1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure
> admins) are part of the threat model for PEP 458.
> How does your PEP defend against full compromise of those accounts?

It doesn't, and neither does PEP 458. If an attacker owns a root shell on PyPI, it is game over with both PEPs; that is, unless you're referring to the claimed role, for which PEP 458 is severely lacking the most important and hardest thing: how to populate it.

> 2. What level of damage mitigation are we aiming to attain in the event of a
> full PyPI compromise? (i.e. the attacker has full control over absolutely
> everything published from PyPI)

I'm not sure I understand the question or how it differs from the previous one. The threat model section on "PyPI server compromise" in my PEP has some details, though.

> 3. Assuming an attacker has fully compromised DNS and SSL (and hence can
> manipulate or replace *all* data purportedly being received from PyPI by a
> given target), what additional level of integrity is the "end-to-end"
> signing process adding?

In all cases in which the trust file is cached / edited with no automatic updates, it fully guarantees that no compromised packages can be installed on the user's machine. This wouldn't be the standard setup for most users. If you're willing to argue that this is an important use-case (in my opinion, it's quite far from it), I can revise my PEP to add offline signing like the "claimed role" in PEP 458; that's a very simple addition, as the hardest part is totally elsewhere (= how you set up such an offline process with the PSF staff and resources).

I could also explore how to better integrate my PEP with a peep-like solution, that is, an automatic caching of a subset of the trust file, connected somehow to the project-specific requirements.txt (but see below for differences).

> 4. What level of guarantee will be associated with the signing keys, and are
> package authors prepared to offer those guarantees?
> (The only dev community I've really talked to about that is a few of the
> Django core devs, and their reaction was "Hell, no, protecting and managing
> keys is too hard")

If package authors are unwilling to handle signing keys, PEP 458 is also doomed, and more so since it would use an obscure, custom, single-purpose signing key and key tools, with no extensive documentation on proper handling across multiple operating systems and development scenarios. I thus don't see how this question is pertinent in evaluating my PEP vs PEP 458.

> 5. How do these guarantees compare to the much simpler SSH-inspired "trust
> on first use" model already offered by "peep"?

I'm not very familiar with peep, but my understanding is that it doesn't help at all for package upgrades; if an attacker compromises the latest release of Django, any user installing that release for the first time in each virtualenv wouldn't realize it. Any package signing solution (both PEP 458 and my PEP) fixes this, as it creates a trust path that can be used to install multiple files.

-- 
Giovanni Bajo :: rasky at develer.com
Develer S.r.l. :: http://www.develer.com
My Blog: http://giovanni.bajo.it

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
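The peep model that Nick raises boils down to hash pinning: remember a digest the first time a package is installed and refuse anything different later. A minimal illustration of that trust-on-first-use idea follows; the function name and calling convention are hypothetical, not peep's actual implementation.

```python
# Trust-on-first-use hash pinning, in the spirit of what peep does.
# Illustrative sketch only; not peep's real API or file format.
import hashlib

def check_archive(archive_bytes, pinned_digest=None):
    """Return the archive's SHA-256 hex digest.

    On first use (no pin recorded) the digest is simply returned so the
    caller can store it. On later installs, a mismatch against the pin
    means the file changed since it was first trusted, so refuse it.
    """
    digest = hashlib.sha256(archive_bytes).hexdigest()
    if pinned_digest is not None and digest != pinned_digest:
        raise ValueError("archive does not match the previously pinned hash")
    return digest
```

As Giovanni notes in his reply, this protects repeat installs of a release that was already trusted once, but says nothing about the first install of a new (possibly compromised) release.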
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4207 bytes
Desc: not available
URL: 

From rasky at develer.com  Tue Jul 29 02:02:27 2014
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 29 Jul 2014 02:02:27 +0200
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References: <83B1B29A-B268-49EF-9890-6EFDBD141834@develer.com>
Message-ID:

On 29 Jul 2014, at 00:26, Donald Stufft wrote:
> On July 28, 2014 at 5:57:54 PM, Giovanni Bajo (rasky at develer.com) wrote:
>> On 28 Jul 2014, at 22:26, Donald Stufft wrote:
>>
>>> * Anyone able to defeat our TLS and is in a position to MITM a regular
>>> user can simply supply their own trust file.
>>
>> Yes, though as you say, this is fixed by having the trust file signed.
>
> If we're relying on an online key to sign this file, then we can trust that
> online key to just sign everything and we don't need the authors to do
> anything and we don't need a trust file.

Each signing protects against different attacks:

* If you only sign the trust file, a MITM that can break TLS for a package
  maintainer can modify the package in transit.
* If you only sign the package file, a MITM that can break TLS for a user can
  modify the trust file in transit.

>> It can also be mitigated with certificate pinning and online signing of
>> the trust file. Google+Chrome has shown that pinning is very effective for
>> this specific use-case (specific client connecting to specific server).
>
> If we're relying on pinned certificates in order to fetch the trust file
> securely, then we can rely on them to securely transmit the package files.

When I say *mitigation*, I mean "reducing the likelihood of an attack". I'm not saying that certificate pinning fixes the attack altogether and thus makes everything else in the PEP useless.
If you reduce the likelihood of a TLS MITM down to a level at which you are willing to accept the risk, then you don't need to expect a package signing proposal to also work without TLS. Assuming unbreakable, perfect TLS, you still need package signing to guard against modification of files at rest; it's true that an attacker who can alter django.tar.gz on PyPI/Fastly can probably also alter the trust file, but:

1) the trust file is publicly audited and append-only, so variations are far easier to notice (think of an external service monitoring the trust file and mailing maintainers about each modification that appears there)

2) the trust file can also be manually handled; again, think enterprises; automatic deployments; freezing in a virtualenv and only updating on explicit request and with careful checking.

3) package signing still prevents attacks of package shadowing between multiple indexes

You need package signing to enable these scenarios to even exist and be possible, even if the default solution with the auto-updating trust file is less secure than our final goals.

>>> * Anyone able to defeat our TLS and is in a position to MITM a package
>>> author can simply register their own GPG key.
>>
>> The same happens with PEP 458, for all packages but those in the "claimed"
>> role (which requires manual vetting; this can be added to my proposal as
>> well).
>
> Of course. I never said it didn't. The difference being that PEP 458 had a
> plan for this, and the current proposal doesn't. This is a flaw with
> anything that doesn't use some offline key or verification. However, as
> this proposal copies more things from PEP 458, I struggle to see the reason
> for having it over just doing PEP 458 (which I'm still not entirely sure is
> worth it).

Well, you're giving me a point.
If you are saying that PEP 458 and my PEP are ultimately equivalent for non-vetted things, my PEP is far better, as it's several orders of magnitude simpler for any of the involved parties (implementers, maintainers, ops, auditors).

If you care only about the security you get with offline verification, then what about changing my proposal to sign the trust file with an offline key (with pip checking GPG only for things in the trust file)?

PEP 458 says that detailing offline verification is outside the scope of the PEP. If you think that's the only reason the PEP exists, I don't think it's correct to declare it out of scope. We would need a concrete description of how an offline verification process would work at PyPI scale, with PSF staff and resources.

>> This is fixed by 2FA. Notice that implementing 2FA but *not* package
>> signing is not enough to fix this attack; the attacker in fact would still
>> be able to simply modify a package release in transit, and the user would
>> then 2FA-authorize a modified package without realizing it.
>>
>> This is one of many examples in which 2FA collaborates with package signing
>> to increase security, and this is why I merged the two proposals; of course
>> they can be split, but together they achieve more.
>
> No, 2FA does nothing for this, if you're in a position to MITM an exploited
> TLS stream, you can just steal the session cookie which does not have a
> second factor.

You're thinking of 2FA too conservatively, or I'm using the term "2FA" in too lateral a meaning. Think of asking for an OTP to confirm an operation, not to log in, e.g. after a package upload or a GPG key change; the author would receive an SMS saying "confirm that your new GPG key is 123456789abcdef by entering code 671924".
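For reference, the OATH TOTP scheme discussed in this exchange (RFC 6238, the algorithm behind the usual QR-code authenticator apps) fits in a few lines. This sketch uses the RFC's default parameters and is unrelated to any actual PyPI or Warehouse code:

```python
# Minimal RFC 6238 TOTP with the default parameters (HMAC-SHA1, 30 s
# time step, 6 digits). Reference sketch only, not PyPI code.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Return the TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at_time is None else at_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (the ASCII string "12345678901234567890", base32-encoded) and timestamp 59, this yields "287082", the last six digits of the RFC's published test vector.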
[I've cut the rest as we're reiterating the same 2-3 points discussed above]

>>> Implement 2FA/Better Authentication
>>> -----------------------------------
>>>
>>> Absolutely, we don't need a PEP to do this, we just need to do it. It's on
>>> my personal TODO list but other things have had higher priority thus far.
>>
>> Maybe not a PEP, but some discussion is needed.
>>
>> Would you prefer patches against PyPI or Warehouse? Would you evaluate a
>> simpler solution using a third-party 2FA provider (e.g. Authy or Duo
>> Security) that could be talked into a PSF sponsorship, or would you prefer
>> an in-house solution? If in-house, is it OK to go with a standard OATH TOTP
>> with a QR code for provisioning, or would you prefer to also have
>> alternatives like SMS? How do you propose to handle recovery for a lost
>> token (e.g. a stolen smartphone with no backup)? Would you ask for the
>> token for all web logins? What about package uploads: should distutils be
>> modified to also ask for 2FA in interactive mode on the terminal, to
>> confirm a package release?
>
> 1. PyPI v Warehouse: if you want to implement it now, PyPI and Warehouse. I
>    haven't done it yet because I don't want to implement it on PyPI-Legacy
>    and I'm waiting until Warehouse is deployed.
>
> 2. I've looked at Duo Security. I'm not completely opposed to it; I know
>    there are some who are. It has nice feature sets, especially with
>    alternative means of 2FA.
>
> 3. If we implement our own it'll be TOTP, probably with a QR code, yes.
> 4. Backup codes should exist, yes.

What about using only SMS instead? That would allow us to also use it for confirmation of security-related changes (e.g. a new package release), and it could even be integrated with distutils prompts.

> 5. If a person has 2FA enabled on their account it should be required for
>    any web login. Packages should also be able to mandate that a person has
>    2FA for manipulating that package.
>
> 6. distutils is the hard part here.
>    We can't modify distutils for 2.6, and for 2.7, 3.2, 3.3, and 3.4 it'll
>    be hard to do.

-- 
Giovanni Bajo :: rasky at develer.com
Develer S.r.l. :: http://www.develer.com
My Blog: http://giovanni.bajo.it

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4207 bytes
Desc: not available
URL: 

From rasky at develer.com  Tue Jul 29 02:07:22 2014
From: rasky at develer.com (Giovanni Bajo)
Date: Tue, 29 Jul 2014 02:07:22 +0200
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References:
Message-ID: <7EF90BC8-485A-419D-8CCB-4C888ACEE1FA@develer.com>

On 29 Jul 2014, at 00:22, Justin Cappos wrote:
> So, I think Vlad covered the status of the implementation side well.
>
> We've also done some work on the writing / doc side, but haven't pushed
> fixes to the PEP. We can (and should) do so.

Yes, please, that would be great.

> We have an academic writeup that speaks in more detail about many of the
> issues you mention, along with other items. We will make the revised
> documents easier to find publicly, but let me address your specific
> concerns here.
>
>> * what a maintainer is supposed to do to submit a new signed package
>
> A maintainer will upload a public key when creating a project. When
> uploading a package, metadata is signed and uploaded that indicates trust.
> Our developer tools guide
> (https://github.com/theupdateframework/tuf/blob/develop/tuf/README-developer-tools.md)
> is meant to be a first draft of the document that answers any questions.
>
> There will also be a quick start guide which is just a few steps:
>
> - generate and upload a key
> - sign metadata and upload it with your project
>
>> * how can different maintainers signal that they both maintain the same
>>   package
>
> A project can delegate trust to multiple developers. Depending on how this
> is done, either developer may be trusted for the package. The developer
> tools guide shows this.
>> * how the user interface of PyPI will change
>
> We're open to suggestions here. There is flexibility from our side for how
> this works.
>
>> * what are the required security maintenance tasks that will need to be
>>   regularly performed by the PyPI ops
>
> Essentially, the developers need to check a list of 'revoked claimed keys'
> and ensure that this list matches what they will sign with their offline
> claimed key. This is also detailed in the writeup.
>
> Giovanni: TUF retains security even when PyPI is compromised (including
> online keys).

Please elaborate on "survive". From what I read in the PEP, if I compromise PyPI I can get access to the timestamp, consistent-snapshot, and unclaimed roles, which in turn lets me perform malicious updates, freeze attacks and metadata inconsistency attacks (= all possible attacks).

-- 
Giovanni Bajo :: rasky at develer.com
Develer S.r.l. :: http://www.develer.com
My Blog: http://giovanni.bajo.it

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4207 bytes
Desc: not available
URL: 

From donald at stufft.io  Tue Jul 29 02:36:36 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 28 Jul 2014 20:36:36 -0400
Subject: [Distutils] PEP draft on PyPI/pip package signing
In-Reply-To:
References: <83B1B29A-B268-49EF-9890-6EFDBD141834@develer.com>
Message-ID:

On July 28, 2014 at 8:02:35 PM, Giovanni Bajo (rasky at develer.com) wrote:
> On 29 Jul 2014, at 00:26, Donald Stufft wrote:
>> On July 28, 2014 at 5:57:54 PM, Giovanni Bajo (rasky at develer.com) wrote:
>>> On 28 Jul 2014, at 22:26, Donald Stufft wrote:
>>>
>>>> * Anyone able to defeat our TLS and is in a position to MITM a regular
>>>> user can simply supply their own trust file.
>>>
>>> Yes, though as you say, this is fixed by having the trust file signed.
>> If we're relying on an online key to sign this file, then we can trust
>> that online key to just sign everything and we don't need the authors to
>> do anything and we don't need a trust file.
>
> Each signing protects against different attacks:
>
> * If you only sign the trust file, a MITM that can break TLS for a package
>   maintainer can modify the package in transit.
> * If you only sign the package file, a MITM that can break TLS for a user
>   can modify the trust file in transit.

I don't understand what this is even saying. The author signature does not matter in the slightest if the trust file is compromised.

>>> It can also be mitigated with certificate pinning and online signing of
>>> the trust file. Google+Chrome has shown that pinning is very effective
>>> for this specific use-case (specific client connecting to specific
>>> server).
>>
>> If we're relying on pinned certificates in order to fetch the trust file
>> securely, then we can rely on them to securely transmit the package files.
>
> When I say *mitigation*, I mean "reducing the likelihood of an attack". I'm
> not saying that certificate pinning fixes the attack altogether and thus
> makes everything else in the PEP useless.
>
> If you reduce the likelihood of a TLS MITM down to a level at which you are
> willing to accept the risk, then you don't need to expect a package signing
> proposal to also work without TLS. Assuming unbreakable, perfect TLS, you
> still need package signing to guard against modification of files at rest;
> it's true that an attacker that can alter django.tar.gz on PyPI/Fastly can
> probably also alter the trust file, but

If you're relying on TLS to download the trust file securely, then the maximum security provided by that trust file is limited to that of TLS. This isn't an opinion; it is a fact.
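The "publicly audited, append-only" property that Giovanni claims for the trust file is at least cheap to monitor mechanically: a watcher only needs to confirm that each new snapshot of the file extends the previous one byte-for-byte. A hypothetical sketch of such a check (not part of either proposal's actual tooling):

```python
# Hypothetical monitor check for an append-only trust file: a new
# snapshot is acceptable only if the previous snapshot is an exact
# byte-for-byte prefix of it. Any rewrite of history fails the check.
def is_append_only_update(previous, current):
    """True iff `current` equals `previous` plus appended data."""
    return len(current) >= len(previous) and current[:len(previous)] == previous
```

An external monitoring service of the kind described in the thread could run this on every fetch and mail maintainers whenever it returns False.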
> 1) the trust file is publicly audited and append-only, so variations are
> far easier to notice (think of an external service monitoring the trust
> file and mailing maintainers about each modification that appears there)
> 2) the trust file can also be manually handled; again, think enterprises;
> automatic deployments; freezing in a virtualenv and only updating on
> explicit request and with careful checking.

This isn't really much better than TLS here again. It will protect against online compromise of a private index (vs a TLS connection), however that is a fairly minor use case and isn't worth the additional complexity.

> 3) package signing still prevents attacks of package shadowing between
> multiple indexes

Well, sometimes shadowing is what you want too (which I don't think I saw covered here), but I don't think it actually does cover this unless you prevent alternative indexes from having signed packages or you introduce a priority system, and if you introduce a priority system then you don't need signing to prevent that.

> You need package signing to enable these scenarios to even exist and be
> possible, even if the default solution with the auto-updating trust file is
> less secure than our final goals.
>
>>>> * Anyone able to defeat our TLS and is in a position to MITM a package
>>>> author can simply register their own GPG key.
>>>
>>> The same happens with PEP 458, for all packages but those in the
>>> "claimed" role (which requires manual vetting; this can be added to my
>>> proposal as well).
>>
>> Of course. I never said it didn't. The difference being that PEP 458 had a
>> plan for this, and the current proposal doesn't. This is a flaw with
>> anything that doesn't use some offline key or verification. However, as
>> this proposal copies more things from PEP 458, I struggle to see the
>> reason for having it over just doing PEP 458 (which I'm still not entirely
>> sure is worth it).
>
> Well, you're giving me a point.
If you are saying that PEP458 and my PEP are ultimately > equivalent for non-vetted things, my PEP is far better as it's several orders of magnitude > simpler for any of the involved parties (implementers, maintainers, ops, auditors). My point is that without the offline verification the TUF PEP is completely uninteresting to me; I would not be willing to implement it and I would do everything I could to prevent it from being implemented. Your PEP lacks this and thus is completely uninteresting to me. > > If you care only about the security you get with offline verification, then what about > changing my proposal to sign the trust file with an offline key (with pip checking gpg > only for things in the trust file)? It's simpler, but in a broken way. The complexity that exists in TUF exists for a reason. I would suggest that you take the time to understand TUF before proposing an alternative (going by your statements). > > PEP458 says that detailing offline verification is outside the scope of the PEP. If you > think that's the only reason the PEP exists, I don't think that it's correct to declare > that it's out of scope. We would need a concrete description of how an offline verification > process would work at PyPI scale, with PSF staff and resources. Well, it's not the *only* reason; there are other benefits to it, but I consider them additional benefits and not the primary driver for the PEP. To be completely clear, I don't consider PEP 458 to be a sure thing either. It is the only proposal I've seen that even begins to be acceptable to me. I'm still unsure if the added cost (in terms of complexity) is worth what we get from it. I do think that it's the best proposal I've seen for this, and I'm unable to come up with a better solution. There's also the question of whether we should do anything at all.
Not signing packages is a completely reasonable end result if no solution is found that meaningfully protects against more things than TLS does, at a cost that doesn't outweigh the benefits. > > >> This is fixed by 2FA. Notice that implementing 2FA but *not* package signing is not > enough > >> to fix this attack; the attacker in fact would still be able to simply modify a package > >> release in transit, and the user would then 2FA-authorize a modified package without > >> realizing it. > >> > >> This is one of many examples in which 2FA collaborates with package signing to increase > >> security, and this is why I merged the two proposals; of course they can be split, but > together > >> they achieve more. > > > > No, 2FA does nothing for this; if you're in a position to MITM an exploited TLS > > stream, you can just steal the session cookie, which does not have a second factor. > > You're thinking of 2FA too conservatively, or I'm using the term "2FA" in too broad a sense. > Think asking for an OTP to confirm an operation, not to log in, e.g.: after a package upload > or a GPG key change, the author would receive an SMS saying "confirm that your new GPG key > is 123456789abcdef by entering code 671924". It's unlikely that we'll end up in a situation where any change requires a confirmation. > > > > 3. If we implement our own it'll be TOTP, probably with a QR code, yes. > > 4. Backup codes should exist, yes. > > What about using only SMS instead? That would allow us to also use them for confirmation > of security-related changes (e.g.: a new package release), and could even be integrated > with distutils prompts. I'd prefer not requiring a cell phone. I'm also not a huge fan of SMS over TOTP. This is where Duo Security is most interesting to me, since it provides mechanisms for: * Smart phones (with a nice push UX!)
* "Dumb" phones (via SMS) * Literally any phone that can receive phone calls (via Dialing) * Hardware tokens That means that in order for someone to not be able to participate in 2FA they'd have to have absolutely no phone number. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Tue Jul 29 02:39:03 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 Jul 2014 10:39:03 +1000 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On 29 Jul 2014 10:01, "Giovanni Bajo" wrote: > > Il giorno 29/lug/2014, alle ore 01:36, Nick Coghlan ha scritto: >> >> On 29 Jul 2014 03:43, "Giovanni Bajo" wrote: >> > >> > Hello, >> > >> > on March 2013, on the now-closed catalog-sig mailing-list, I submitted a proposal for fixing several security problems in PyPI, pip and distutils[1]. Some of my proposals were obvious things like downloading packages through SSL, which was already in the process of being designed and implemented. Others, like GPG package signing, were discussed for several days/weeks, but ended up in discussion paralysis because of the upcoming TUF framework. >> >> It stalled because end-to-end signing is a hard security problem and "just add GPG!" isn't an answer. > > I don't find it very fair that you collapse a 700-line document to "just add GPG". And even in March, I wrote and proposed an extensive document. I'm not jumping into discussions handwaving a couple of buzzwords. If your PEP defends against all the attacks TUF does, then it will be just as complicated as TUF. If it doesn't defend against all those attacks, then it offers even less justification for the complexity increase than TUF. >> If you add a threat model to the draft PEP, then we can have a useful discussion, > > There is a whole section called "threat model".
If you would like to see the threat model extended to cover different attacks, I reckon there's a less aggressive way to phrase your opinion. My apologies, it is customary to explain the threat model *before* proposing a solution to it, and I did indeed lose patience before reaching that part of the PEP. I have now read that section, and still find it lacking compared to the comprehensive threat analysis research backing TUF. Note that, as with Donald, I have no problem with the idea of improving PyPI's account authentication mechanisms. That's purely a matter of any other major PyPI enhancements being behind the Warehouse migration in the priority queue, and can be addressed without a PEP. It's the proposed bespoke package signing and trust management system I am sceptical of. >> 1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure admins) are part of the threat model for PEP 458. How does your PEP defend against full compromise of those accounts? > > It doesn't, nor does PEP 458. If an attacker owns a root shell on PyPI, it is game over with both PEPs; that is, unless you're referring to the claimed role, for which PEP 458 is severely lacking the most important and hardest thing: how to populate it. PEP 458 includes both offline root keys and an "N of M" signing model specifically so compromise of a single administrator account can't break the whole system (aside from DoS attacks). >> 2. What level of damage mitigation are we aiming to attain in the event of a full PyPI compromise? (i.e. attacker has full control over absolutely everything published from PyPI) > > I'm not sure I understand the question or how it differs from the previous one. The threat model section on "PyPI server compromise" in my PEP has some details, though. And it amounts to minimal additional defence beyond where we are today. >> >> 3.
Assuming an attacker has fully compromised DNS and SSL (and hence can manipulate or replace *all* data purportedly being received from PyPI by a given target), what additional level of integrity is the "end-to-end" signing process adding? > > In all cases in which the trust file is cached / edited with no automatic updates, it fully guarantees that no compromised packages can be installed on the user's machine. This wouldn't be the standard setup for most users. Except it allows known-vulnerable versions to be installed. > > If you're willing to consider this an important use case (in my opinion, it's quite far from one), I can revise my PEP to add offline signing like the "claimed" role in PEP 458; that's a very simple addition, as the hardest part is entirely elsewhere (= how you set up such an offline process with PSF staff and resources). I could also explore how to better integrate my PEP with a peep-like solution, that is, an automatic caching of the subset of the trust file, connected somehow to the project-specific requirements.txt (but see below for differences). Now you're getting to one of the key reasons TUF integration stalled: the requirement for offline root keys is there for good reason, but it's a hard data management problem for a distributed administration team. >> 4. What level of guarantee will be associated with the signing keys, and are package authors prepared to offer those guarantees? (The only dev community I've really talked to about that is a few of the Django core devs, and their reaction was "Hell, no, protecting and managing keys is too hard") > > If package authors are unwilling to handle signing keys, PEP 458 is also doomed, and more so since it would use an obscure, custom, one-purpose signing key and key tools, with no extensive documentation on proper handling on multiple operating systems and development scenarios. I thus don't see how this question is pertinent in evaluating my PEP vs PEP 458.
The signing algorithm isn't the interesting part of TUF - it's the key management and trust delegation. This question is relevant to both proposals, as the case where PyPI is signing on behalf of the developer is a critical one to handle in order to keep barriers to entry low. A "solution" that leads to end users just turning off signature checking (because most packages aren't signed) is undesirable, as it means missing out on the additional integrity checks for the PyPI-user link, even if those packages are still vulnerable to PyPI compromise. >> >> 5. How do these guarantees compare to the much simpler SSH-inspired "trust on first use" model already offered by "peep"? > > I'm not very familiar with peep, but my understanding is that it doesn't help at all for package upgrades; if an attacker compromises the latest release of Django, any user installing that release for the first time in each virtualenv wouldn't realize it. peep is designed to be used in conjunction with version pinning - upgrades happen deliberately rather than implicitly. > Any package signing solution (both PEP458 and my PEP) fixes this, as it creates a trust path that can be used to install multiple files. Yes, the challenge is to create such a trust path in a way that *adds* significantly to the security offered by SSL, rather than being vulnerable to the same classes of attacks. Regards, Nick. > > -- > Giovanni Bajo :: rasky at develer.com > Develer S.r.l. :: http://www.develer.com > > My Blog: http://giovanni.bajo.it > > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rasky at develer.com Tue Jul 29 03:12:37 2014 From: rasky at develer.com (Giovanni Bajo) Date: Tue, 29 Jul 2014 03:12:37 +0200 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: Il giorno 29/lug/2014, alle ore 02:39, Nick Coghlan ha scritto: > On 29 Jul 2014 10:01, "Giovanni Bajo" wrote: > > > > Il giorno 29/lug/2014, alle ore 01:36, Nick Coghlan ha scritto: > >> > >> On 29 Jul 2014 03:43, "Giovanni Bajo" wrote: > >> > > >> > Hello, > >> > > >> > on March 2013, on the now-closed catalog-sig mailing-list, I submitted a proposal for fixing several security problems in PyPI, pip and distutils[1]. Some of my proposals were obvious things like downloading packages through SSL, which was already in the process of being designed and implemented. Others, like GPG package signing, were discussed for several days/weeks, but ended up in discussion paralysis because of the upcoming TUF framework. > >> > >> It stalled because end-to-end signing is a hard security problem and "just add GPG!" isn't an answer. > > > > I don't find it very fair that you collapse a 700-line document to "just add GPG". And even in March, I wrote and proposed an extensive document. I'm not jumping into discussions handwaving a couple of buzzwords. > > If your PEP defends against all the attacks TUF does, then it will be just as complicated as TUF. If it doesn't defend against all those attacks, then it offers even less justification for the complexity increase than TUF. > 1) TUF isn't designed for PyPI and pip. It's a generic system meant for many different scenarios, which is then adapted (with many different compromises) to our use case. So you can't really postulate that you absolutely need something as complicated to get to the same level of defense. 2) Security is never perfect. You might very well decide that the increased level of security is not worth the complexity increase.
My solution is far, far simpler than TUF. To me, it's a reasonable compromise between complexity of implementation, time/costs required for all involved parties, and decreased security. > >> If you add a threat model to the draft PEP, then we can have a useful discussion, > > > > There is a whole section called "threat model". If you would like to see the threat model extended to cover different attacks, I reckon there's a less aggressive way to phrase your opinion. > > My apologies, it is customary to explain the threat model *before* proposing a solution to it, and I did indeed lose patience before reaching that part of the PEP. I have now read that section, and still find it lacking compared to the comprehensive threat analysis research backing TUF. > If it's only "lacking", then I'm happy, as that PEP is not my PhD dissertation :) > >> 1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure admins) are part of the threat model for PEP 458. How does your PEP defend against full compromise of those accounts? > > > > > It doesn't, nor does PEP 458. If an attacker owns a root shell on PyPI, it is game over with both PEPs; that is, unless you're referring to the claimed role, for which PEP 458 is severely lacking the most important and hardest thing: how to populate it. > > PEP 458 includes both offline root keys and an "N of M" signing model specifically so compromise of a single administrator account can't break the whole system (aside from DoS attacks). > It depends on your definition of "compromise of a single administrator". Obviously if you compromise the root signing key, then yes, the N/M model helps, but that key doesn't exist in my PEP in the first place (which means that my PEP is simpler, requires less work on administrators, less key management, etc.). If with "compromise of an administrator"
you mean that an attacker can log in to PyPI and become root, then I maintain that PEP458 and my PEP are mostly equivalent, in that they offer close to no protection; the attacker can compromise all packages. PEP458 then has this "claimed" role that is signed with an offline key, which would protect from a PyPI compromise, and which Donald says is the only reason he cares about PEP458. But it totally punts on how to maintain that role from a practical standpoint; how do you move projects in there? How do you verify author keys (and identities)? How do you handle the offline verification of keys from maintainers all over the world? Should the PSF set up a telephone line for that? Should it outsource some identity verification to an external company? How do you protect from different kinds of social engineering? I did some research on the topic a couple of years ago, and it's a very tough topic. Having the software part that can handle offline signing is the easiest part. > >> 2. What level of damage mitigation are we aiming to attain in the event of a full PyPI compromise? (i.e. attacker has full control over absolutely everything published from PyPI) > > > > I'm not sure I understand the question or how it differs from the previous one. The threat model section on "PyPI server compromise" in my PEP has some details, though. > > And it amounts to minimal additional defence beyond where we are today. > Yes. I don't claim otherwise either. > >> 3. Assuming an attacker has fully compromised DNS and SSL (and hence can manipulate or replace *all* data purportedly being received from PyPI by a given target), what additional level of integrity is the "end-to-end" signing process adding? > > > > In all cases in which the trust file is cached / edited with no automatic updates, it fully guarantees that no compromised packages can be installed on the user's machine. This wouldn't be the standard setup for most users.
> > Except it allows known-vulnerable versions to be installed. > Well, known-vulnerable versions are a problem in deployments, not development. If you're fully compromising DNS and SSL of a deployment machine, there are far easier ways to attack your target. Moreover, in deployments, most people do pin the versions of the packages they install. I don't think anybody is running deployments in which they install whatever latest version of Django PyPI serves them. In that case, serving a different version would still make pip abort installation. Protecting development machines against code execution is fully achieved by my PEP with a manually-edited (or peep-like edited) trust file. > >> 4. What level of guarantee will be associated with the signing keys, and are package authors prepared to offer those guarantees? (The only dev community I've really talked to about that is a few of the Django core devs, and their reaction was "Hell, no, protecting and managing keys is too hard") > > If package authors are unwilling to handle signing keys, PEP 458 is also doomed, and more so since it would use an obscure, custom, one-purpose signing key and key tools, with no extensive documentation on proper handling on multiple operating systems and development scenarios. I thus don't see how this question is pertinent in evaluating my PEP vs PEP 458. > > The signing algorithm isn't the interesting part of TUF - it's the key management and trust delegation. This question is relevant to both proposals, as the case where PyPI is signing on behalf of the developer is a critical one to handle in order to keep barriers to entry low. > > A "solution" that leads to end users just turning off signature checking (because most packages aren't signed) is undesirable, as it means missing out on the additional integrity checks for the PyPI-user link, even if those packages are still vulnerable to PyPI compromise.
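The version/hash pinning model discussed here (the approach peep builds on) can be sketched in a few lines of stdlib Python. The file format and names are illustrative, not peep's actual syntax: a content hash is recorded at a first trusted install, and any later download, whether a benign upgrade or a tampered archive, fails the check until the pin is deliberately updated.

```python
import hashlib

def verify_pinned(archive_bytes, pinned_sha256):
    """Peep-style check: the downloaded archive must be byte-identical
    to the one whose hash was pinned at first trusted install."""
    return hashlib.sha256(archive_bytes).hexdigest() == pinned_sha256

# Pin recorded at the first (trusted) install of a given release:
good_archive = b"...contents of an sdist, e.g. Django-1.6.5.tar.gz..."
pin = hashlib.sha256(good_archive).hexdigest()

# A later download of the same release is accepted only if unchanged:
tampered = good_archive + b"backdoor"
```

This protects repeat installs of the pinned version even over a fully compromised connection, but it says nothing about an upgrade: the new pin has to be established through some other trusted channel, which is exactly the gap the two sides are arguing over.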
> you don't need to turn off signature checking; you can simply allow installation of packages with missing GPG signatures. So again, if developers refuse to touch keys, you're not affecting the security of users installing signed packages. > >> 5. How do these guarantees compare to the much simpler SSH-inspired "trust on first use" model already offered by "peep"? > > > > I'm not very familiar with peep, but my understanding is that it doesn't help at all for package upgrades; if an attacker compromises the latest release of Django, any user installing that release for the first time in each virtualenv wouldn't realize it. > > peep is designed to be used in conjunction with version pinning - upgrades happen deliberately rather than implicitly. > yes, but WHEN upgrades do happen, peep offers no security; instead, my proposal does offer security in that the new never-seen-before package can still be checked for authenticity (through the signature). -- Giovanni Bajo :: rasky at develer.com Develer S.r.l. :: http://www.develer.com My Blog: http://giovanni.bajo.it -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4207 bytes Desc: not available URL: From donald at stufft.io Tue Jul 29 03:48:05 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 28 Jul 2014 21:48:05 -0400 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On July 28, 2014 at 9:13:03 PM, Giovanni Bajo (rasky at develer.com) wrote: > > My solution is far, far simpler than TUF. To me, it's a reasonable compromise between complexity > of implementation, time/costs required for all involved parties, and decreased security.
The package signing portions of it have little to no benefit over just using TLS, or just using an online key and not involving the authors at all. I don't believe the additional complexity is worthwhile for little to no additional security past what is already there. > > PEP 458 includes both offline root keys and an "N of M" signing model specifically so > compromise of a single administrator account can't break the whole system (aside from > DoS attacks). > > > It depends on your definition of "compromise of a single administrator". Obviously > if you compromise the root signing key, then yes, the N/M model helps, but that key doesn't > exist in my PEP in the first place (which means that my PEP is simpler, requires less work > on administrators, less key management, etc.). If with "compromise of an administrator" > you mean that an attacker can log in to PyPI and become root, then I maintain that PEP458 > and my PEP are mostly equivalent, in that they offer close to no protection; the attacker > can compromise all packages. The lack of an offline key also means your PEP provides little to no additional benefit over TLS or just having an online key and not involving the authors at all. > > >> 2. What level of damage mitigation are we aiming to attain in the event of a full PyPI > compromise? (i.e. attacker has full control over absolutely everything published > from PyPI) > > > > > > I'm not sure I understand the question or how it differs from the previous one. The threat > model section on "PyPI server compromise" in my PEP has some details, though. > > > > And it amounts to minimal additional defence beyond where we are today. > > > Yes. I don't claim otherwise either. I'm not sure I'm understanding you correctly: if you don't claim that this PEP provides more than minimal additional defense beyond where we are today, then what is the point of it?
Why would we replace something that already exists for something that doesn't exist and is more complex if it doesn't provide more than a minimal amount of additional defense? > > >> 3. Assuming an attacker has fully compromised DNS and SSL (and hence can manipulate > or replace *all* data purportedly being received from PyPI by a given target), what additional > level of integrity is the "end-to-end" signing process adding? > > > > > > In all cases in which the trust file is cached / edited with no automatic updates, it > fully guarantees that no compromised packages can be installed on the user's machine. > This wouldn't be the standard setup for most users. > > > > Except it allows known-vulnerable versions to be installed. > > > Well, known-vulnerable versions are a problem in deployments, not development. If > you're fully compromising DNS and SSL of a deployment machine, there are far easier ways > to attack your target. Moreover, in deployments, most people do pin the versions of packages > they install. I don't think anybody is running deployments in which they install whatever > latest version of Django PyPI serves them. In that case, serving a different version > would still make pip abort installation. > > Protecting development machines against code execution is fully achieved by my PEP > with a manually-edited (or peep-like edited) trust file. If someone does ``pip install thing``, they are going to get whatever the latest version is according to a file which is only secured by TLS. Maybe that's version 2.2.1 instead of 2.2.2, where 2.2.1 has a known security flaw that allows arbitrary code execution or something else bad. > > > >> 4. What level of guarantee will be associated with the signing keys, and are package > authors prepared to offer those guarantees? (The only dev community I've really talked > to about that is a few of the Django core devs, and their reaction was "Hell, no, protecting > and managing keys is too hard")
> > > > > > If package authors are unwilling to handle signing keys, PEP 458 is also doomed, and > more so since it would use an obscure, custom, one-purpose signing key and key tools, > with no extensive documentation on proper handling on multiple operating systems and > development scenarios. I thus don't see how this question is pertinent in evaluating > my PEP vs PEP 458. Right, I don't think anyone here has said that implementing TUF is a foregone conclusion. It's a perfectly valid answer that we don't do package signing, or we only do package signing by PyPI itself and we don't involve the authors at all because we don't believe the cost is worth it. This is a question that still needs to be answered, and it's as valid a question for the TUF PEP as it is for this PEP. Are people even going to bother using this? Even the most amazing solution ever is worthless if nobody uses it. > > > > The signing algorithm isn't the interesting part of TUF - it's the key management and > trust delegation. This question is relevant to both proposals, as the case where PyPI > is signing on behalf of the developer is a critical one to handle in order to keep barriers > to entry low. > > > > A "solution" that leads to end users just turning off signature checking (because most > packages aren't signed) is undesirable, as it means missing out on the additional integrity > checks for the PyPI-user link, even if those packages are still vulnerable to PyPI compromise. > > > you don't need to turn off signature checking; you can simply allow installation of packages > with missing GPG signatures. So again, if developers refuse to touch keys, you're not > affecting security of users installing signed packages. You're not affecting the security of people who only choose to install signed packages, however if any reasonable set of dependencies has a high chance of requiring unsigned dependencies to be allowed, then that does affect the security of things.
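This adoption concern can be made quantitative with an illustrative sketch (the signing rate here is an assumed figure, not a measurement): even with a high fraction of packages signed, the chance that *every* package in a dependency tree is signed falls off geometrically with the size of the tree.

```python
def p_all_signed(signed_fraction, n_deps):
    """Probability that an n-package dependency set is fully signed,
    assuming (for illustration) that each package is independently
    signed with probability signed_fraction."""
    return signed_fraction ** n_deps

# If 90% of packages were signed (an assumed figure), a 20-package
# dependency tree would be fully signed only about 12% of the time,
# so most users would still need to allow unsigned installs.
p = p_all_signed(0.9, 20)
```

The independence assumption is crude (popular, heavily-depended-on packages are presumably likelier to sign), but it shows why a high per-package signing rate does not translate into a high all-signed rate for realistic dependency sets.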
IOW the likelihood that people are going to be willing to use this thing at all is an important consideration, and one that the TUF PEP handles by offering the unclaimed/recently-claimed/claimed roles to allow authors to opt out of managing keys and present their users with a less secure system. Now a problem with the TUF PEP is similar to what I just said above: if most people opt to stay in unclaimed, then the benefit of TUF over just using TLS, or over a single online key that PyPI uses to sign everything without author participation, is severely limited. This is one of the questions that weigh heavily on my mind about whether TUF is worth it. Will enough authors opt in to manage their own keys? How likely will it be that any particular set of reasonable dependencies will require at least one package in unclaimed? -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From graffatcolmingov at gmail.com Tue Jul 29 03:50:15 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 28 Jul 2014 20:50:15 -0500 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On Mon, Jul 28, 2014 at 8:12 PM, Giovanni Bajo wrote: > Il giorno 29/lug/2014, alle ore 02:39, Nick Coghlan ha > scritto: > > On 29 Jul 2014 10:01, "Giovanni Bajo" wrote: >> >> Il giorno 29/lug/2014, alle ore 01:36, Nick Coghlan >> ha scritto: >>> >>> On 29 Jul 2014 03:43, "Giovanni Bajo" wrote: >>> > >>> > Hello, >>> > >>> > on March 2013, on the now-closed catalog-sig mailing-list, I submitted >>> > a proposal for fixing several security problems in PyPI, pip and >>> > distutils[1]. Some of my proposals were obvious things like downloading >>> > packages through SSL, which was already in the process of being designed and >>> > implemented.
Others, like GPG package signing, were discussed for several >>> > days/weeks, but ended up in discussion paralysis because of the upcoming TUF >>> > framework. >>> >>> It stalled because end-to-end signing is a hard security problem and >>> "just add GPG!" isn't an answer. >> >> I don't find it very fair that you collapse a 700-line document to "just >> add GPG". And even in March, I wrote and proposed an extensive document. I'm >> not jumping into discussions handwaving a couple of buzzwords. > > If your PEP defends against all the attacks TUF does, then it will be just > as complicated as TUF. If it doesn't defend against all those attacks, then > it offers even less justification for the complexity increase than TUF. > > 1) TUF isn't designed for PyPI and pip. It's a generic system meant for many > different scenarios, which is then adapted (with many different compromises) > to our use case. So you can't really postulate that you absolutely need > something as complicated to get to the same level of defense. > 2) Security is never perfect. You might very well decide that the > increased level of security is not worth the complexity increase. > > My solution is far, far simpler than TUF. To me, it's a reasonable compromise > between complexity of implementation, time/costs required for all involved > parties, and decreased security. While there's a significant difference between the complexity of the proposals to implement, there's also a significant difference in complexity for the end users. On the one hand, implementing PEP458 would mean a great deal of work for those working on PyPI/Warehouse and pip, but it would have few (if any) end-user implications or complications. Your PEP, on the other hand, causes some instabilities (especially if PGP/GPG isn't installed, or if someone has hijacked the PATH) and will create only headaches for the users.
They'll have to install GPG, generate a key, upload the key, secure the key, and make sure they don't ever lose it. While there's less complexity for the implementers, there's much more for end users. We don't want to make packaging worse for users in exchange for a negligible (questionable?) amount of increased security. >>> If you add a threat model to the draft PEP, then we can have a useful >>> discussion, >> >> There is a whole section called "threat model". If you would like to see >> the threat model extended to cover different attacks, I reckon there's a >> less aggressive way to phrase your opinion. > > My apologies, it is customary to explain the threat model *before* proposing > a solution to it, and I did indeed lose patience before reaching that part > of the PEP. I have now read that section, and still find it lacking compared > to the comprehensive threat analysis research backing TUF. > > If it's only "lacking", then I'm happy, as that PEP is not my PhD > dissertation :) > >>> 1. People like Donald, Ernest, Richard, Noah (i.e. PyPI and infrastructure >>> admins) are part of the threat model for PEP 458. How does your PEP defend >>> against full compromise of those accounts? > >> >> It doesn't, nor does PEP 458. If an attacker owns a root shell on PyPI, it >> is game over with both PEPs; that is, unless you're referring to the claimed >> role, for which PEP 458 is severely lacking the most important and hardest >> thing: how to populate it. > > PEP 458 includes both offline root keys and an "N of M" signing model > specifically so compromise of a single administrator account can't break the > whole system (aside from DoS attacks). > > It depends on your definition of "compromise of a single administrator". > Obviously if you compromise the root signing key, then yes, the N/M model > helps, but that key doesn't exist in my PEP in the first place (which means > that my PEP is simpler, requires less work on administrators, less key > management, etc.).
If with "compromise of an administrator" you intend that > an attacker can log in to PyPI and become root, then I maintain that PEP 458 > and my PEP are mostly equivalent, in that they offer close to no protection; > the attacker can compromise all packages. > > PEP 458 then has this "claimed" role that is signed with an offline key, > which would protect from a PyPI compromise, and which Donald says is the > only reason he cares about PEP 458. But it totally punts on how to maintain > that role from a practical standpoint; how do you move projects into there? > How do you verify author keys (and identities)? How do you handle the offline > verification of keys from maintainers all over the world? Should the PSF > set up a telephone line for that? Should it outsource some identity verification > to an external company? How do you protect from different kinds of social > engineering? I did some research on the topic a couple of years ago, and > it's a very tough topic. Having the software part that can handle offline > signing is the easiest part. > >>> 2. What level of damage mitigation are we aiming to attain in the event >>> of a full PyPI compromise? (i.e. attacker has full control over absolutely >>> everything published from PyPI) >> >> I'm not sure I understand the question or how it differs from the previous >> one. The threat model section on "PyPI server compromise" in my PEP has some >> details though. > > And it amounts to minimal additional defence beyond where we are today. > > Yes. I don't claim otherwise either. > >>> 3. Assuming an attacker has fully compromised DNS and SSL (and hence can >>> manipulate or replace *all* data purportedly being received from PyPI by a >>> given target), what additional level of integrity is the "end-to-end" >>> signing process adding? >> >> In all cases in which the trust file is cached / edited with no automatic >> updates, it fully guarantees that no compromised packages can be installed >> on the user machine.
This wouldn't be the standard setup for most users. > > Except it allows known-vulnerable versions to be installed. > > Well, known-vulnerable versions are a problem in deployments, not > development. If you're fully compromising DNS and SSL of a deployment > machine, there are far easier ways to attack your target. Moreover, in > deployments, most people do pin the versions of packages they install. I don't > think anybody is running deployments in which they install whatever latest > version of Django PyPI serves them. In that case, serving a different > version would still make pip abort installation. > > Protecting development machines against code execution is fully achieved by > my PEP with a manually-edited (or peep-like edited) trust file. > >>> 4. What level of guarantee will be associated with the signing keys, and >>> are package authors prepared to offer those guarantees? (The only dev >>> community I've really talked to about that is a few of the Django core devs, >>> and their reaction was "Hell, no, protecting and managing keys is too hard") >> >> If package authors are unwilling to handle signing keys, PEP 458 is also >> doomed, and more so since it would use an obscure, custom, single-purpose >> signing key and key tools, with no extensive documentation on proper >> handling on multiple operating systems and development scenarios. I thus >> don't see how this question is pertinent in evaluating my PEP vs PEP 458. > > The signing algorithm isn't the interesting part of TUF - it's the key > management and trust delegation. This question is relevant to both > proposals, as the case where PyPI is signing on behalf of the developer is a > critical one to handle in order to keep barriers to entry low.
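The deployment behaviour Giovanni describes above (pinned versions plus a peep-like trust file, where a substituted release makes pip abort) can be sketched as a toy verifier. The helper names and the pin format here are illustrative assumptions, not pip's or peep's actual interfaces:

```python
import hashlib

# Hypothetical pin store: package name -> (pinned version, expected sha256).
# A real deployment would read this from a requirements/trust file.
PINNED = {
    "Django": ("1.6.5", hashlib.sha256(b"django-1.6.5 sdist bytes").hexdigest()),
}

def can_install(name, version, artifact):
    """Accept only the pinned version whose artifact hash matches the pin."""
    if name not in PINNED:
        return False
    want_version, want_hash = PINNED[name]
    return (version == want_version
            and hashlib.sha256(artifact).hexdigest() == want_hash)

# The pinned release installs; a different version or tampered bytes abort.
assert can_install("Django", "1.6.5", b"django-1.6.5 sdist bytes")
assert not can_install("Django", "1.6.6", b"django-1.6.5 sdist bytes")
assert not can_install("Django", "1.6.5", b"tampered bytes")
```

The point of the sketch is that once versions and hashes are pinned, a compromised index can at worst cause installation to fail, not to succeed with altered contents.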
> > A "solution" that leads to end users just turning off signature checking > (because most packages aren't signed) is undesirable, as it means missing > out on the additional integrity checks for the PyPI-user link, even if those > packages are still vulnerable to PyPI compromise. > > You don't need to turn off signature checking; you can simply allow > installation of packages with missing GPG signatures. So again, if > developers refuse to touch keys, you're not affecting the security of users > installing signed packages. > >>> 5. How do these guarantees compare to the much simpler SSH-inspired >>> "trust on first use" model already offered by "peep"? >> >> I'm not very familiar with peep, but my understanding is that it doesn't >> help at all with package upgrades; if an attacker compromises the latest >> release of Django, any user installing that release for the first time in >> each virtualenv wouldn't realize it. > > peep is designed to be used in conjunction with version pinning - upgrades > happen deliberately rather than implicitly. > > Yes, but WHEN upgrades do happen, peep offers no security; instead, my > proposal does offer security in that the new never-seen-before package can > still be authenticated (through its signature). > > -- > Giovanni Bajo :: rasky at develer.com > Develer S.r.l. :: http://www.develer.com > > My Blog: http://giovanni.bajo.it > > > > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From donald at stufft.io Tue Jul 29 04:09:07 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 28 Jul 2014 22:09:07 -0400 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On July 28, 2014 at 9:48:05 PM, Donald Stufft (donald at stufft.io) wrote: > > > > >> 2.
What level of damage mitigation are we aiming to attain > in the event of a full PyPI > > compromise? (i.e. attacker has full control over absolutely > everything published > > from PyPI) > > > > > > > > I'm not sure I understand the question or how it differs from > the previous one. The threat > > model section on "PyPI server compromise" in my PEP has some > details though. > > > > > > And it amounts to minimal additional defence beyond where > we are today. > > > > > Yes. I don't claim otherwise either. > > I'm not sure I'm understanding you correctly: if you don't claim > that this PEP > provides more than minimal additional defense beyond where > we are today, then > what is the point of it? Why would we replace something that already > exists > with something that doesn't exist and is more complex, if it doesn't > provide > more than a minimal amount of additional defense? Ok, Richard pointed out to me that this is probably saying you don't claim it does in this one particular aspect and not in general. If that is the case then I retract this question. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Tue Jul 29 05:42:44 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 Jul 2014 13:42:44 +1000 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On 29 July 2014 11:50, Ian Cordasco wrote: > On Mon, Jul 28, 2014 at 8:12 PM, Giovanni Bajo wrote: >> Il giorno 29/lug/2014, alle ore 02:39, Nick Coghlan ha >> scritto: >>> If your PEP defends against all the attacks TUF does, then it will be just >>> as complicated as TUF. If it doesn't defend against all those attacks, then >>> it offers even less justification for the complexity increase than TUF. >> >> 1) TUF isn't designed for PyPI and pip.
It's a generic system meant for many >> different scenarios, which is then adapted (with many different compromises) >> to our use case. So you can't really postulate that you absolutely need >> something as complicated to get to the same level of defense. >> 2) Security is never perfection. You might very well decide that the >> increased level of security is not worth the complexity increase. >> >> My solution is far, far simpler than TUF. To me, it's a reasonable compromise >> between complexity of implementation, time/costs required for all involved >> parties, and decreased security. > > While there's a significant difference between the complexity of the > proposals to implement, there's also a significant difference in > complexity for the end users. > > On the one hand, implementing PEP 458 would mean a great deal of work > for those working on PyPI/Warehouse and pip, but it would have few > (if any) end-user implications or complications. > > Your PEP, on the other hand, causes some instabilities (especially if > PGP/GPG isn't installed, or if someone has hijacked the PATH) and will > create only headaches for the users. They'll have to install GPG, > generate a key, upload the key, secure the key, and make sure they > don't ever lose it. While there's less complexity for the > implementers, there's much more for end users. We don't want to make > packaging worse for users in exchange for a negligible (questionable?) > amount of increased security. Right. Properly securing signing keys is incredibly painful. Pushing that burden onto package authors doesn't magically make it less painful, it just distributes the pain to more people. The challenges that PEP 458 waves away regarding "How do the PyPI admins manage the root signing keys?" are exactly the same challenges that package authors would face under either PEP, particularly when a project is maintained by a distributed team with multiple people that can make releases.
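The "N of M" root signing model from PEP 458 that the thread keeps returning to can be sketched as a simple threshold check. This is a toy illustration only: HMAC stands in for a real public-key signature scheme, and the keyholder names and metadata shape are invented, not TUF's actual metadata format:

```python
import hashlib
import hmac

# Hypothetical keyholders (M = 3). In TUF these would be offline
# public/private key pairs; symmetric HMAC keys are a stand-in here.
KEYHOLDERS = {name: ("secret-%s" % name).encode()
              for name in ("alice", "bob", "carol")}

def sign(key, metadata):
    """Toy 'signature': an HMAC-SHA256 over the metadata."""
    return hmac.new(key, metadata, hashlib.sha256).hexdigest()

def threshold_valid(metadata, signatures, threshold=2):
    """Accept metadata only if at least `threshold` known keyholders
    produced a valid signature over it (the 'N of M' rule)."""
    good = sum(
        1 for name, sig in signatures.items()
        if name in KEYHOLDERS
        and hmac.compare_digest(sign(KEYHOLDERS[name], metadata), sig)
    )
    return good >= threshold

meta = b'{"role": "root", "version": 2}'
sigs = {n: sign(KEYHOLDERS[n], meta) for n in ("alice", "bob")}
assert threshold_valid(meta, sigs)                          # 2-of-3: accepted
assert not threshold_valid(meta, {"alice": sigs["alice"]})  # 1-of-3: rejected
```

The security property being illustrated is exactly the one Donald cites: compromising any single administrator's key is not enough to re-sign the root metadata.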
Managing signing keys in distributed development in such a way that they're actually worthy of trust is *hard*. Attaching a signature to something is relatively meaningless if we don't know how the signing key is itself being managed. If it isn't handled carefully, then the signing process *itself* can become a vulnerability, since difficulties signing a new release can prevent the release of a new version. We actually face this problem with CPython: we *cannot* quickly hand over the task of creating the binary installers to new maintainers, as we have to obtain and communicate the relevant information regarding the new signing keys. One of the things Red Hat subscribers are paying for is the fact that we *do* have a certified build system, so our signing key actually means something. Other Linux distros are also paranoid about protecting their signing keys, even if they're not in a position to go for full security certifications. You can get a feel for the complexity involved in doing signing key management properly by looking at what's involved in running your own Certificate Authority: http://pki.fedoraproject.org/wiki/PKI_Main_Page (or see Ade's talk at Flock last year: https://www.youtube.com/watch?v=OvAdCxvPjmM) And that's the conundrum we face with end-to-end signing proposals; they shift the responsibility for managing key integrity to folks that by and large will fall into one of the following camps: 1. They have no idea what "key management" means, or why they should care 2. They have some idea what "key management" means, and, mistakenly, think it's easy 3. They know exactly what "key management" means, and hence, quite sensibly, don't want to have anything to do with it on a volunteer-driven project Folks in categories 1 & 3 won't make use of end-to-end signing on PyPI.
Folks in category 2 might make use of it, but would be at significant risk of not securing their signing keys properly (folks that are appropriately paranoid about the difficulties of managing signing keys safely are the ones that will say "no, I don't want to"). This is why end-to-end signing support isn't really a technology problem, it's a "people and processes" problem. It's such a hard one that even commercial software companies struggle to deal with it, hence the rise of the app store model to protect the integrity of mobile devices, while still allowing the use of applications from a wide variety of developers. The Linux distros have *always* worked that way - the signing keys are controlled as part of the distro build infrastructure, not by the individual package maintainers. The package maintainers just upload source packages, and the build system takes care of the rest (including signing the result). It is thus very, very tempting to declare the end-to-end signing problem unsolvable in the general case, given the diverse range of publishers that PyPI supports. If we *do* take that step, then the threat model shifts, since we declare that yes, an attacker that fully compromises PyPI *will* be able to masquerade as any publisher. At that point, the security response shifts to focusing on: 1. SSL-independent authentication of the link from PyPI to the end user. PEP 458 largely covers this if you drop the "claimed" role. (We still have the "root key" management problem, but at least that's relatively narrow in scope) 2. Preventing a compromise of PyPI. This is just normal web app perimeter defence, and generally account security management. 2FA proposals fit into this. They're largely a matter for the PyPI development team and the PSF infrastructure team, rather than needing broad collaboration through the PEP process. 3. *Recovering* from a compromise of PyPI. 
Assuming that an attack has happened, and PyPI *has* been compromised, how do we identify which packages were compromised and remove them, reverting PyPI to a known good state. The "whole system" snapshot approach of TUF can potentially help here, since it makes it substantially more difficult to go back and surreptitiously modify past snapshots. To be honest, I've become more convinced over time that this is the right approach, particularly as we're contemplating the idea of adding a build farm directly to PyPI at some point in the future. At that point, PyPI emphatically *is* the publisher, and the link to be secured is the one from PyPI to the end user, just as the link being secured in the Linux distro case is the one from the distro build system to the end user, not from the package maintainer (let alone the upstream project). Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Tue Jul 29 07:01:53 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 29 Jul 2014 01:01:53 -0400 Subject: [Distutils] PEP draft on PyPI/pip package signing In-Reply-To: References: <34391C8E-E655-4E37-9E0B-ECFE8D858816@develer.com> Message-ID: On July 28, 2014 at 11:43:08 PM, Nick Coghlan (ncoghlan at gmail.com) wrote: > > 1. SSL-independent authentication of the link from PyPI to the end > user. PEP 458 largely covers this if you drop the "claimed" role. (We > still have the "root key" management problem, but at least that's > relatively narrow in scope) > 2. Preventing a compromise of PyPI. This is just normal web app > perimeter defence, and generally account security management. 2FA > proposals fit into this. They're largely a matter for the PyPI > development team and the PSF infrastructure team, rather than needing > broad collaboration through the PEP process. > 3. *Recovering* from a compromise of PyPI.
Assuming that an attack has > happened, and PyPI *has* been compromised, how do we identify which > packages were compromised and remove them, reverting PyPI to a known > good state. The "whole system" snapshot approach of TUF can > potentially help here, since it makes it substantially more difficult > to go back and surreptitiously modify past snapshots. > > To be honest, I've become more convinced over time that this is the > right approach, particularly as we're contemplating the idea of adding > a build farm directly to PyPI at some point in the future. At that > point, PyPI emphatically *is* the publisher, and the link to be > secured is the one from PyPI to the end user, just as the link being > secured in the Linux distro case is the one from the distro build > system to the end user, not from the package maintainer (let alone the > upstream project). I've not completely given up on the idea of E2E validation, but I do worry a lot about whether it's actually useful or just feel-good snake oil. As Nick mentioned, the biggest problem is whether or not people are going to use it at all, and, if they do, whether they are going to secure it in a way that actually provides significant value. There is some benefit, even if people do not properly secure the keys, just in that in order to do a complete compromise of PyPI you have to compromise all of the authors too. However, it's likely that a significant number of people will never use a key unless we force them to. In that case you can probably daisy-chain a compromise to everyone by compromising the ones you can, and using those compromised packages to compromise the remaining people who are installing those packages. Even if you can't, it's likely that even a 75% compromise is going to be sufficiently bad that the remaining 25% isn't particularly meaningful (and I think that 25% use is extremely generous). Nick also pointed out the problem we have once we have a build farm on PyPI.
One of the problems with Wheels is that it requires you to have access to a large number of build machines. You'll need 1-2 Windows boxes, an OSX box, and potentially a variety of Linux boxes. This is a lot of infra for people to maintain, especially for small one-person FOSS projects. A PyPI-run build farm is a great way to ease this burden... however it mandates that a PyPI-owned machine has a key that is able to publish for those projects. In an E2E scheme this is a super valuable target because it represents a central machine that is connected to the Internet, has a decrypted key in memory, and is able to sign for a lot of packages. When you look at all of this, I think you have to question if E2E is actually even possible for us in a meaningful way. If it's not possible for us in a meaningful way, then why should we pay the cost, and why should we pass that cost on to our end users. Now, if you take E2E off the table (and I haven't done that, but I think about it often) you can start looking at radically simpler proposals. Right now we rely on the security of TLS. TLS obviously has a lot of problems that it would be nicer to solve. If we take E2E off the table then we could implement signing using only online keys on PyPI itself (possibly with an offline key as the trust root just to make revocations and the like simpler). This would eliminate the need to trust anything but the machine that has that particular signing key. This could then have a higher level of protection placed around it in order to protect this key more. This isn't a fully fleshed out thing by any means, but it's a possibility for a much simpler proposal that also covers trust of mirrors, and keeps working through proxies, corporate MITM connections, etc. More or less I agree that the 3 points Nick mentioned are really good ideas, and in my mind looking at them will be far more constructive than more E2E proposals.
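Donald's "online keys on PyPI itself" idea can be sketched in a few lines. This is only a toy: HMAC is used purely as a stand-in for a real public-key signature scheme such as Ed25519 (with HMAC the verifier would also hold the secret, which defeats the purpose), and the key and index shapes are invented for illustration:

```python
import hashlib
import hmac

# Hypothetical online signing key, held only by the PyPI service itself.
# In a real design this would be the private half of a key pair, with an
# offline trust root able to rotate/revoke it.
ONLINE_KEY = b"pypi-online-signing-key"

def sign_index(metadata):
    """Service-side: authenticate the index metadata PyPI serves."""
    return hmac.new(ONLINE_KEY, metadata, hashlib.sha256).hexdigest()

def verify_index(metadata, signature):
    """Client-side check that the metadata is exactly what PyPI signed,
    independent of the TLS channel (and hence of proxies/MITM boxes)."""
    return hmac.compare_digest(sign_index(metadata), signature)

index = b'{"Django": ["1.6.5", "1.6.4"]}'
sig = sign_index(index)
assert verify_index(index, sig)             # untampered index verifies
assert not verify_index(index + b"x", sig)  # any modification fails
```

The point of the sketch is the trust boundary: only one service-held key must be protected, and verification no longer depends on the transport, which is what makes the scheme work through mirrors and intercepting proxies.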
In my mind, if we're going to do E2E, TUF is the baseline that a project needs to beat in either security or usability for the end user; unfortunately this proposal has worse security, requires the same sort of things from the end user, and primarily just makes implementation easier. -- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From chris.barker at noaa.gov Wed Jul 30 01:46:57 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 29 Jul 2014 16:46:57 -0700 Subject: [Distutils] build a wheel with waf instead of setuptools In-Reply-To: References: Message-ID: On Fri, Jul 25, 2014 at 7:21 AM, Daniel Holth wrote: > > This kind of thing will require us to implement a flag that tells pip > > "setup.py cannot install; go through wheel" which is somewhere in the > > plans.. > Couldn't you write a file called "setup.py" with the core API (i.e. setup.py build | install) that calls waf instead of distutils to do the actual work? Or does pip do more than simply call the setup.py script? -Chris > > I don't think there are any plans to tell pip *not* to use a setup.py > and to > > use a Wheel instead. Rather I think the plans are to enable pluggable > > builders so that an sdist 2.0 package doesn't rely on setup.py and could > use > > a waf builder (for instance) plugin. > > Just a flag that tells pip it can't use the "install" command and has > to do package -> install package on an sdist. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dholth at gmail.com Wed Jul 30 15:24:21 2014 From: dholth at gmail.com (Daniel Holth) Date: Wed, 30 Jul 2014 09:24:21 -0400 Subject: [Distutils] build a wheel with waf instead of setuptools In-Reply-To: References: Message-ID: Pip compatibility is very useful, so I was thinking about doing something like that. Along with using setup-requires-dist to download an appropriately forked waf: https://bitbucket.org/dholth/setup-requires . Then a little boilerplate code later you have a non-distutils package build without giving up compatibility. Removing "setup.py install" and delegating installation to pip is one of the major packaging goals. One way to do that is to always build the wheel, and then install the wheel, when pip has to install from sdist. Since not every legacy package works correctly in wheel, it makes sense to have a per-package flag saying it's allowed. On Tue, Jul 29, 2014 at 7:46 PM, Chris Barker wrote: > On Fri, Jul 25, 2014 at 7:21 AM, Daniel Holth wrote: >> >> > This kind of thing will require us to implement a flag that tells pip >> > "setup.py cannot install; go through wheel" which is somewhere in the >> > plans.. > > > Couldn't you write a file called "setup.py" with the core API (i.e. setup.py > build | install) that calls waf instead of distutils to do the actual > work? > > Or does pip do more than simply call the setup.py script? > > -Chris > > > > >> >> > I don't think there are any plans to tell pip *not* to use a setup.py and >> > to >> > use a Wheel instead. Rather I think the plans are to enable pluggable >> > builders so that an sdist 2.0 package doesn't rely on setup.py and could >> > use >> > a waf builder (for instance) plugin. >> >> Just a flag that tells pip it can't use the "install" command and has >> to do package -> install package on an sdist.
>> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig >
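The shim Chris asks about (a file named setup.py exposing the distutils-style command interface while delegating the real work to waf) could look roughly like this. The mapping from distutils-style commands to waf commands is an assumption for illustration, not waf's documented interface:

```python
# setup.py -- a shim that accepts "setup.py build" / "setup.py install"
# (the interface pip invokes) but delegates the actual work to waf.
import subprocess
import sys

def waf_argv(command):
    """Translate a distutils-style command into a waf invocation.
    The command names on the right are illustrative waf targets."""
    mapping = {
        "build": ["configure", "build"],
        "install": ["configure", "build", "install"],
        "clean": ["distclean"],
    }
    if command not in mapping:
        raise SystemExit("unsupported command: %s" % command)
    # waf itself is a Python script shipped in the source tree,
    # so it is run with the same interpreter.
    return [sys.executable, "waf"] + mapping[command]

if __name__ == "__main__":
    for cmd in sys.argv[1:]:
        subprocess.check_call(waf_argv(cmd))
```

Whether this is enough depends on the open question in the thread: pip also invokes commands like `setup.py egg_info` and `setup.py bdist_wheel`, so a production shim would need to handle (or stub out) those as well.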