From holger at merlinux.eu Mon Sep 1 12:50:25 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Sep 2014 10:50:25 +0000 Subject: [Distutils] devpi-server-2.0.4 hotfix for pypi.python.org change Message-ID: <20140901105025.GN28217@merlinux.eu> devpi-server-2.0.4 is a hotfix release to adapt to a pypi.python.org change from three days ago which would cause devpi to fail installations for packages like "Sphinx", "Django" ... because pypi now serves them under their canonical name instead of the registered one. As usual, docs for the devpi system are at http://doc.devpi.net best and have fun, Holger Krekel, merlinux GmbH devpi-server 2.0.4 -------------------- - fix issue139: adapt to a recent change in pypi which now serves under URLs using normalized project names instead of the "real" registered name Thanks Timothy Allen and others for sorting this out. - fix issue129: fix __init__ provided version and add a test that it always matches the one which pkg_resources sees (which gets it effectively from setup.py) From qwcode at gmail.com Mon Sep 1 19:12:15 2014 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 1 Sep 2014 10:12:15 -0700 Subject: [Distutils] Packages that have problems being installed from wheels In-Reply-To: References: Message-ID: > My view is that Python packaging should not support installation of > files to anywhere other than subdirectories of the scheme [...] For packages that need to install to absolute locations, I would suggest that this be handled by a post-install script [...] Comments? I'd prefer, and think it's reasonable, for a new wheel spec to support absolute paths (minimally, so we don't break existing distributions) Nick posted this a while back: https://bitbucket.org/pypa/pypi-metadata-formats/issue/13/add-a-new-subdirectory-to-allow-wheels-to -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From holger at merlinux.eu Mon Sep 1 22:53:15 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 1 Sep 2014 20:53:15 +0000 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> Message-ID: <20140901205315.GQ28217@merlinux.eu> On Thu, Aug 28, 2014 at 14:58 -0400, Donald Stufft wrote: > Right now the "canonical" page for a particular project on PyPI is whatever the > author happened to name their package (e.g. Django). This requires PyPI to have > some "smarts" so that it can redirect things like /simple/django/ to > /simple/Django/ otherwise someone doing ``pip install django`` would fall back > to a much worse behavior. > > If this redirect doesn't happen, then pip will issue a request for just > /simple/ and look for a link that, when both sides are normalized, compares > equal to the name it's looking for. It will then follow the link, get > /simple/Django/ and everything works... Except it doesn't. The problem here > comes from the external link classification that we have now. Pip sees the > link to /simple/Django/ as an external link (because it lacks the required > rels) and the installation finally fails. > > The /simple/ case rarely happens when installing from PyPI itself because of > the redirect, however it happens quite often when someone is attempting to > install from a mirror instead. Even when everything works correctly the penalty > for not knowing exactly what name to type in results in at least 1 extra http > request, one of which (/simple/) requires pulling down a 2.1MB file. > > To fix this I'm going to modify PyPI so that it uses the normalized name in > the /simple/ URL and redirects everything else to the non-normalized name. Of course you mean redirecting everything to the normalized name. > I'm also going to submit a PR to bandersnatch so that it will use > normalized names ... 
devpi-server also broke and I did a hotfix release today. Older installs will still have a problem, though (not all companies run the newest version all the time). Apart from the fact I was on vacation and on business travels, the notice for that breaking change was only one day, which I think is a bit too quick. I'd really appreciate it if you send a mail to Christian for bandersnatch and me for devpi before such changes happen, and with a bit more reasonable lead time. Besides, I think it's a good change in principle. best and thanks, holger > for its directories and such as well. These two changes will make it so that > the client side will know ahead of time exactly what form the server expects > any given name to be in. This will allow a change in pip to happen which > will pre-normalize all names which will make the interaction with mirrors better > and will reduce the number of HTTP requests that a single ``pip install`` needs > to make. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Tue Sep 2 01:07:51 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Sep 2014 19:07:51 -0400 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <20140901205315.GQ28217@merlinux.eu> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> Message-ID: <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> > On Sep 1, 2014, at 4:53 PM, holger krekel wrote: > > On Thu, Aug 28, 2014 at 14:58 -0400, Donald Stufft wrote: >> Right now the "canonical" page for a particular project on PyPI is whatever the >> author happened to name their package (e.g. Django). 
This requires PyPI to have >> some "smarts" so that it can redirect things like /simple/django/ to >> /simple/Django/ otherwise someone doing ``pip install django`` would fall back >> to a much worse behavior. >> >> If this redirect doesn't happen, then pip will issue a request for just >> /simple/ and look for a link that, when both sides are normalized, compares >> equal to the name it's looking for. It will then follow the link, get >> /simple/Django/ and everything works... Except it doesn't. The problem here >> comes from the external link classification that we have now. Pip sees the >> link to /simple/Django/ as an external link (because it lacks the required >> rels) and the installation finally fails. >> >> The /simple/ case rarely happens when installing from PyPI itself because of >> the redirect, however it happens quite often when someone is attempting to >> install from a mirror instead. Even when everything works correctly the penalty >> for not knowing exactly what name to type in results in at least 1 extra http >> request, one of which (/simple/) requires pulling down a 2.1MB file. >> >> To fix this I'm going to modify PyPI so that it uses the normalized name in >> the /simple/ URL and redirects everything else to the non-normalized name. > > Of course you mean redirecting everything to the normalized name. > >> I'm also going to submit a PR to bandersnatch so that it will use >> normalized names ... > > devpi-server also broke and I did a hotfix release today. Older > installs will still have a problem, though (not all companies run the > newest version all the time). Apart from the fact I was on vacation and > on business travels, the notice for that breaking change was only one > day which I think is a bit too quick. I'd really appreciate it if you send > a mail to Christian for bandersnatch and me for devpi before such > changes happen and with a bit more reasonable lead time. > > Besides, I think it's a good change in principle. 
> > best and thanks, > holger I can only really reply to this with https://xkcd.com/1172/. This shouldn't have been a breaking change, anyone following the HTTP spec dealt with this change just fine. As far as I can tell the only reason it broke devpi was because of an assertion in the code that was asserting against an implementation detail, an implementation detail that I changed. I'm sorry it broke devpi and that it happened at a time when you were on vacation, but honestly I don't think it's reasonable to expect every little thing to have to be run past a list of people. Due to the undocumented nature of these tools people have put a lot of (also undocumented) assumptions into their code, many of which are simply depending on implementation details. I try to test my changes against what I can, in this case pip, setuptools, and bandersnatch, but I can't test against everything. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Sep 2 01:44:00 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 1 Sep 2014 16:44:00 -0700 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> Message-ID: FWIW, as a community member it doesn't seem unreasonable to me to expect that a certain amount of advance notice be given for changes like this, *especially* given that the tools are undocumented. Also, there's a difference between notifying people and "running it by" people (for permission). I think Holger is just asking for enough notice, which shouldn't slow you down like getting sign-off would, say. 
--Chris On Mon, Sep 1, 2014 at 4:07 PM, Donald Stufft wrote: > > > On Sep 1, 2014, at 4:53 PM, holger krekel wrote: > > On Thu, Aug 28, 2014 at 14:58 -0400, Donald Stufft wrote: > > Right now the "canonical" page for a particular project on PyPI is whatever the > author happened to name their package (e.g. Django). This requires PyPI to have > some "smarts" so that it can redirect things like /simple/django/ to > /simple/Django/ otherwise someone doing ``pip install django`` would fall back > to a much worse behavior. > > If this redirect doesn't happen, then pip will issue a request for just > /simple/ and look for a link that, when both sides are normalized, compares > equal to the name it's looking for. It will then follow the link, get > /simple/Django/ and everything works... Except it doesn't. The problem here > comes from the external link classification that we have now. Pip sees the > link to /simple/Django/ as an external link (because it lacks the required > rels) and the installation finally fails. > > The /simple/ case rarely happens when installing from PyPI itself because of > the redirect, however it happens quite often when someone is attempting to > install from a mirror instead. Even when everything works correctly the penalty > for not knowing exactly what name to type in results in at least 1 extra http > request, one of which (/simple/) requires pulling down a 2.1MB file. > > To fix this I'm going to modify PyPI so that it uses the normalized name in > the /simple/ URL and redirects everything else to the non-normalized name. > > > Of course you mean redirecting everything to the normalized name. > > I'm also going to submit a PR to bandersnatch so that it will use > normalized names ... > > > devpi-server also broke and I did a hotfix release today. Older > installs will still have a problem, though (not all companies run the > newest version all the time). 
Apart from the fact I was on vacation and > on business travels, the notice for that breaking change was only one > day which I think is a bit too quick. I'd really appreciate it if you send > a mail to Christian for bandersnatch and me for devpi before such > changes happen and with a bit more reasonable lead time. > > Besides, I think it's a good change in principle. > > best and thanks, > holger > > > I can only really reply to this with https://xkcd.com/1172/. > > This shouldn't have been a breaking change, anyone following the HTTP > spec dealt with this change just fine. As far as I can tell the only reason > it broke devpi was because of an assertion in the code that was asserting > against an implementation detail, an implementation detail that I changed. > > I'm sorry it broke devpi and that it happened at a time when you were > on vacation, but honestly I don't think it's reasonable to expect every > little thing to have to be run past a list of people. Due to the undocumented > nature of these tools people have put a lot of (also undocumented) > assumptions into their code, many of which are simply depending on > implementation details. I try to test my changes against what I can, > in this case pip, setuptools, and bandersnatch, but I can't test against > everything. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From donald at stufft.io Tue Sep 2 02:03:38 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Sep 2014 20:03:38 -0400 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> Message-ID: <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> Changes like what exactly? 
This was a fairly minor change which is why there wasn't more notice. > On Sep 1, 2014, at 7:44 PM, Chris Jerdonek wrote: > > FWIW, as a community member it doesn't seem unreasonable to me to > expect that a certain amount of advance notice be given for changes > like this, *especially* given that the tools are undocumented. From ncoghlan at gmail.com Tue Sep 2 01:59:47 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 2 Sep 2014 09:59:47 +1000 Subject: [Distutils] Packages that have problems being installed from wheels In-Reply-To: References: Message-ID: On 2 Sep 2014 03:19, "Marcus Smith" wrote: > > >> >> My view is that Python packaging should not support installation of >> files to anywhere other than subdirectories of the scheme [...] >> >> For packages that need to install to absolute locations, I would >> >> suggest that this be handled by a post-install script >> >> [...] Comments? > > > I'd prefer, and think it's reasonable, for a new wheel spec to support absolute paths (minimally, so we don't break existing distributions) > Nick posted this a while back: https://bitbucket.org/pypa/pypi-metadata-formats/issue/13/add-a-new-subdirectory-to-allow-wheels-to Yep, while we may eventually end up having to relent on allowing arbitrary code execution when installing from a wheel file, I'd like to see how far we can get without it (installers are often run with elevated privileges, so arbitrary code execution = teh horribleness) For the "install to arbitrary locations" case, that should just be a matter of defining a new scheme directory that means "paths here are relative to the root of the installation target". 
There's no defined process for how we go about doing that, but one plausible option would be to write up a PEP for Python 3.5 that defines the new scheme directory in sysconfig (off the top of my head, I suggest "root"), where files placed there will end up on Windows, POSIX and in virtual environments, and how it impacts other things (like the fact wheels using it will be platform dependent). The PyPA toolset would then take care of ensuring the new scheme directory was also supported on earlier Python versions. Cheers, Nick. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Sep 2 02:15:14 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 1 Sep 2014 17:15:14 -0700 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> Message-ID: I don't know exactly. I'd say a change that in your judgment you think has a non-trivial chance of breaking existing tools. Holger is probably in a better position to say. I was just speaking in support of his request, which seemed reasonable to me. --Chris On Mon, Sep 1, 2014 at 5:03 PM, Donald Stufft wrote: > Changes like what exactly? This was a fairly minor change which is why there wasn't more notice. > >> On Sep 1, 2014, at 7:44 PM, Chris Jerdonek wrote: >> >> FWIW, as a community member it doesn't seem unreasonable to me to >> expect that a certain amount of advance notice be given for changes >> like this, *especially* given that the tools are undocumented. 
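[Note from the archive editor: Nick's proposal above is about adding a new scheme directory key alongside the ones sysconfig already defines. A minimal sketch of what exists today, for context; the "root" key shown in the comment is Nick's hypothetical suggestion, not a real sysconfig key:]

```python
import sysconfig

# Each install scheme maps directory keys ("purelib", "scripts", "data", ...)
# to concrete locations under an installation prefix; none of the existing
# keys means "relative to the filesystem root".
print(sysconfig.get_scheme_names())

paths = sysconfig.get_paths()  # keys -> paths for this interpreter's default scheme
for key, path in sorted(paths.items()):
    print(key, "->", path)

# Nick's hypothetical addition would be one more key next to these, e.g.
#   paths["root"]  ->  "/" (or whatever --root target an installer was given),
# so wheels could ship files destined for absolute locations without running code.
```
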
From donald at stufft.io Tue Sep 2 04:15:29 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 01 Sep 2014 22:15:29 -0400 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> Message-ID: <1409624129.738721.162487137.339D216F@webmail.messagingengine.com> On Mon, Sep 1, 2014, at 08:15 PM, Chris Jerdonek wrote: > I don't know exactly. I'd say a change that in your judgment you > think has a non-trivial chance of breaking existing tools. Holger is > probably in a better position to say. I was just speaking in support > of his request, which seemed reasonable to me. > > --Chris Which is exactly my point. This change was minor. It didn't break anything but devpi and it wouldn't have broken devpi to my knowledge except for an assert statement that wasn't particularly needed. I already give notice (and discussion, often times even PEPs) for any change that I believe to be breaking. Wanting more is wanting notice on every single change on the off chance someone somewhere might have some dependency on any random implementation detail. > > > On Mon, Sep 1, 2014 at 5:03 PM, Donald Stufft wrote: > > Changes like what exactly? This was a fairly minor change which is why there wasn't more notice. > > > >> On Sep 1, 2014, at 7:44 PM, Chris Jerdonek wrote: > >> > >> FWIW, as a community member it doesn't seem unreasonable to me to > >> expect that a certain amount of advance notice be given for changes > >> like this, *especially* given that the tools are undocumented. 
From chris.jerdonek at gmail.com Tue Sep 2 04:54:30 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 1 Sep 2014 19:54:30 -0700 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <1409624129.738721.162487137.339D216F@webmail.messagingengine.com> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> <1409624129.738721.162487137.339D216F@webmail.messagingengine.com> Message-ID: On Mon, Sep 1, 2014 at 7:15 PM, Donald Stufft wrote: > On Mon, Sep 1, 2014, at 08:15 PM, Chris Jerdonek wrote: >> I don't know exactly. I'd say a change that in your judgment you >> think has a non-trivial chance of breaking existing tools. Holger is >> probably in a better position to say. I was just speaking in support >> of his request, which seemed reasonable to me. >> >> --Chris > > Which is exactly my point. This change was minor. It didn't break > anything > but devpi and it wouldn't have broken devpi to my knowledge except for > an assert statement that wasn't particularly needed. > > I already give notice (and discussion, often times even PEPs) for any > change > that I believe to be breaking. Wanting more is wanting notice on every > single change on the off chance someone somewhere might have some > dependency on any random implementation detail. If you don't have a good sense of what changes might break existing tools and don't want to notify people, one possibility is to build in a delay between committing to the repo and deploying to production. Interested folks could monitor commits to the repo -- giving them a chance to ask questions and update their tools if necessary. --Chris > >> >> >> On Mon, Sep 1, 2014 at 5:03 PM, Donald Stufft wrote: >> > Changes like what exactly? This was a fairly minor change which is why there wasn't more notice. 
>> > >> >> On Sep 1, 2014, at 7:44 PM, Chris Jerdonek wrote: >> >> >> >> FWIW, as a community member it doesn't seem unreasonable to me to >> >> expect that a certain amount of advance notice be given for changes >> >> like this, *especially* given that the tools are undocumented. From ncoghlan at gmail.com Tue Sep 2 06:16:51 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 2 Sep 2014 14:16:51 +1000 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> <612C1F2B-8A08-4AF1-A692-B641FAAFB6A9@stufft.io> <1409624129.738721.162487137.339D216F@webmail.messagingengine.com> Message-ID: On 2 September 2014 12:54, Chris Jerdonek wrote: > On Mon, Sep 1, 2014 at 7:15 PM, Donald Stufft wrote: >> I already give notice (and discussion, often times even PEPs) for any >> change >> that I believe to be breaking. Wanting more is wanting notice on every >> single change on the off chance someone somewhere might have some >> dependency on any random implementation detail. > > If you don't have a good sense of what changes might break existing > tools and don't want to notify people, one possibility is to build in > a delay between committing to the repo and deploying to production. > Interested folks could monitor commits to the repo -- giving them a > chance to ask questions and update their tools if necessary. That will pick up noise from internal or web only changes that don't affect the programmatic APIs. Ideally, we'd have an integration environment where tests for pip, bandersnatch and devpi were all automatically run against pypi commits before they went live, but that's rather a lot of work to set up. Until we have such a system, we may continue to see occasional incidents like this one. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From marius at pov.lt Tue Sep 2 08:43:41 2014 From: marius at pov.lt (Marius Gedminas) Date: Tue, 2 Sep 2014 09:43:41 +0300 Subject: [Distutils] Accepting PEP 440: Version Identification and Dependency Specification In-Reply-To: References: Message-ID: <20140902064341.GA5732@fridge.pov.lt> On Fri, Aug 22, 2014 at 10:34:39PM +1000, Nick Coghlan wrote: > I just pushed Donald's final round of edits in response to the > feedback on the last PEP 440 thread, and as such I'm happy to announce > that I am accepting PEP 440 as the recommended approach to identifying > versions and specifying dependencies when distributing Python > software. > > The PEP is available in the usual place at > http://www.python.org/dev/peps/pep-0440/ Awesome! Minor nit: http://legacy.python.org/dev/peps/pep-0440/#final-releases still uses the older N[.N]+ spelling, which perhaps should be changed to N(.N)* to be consistent with http://legacy.python.org/dev/peps/pep-0440/#public-version-identifiers and to make it more apparent at a glance that one number with no dots is also a valid version identifier. Marius Gedminas -- NT 5.0 is the last nail in the Unix coffin. Interestingly, Unix isn't in the coffin... It's wondering what the heck is sealing itself into a wooden box 6 feet underground... -- Jason McMullan -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From holger at merlinux.eu Tue Sep 2 10:23:22 2014 From: holger at merlinux.eu (holger krekel) Date: Tue, 2 Sep 2014 08:23:22 +0000 Subject: [Distutils] Accepting PEP 440: Version Identification and Dependency Specification In-Reply-To: References: Message-ID: <20140902082322.GT28217@merlinux.eu> Hi all, On Fri, Aug 22, 2014 at 22:34 +1000, Nick Coghlan wrote: > I just pushed Donald's final round of edits in response to the > feedback on the last PEP 440 thread, and as such I'm happy to announce > that I am accepting PEP 440 as the recommended approach to identifying > versions and specifying dependencies when distributing Python > software. > > The PEP is available in the usual place at > http://www.python.org/dev/peps/pep-0440/ > It's been a long road to get to an implementation independent > versioning standard that has a feasible migration path from the > current pkg_resources defined de facto standard, and I'd like to thank > a few folks: > > * Donald Stufft for his extensive work on PEP 440 itself, especially > the proof of concept integration into pip > * Vinay Sajip for his efforts in validating earlier versions of the PEP > * Tarek Ziadé for starting us down the road to an implementation > independent versioning standard with the initial creation of PEP 386 > back in June 2009, more than five years ago! Only got to look at PEP440 now and like it. Thanks to all who helped to sort this out! best, holger > > Regards, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From holger at merlinux.eu Tue Sep 2 11:36:47 2014 From: holger at merlinux.eu (holger krekel) Date: Tue, 2 Sep 2014 09:36:47 +0000 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> Message-ID: <20140902093647.GX28217@merlinux.eu> On Mon, Sep 01, 2014 at 19:07 -0400, Donald Stufft wrote: > > On Sep 1, 2014, at 4:53 PM, holger krekel wrote: > > > > On Thu, Aug 28, 2014 at 14:58 -0400, Donald Stufft wrote: > >> Right now the "canonical" page for a particular project on PyPI is whatever the > >> author happened to name their package (e.g. Django). This requires PyPI to have > >> some "smarts" so that it can redirect things like /simple/django/ to > >> /simple/Django/ otherwise someone doing ``pip install django`` would fall back > >> to a much worse behavior. > >> > >> If this redirect doesn't happen, then pip will issue a request for just > >> /simple/ and look for a link that, when both sides are normalized, compares > >> equal to the name it's looking for. It will then follow the link, get > >> /simple/Django/ and everything works... Except it doesn't. The problem here > >> comes from the external link classification that we have now. Pip sees the > >> link to /simple/Django/ as an external link (because it lacks the required > >> rels) and the installation finally fails. > >> > >> The /simple/ case rarely happens when installing from PyPI itself because of > >> the redirect, however it happens quite often when someone is attempting to > >> install from a mirror instead. 
Even when everything works correctly the penalty > >> for not knowing exactly what name to type in results in at least 1 extra http > >> request, one of which (/simple/) requires pulling down a 2.1MB file. > >> > >> To fix this I'm going to modify PyPI so that it uses the normalized name in > >> the /simple/ URL and redirects everything else to the non-normalized name. > > > > Of course you mean redirecting everything to the normalized name. > > > >> I'm also going to submit a PR to bandersnatch so that it will use > >> normalized names ... > > > > devpi-server also broke and I did a hotfix release today. Older > > installs will still have a problem, though (not all companies run the > > newest version all the time). Apart from the fact I was on vacation and > > on business travels, the notice for that breaking change was only one > > day which I think is a bit too quick. I'd really appreciate it if you send > > a mail to Christian for bandersnatch and me for devpi before such > > changes happen and with a bit more reasonable lead time. > > > > Besides, I think it's a good change in principle. > > > > best and thanks, > > holger > > I can only really reply to this with https://xkcd.com/1172/. > This shouldn't have been a breaking change, anyone following the HTTP > spec dealt with this change just fine. As far as I can tell the only reason > it broke devpi was because of an assertion in the code that was asserting > against an implementation detail, an implementation detail that I changed. Right, the assertion was there to ensure pypi's "realname" and devpi's internal "realname" of a project are the same. This check is now relaxed. FWIW I'd prefer it if we just said in all pypi APIs (http and xmlrpc/json) that a project name is always kept in canonical form, i.e. you can maybe register "HeLlo_World" but it just means "hello-world" next time someone asks for it. What is the relevance of the "realname" anyway? Do you keep "realnames" in warehouse? 
> I'm sorry it broke devpi and that it happened at a time when you were > on vacation, but honestly I don't think it's reasonable to expect every > little thing to have to be run past a list of people. Due to the undocumented > nature of these tools people have put a lot of (also undocumented) > assumptions into their code, many of which are simply depending on > implementation details. I try to test my changes against what I can, > in this case pip, setuptools, and bandersnatch, but I can't test against > everything. Thanks for all your work and eagerness to improve things. I think it's safe to assume that any change in PyPI's pip/bandersnatch/devpi facing http API has potential for disruption even if some http specification says otherwise -- at least until we have some specification of how tool/pypi interactions work. best, holger From ncoghlan at gmail.com Tue Sep 2 15:05:34 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 2 Sep 2014 23:05:34 +1000 Subject: [Distutils] Accepting PEP 440: Version Identification and Dependency Specification In-Reply-To: <20140902064341.GA5732@fridge.pov.lt> References: <20140902064341.GA5732@fridge.pov.lt> Message-ID: On 2 September 2014 16:43, Marius Gedminas wrote: > Minor nit: http://legacy.python.org/dev/peps/pep-0440/#final-releases > still uses the older > > N[.N]+ > > spelling, which perhaps should be changed to > > N(.N)* > > to be consistent with > http://legacy.python.org/dev/peps/pep-0440/#public-version-identifiers > and to make it more apparent at a glance that one number with no dots is > also a valid version identifier. Indeed, I missed that reference when we made the change to allow Firefox/Chrome style version numbers. Fixed in http://hg.python.org/peps/rev/ff38b758e584 Thanks for pointing it out! Cheers, Nick. 
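[Note from the archive editor: the N(.N)* spelling Marius and Nick agree on above can be checked with a short regex. This covers only the final-release segment, not the full PEP 440 grammar (epochs, pre/post/dev releases, local versions):]

```python
import re

# PEP 440 final releases: one or more dot-separated numeric segments, N(.N)*.
# A bare "2" is just as valid as "1.4.2" -- exactly the point of the N(.N)* form.
FINAL_RELEASE = re.compile(r"^[0-9]+(\.[0-9]+)*$")

for candidate in ["2", "1.4", "1.4.2", "2014.04", "1.", ".1", "1..2"]:
    print(candidate, bool(FINAL_RELEASE.match(candidate)))
# "2", "1.4", "1.4.2", "2014.04" match; "1.", ".1", "1..2" do not.
```
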
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Tue Sep 2 17:43:33 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 2 Sep 2014 11:43:33 -0400 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <20140902093647.GX28217@merlinux.eu> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> <20140901205315.GQ28217@merlinux.eu> <8AF59341-3970-44D2-ABCF-32DDF2CD04CC@stufft.io> <20140902093647.GX28217@merlinux.eu> Message-ID: <50FF6099-ED5A-4041-8E7B-F95B49F577CE@stufft.io> > On Sep 2, 2014, at 5:36 AM, holger krekel wrote: > > On Mon, Sep 01, 2014 at 19:07 -0400, Donald Stufft wrote: >>> On Sep 1, 2014, at 4:53 PM, holger krekel wrote: >>> >>> On Thu, Aug 28, 2014 at 14:58 -0400, Donald Stufft wrote: >>>> Right now the "canonical" page for a particular project on PyPI is whatever the >>>> author happened to name their package (e.g. Django). This requires PyPI to have >>>> some "smarts" so that it can redirect things like /simple/django/ to >>>> /simple/Django/ otherwise someone doing ``pip install django`` would fall back >>>> to a much worse behavior. >>>> >>>> If this redirect doesn't happen, then pip will issue a request for just >>>> /simple/ and look for a link that, when both sides are normalized, compares >>>> equal to the name it's looking for. It will then follow the link, get >>>> /simple/Django/ and everything works... Except it doesn't. The problem here >>>> comes from the external link classification that we have now. Pip sees the >>>> link to /simple/Django/ as an external link (because it lacks the required >>>> rels) and the installation finally fails. >>>> >>>> The /simple/ case rarely happens when installing from PyPI itself because of >>>> the redirect, however it happens quite often when someone is attempting to >>>> install from a mirror instead. 
Even when everything works correctly, the penalty
>>>> for not knowing exactly what name to type in results in at least 1 extra HTTP
>>>> request, one of which (/simple/) requires pulling down a 2.1MB file.
>>>> 
>>>> To fix this I'm going to modify PyPI so that it uses the normalized name in
>>>> the /simple/ URL and redirects everything else to the non-normalized name.
>>> 
>>> Of course you mean redirecting everything to the normalized name.
>>> 
>>>> I'm also going to submit a PR to bandersnatch so that it will use
>>>> normalized names ...
>>> 
>>> devpi-server also broke and I did a hotfix release today. Older
>>> installs will still have a problem, though (not all companies run the
>>> newest version all the time). Apart from the fact that I was on vacation and
>>> on business travels, the notice for that breaking change was only one
>>> day, which I think is a bit too quick. I'd really appreciate it if you sent
>>> a mail to Christian for bandersnatch and to me for devpi before such
>>> changes happen, with a bit more reasonable lead time.
>>> 
>>> Besides, I think it's a good change in principle.
>>> 
>>> best and thanks,
>>> holger
>> 
>> I can only really reply to this with https://xkcd.com/1172/.
>> This shouldn't have been a breaking change; anyone following the HTTP
>> spec dealt with this change just fine. As far as I can tell the only reason
>> it broke devpi was an assertion in the code that was asserting
>> against an implementation detail, an implementation detail that I changed.
> 
> Right, the assertion was there to ensure pypi's "realname" and devpi's
> internal "realname" of a project are the same. This check is now relaxed.
> 
> FWIW I'd prefer it if we just said in all PyPI APIs (HTTP and XML-RPC/JSON)
> that a project name is always kept in canonical form, i.e. you can maybe
> register "HeLlo_World" but it just means "hello-world" next time someone
> asks for it. What is the relevance of the "realname" anyway?
> Do you keep "realnames" in warehouse?
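The canonical form holger describes (register "HeLlo_World", serve it as "hello-world") amounts to collapsing runs of separator characters and lowercasing; a minimal sketch of that idea, using the rule later codified in PEP 503 rather than the exact 2014 implementation:

```python
import re

def normalize(name):
    # Collapse runs of "-", "_" and "." into a single dash, then lowercase.
    # This is the canonical form later standardized in PEP 503; the
    # normalization pip and bandersnatch applied at the time was
    # equivalent in spirit.
    return re.sub(r"[-_.]+", "-", name).lower()

print(normalize("Django"))          # -> django
print(normalize("HeLlo_World"))     # -> hello-world
print(normalize("zope.interface"))  # -> zope-interface
```

With this scheme /simple/django/ and /simple/Django/ collapse to the same index entry, which is what removes the extra redirect/request discussed above.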
As of right now we do, although I think it's likely that Warehouse will end up with the normalized name being used as the "identifier" for a project and the name that an author typed in being used as the "display name".

> 
>> I'm sorry it broke devpi and that it happened at a time when you were
>> on vacation, but honestly I don't think it's reasonable to expect every
>> little thing to have to be run past a list of people. Due to the undocumented
>> nature of these tools people have put a lot of (also undocumented)
>> assumptions into their code, many of which are simply depending on
>> implementation details. I try to test my changes against what I can,
>> in this case pip, setuptools, and bandersnatch, but I can't test against
>> everything.
> 
> Thanks for all your work and eagerness to improve things. I think
> it's safe to assume that any change in PyPI's pip/bandersnatch/devpi
> facing http API has potential for disruption even if some http
> specification says otherwise -- at least until we have some specification
> of how tool/pypi interactions work.
> 
> best,
> holger

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bence at underyx.me Tue Sep 2 19:08:49 2014
From: bence at underyx.me (Bence Nagy)
Date: Tue, 2 Sep 2014 19:08:49 +0200
Subject: [Distutils] Taking over an inactive package
Message-ID: 

Hey there,

I submitted a pull request on GitHub[0] for the package
Flask-Redis[1] six months ago. It didn't contain anything of real
importance, but I haven't heard back from the developer since then.
I've contacted him both via Twitter and email, to no avail. Is there
any way to take over the package in these cases? I figured this would
be a small enough package to start my open source career off with.
[0]: https://github.com/rhyselsmore/flask-redis/pull/7
[1]: https://pypi.python.org/pypi/Flask-Redis

Excuse me if this is irrelevant (as this list usually seems to
accommodate more technical talk); #python directed me here with my
question.

Thanks!

-- 
Bence Nagy (Underyx)

From richard at python.org Tue Sep 2 20:15:36 2014
From: richard at python.org (Richard Jones)
Date: Tue, 2 Sep 2014 13:15:36 -0500
Subject: [Distutils] Taking over an inactive package
In-Reply-To: 
References: 
Message-ID: 

Hi Bence,

Please file a support issue using the link in the pypi sidebar.


Richard


On 2 September 2014 12:08, Bence Nagy wrote:
> Hey there,
>
> I've submitted a pull request on GitHub[0] for the package
> Flask-Redis[1] 6 months ago. It didn't contain anything of real
> importance, but I haven't heard back from the developer since then.
> I've contacted him both via Twitter and email, to no avail. Is there
> any way to take over the package in these cases? I figured this would
> be a small enough package to start my open source career off with.
>
> [0]: https://github.com/rhyselsmore/flask-redis/pull/7
> [1]: https://pypi.python.org/pypi/Flask-Redis
>
> Excuse me if this is irrelevant (as this list usually seems to
> accommodate more technical talk), #python directed me here with my
> question.
>
> Thanks!
>
> --
> Bence Nagy (Underyx)
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From daniele.sluijters at gmail.com Tue Sep 2 19:40:35 2014
From: daniele.sluijters at gmail.com (Daniele Sluijters)
Date: Tue, 2 Sep 2014 19:40:35 +0200
Subject: [Distutils] Taking over an inactive package
In-Reply-To: 
References: 
Message-ID: 

Hey,

I'm not aware of any MIA procedures or teams like Debian has.
They try to track down inactive maintainers and can eventually orphan a
package, essentially releasing it and allowing someone else to take over.

However, https://wiki.python.org/moin/CheeseShopDev mentions in the ToDo:

* documented procedures for "taking over" entries should the original
  owner of the entry go away (and any required system support)

This seems to suggest there is some kind of procedure, but I can't find
any other reference to it.

On 2 September 2014 19:08, Bence Nagy wrote:
> Hey there,
>
> I've submitted a pull request on GitHub[0] for the package
> Flask-Redis[1] 6 months ago. It didn't contain anything of real
> importance, but I haven't heard back from the developer since then.
> I've contacted him both via Twitter and email, to no avail. Is there
> any way to take over the package in these cases? I figured this would
> be a small enough package to start my open source career off with.
>
> [0]: https://github.com/rhyselsmore/flask-redis/pull/7
> [1]: https://pypi.python.org/pypi/Flask-Redis
>
> Excuse me if this is irrelevant (as this list usually seems to
> accommodate more technical talk), #python directed me here with my
> question.
>
> Thanks!
>
> --
> Bence Nagy (Underyx)
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-- 
Daniele Sluijters

From reinout at vanrees.org Wed Sep 3 12:24:10 2014
From: reinout at vanrees.org (Reinout van Rees)
Date: Wed, 03 Sep 2014 12:24:10 +0200
Subject: [Distutils] wheels or system packages for pip on ubuntu
Message-ID: 

Hi,

I'm investigating some options for making our servers a bit more neat.
Basic problem: lots of what we do needs mapnik, numpy, gdal, psycopg2
and so on. Python libraries with C code and system dependencies.

All of them have ubuntu packages, but especially for gdal we sometimes
need a newer version. A PPA can help here, but I thought "a wheel could
be nice, too".

System packages?
Yes, we use buildout with "syseggrecipe". You pass syseggrecipe a bunch of packages ("mapnik, gdal"), it looks up those packages in the OS and installs them in buildout's "develop-eggs/" directory. Works quite well. Isolation + selective use of system packages. Two questions: a) If I use ubuntu packages, I'll have to run pip/virtualenv with --system-site-packages. "pip install numpy" will find the global install just fine. But "pip freeze" will give me all site packages' versions, which is not what I want. => is there a way to *selectively* use such a system package in an otherwise-isolated virtualenv? b) Making a bunch of wheels seems like a nice solution. Then you can just use a virtualenv and "pip install numpy gdal psycopg2...". But how do you differentiate between ubuntu versions? Not every wheel will work with both 12.04 and 14.04, I'd say. But both ubuntu versions will have the same "linux_x86_64" tag. => What is the best way to differentiate here? Separate company-internal "wheelhouse" per ubuntu version? Custom tags? Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From strawman at astraw.com Wed Sep 3 13:17:30 2014 From: strawman at astraw.com (Andrew Straw) Date: Wed, 3 Sep 2014 13:17:30 +0200 Subject: [Distutils] wheels or system packages for pip on ubuntu In-Reply-To: References: Message-ID: On 3 Sep 2014, at 12:24 PM, Reinout van Rees wrote: > b) Making a bunch of wheels seems like a nice solution. Then you can just use a virtualenv and "pip install numpy gdal psycopg2...". But how do you differentiate between ubuntu versions? Not every wheel will work with both 12.04 and 14.04, I'd say. But both ubuntu versions will have the same "linux_x86_64" tag. > > => What is the best way to differentiate here? Separate company-internal "wheelhouse" per ubuntu version? Custom tags? 
As a related but alternative suggestion, you could also use [stdeb](https://github.com/astraw/stdeb) either to: 1) build .deb files and host your own repositories (one for 12.04 and one for 14.04) with mini-dinstall or 2) on each target machine do "pypi-install package-name". Option #1 is what we do in my lab with the repository at https://debs.strawlab.org/ .

-Andrew

From strawman at astraw.com Wed Sep 3 13:28:25 2014
From: strawman at astraw.com (Andrew Straw)
Date: Wed, 3 Sep 2014 13:28:25 +0200
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: 
References: 
Message-ID: <90FA1F62-9D43-4619-9BCE-9D009258914B@astraw.com>

On 3 Sep 2014, at 1:17 PM, Andrew Straw wrote:

> 
> On 3 Sep 2014, at 12:24 PM, Reinout van Rees wrote:
> 
>> b) Making a bunch of wheels seems like a nice solution. Then you can just use a virtualenv and "pip install numpy gdal psycopg2...". But how do you differentiate between ubuntu versions? Not every wheel will work with both 12.04 and 14.04, I'd say. But both ubuntu versions will have the same "linux_x86_64" tag.
>> 
>> => What is the best way to differentiate here? Separate company-internal "wheelhouse" per ubuntu version? Custom tags?
> 
> As a related but alternative suggestion, you could also use [stdeb](https://github.com/astraw/stdeb) either to: 1) build .deb files and host your own repositories (one for 12.04 and one for 14.04) with mini-dinstall or 2) on each target machine do "pypi-install package-name". Option #1 is what we do in my lab with the repository at https://debs.strawlab.org/ .
> 

Sorry, that last URL is http://debs.strawlab.org/ .
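As background on the "linux_x86_64" collision in the quoted question: the wheel platform tag is derived from the interpreter's build platform string and carries no distro or release information, so wheels built on 12.04 and 14.04 claim the same tag even when their linked system libraries differ. A sketch of the derivation, following the PEP 425 convention:

```python
import sysconfig

# Per PEP 425, the platform tag is the build platform string with "-"
# and "." replaced by "_" (e.g. "linux-x86_64" -> "linux_x86_64").
platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
print(platform_tag)
```

Nothing in that string identifies the Ubuntu release, which is why the thread turns to per-release wheelhouses or .deb repositories instead.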
From marius at pov.lt Wed Sep 3 14:22:56 2014 From: marius at pov.lt (Marius Gedminas) Date: Wed, 3 Sep 2014 15:22:56 +0300 Subject: [Distutils] wheels or system packages for pip on ubuntu In-Reply-To: References: Message-ID: <20140903122256.GA10579@fridge.pov.lt> On Wed, Sep 03, 2014 at 12:24:10PM +0200, Reinout van Rees wrote: > I'm investigating some options for making our servers a bit more > neat. Basic problem: lots of what we do needs mapnik, numpy, gdal, > psycopg2 and so. Python libraries with C code and system > dependencies. > > All of them have ubuntu packages, but especially for gdal we > sometimes need a newer version. A PPA can help here, but I thought > "a wheel could be nice, too". > > System packages? Yes, we use buildout with "syseggrecipe". You pass > syseggrecipe a bunch of packages ("mapnik, gdal"), it looks up those > packages in the OS and installs them in buildout's "develop-eggs/" > directory. Works quite well. Isolation + selective use of system > packages. Do you use buildout 1.x then? buildout 2.x doesn't support isolation, so all system packages are available (unless you wrap it with a virtualenv). > Two questions: > > a) If I use ubuntu packages, I'll have to run pip/virtualenv with > --system-site-packages. "pip install numpy" will find the global > install just fine. But "pip freeze" will give me all site packages' > versions, which is not what I want. > > => is there a way to *selectively* use such a system package in an > otherwise-isolated virtualenv? You can symlink selected directories (the Python package directory and sometimes .so files from other subdirs) into the virtualenv. This is hacky and I know of no tool to automate it. > b) Making a bunch of wheels seems like a nice solution. Then you can > just use a virtualenv and "pip install numpy gdal psycopg2...". But > how do you differentiate between ubuntu versions? Not every wheel > will work with both 12.04 and 14.04, I'd say. 
But both ubuntu > versions will have the same "linux_x86_64" tag. > > => What is the best way to differentiate here? Separate > company-internal "wheelhouse" per ubuntu version? Custom tags? My gut feeling says go with a company-internal wheelhouse. A simple directory with a bunch of *.whl files exposed via Apache's mod_autoindex suffices. export PIP_FIND_LINKS=https://intranet.server/wheels/trusty/ to make use of it automatically. $DISTRIB_CODENAME, defined by sourcing /etc/lsb-release, can be helpful here. Marius Gedminas -- Life begins when you can spend your spare time programming instead of watching television. -- Cal Keegan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From bence at underyx.me Tue Sep 2 20:32:26 2014 From: bence at underyx.me (Bence Nagy) Date: Tue, 2 Sep 2014 20:32:26 +0200 Subject: [Distutils] Taking over an inactive package In-Reply-To: References: Message-ID: Hey there, Daniele, thanks for taking the time to look into this! Richard, thank you, I've submitted a ticket, got the nice lil' issue ID of 414[0]. Just for the record. [0]: https://sourceforge.net/p/pypi/support-requests/414/ Cheers! On Tue, Sep 2, 2014 at 8:15 PM, Richard Jones wrote: > Hi Bence, > > Please file a support issue using the link in the pypi sidebar. > > > Richard > > > On 2 September 2014 12:08, Bence Nagy wrote: >> >> Hey there, >> >> I've submitted a pull request on GitHub[0] for the package >> Flask-Redis[1] 6 months ago. It didn't contain anything of real >> importance, but I haven't heard back from the developer since then. >> I've contacted him both via Twitter and email, to no avail. Is there >> any way to take over the package in these cases? I figured this would >> be a small enough package to start my open source career off with. 
>> >> [0]: https://github.com/rhyselsmore/flask-redis/pull/7
>> [1]: https://pypi.python.org/pypi/Flask-Redis
>>
>> Excuse me if this is irrelevant (as this list usually seems to
>> accommodate more technical talk), #python directed me here with my
>> question.
>>
>> Thanks!
>>
>> --
>> Bence Nagy (Underyx)
>>
>> _______________________________________________
>> Distutils-SIG maillist - Distutils-SIG at python.org
>> https://mail.python.org/mailman/listinfo/distutils-sig
>
>

-- 
Bence Nagy (Underyx)
+36 30 493 8089
underyx.me | GitHub | LinkedIn

From reinout at vanrees.org Wed Sep 3 14:32:36 2014
From: reinout at vanrees.org (Reinout van Rees)
Date: Wed, 03 Sep 2014 14:32:36 +0200
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: 
References: 
Message-ID: 

On 03-09-14 13:17, Andrew Straw wrote:
>
> mini-dinstall

I'm not going to create debian packages for every python lib: there is
often more than one site per server, so different versions are normal.
The standard that's-why-I-like-virtualenv-or-buildout scenario :-)

"mini-dinstall" is something that looks handier than the
what-was-it-again that I use for hosting the two custom debian packages
that I have. Need to look at that.

Reinout

-- 
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"Learning history by destroying artifacts is a time-honored atrocity"

From reinout at vanrees.org Wed Sep 3 14:38:24 2014
From: reinout at vanrees.org (Reinout van Rees)
Date: Wed, 03 Sep 2014 14:38:24 +0200
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: <20140903122256.GA10579@fridge.pov.lt>
References: <20140903122256.GA10579@fridge.pov.lt>
Message-ID: 

On 03-09-14 14:22, Marius Gedminas wrote:
>
> Do you use buildout 1.x then? buildout 2.x doesn't support isolation,
> so all system packages are available (unless you wrap it with a
> virtualenv).

Buildout 2.x.
syseggrecipe basically tells buildout that some package is available
globally (by adding a develop-eggs/the_package.egg-link file or similar).
Otherwise buildout will download and compile numpy/psycopg2/mapnik if it
encounters such a dependency in the setup.py.

>> => is there a way to *selectively* use such a system package in an
>> otherwise-isolated virtualenv?
>
> You can symlink selected directories (the Python package directory and
> sometimes .so files from other subdirs) into the virtualenv. This is
> hacky and I know of no tool to automate it.

I was looking for an automated non-hacky way :-) Good to know I haven't
missed anything. I'm almost exclusively using buildout, so I'm not that
knowledgeable regarding pip.

>> => What is the best way to differentiate here? Separate
>> company-internal "wheelhouse" per ubuntu version? Custom tags?
>
> My gut feeling says go with a company-internal wheelhouse. A simple
> directory with a bunch of *.whl files exposed via Apache's mod_autoindex
> suffices. export PIP_FIND_LINKS=https://intranet.server/wheels/trusty/
> to make use of it automatically. $DISTRIB_CODENAME, defined by sourcing
> /etc/lsb-release, can be helpful here.

Ok, sounds fine.

Reinout

-- 
Reinout van Rees http://reinout.vanrees.org/
reinout at vanrees.org http://www.nelen-schuurmans.nl/
"Learning history by destroying artifacts is a time-honored atrocity"

From chris.barker at noaa.gov Wed Sep 3 16:51:17 2014
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Wed, 3 Sep 2014 07:51:17 -0700
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: 
References: 
Message-ID: <-1419825098835957265@unknownmsgid>

You might want to consider conda and conda environments for this.

http://www.continuum.io/blog/conda

It provides a single packaging solution for both Python and its
dependencies. And there are probably already recipes for everything
you need.
-Chris > On Sep 3, 2014, at 3:24 AM, Reinout van Rees wrote: > > Hi, > > I'm investigating some options for making our servers a bit more neat. Basic problem: lots of what we do needs mapnik, numpy, gdal, psycopg2 and so. Python libraries with C code and system dependencies. > > All of them have ubuntu packages, but especially for gdal we sometimes need a newer version. A PPA can help here, but I thought "a wheel could be nice, too". > > System packages? Yes, we use buildout with "syseggrecipe". You pass syseggrecipe a bunch of packages ("mapnik, gdal"), it looks up those packages in the OS and installs them in buildout's "develop-eggs/" directory. Works quite well. Isolation + selective use of system packages. > > > Two questions: > > a) If I use ubuntu packages, I'll have to run pip/virtualenv with --system-site-packages. "pip install numpy" will find the global install just fine. But "pip freeze" will give me all site packages' versions, which is not what I want. > > => is there a way to *selectively* use such a system package in an otherwise-isolated virtualenv? > > > b) Making a bunch of wheels seems like a nice solution. Then you can just use a virtualenv and "pip install numpy gdal psycopg2...". But how do you differentiate between ubuntu versions? Not every wheel will work with both 12.04 and 14.04, I'd say. But both ubuntu versions will have the same "linux_x86_64" tag. > > => What is the best way to differentiate here? Separate company-internal "wheelhouse" per ubuntu version? Custom tags? 
> > > Reinout
> >
> > --
> > Reinout van Rees http://reinout.vanrees.org/
> > reinout at vanrees.org http://www.nelen-schuurmans.nl/
> > "Learning history by destroying artifacts is a time-honored atrocity"
> >
> > _______________________________________________
> > Distutils-SIG maillist - Distutils-SIG at python.org
> > https://mail.python.org/mailman/listinfo/distutils-sig

From yasumoto7 at gmail.com Wed Sep 3 21:11:08 2014
From: yasumoto7 at gmail.com (Joe Smith)
Date: Wed, 3 Sep 2014 12:11:08 -0700
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: <-1419825098835957265@unknownmsgid>
References: <-1419825098835957265@unknownmsgid>
Message-ID: 

Another option (along the lines of conda) is pex, which zips up your
code + dependencies into a single, zipped executable.

https://github.com/pantsbuild/pex

Pex has been relatively nice for us, as we can bundle our applications
into (mainly) hermetically-sealed binaries, which works well on machines
that may have separate system dependencies compared to the applications
that run on them. We have a company-internal warehouse that we upload
compiled eggs/wheels to, and use the pants build tool to resolve
dependencies at build-time to pull them in.

On Wed, Sep 3, 2014 at 7:51 AM, Chris Barker - NOAA Federal <
chris.barker at noaa.gov> wrote:

> You might want to consider conda and conda environments for this.
>
> http://www.continuum.io/blog/conda
>
> It provides a single packaging solution for both python and
> dependencies. And there are probably already recipes for everything
> you need.
>
> -Chris
>
> > On Sep 3, 2014, at 3:24 AM, Reinout van Rees
> wrote:
> >
> > Hi,
> >
> > I'm investigating some options for making our servers a bit more neat.
> Basic problem: lots of what we do needs mapnik, numpy, gdal, psycopg2 and
> so. Python libraries with C code and system dependencies.
> >
> > All of them have ubuntu packages, but especially for gdal we sometimes
> need a newer version.
A PPA can help here, but I thought "a wheel could be > nice, too". > > > > System packages? Yes, we use buildout with "syseggrecipe". You pass > syseggrecipe a bunch of packages ("mapnik, gdal"), it looks up those > packages in the OS and installs them in buildout's "develop-eggs/" > directory. Works quite well. Isolation + selective use of system packages. > > > > > > Two questions: > > > > a) If I use ubuntu packages, I'll have to run pip/virtualenv with > --system-site-packages. "pip install numpy" will find the global install > just fine. But "pip freeze" will give me all site packages' versions, which > is not what I want. > > > > => is there a way to *selectively* use such a system package in an > otherwise-isolated virtualenv? > > > > > > b) Making a bunch of wheels seems like a nice solution. Then you can > just use a virtualenv and "pip install numpy gdal psycopg2...". But how do > you differentiate between ubuntu versions? Not every wheel will work with > both 12.04 and 14.04, I'd say. But both ubuntu versions will have the same > "linux_x86_64" tag. > > > > => What is the best way to differentiate here? Separate company-internal > "wheelhouse" per ubuntu version? Custom tags? > > > > > > > > Reinout > > > > -- > > Reinout van Rees http://reinout.vanrees.org/ > > reinout at vanrees.org http://www.nelen-schuurmans.nl/ > > "Learning history by destroying artifacts is a time-honored atrocity" > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From holger at merlinux.eu Thu Sep 4 17:36:35 2014
From: holger at merlinux.eu (holger krekel)
Date: Thu, 4 Sep 2014 15:36:35 +0000
Subject: [Distutils] devpi-2.0.2: server/client fixes and generic pypi whitelisting
Message-ID: <20140904153635.GG28217@merlinux.eu>

Hi,

I just released devpi-2.0.2, the private pypi server system including a
self-updating pypi cache. This release brings devpi-server-2.0.5 and
devpi-client-2.0.2, fixing a number of bugs and allowing you to set
pypi_whitelist=* to generically whitelist packages, for easier creation
of indexes containing platform-specific wheels + all pypi packages.

Please see the CHANGELOG below for more info and http://doc.devpi.net
for various quickstart documents and a manual.

As usual, special thanks to Florian Schulze for majorly helping with
the release and to Juergen Hermann, Trevor Joynson and many others for
very useful issue contributions.

best,
holger krekel, merlinux GmbH

devpi-2.0.2 (metapackage)
---------------------------------

devpi-server-2.0.5 (compared to 2.0.2):

- fix issue145: restrict devpi_common dependency so that a future
  "pip install 'devpi-server<2.0'" has a higher chance of working.

- fix issue144: fix interaction with requests-2.4.0 -- use new
  devpi-common-offered "Errors" enumeration to check for exceptions.

- add '*' as possible option for pypi_whitelist to whitelist all
  packages of an index at once. Refs issue110

- outside url now works with paths, so you can host a devpi server
  on something like http://example.com/foo/

- fix issue84: during upload: if a previously registered name diverges
  from a freshly submitted one take the previously registered one.
  This can happen when uploading wheels and in other situations.

- fix issue132: during exporting use whatever name comes with the
  versiondata instead of trying too hard to assert consistency of
  different versions.

- fix issue130: fix deletion of users so that it properly deletes all
  indexes and projects and files on each index.
- fix issue139: adapt to a recent change in pypi which now serves under
  URLs using normalized project names instead of the "real" registered
  name. Thanks Timothy Allen and others for sorting this out.

- fix issue129: fix __init__ provided version and add a test that it
  always matches the one which pkg_resources sees (which gets it
  effectively from setup.py)

- fix issue128: a basic auth challenge needs to be sent back on submit
  when no authorization headers are sent with the post request.

devpi-client-2.0.2 (compared to 2.0.1):

- fix issue135: fix port mismatch for basic auth if port isn't
  explicitly given on https urls.

- refs issue75: pass on basic auth info into pip.cfg and co.

- fix issue144: fix interaction with requests-2.4 by depending on
  devpi-common's new support for enumerating possible Errors

- keep basic authentication info when listing indices or switching
  index by using path only instead of full URL. Thanks Trevor Joynson

- only write new client config if it actually changed and pretty print
  it. Thanks Jürgen Hermann for initial PR and ideas.

From barry at python.org Thu Sep 4 18:58:31 2014
From: barry at python.org (Barry Warsaw)
Date: Thu, 4 Sep 2014 12:58:31 -0400
Subject: [Distutils] wheels or system packages for pip on ubuntu
References: 
Message-ID: <20140904125831.5f8a5994@anarchist.wooz.org>

On Sep 03, 2014, at 12:24 PM, Reinout van Rees wrote:

>All of them have ubuntu packages, but especially for gdal we sometimes need a
>newer version. A PPA can help here, but I thought "a wheel could be nice,
>too".

In many cases, it mostly takes an interested person to get Ubuntu and Debian
packages updated. Generally it's best to get the latest versions in Debian
first (and that may be easy or hard depending on the maintainership of the
package), and then sync or merge the newer Debian versions into Ubuntu.

Looking quickly at gdal, I can see that Ubuntu carries some deltas over
Debian, but I haven't looked at the details.
It would take some evaluation as to whether the Ubuntu deltas can be
dropped or need to be merged.

#debian-python on OFTC or the debian-python mailing lists are good places
to start getting some help with the packages you care about.

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From qwcode at gmail.com Thu Sep 4 19:39:05 2014
From: qwcode at gmail.com (Marcus Smith)
Date: Thu, 4 Sep 2014 10:39:05 -0700
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: <20140904125831.5f8a5994@anarchist.wooz.org>
References: <20140904125831.5f8a5994@anarchist.wooz.org>
Message-ID: 

On Thu, Sep 4, 2014 at 9:58 AM, Barry Warsaw wrote:

> On Sep 03, 2014, at 12:24 PM, Reinout van Rees wrote:
>
> >All of them have ubuntu packages, but especially for gdal we sometimes
> need a
> >newer version. A PPA can help here, but I thought "a wheel could be nice,
> >too".
>
> In many cases, it mostly takes an interested person to get Ubuntu and
> Debian
> packages updated.

Wouldn't that only update it for the *next* release of debian/ubuntu?
Maybe you can clarify if/when a package can be updated so that it appears
as an update for your current distro version? I roughly have the idea that
it's only for security issues.

thanks
Marcus
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org Thu Sep 4 19:54:03 2014
From: barry at python.org (Barry Warsaw)
Date: Thu, 4 Sep 2014 13:54:03 -0400
Subject: [Distutils] wheels or system packages for pip on ubuntu
In-Reply-To: 
References: <20140904125831.5f8a5994@anarchist.wooz.org>
Message-ID: <20140904135403.49af7aad@anarchist.wooz.org>

On Sep 04, 2014, at 10:39 AM, Marcus Smith wrote:

>wouldn't that only update it for the *next* release of debian/ubuntu?

Generally yes. There's also backports, but that's more effort.
https://help.ubuntu.com/community/UbuntuBackports
https://wiki.debian.org/Backports

Cheers,
-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From dholth at gmail.com Thu Sep 4 19:58:32 2014
From: dholth at gmail.com (Daniel Holth)
Date: Thu, 4 Sep 2014 13:58:32 -0400
Subject: [Distutils] additional paths in wheel
Message-ID: 

It's always been obvious that wheel would probably need additional
paths besides the sysconfig ones, and there's been some discussion
here recently.

For the next version we should:

1. Add the autoconf dirs.
https://www.gnu.org/prep/standards/html_node/Directory-Variables.html.
"packagename-1.0/data/dvidir/" or any of the other autoconf paths
would be valid, in addition to the existing distutils paths. (The
autoconf paths are defined relative to a $prefix; in Python's case
$prefix is usually the base of the virtualenv.)

2. Replace WHEEL with wheel.json. wheel.json contains all the
information from WHEEL but is JSON, which is rather popular these days.

wheel.json may contain custom paths with string Template() interpolation.

{ "paths": { "name": "$prefix/mypath", "othername": "$bindir/etc",
"thirdname": "$othername/subfolder" } }

(the sysconfig names, autoconf names, and custom path names can be
interpolated here)

3. The extra paths are not very useful if you can't find them at
runtime. Provide a mechanism for recording the actual paths used
during installation (in wheel.json). The installer should record the
installation scheme only when requested, in one or more of these
formats: a) an importable Python file b) a .json file somewhere in the
installation c) a file inside the .dist-info directory itself.

Of course this is only useful if you have a build system that would
actually produce wheels with files under these extra paths. I don't
have that bit.
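One way the Template() interpolation sketched above could behave; the ordered expansion that lets "thirdname" reference "othername" is an assumption, not something the proposal pins down:

```python
import json
from string import Template

# Hypothetical "paths" section of a wheel.json, as proposed in the mail.
wheel_json = json.loads("""
{"paths": {"name": "$prefix/mypath",
           "othername": "$bindir/etc",
           "thirdname": "$othername/subfolder"}}
""")

# Scheme values supplied by the installer (e.g. the virtualenv base).
scheme = {"prefix": "/usr/local", "bindir": "/usr/local/bin"}

# Expand entries in order, so later names may reference earlier ones.
resolved = dict(scheme)
for name, template in wheel_json["paths"].items():
    resolved[name] = Template(template).substitute(resolved)

print(resolved["name"])       # -> /usr/local/mypath
print(resolved["thirdname"])  # -> /usr/local/bin/etc/subfolder
```

The installer would then write the resolved mapping back out (point 3 above) so that an installed package can locate its files at runtime.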
From vinay_sajip at yahoo.co.uk Thu Sep 4 20:43:33 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 4 Sep 2014 19:43:33 +0100 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: <1409856213.48765.YahooMailNeo@web172403.mail.ir2.yahoo.com> > Provide a mechanism for recording the actual paths used > during installation (in wheel.json). Well, there is already a list of paths used which is used during uninstallation (RECORD). It would make sense to record all the installed files there, and no reason to duplicate that elsewhere. In general, IMO importable (executable) Python files should be avoided for this kind of use case - JSON files in the .dist-info would be better. We see from the .pth file and setup.py examples that executable Python for these types of usages can be a bit of a double-edged sword. Possibly we should replace RECORD, etc. with JSON equivalents (to be neat and tidy). Regards, Vinay Sajip From donald at stufft.io Thu Sep 4 20:55:59 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Sep 2014 14:55:59 -0400 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: > > On Sep 4, 2014, at 1:58 PM, Daniel Holth wrote: > > It's always been obvious that wheel would probably need additional > paths besides the sysconfig ones, and there's been some discussion > here recently. > > For the next version we should: The next version of the Wheel spec? > > 1. Add the autoconf dirs. > https://www.gnu.org/prep/standards/html_node/Directory-Variables.html. > "packagename-1.0/data/dvidir/" or any of the other autoconf paths > would be valid, in addition to the existing distutils paths. (The > autoconf paths are defined relative to a $prefix, in Python's case > $prefix is usually the base of the virtualenv). Sounds plausible. > > 2. Replace WHEEL with wheel.json. wheel.json contains all the > information from WHEEL but is json which is rather popular these days. 
> > wheel.json may contain custom paths with string Template() interpolation. > > { "paths": { "name": "$prefix/mypath", "othername": "$bindir/etc", > "thirdname": "$othername/subfolder" } } > > (the sysconfig names, autoconf names, and custom path names can be > interpolated here) I don't understand this paths stuff, what is it supposed to be doing? Also with JSON, the problem is the current tooling is now setup to handle a key: value WHEEL store, so we'll need some sort of a migration path for old tools to know that this is a Wheel they can't handle. It's possible that it's not worth it to do this. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Sep 4 22:04:53 2014 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Sep 2014 16:04:53 -0400 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: On Thu, Sep 4, 2014 at 2:55 PM, Donald Stufft wrote: > > On Sep 4, 2014, at 1:58 PM, Daniel Holth wrote: > > It's always been obvious that wheel would probably need additional > paths besides the sysconfig ones, and there's been some discussion > here recently. > > For the next version we should: > > > The next version of the Wheel spec? This would be Wheel 2. It would still include WHEEL with the file format version, older clients would be able to show the error. Why allow paths to be written in a .py? The .py would be purely declarative; it would allow a program to find its files without using any pkg_resources API. IMO it's important to allow programs to work without iterating over the metadata of all installed packages or without participating in the package management system at all. # possibly only the paths that were actually used... paths = { 'prefix':'/usr/local/', 'bindir':'/usr/local/bin' , ...} # OR PREFIX="/usr/local" BINDIR="/usr/local/bin" MANDIR="/usr/local/share/man/..." > 1. Add the autoconf dirs. 
> https://www.gnu.org/prep/standards/html_node/Directory-Variables.html. > "packagename-1.0.data/dvidir/" or any of the other autoconf paths > would be valid, in addition to the existing distutils paths. (The > autoconf paths are defined relative to a $prefix, in Python's case > $prefix is usually the base of the virtualenv). > > > Sounds plausible. > > > 2. Replace WHEEL with wheel.json. wheel.json contains all the > information from WHEEL but is json which is rather popular these days. > > wheel.json may contain custom paths with string Template() interpolation. > > { "paths": { "name": "$prefix/mypath", "othername": "$bindir/etc", > "thirdname": "$othername/subfolder" } } > > (the sysconfig names, autoconf names, and custom path names can be > interpolated here) > > > I don't understand this paths stuff, what is it supposed to be doing Instead of just having predefined paths in the packagename-1.0.data/[path] directories, the user could define additional values for "path" in the {"path" : "installation directory"} dictionary used by wheel installers. (I'm not 100% sure this proposed feature does not cause more problems than it solves...) > Also with JSON, the problem is the current tooling is now setup to > handle a key: value WHEEL store, so we'll need some sort of a migration > path for old tools to know that this is a Wheel they can't handle. It's > possible that it's not worth it to do this. They would just contain a stub WHEEL advertising version 2. 
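The "purely declarative" recording idea above can be sketched as follows. This assumes the installer writes a JSON file into the .dist-info directory at install time; the file name paths.json and the helper are hypothetical, not part of any standardized format:

```python
import json
import os
import tempfile

def load_install_paths(dist_info_dir):
    """Read the paths recorded at install time from a hypothetical
    paths.json inside a .dist-info directory, with no dependency on
    pkg_resources or any other packaging API."""
    with open(os.path.join(dist_info_dir, "paths.json")) as f:
        return json.load(f)

# Demo: simulate what an installer would have written.
demo = tempfile.mkdtemp()
info = os.path.join(demo, "doomapp-1.0.dist-info")
os.mkdir(info)
with open(os.path.join(info, "paths.json"), "w") as f:
    json.dump({"wadfiles": "/usr/local/share/doomapp"}, f)

print(load_install_paths(info)["wadfiles"])  # /usr/local/share/doomapp
```

This is the lightweight access pattern Daniel alludes to: the application only needs to know where its own .dist-info directory is, not how to enumerate every installed distribution.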
> --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From dholth at gmail.com Thu Sep 4 22:09:59 2014 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Sep 2014 16:09:59 -0400 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: For example, a wheel has these directories: pyramid-1.4.data/scripts/prequest pyramid-1.4.data/scripts/pshell With custom paths it could also have pyramid-1.4.data/spam/pharmacy.txt and "paths" : { "spam" : "$datadir/pyramid/junkmail" } pharmacy.txt would be installed into $datadir/pyramid/junkmail/pharmacy.txt It would also be appealing to be able to interpolate the package name and version into the "paths" dictionary... On Thu, Sep 4, 2014 at 4:04 PM, Daniel Holth wrote: > On Thu, Sep 4, 2014 at 2:55 PM, Donald Stufft wrote: >> >> On Sep 4, 2014, at 1:58 PM, Daniel Holth wrote: >> >> It's always been obvious that wheel would probably need additional >> paths besides the sysconfig ones, and there's been some discussion >> here recently. >> >> For the next version we should: >> >> >> The next version of the Wheel spec? > > This would be Wheel 2. > > It would still include WHEEL with the file format version, older > clients would be able to show the error. > > Why allow paths to be written in a .py? The .py would be purely > declarative; it would allow a program to find its files without using > any pkg_resources API. IMO it's important to allow programs to work > without iterating over the metadata of all installed packages or > without participating in the package management system at all. > > # possibly only the paths that were actually used... > paths = { 'prefix':'/usr/local/', 'bindir':'/usr/local/bin' , ...} > # OR > PREFIX="/usr/local" > BINDIR="/usr/local/bin" > MANDIR="/usr/local/share/man/..." > >> 1. Add the autoconf dirs. >> https://www.gnu.org/prep/standards/html_node/Directory-Variables.html. 
>> "packagename-1.0.data/dvidir/" or any of the other autoconf paths >> would be valid, in addition to the existing distutils paths. (The >> autoconf paths are defined relative to a $prefix, in Python's case >> $prefix is usually the base of the virtualenv). >> >> >> Sounds plausible. >> >> >> 2. Replace WHEEL with wheel.json. wheel.json contains all the >> information from WHEEL but is json which is rather popular these days. >> >> wheel.json may contain custom paths with string Template() interpolation. >> >> { "paths": { "name": "$prefix/mypath", "othername": "$bindir/etc", >> "thirdname": "$othername/subfolder" } } >> >> (the sysconfig names, autoconf names, and custom path names can be >> interpolated here) >> >> >> I don't understand this paths stuff, what is it supposed to be doing > Instead of just having predefined paths in the > packagename-1.0.data/[path] directories, the user could define > additional values for "path" in the {"path" : "installation > directory"} dictionary used by wheel installers. (I'm not 100% sure > this proposed feature does not cause more problems than it solves...) > >> Also with JSON, the problem is the current tooling is now setup to >> handle a key: value WHEEL store, so we'll need some sort of a migration >> path for old tools to know that this is a Wheel they can't handle. It's >> possible that it's not worth it to do this. > > They would just contain a stub WHEEL advertising version 2. > >> --- >> Donald Stufft >> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> From p.f.moore at gmail.com Thu Sep 4 22:25:31 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 4 Sep 2014 21:25:31 +0100 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: On 4 September 2014 18:58, Daniel Holth wrote: > For the next version we should: > > 1. Add the autoconf dirs. OK. Can we make it clear in the documentation that these are non-portable when used in distributions installed in the system Python? 
And can we have some documentation on how code should be written to find a file that's installed in "$libdir/foo" so that the code works for a system installation and a virtualenv installation? > 2. Replace WHEEL with wheel.json. wheel.json contains all the > information from WHEEL but is json which is rather popular these days. Formats parseable by stdlib tools is a good thing, IMO. I'm not sure it's worth switching to JSON just for the sake of doing so - we need format stability as well, because tools like pip will still have to parse the older formats for some time yet. But you mention extending the format below - that may be a good reason to switch to the more general JSON format at the same time. > wheel.json may contain custom paths with string Template() interpolation. > > { "paths": "name":"$prefix/mypath", "othername":"$bindir/etc", > "thirdname": "$othername/subfolder" } > > (the sysconfig names, autoconf names, and custom path names can be > interpolated here) Not 100% sure what you're intending here. > 3. The extra paths are not very useful if you can't find them at > runtime. Provide a mechanism for recording the actual paths used > during installation (in wheel.json). The installer should record the > installation scheme only when requested, in one or more of the formats > a) an importable Python file b) a .json file somewhere in the > installation c) a file inside the .dist-info directory itself. This is basically much like RECORD, as Vinay says, and should be a file inside the .dist-info directory for that reason. I still wish there was a *really* lightweight way of getting dist-info files at runtime, distlib and setuptools cover far more ground, but getting at the extra paths needs nothing more than get_distinfo_file('mydist', 'paths.json'), and a runtime dependency on distlib/setuptools just for that seems like a lot. > Of course this is only useful if you have a build system that would > actually produce wheels with files under these extra paths. 
I don't > have that bit. By this, do you mean that it's not possible to get distutils/setuptools/bdist_wheel to support this, or just that extra work is needed to add support to those tools? Paul From p.f.moore at gmail.com Thu Sep 4 22:33:40 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 4 Sep 2014 21:33:40 +0100 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: On 4 September 2014 21:09, Daniel Holth wrote: > For example, a wheel has these directories: > > pyramid-1.4.data/scripts/prequest > pyramid-1.4.data/scripts/pshell > > With custom paths it could also have > > pyramid-1.4.data/spam/pharmacy.txt > > and "paths" : { "spam" : "$datadir/pyramid/junkmail" } > > pharmacy.txt would be installed into $datadir/pyramid/junkmail/pharmacy.txt > > It would also be appealing to be able to interpolate the package name > and version into the "paths" dictionary... Ah, I see what you're trying to achieve. No idea how useful it would be in practice, most projects seem to get away without it, but maybe they shouldn't have to. But based on this, it's critical IMO that there's good docs on how to find these files at runtime, otherwise projects will hard-code for the simple case (installed in the system Python on Unix - I imagine it'll be the Unix developers who will be the typical users of this feature) and end up breaking in all other environments... Paul From qwcode at gmail.com Thu Sep 4 22:42:24 2014 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 4 Sep 2014 13:42:24 -0700 Subject: [Distutils] additional paths in wheel In-Reply-To: References: Message-ID: > > > Instead of just having predefined paths in the > packagename-1.0.data/[path] directories, the user could define > additional values for "path" in the {"path" : "installation > directory"} dictionary used by wheel installers. (I'm not 100% sure > this proposed feature does not cause more problems than it solves...) 
> well, as it is now, if "data_files" contains absolute paths, wheel silently produces broken whl packages. i.e. it produces packages that install your data_files relative to sys.prefix (not as absolute) so the proposed feature of custom paths, would at least offer a way to un-break those packages. another option for right now (and maybe in the long term too) though is to refuse to ever build wheels for these packages in the first place (see discussion in https://bitbucket.org/pypa/wheel/issue/92/bdist_wheel-makes-absolute-data_files ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Sep 4 23:42:16 2014 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Sep 2014 17:42:16 -0400 Subject: [Distutils] additional paths in wheel In-Reply-To: <1409856213.48765.YahooMailNeo@web172403.mail.ir2.yahoo.com> References: <1409856213.48765.YahooMailNeo@web172403.mail.ir2.yahoo.com> Message-ID: Only the installer cares about RECORD. The application knows it's looking for doom1.wad and it just needs to know what the $wadfiles path was - it just needs the prefix, which is all the new option writes. That is why I don't propose using record for this. We will not allow every file in the dist to be relocated relative to every other. On Sep 4, 2014 2:43 PM, "Vinay Sajip" wrote: > > Provide a mechanism for recording the actual paths used > > during installation (in wheel.json). > > Well, there is already a list of paths used which is used during > uninstallation (RECORD). It would make sense to record all the > installed files there, and no reason to duplicate that elsewhere. > > In general, IMO importable (executable) Python files should be > avoided for this kind of use case - JSON files in the .dist-info > would be better. We see from the .pth file and setup.py examples > that executable Python for these types of usages can be a bit of > a double-edged sword. > > Possibly we should replace RECORD, etc. 
with JSON equivalents > (to be neat and tidy). > > Regards, > > Vinay Sajip > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Fri Sep 5 09:52:09 2014 From: reinout at vanrees.org (Reinout van Rees) Date: Fri, 05 Sep 2014 09:52:09 +0200 Subject: [Distutils] wheels or system packages for pip on ubuntu In-Reply-To: <20140904135403.49af7aad@anarchist.wooz.org> References: <20140904125831.5f8a5994@anarchist.wooz.org> <20140904135403.49af7aad@anarchist.wooz.org> Message-ID: On 04-09-14 19:54, Barry Warsaw wrote: > On Sep 04, 2014, at 10:39 AM, Marcus Smith wrote: > >> >wouldn't that only update it for the*next* release of debian/ubuntu? > Generally yes. There's also backports, but that's more effort. > > https://help.ubuntu.com/community/UbuntuBackports > https://wiki.debian.org/Backports For my usecase it is mostly that I have a colleague that keeps doing absolute magic with gdal. And when he says "this calculation can be done with 30% of the effort if we use the one-month-old gdal version", then you need *very* up to date ubuntu packages :-) We did use (for this case) the ubuntugis-stable PPA, but that broke a number of our servers somewhere in May or so as it also updated some other dependency which caused breakage. So... that's why Wheels started to sound nice. And compiling wheels yourself and placing them on a server in a directory with various wheels for a specific distribution... Sounds like the most standard option right now. 
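The "compile wheels yourself and serve them from a directory" workflow described above can be done with pip alone. A minimal sketch, where the wheel directory, the server URL, and the package list are all hypothetical:

```shell
# On a build box matching the target distro, compile wheels
# (and those of their dependencies) into a shared directory:
pip wheel --wheel-dir=/srv/wheels/trusty GDAL Pillow psycopg2

# On each target machine, install only from that directory,
# served over HTTP or mounted locally, never from PyPI:
pip install --no-index --find-links=https://wheels.example.com/trusty/ GDAL
```

Since wheels with compiled extensions are only safe on machines matching the build box (same distro release, same architecture, same system libraries), keeping one wheel directory per box type, as Reinout suggests, is what makes this reliable.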
Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From strawman at astraw.com Fri Sep 5 10:00:40 2014 From: strawman at astraw.com (Andrew Straw) Date: Fri, 5 Sep 2014 10:00:40 +0200 Subject: [Distutils] wheels or system packages for pip on ubuntu In-Reply-To: References: <20140904125831.5f8a5994@anarchist.wooz.org> <20140904135403.49af7aad@anarchist.wooz.org> Message-ID: <669AAAB4-FF57-4B6D-B0CF-8105B1596488@astraw.com> On 5 Sep 2014, at 9:52 AM, Reinout van Rees wrote: > So... that's why Wheels started to sound nice. And compiling wheels yourself and placing them on a server in a directory with various wheels for a specific distribution... Sounds like the most standard option right now. I haven't tried it myself but this may also be interesting: https://github.com/spotify/dh-virtualenv -Andrew From p at lists-2014.dobrogost.net Fri Sep 5 14:38:06 2014 From: p at lists-2014.dobrogost.net (Piotr Dobrogost) Date: Fri, 5 Sep 2014 14:38:06 +0200 Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?= Message-ID: Hi! According to http://www.buildout.org/en/latest/community.html this mailing list is the right place to ask about buildout so here is my question. I'm having a hard time finding out why buildout doesn't work with my proxy. Here is the error I get: pdobrogost at host:~/projects/projectx/projectx_buildout$ ./bin/buildout -vNc buildout-devel.cfg custom:cvsuser=pdobrogost Installing 'mr.developer'. We have no distributions for mr.developer that satisfies 'mr.developer'. Download error on http://pypi.python.org/simple/mr.developer/: hostname 'proxy.site.local' doesn't match either of '*.c.ssl.fastly.net', 'c.ssl.fastly. (...) 
I'm using Buildout 1.4.3 with Python 2.7.7 on Debian 6.0.10 Details can be found at http://stackoverflow.com/q/25682165/95735 Btw, does buildout use easy_install to download/build/install eggs or something else? If it uses easy_install is there any way to pass options to easy_install used? Thank you in advance, Piotr Dobrogost From jim at zope.com Sun Sep 7 16:09:56 2014 From: jim at zope.com (Jim Fulton) Date: Sun, 7 Sep 2014 10:09:56 -0400 Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?= In-Reply-To: References: Message-ID: On Fri, Sep 5, 2014 at 8:38 AM, Piotr Dobrogost

wrote: > Hi! > > According to http://www.buildout.org/en/latest/community.html this > mailing list is the right place to ask about buildout Yup. > so here is my > question. > > I'm having hard time finding out why buildout doesn't work with my proxy. > Here is the error I get: > > pdobrogost at host:~/projects/projectx/projectx_buildout$ ./bin/buildout > -vNc buildout-devel.cfg custom:cvsuser=pdobrogost > Installing 'mr.developer'. > We have no distributions for mr.developer that satisfies 'mr.developer'. > Download error on http://pypi.python.org/simple/mr.developer/: > hostname 'proxy.site.local' doesn't match either of > '*.c.ssl.fastly.net', 'c.ssl.fastly. (...) > > I'm using Buildout 1.4.3 Wow. That's really old and not really supported any more. It also uses very old and unsupported versions of setuptools. > with Python 2.7.7 on Debian 6.0.10 > Details can be found at http://stackoverflow.com/q/25682165/95735 > > Btw, does buildout use easy_install to download/build/install eggs or > something else? If it uses easy_install is there any way to pass > options to easy_install used? Buildout uses setuptools, which is what easy_install uses. (Buildout originally used easy_install more or less directly and still does in some narrow cases.) Please upgrade to buildout 2. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From reinout at vanrees.org Mon Sep 8 13:44:48 2014 From: reinout at vanrees.org (Reinout van Rees) Date: Mon, 08 Sep 2014 13:44:48 +0200 Subject: [Distutils] wheels or system packages for pip on ubuntu In-Reply-To: <669AAAB4-FF57-4B6D-B0CF-8105B1596488@astraw.com> References: <20140904125831.5f8a5994@anarchist.wooz.org> <20140904135403.49af7aad@anarchist.wooz.org> <669AAAB4-FF57-4B6D-B0CF-8105B1596488@astraw.com> Message-ID: On 05-09-14 10:00, Andrew Straw wrote: > > On 5 Sep 2014, at 9:52 AM, Reinout van Rees wrote: > >> So... that's why Wheels started to sound nice. 
And compiling wheels yourself and placing them on a server in a directory with various wheels for a specific distribution... Sounds like the most standard option right now. > > I haven?t tried it myself but this may also be interesting: https://github.com/spotify/dh-virtualenv Agreed, looks interesting. I watched the youtube video of the europython Berlin talk about it. In a way it was what caused my original question :-) Why? With dh-virtualenv you can have a debian package with debian dependencies and a virtualenv all ready to go. So... what do you do with debian-level dependencies and how do you tell pip you've got them? Or can you perhaps easily use wheels and get those into the virtualenv and thus into the debian package? So I started thinking and started asking :-) My current thinking is as follows: - One or two basic ubuntu-based boxes: basic ubuntu with a special custom package that pre-installs necessities such as memcached, libjpeg and postgres bindings. - Wheels created for those one or two basic boxes, in a directory per box-type. This way you have some machinery in just one place where you can create wheels at will for your boxes. - Regular pip (or buildout once it supports wheels, I haven't checked yet if it does) to manage the python packages. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From p at 2014.dobrogost.net Mon Sep 8 11:10:37 2014 From: p at 2014.dobrogost.net (Piotr Dobrogost) Date: Mon, 8 Sep 2014 11:10:37 +0200 Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?= In-Reply-To: References: Message-ID: Jim, thanks for taking time to reply. On Sun, Sep 7, 2014 at 4:09 PM, Jim Fulton wrote: > > Wow. 
That's really old and not really supported any more. > > It also uses very old and unsupported versions of setuptools. > (...) > > Buildout uses setuptools, which is what easy_install uses. (Buildout originally > used easy_install more or less directly and still does in some narrow cases.) > > Please upgrade to buildout 2. Ok, I tried with current bootstrap.py from http://downloads.buildout.org/2/bootstrap.py and got the same error. This time when running bootstrap itself (not buildout): pdobrogost at host:~/projects/projectx/projectx_buildout$ python bootstrap.py Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-5.7.zip Extracting in /tmp/tmpj6ZiDN Now working in /tmp/tmpj6ZiDN/setuptools-5.7 Building a Setuptools egg in /tmp/tmpF9r1g6 /tmp/tmpF9r1g6/setuptools-5.7-py2.7.egg Download error on https://pypi.python.org/simple/zc.buildout/: hostname 'proxy.site.local' doesn't match either of '*.c.ssl.fastly.net', 'c.ssl.fastly.net', '*.target.com', '*.vhx.tv', '*.snappytv.com', '*.atlassian.net', 'secure.lessthan3.com', '*.atlassian.com', 'a.sellpoint.net', 'cdn.upthere.com', '*.tissuu.com', '*.issuu.com', '*.kekofan.com', '*.python.org', '*.theverge.com', '*.sbnation.com', '*.polygon.com', '*.twobrightlights.com', '*.2brightlights.info', '*.vox.com', 'staging-cdn.upthere.com', '*.zeebox.com', '*.beamly.com', '*.aticpan.org', 'stream.svc.7digital.net', 'stream-test.svc.7digital.net', '*.articulate.com', 's.t.st', 'vid.thestreet.com', '*.planet-labs.com', '*.url2png.com', 'turn.com', 'www.turn.com', 'rivergathering.org', 'social.icfglobal2014-europe.org', '*.innogamescdn.com', '*.pathable.com', '*.staging.pathable.com', '*.kickstarter.com', 'sparkingchange.org', 'www.swedavia.se', 'www.swedavia.com', 'js-agent.newrelic.com', '*.fastly-streams.com', 'cdn.brandisty.com', 'fastly.hightailcdn.com', '*.fl.yelpcdn.com', '*.feedmagnet.com', 'api.contentbody.com', '*.acquia.com', '*.swarmapp.com', '*.pypa.io', 'pypa.io', 'static.qbranch.se', 
'*.krxd.net', '*.room.co', '*.metrological.com', 'room.co', 'cdn.evbuc.com', 'cdn.adagility.com', '*.bandpage.com', '*.ibmserviceengage.com', '*.quirky.com', '*.veez.co', '*.x.io', '*.otoycdn.net', '*.scribd.com', 'www.dwin1.com', 'api.imgur-ysports.com', 'i.imgur-ysports.com', '*.fxcm.co.jp', 'listora.com', '*.listora.com', 'blendle.nl', '*.blendle.nl', '*.modeanalytics.com', 'modeanalytics.com', 'krux.com', '*.krux.com', '*.udemy.com', '*.1stdibs.com', 'api.keep.com', 'www.piriform.com', '*.ustream.tv', 'www.zimbio.com', 'm.zimbio.com', 'www.stylebistro.com', 'm.stylebistro.com', 'm.lonny.com', 'www.lonny.com', 'assets.trabiancdn.com', '*.socialchorus.com', '*.heritagestatic.com', '*.theoutbound.com', 'img.rakuten.com', 'images.rakuten.com', 'img1.r10.io', 'ast1.r10.io', 'scribd.com' -- Some packages may not be found! Couldn't find index page for 'zc.buildout' (maybe misspelled?) Download error on https://pypi.python.org/simple/: hostname 'proxy.site.local' doesn't match either of '*.c.ssl.fastly.net', 'c.ssl.fastly.net', '*.target.com', '*.vhx.tv', '*.snappytv.com', '*.atlassian.net', 'secure.lessthan3.com', '*.atlassian.com', 'a.sellpoint.net', 'cdn.upthere.com', '*.tissuu.com', '*.issuu.com', '*.kekofan.com', '*.python.org', '*.theverge.com', '*.sbnation.com', '*.polygon.com', '*.twobrightlights.com', '*.2brightlights.info', '*.vox.com', 'staging-cdn.upthere.com', '*.zeebox.com', '*.beamly.com', '*.aticpan.org', 'stream.svc.7digital.net', 'stream-test.svc.7digital.net', '*.articulate.com', 's.t.st', 'vid.thestreet.com', '*.planet-labs.com', '*.url2png.com', 'turn.com', 'www.turn.com', 'rivergathering.org', 'social.icfglobal2014-europe.org', '*.innogamescdn.com', '*.pathable.com', '*.staging.pathable.com', '*.kickstarter.com', 'sparkingchange.org', 'www.swedavia.se', 'www.swedavia.com', 'js-agent.newrelic.com', '*.fastly-streams.com', 'cdn.brandisty.com', 'fastly.hightailcdn.com', '*.fl.yelpcdn.com', '*.feedmagnet.com', 'api.contentbody.com', '*.acquia.com', 
'*.swarmapp.com', '*.pypa.io', 'pypa.io', 'static.qbranch.se', '*.krxd.net', '*.room.co', '*.metrological.com', 'room.co', 'cdn.evbuc.com', 'cdn.adagility.com', '*.bandpage.com', '*.ibmserviceengage.com', '*.quirky.com', '*.veez.co', '*.x.io', '*.otoycdn.net', '*.scribd.com', 'www.dwin1.com', 'api.imgur-ysports.com', 'i.imgur-ysports.com', '*.fxcm.co.jp', 'listora.com', '*.listora.com', 'blendle.nl', '*.blendle.nl', '*.modeanalytics.com', 'modeanalytics.com', 'krux.com', '*.krux.com', '*.udemy.com', '*.1stdibs.com', 'api.keep.com', 'www.piriform.com', '*.ustream.tv', 'www.zimbio.com', 'm.zimbio.com', 'www.stylebistro.com', 'm.stylebistro.com', 'm.lonny.com', 'www.lonny.com', 'assets.trabiancdn.com', '*.socialchorus.com', '*.heritagestatic.com', '*.theoutbound.com', 'img.rakuten.com', 'images.rakuten.com', 'img1.r10.io', 'ast1.r10.io', 'scribd.com' -- Some packages may not be found! No local packages or download links found for zc.buildout==2.2.1 error: Could not find suitable distribution for Requirement.parse('zc.buildout==2.2.1') Traceback (most recent call last): File "bootstrap.py", line 161, in "Failed to execute command:\n%s" % repr(cmd)[1:-1]) Exception: Failed to execute command: '/opt/python/2.7/bin/python', '-c', 'from setuptools.command.easy_install import main; main()', '-mZqNxd', '/tmp/tmpF9r1g6', 'zc.buildout==2.2.1' This time however, there's a traceback and one can see it's easy_install invocation that fails. I was going to run easy_install with the same options as bootstrap passes (trying to repeat the error) but before this I upgraded setuptools from 3.6 which I had to the current version 5.7. 
After this easy_install ran without problems: pdobrogost at host:~$ easy_install -mZNxd /tmp/1 zc.buildout==2.2.1 Searching for zc.buildout==2.2.1 Reading https://pypi.python.org/simple/zc.buildout/ Best match: zc.buildout 2.2.1 Downloading https://pypi.python.org/packages/source/z/zc.buildout/zc.buildout-2.2.1.tar.gz#md5=476a06eed08506925c700109119b6e41 Processing zc.buildout-2.2.1.tar.gz Writing /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/setup.cfg Running zc.buildout-2.2.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/egg-dist-tmp-7a53i0 Installed /home/users/pdobrogost/.local/lib/python2.7/site-packages/zc.buildout-2.2.1-py2.7.egg Because this distribution was installed --multi-version, before you can import modules from this package in an application, you will need to 'import pkg_resources' and then use a 'require()' call similar to one of these examples, in order to select the desired version: pkg_resources.require("zc.buildout") # latest installed version pkg_resources.require("zc.buildout==2.2.1") # this exact version pkg_resources.require("zc.buildout>=2.2.1") # this version or higher (Curiously it installed buildout into my local site-packages instead of the folder which was passed to easy_install as the value of the -d option, /tmp/1). After this, when I ran bootstrap again, the original problem was gone. Out of curiosity I downgraded setuptools to version 3.6 which I had originally installed and ran bootstrap again. No problem this time. I have no idea what happened as upgrading and downgrading setuptools should not have any effect... Thank you for help. 
Regards, Piotr 
No local packages or download links found for zc.buildout==2.2.1
error: Could not find suitable distribution for Requirement.parse('zc.buildout==2.2.1')
Traceback (most recent call last):
  File "bootstrap.py", line 161, in
    "Failed to execute command:\n%s" % repr(cmd)[1:-1])
Exception: Failed to execute command:
'/opt/python/2.7/bin/python', '-c', 'from setuptools.command.easy_install import main; main()', '-mZqNxd', '/tmp/tmpF9r1g6', 'zc.buildout==2.2.1'

This time, however, there's a traceback and one can see it's the easy_install invocation that fails.

I was going to run easy_install with the same options as bootstrap passes (trying to repeat the error), but before this I upgraded setuptools from version 3.6, which I had, to the current version 5.7. After this easy_install ran without problems:

pdobrogost at host:~$ easy_install -mZNxd /tmp/1 zc.buildout==2.2.1
Searching for zc.buildout==2.2.1
Reading https://pypi.python.org/simple/zc.buildout/
Best match: zc.buildout 2.2.1
Downloading https://pypi.python.org/packages/source/z/zc.buildout/zc.buildout-2.2.1.tar.gz#md5=476a06eed08506925c700109119b6e41
Processing zc.buildout-2.2.1.tar.gz
Writing /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/setup.cfg
Running zc.buildout-2.2.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/egg-dist-tmp-7a53i0

Installed /home/users/pdobrogost/.local/lib/python2.7/site-packages/zc.buildout-2.2.1-py2.7.egg

Because this distribution was installed --multi-version, before you can import modules from this package in an application, you will need to 'import pkg_resources' and then use a 'require()' call similar to one of these examples, in order to select the desired version:

    pkg_resources.require("zc.buildout")  # latest installed version
    pkg_resources.require("zc.buildout==2.2.1")  # this exact version
    pkg_resources.require("zc.buildout>=2.2.1")  # this version or higher

(Curiously, it installed buildout into my local site-packages instead of the folder which was passed to easy_install as the value of the -d option, /tmp/1.)

After this, when I ran bootstrap again, the original problem was gone. Out of curiosity I downgraded setuptools to version 3.6, which I had originally installed, and ran bootstrap again. No problem this time. I have no idea what happened, as upgrading and downgrading setuptools should not have any effect...

Thank you for help.

Regards,
Piotr

From jim at zope.com Mon Sep 8 15:46:08 2014
From: jim at zope.com (Jim Fulton)
Date: Mon, 8 Sep 2014 09:46:08 -0400
Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?=
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 8, 2014 at 7:44 AM, Piotr Dobrogost

wrote: > Jim, thanks for taking time to reply. > > On Sun, Sep 7, 2014 at 4:09 PM, Jim Fulton wrote: >> >> Wow. That's really old and not really supported any more. >> >> It also uses very old and unsupported versions of setuptools. >> > (...) >> >> Buildout uses setuptools, which is what easy_install uses. (Buildout originally >> used easy_install more or less directly and still does in some narrow cases.) >> >> Please upgrade to buildout 2. > > Ok, I tried with current bootstrap.py from > http://downloads.buildout.org/2/bootstrap.py and got the same error. > This time when running bootstrap itself (not buildout): > > pdobrogost at host:~/projects/ > projectx/projectx_buildout$ python bootstrap.py > Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-5.7.zip > Extracting in /tmp/tmpj6ZiDN > Now working in /tmp/tmpj6ZiDN/setuptools-5.7 > Building a Setuptools egg in /tmp/tmpF9r1g6 > /tmp/tmpF9r1g6/setuptools-5.7-py2.7.egg > Download error on https://pypi.python.org/simple/zc.buildout/: > hostname 'proxy.site.local' doesn't match either of > '*.c.ssl.fastly.net', 'c.ssl.fastly.net', '*.target.com', '*.vhx.tv', > '*.snappytv.com', '*.atlassian.net', 'secure.lessthan3.com', > '*.atlassian.com', 'a.sellpoint.net', 'cdn.upthere.com', > '*.tissuu.com', '*.issuu.com', '*.kekofan.com', '*.python.org', > '*.theverge.com', '*.sbnation.com', '*.polygon.com', > '*.twobrightlights.com', '*.2brightlights.info', '*.vox.com', > 'staging-cdn.upthere.com', '*.zeebox.com', '*.beamly.com', > '*.aticpan.org', 'stream.svc.7digital.net', > 'stream-test.svc.7digital.net', '*.articulate.com', 's.t.st', > 'vid.thestreet.com', '*.planet-labs.com', '*.url2png.com', 'turn.com', > 'www.turn.com', 'rivergathering.org', > 'social.icfglobal2014-europe.org', '*.innogamescdn.com', > '*.pathable.com', '*.staging.pathable.com', '*.kickstarter.com', > 'sparkingchange.org', 'www.swedavia.se', 'www.swedavia.com', > 'js-agent.newrelic.com', '*.fastly-streams.com', 
'cdn.brandisty.com', > 'fastly.hightailcdn.com', '*.fl.yelpcdn.com', '*.feedmagnet.com', > 'api.contentbody.com', '*.acquia.com', '*.swarmapp.com', '*.pypa.io', > 'pypa.io', 'static.qbranch.se', '*.krxd.net', '*.room.co', > '*.metrological.com', 'room.co', 'cdn.evbuc.com', 'cdn.adagility.com', > '*.bandpage.com', '*.ibmserviceengage.com', '*.quirky.com', > '*.veez.co', '*.x.io', '*.otoycdn.net', '*.scribd.com', > 'www.dwin1.com', 'api.imgur-ysports.com', 'i.imgur-ysports.com', > '*.fxcm.co.jp', 'listora.com', '*.listora.com', 'blendle.nl', > '*.blendle.nl', '*.modeanalytics.com', 'modeanalytics.com', > 'krux.com', '*.krux.com', '*.udemy.com', '*.1stdibs.com', > 'api.keep.com', 'www.piriform.com', '*.ustream.tv', 'www.zimbio.com', > 'm.zimbio.com', 'www.stylebistro.com', 'm.stylebistro.com', > 'm.lonny.com', 'www.lonny.com', 'assets.trabiancdn.com', > '*.socialchorus.com', '*.heritagestatic.com', '*.theoutbound.com', > 'img.rakuten.com', 'images.rakuten.com', 'img1.r10.io', 'ast1.r10.io', > 'scribd.com' -- Some packages may not be found! > Couldn't find index page for 'zc.buildout' (maybe misspelled?) 
> Download error on https://pypi.python.org/simple/: hostname > 'proxy.site.local' doesn't match either of '*.c.ssl.fastly.net', > 'c.ssl.fastly.net', '*.target.com', '*.vhx.tv', '*.snappytv.com', > '*.atlassian.net', 'secure.lessthan3.com', '*.atlassian.com', > 'a.sellpoint.net', 'cdn.upthere.com', '*.tissuu.com', '*.issuu.com', > '*.kekofan.com', '*.python.org', '*.theverge.com', '*.sbnation.com', > '*.polygon.com', '*.twobrightlights.com', '*.2brightlights.info', > '*.vox.com', 'staging-cdn.upthere.com', '*.zeebox.com', > '*.beamly.com', '*.aticpan.org', 'stream.svc.7digital.net', > 'stream-test.svc.7digital.net', '*.articulate.com', 's.t.st', > 'vid.thestreet.com', '*.planet-labs.com', '*.url2png.com', 'turn.com', > 'www.turn.com', 'rivergathering.org', > 'social.icfglobal2014-europe.org', '*.innogamescdn.com', > '*.pathable.com', '*.staging.pathable.com', '*.kickstarter.com', > 'sparkingchange.org', 'www.swedavia.se', 'www.swedavia.com', > 'js-agent.newrelic.com', '*.fastly-streams.com', 'cdn.brandisty.com', > 'fastly.hightailcdn.com', '*.fl.yelpcdn.com', '*.feedmagnet.com', > 'api.contentbody.com', '*.acquia.com', '*.swarmapp.com', '*.pypa.io', > 'pypa.io', 'static.qbranch.se', '*.krxd.net', '*.room.co', > '*.metrological.com', 'room.co', 'cdn.evbuc.com', 'cdn.adagility.com', > '*.bandpage.com', '*.ibmserviceengage.com', '*.quirky.com', > '*.veez.co', '*.x.io', '*.otoycdn.net', '*.scribd.com', > 'www.dwin1.com', 'api.imgur-ysports.com', 'i.imgur-ysports.com', > '*.fxcm.co.jp', 'listora.com', '*.listora.com', 'blendle.nl', > '*.blendle.nl', '*.modeanalytics.com', 'modeanalytics.com', > 'krux.com', '*.krux.com', '*.udemy.com', '*.1stdibs.com', > 'api.keep.com', 'www.piriform.com', '*.ustream.tv', 'www.zimbio.com', > 'm.zimbio.com', 'www.stylebistro.com', 'm.stylebistro.com', > 'm.lonny.com', 'www.lonny.com', 'assets.trabiancdn.com', > '*.socialchorus.com', '*.heritagestatic.com', '*.theoutbound.com', > 'img.rakuten.com', 'images.rakuten.com', 'img1.r10.io', 
'ast1.r10.io', > 'scribd.com' -- Some packages may not be found! > No local packages or download links found for zc.buildout==2.2.1 > error: Could not find suitable distribution for > Requirement.parse('zc.buildout==2.2.1') > Traceback (most recent call last): > File "bootstrap.py", line 161, in > "Failed to execute command:\n%s" % repr(cmd)[1:-1]) > Exception: Failed to execute command: > '/opt/python/2.7/bin/python', '-c', 'from > setuptools.command.easy_install import main; main()', '-mZqNxd', > '/tmp/tmpF9r1g6', 'zc.buildout==2.2.1' > > This time however, there's a traceback and one can see it's > easy_install invocation that fails. > > I was going to run easy_install with the same options as bootstrap > passes (trying to repeat the error) but before this I upgraded > setuptools from 3.6 which I had to the current version 5.7. After this > easy_install run without problems: > > pdobrogost at host:~$ easy_install -mZNxd /tmp/1 zc.buildout==2.2.1 > Searching for zc.buildout==2.2.1 > Reading https://pypi.python.org/simple/zc.buildout/ > Best match: zc.buildout 2.2.1 > Downloading https://pypi.python.org/packages/source/z/zc.buildout/zc.buildout-2.2.1.tar.gz#md5=476a06eed08506925c700109119b6e41 > Processing zc.buildout-2.2.1.tar.gz > Writing /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/setup.cfg > Running zc.buildout-2.2.1/setup.py -q bdist_egg --dist-dir > /tmp/easy_install-wQGE2_/zc.buildout-2.2.1/egg-dist-tmp-7a53i0 > > Installed /home/users/pdobrogost/.local/lib/python2.7/site-packages/zc.buildout-2.2.1-py2.7.egg > > Because this distribution was installed --multi-version, before you can > import modules from this package in an application, you will need to > 'import pkg_resources' and then use a 'require()' call similar to one of > these examples, in order to select the desired version: > > pkg_resources.require("zc.buildout") # latest installed version > pkg_resources.require("zc.buildout==2.2.1") # this exact version > 
pkg_resources.require("zc.buildout>=2.2.1")  # this version or higher
>
> (Curiously, it installed buildout into my local site-packages instead
> of the folder which was passed to easy_install as the value of the -d
> option, /tmp/1.)
>
> After this, when I ran bootstrap again, the original problem was gone.
> Out of curiosity I downgraded setuptools to version 3.6, which I had
> originally installed, and ran bootstrap again. No problem this time. I
> have no idea what happened, as upgrading and downgrading setuptools
> should not have any effect...
>
> Thank you for help.

This is why I *always* use a clean python built from source.
I recommend people do the same, or use a virtualenv.

Anyway, I'm glad you got past the problem.

Jim

--
Jim Fulton
http://www.linkedin.com/in/jimfulton

From jim at zope.com Mon Sep 8 17:39:03 2014
From: jim at zope.com (Jim Fulton)
Date: Mon, 8 Sep 2014 11:39:03 -0400
Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?=
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 8, 2014 at 11:31 AM, Piotr Dobrogost

wrote:
> On Mon, Sep 8, 2014 at 3:46 PM, Jim Fulton wrote:
>>
>> This is why I *always* use a clean python built from source.
>> I recommend people do the same, or use a virtualenv.
>
> Well, I read http://www.buildout.org/en/latest/docs/tutorial.html but
> nowhere was virtualenv mentioned. I guess this might be due to the fact
> that virtualenv didn't even exist in 2007 or was not popular? :) I
> somehow had the impression that one of the main features of buildout
> has always been isolation from the system Python, thus I did not even
> think about running it inside virtualenv. However, looking now for
> "virtualenv" at https://pypi.python.org/pypi/zc.buildout/2.2.1 brought up
> the following bullet from version 2.0.0:
>
> "Buildout no-longer tries to provide full or partial isolation from
> system Python installations. If you want isolation, use buildout with
> virtualenv, or use a clean build of Python to begin with."
>
> which is quite clear.
>
> Btw, what's the reason the documentation at
> http://www.buildout.org/en/latest/docs/index.html is for the ancient
> version 1.2.1?

Whimper. Sadly, that's the most recent version independent of the doctests.

Jim

--
Jim Fulton
http://www.linkedin.com/in/jimfulton

From p at 2014.dobrogost.net Mon Sep 8 17:31:05 2014
From: p at 2014.dobrogost.net (Piotr Dobrogost)
Date: Mon, 8 Sep 2014 17:31:05 +0200
Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?=
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 8, 2014 at 3:46 PM, Jim Fulton wrote:
>
> This is why I *always* use a clean python built from source.
> I recommend people do the same, or use a virtualenv.

Well, I read http://www.buildout.org/en/latest/docs/tutorial.html but nowhere was virtualenv mentioned.
I guess this might be due to the fact that virtualenv didn't even exist in 2007 or was not popular? :) I somehow had the impression that one of the main features of buildout has always been isolation from the system Python, thus I did not even think about running it inside virtualenv. However, looking now for "virtualenv" at https://pypi.python.org/pypi/zc.buildout/2.2.1 brought up the following bullet from version 2.0.0:

"Buildout no-longer tries to provide full or partial isolation from system Python installations. If you want isolation, use buildout with virtualenv, or use a clean build of Python to begin with."

which is quite clear.

Btw, what's the reason the documentation at http://www.buildout.org/en/latest/docs/index.html is for the ancient version 1.2.1?

Regards,
Piotr

From p at lists-2014.dobrogost.net Mon Sep 8 18:25:15 2014
From: p at lists-2014.dobrogost.net (Piotr Dobrogost)
Date: Mon, 8 Sep 2014 18:25:15 +0200
Subject: [Distutils] =?utf-8?q?Download_error_on_=28=E2=80=A6=29_hostname_?= =?utf-8?q?=3Cproxy=3E_doesn=27t_match_either_of_=27*=2Ec=2Essl=2Ef?= =?utf-8?b?YXN0bHkubmV0JywgKOKApikgd2hlbiBydW5uaW5nIGJ1aWxkb3V0IGJl?= =?utf-8?q?hind_proxy?=
In-Reply-To: 
References: 
Message-ID: 

On Mon, Sep 8, 2014 at 5:39 PM, Jim Fulton wrote:
>
> Whimper. Sadly, that's the most recent version independent of the doctests.

I'm not following. Could you please explain?

Regards,
Piotr

From collinmanderson at gmail.com Tue Sep 9 17:59:27 2014
From: collinmanderson at gmail.com (Collin Anderson)
Date: Tue, 9 Sep 2014 11:59:27 -0400
Subject: [Distutils] C extension dependencies
Message-ID: 

Hi All,

pip is great for installing things, except when there are C extensions. If I run "pip install pillow" out of the box, I get a bunch of information showing up on the console, with an error in there somewhere. The problem is that I don't have the necessary header files and libraries installed, but the error message doesn't really say that.
Having run into the problem enough, I have now learned I need to go to pillow's website and read up on what I need to install.

Is there any way packages could list external dependencies? Maybe something like requires = ['Python.h', 'somethingelse.h']. I realize there's not much that pip can do about it automatically, but maybe pip could try to figure out if you have those files around and warn if it doesn't see them.

Eventually, I could imagine a distro-specific tool coming along that could more reliably translate those dependencies into system packages. Something like:

    yum whatprovides '/usr/include/*/Python.h'

But, for starters, I think somehow listing external dependencies would be helpful.

Ideally it would be great if "pip install pillow" Just Worked, though I imagine that would be pretty impossible. We would likely need to start packaging the C dependencies on PyPI, and would probably start approximating something like Gentoo Portage, which I assume is a road we don't want to go down.

Is there anything we can do to improve the situation?

Thanks,
Collin

From cmawebsite at gmail.com Tue Sep 9 23:22:57 2014
From: cmawebsite at gmail.com (Collin Anderson)
Date: Tue, 9 Sep 2014 17:22:57 -0400
Subject: [Distutils] C extension dependencies
Message-ID: 

Hi All,

pip is great for installing things, except when there are C extensions. If I run "pip install pillow" out of the box, I get a bunch of information showing up on the console, with an error in there somewhere. The problem is that I don't have the necessary header files and libraries installed, but the error message doesn't really say that. Having run into the problem enough, I have now learned I need to go to pillow's website and read up on what I need to install.

Is there any way packages could list external dependencies? Maybe something like requires = ['Python.h', 'somethingelse.h'].
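The "check for declared headers and warn" idea could be sketched in a few lines of Python. This is purely illustrative: the `requires` list, the search paths, and the `find_header` helper are all invented for the example (they are not part of pip or any actual proposal), and a real tool would need platform-specific search logic:

```python
import glob
import os

# Hypothetical declaration of external (non-Python) build dependencies,
# in the spirit of the requires = ['Python.h', 'somethingelse.h'] idea.
requires = ['Python.h', 'somethingelse.h']

# A few places where C headers commonly live; a real tool would need a
# smarter, platform-specific search path.
SEARCH_PATHS = ['/usr/include', '/usr/local/include', '/usr/include/*']

def find_header(name):
    """Return the first path where the header is found, or None."""
    for pattern in SEARCH_PATHS:
        for root in glob.glob(pattern):
            candidate = os.path.join(root, name)
            if os.path.exists(candidate):
                return candidate
    return None

# Warn (rather than fail) about anything that was declared but not found.
missing = [h for h in requires if find_header(h) is None]
for header in missing:
    print("warning: header %s not found; the build may fail" % header)
```

Even a crude check like this would turn a cryptic compiler error into an actionable warning, though, as noted in the thread, header locations vary too much across systems for it to be reliable in general.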
I realize there's not much that pip can do about it automatically, but maybe pip could try to figure out if you have those files around and warn if it doesn't see them.

Eventually, I could imagine a distro-specific tool coming along that could more reliably translate those dependencies into system packages. Something like:

    yum whatprovides '/usr/include/*/Python.h'

But, for starters, I think somehow listing external dependencies would be helpful.

Ideally it would be great if "pip install pillow" Just Worked, though I imagine that would be pretty impossible. We would likely need to start packaging the C dependencies on PyPI, and would probably start approximating something like Gentoo Portage, which I assume is a road we don't want to go down.

Is there anything we can do to improve the situation?

Thanks,
Collin

(Sorry if this is a double-email for anyone)

From p.f.moore at gmail.com Thu Sep 11 09:10:09 2014
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 11 Sep 2014 08:10:09 +0100
Subject: [Distutils] C extension dependencies
In-Reply-To: 
References: 
Message-ID: 

On 9 September 2014 16:59, Collin Anderson wrote:
>
> Is there any way packages could list external dependencies? Maybe
> something like requires = ['Python.h', 'somethingelse.h']. I realize
> there's not much that pip can do about it automatically, but maybe pip
> could try to figure out if you have those files around and warn if it
> doesn't see them.

I don't think pip could realistically check if such files exist (there are too many variations in where they might be, etc.) but it might be plausible to allow a package to declare its external dependencies somehow, maybe just as free text, and then if the package build fails, pip could display that information.

Maybe have a build_hint metadata item that pip could display if a build fails. That could be reasonably easy with Metadata 2.0, which allows for metadata extensions.

The trick is to get packages to include such a thing.
It would be a shame to add the mechanism and never have it be used.

Paul

From donald at stufft.io Thu Sep 11 09:15:01 2014
From: donald at stufft.io (Donald Stufft)
Date: Thu, 11 Sep 2014 03:15:01 -0400
Subject: [Distutils] C extension dependencies
In-Reply-To: 
References: 
Message-ID: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io>

> On Sep 11, 2014, at 3:10 AM, Paul Moore wrote:
>
> On 9 September 2014 16:59, Collin Anderson wrote:
>>
>> Is there any way packages could list external dependencies? Maybe
>> something like requires = ['Python.h', 'somethingelse.h']. I realize
>> there's not much that pip can do about it automatically, but maybe pip
>> could try to figure out if you have those files around and warn if it
>> doesn't see them.
>
> I don't think pip could realistically check if such files exist (there
> are too many variations in where they might be, etc) but it might be
> plausible to allow a package to declare its external dependencies
> somehow, maybe just as free text, and then if the package build fails,
> pip could display that information.
>
> Maybe have a build_hint metadata item that pip could display if a
> build fails. That could be reasonably easy with Metadata 2.0, which
> allows for metadata extensions.
>
> The trick is to get packages to include such a thing. It would be a
> shame to add the mechanism and never have it be used.
>
> Paul
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

I think most (all?) of the Linux distros have some method of saying "Tell me what packages provide this file". Perhaps this code wouldn't live in pip itself but it could have an extension point which people could provide pip-apt or whatever. That of course doesn't help Windows or OS X, so it might not be worth it (though I imagine someone could build up such a DB there too and just map them to .exe's or project home pages or something).
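The "what package provides this file" query does exist on the major Linux distros (`dpkg -S` on Debian/Ubuntu, `rpm -qf` on Fedora/RHEL, plus `yum`/`dnf` "whatprovides" for packages not yet installed). A hypothetical "pip-apt"-style plugin might wrap it roughly like this; note that `owning_package` is an invented name for illustration, and real code would have to handle many more package managers and output formats:

```python
import shutil
import subprocess

def owning_package(path):
    """Best-effort lookup of which system package provides a file.

    Returns the package name as a string, or None if no supported
    package manager is available or no package owns the file.
    """
    if shutil.which("dpkg"):
        # Debian/Ubuntu: "dpkg -S /usr/include/zlib.h" prints
        # "zlib1g-dev:amd64: /usr/include/zlib.h"
        cmd = ["dpkg", "-S", path]
    elif shutil.which("rpm"):
        # Fedora/RHEL: "rpm -qf /usr/include/zlib.h" prints the owning package
        cmd = ["rpm", "-qf", path]
    else:
        return None
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
    except subprocess.CalledProcessError:
        return None  # the package manager doesn't know this file
    # Keep just the package name from the first ":"-separated field.
    return out.decode().split(":")[0].strip() or None
```

A plugin like this could then turn "fatal error: zlib.h: No such file or directory" into "try installing the package that provides /usr/include/zlib.h", which is exactly the kind of platform-specific knowledge that argues for hooks rather than putting it all in core pip.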
Perhaps the gains wouldn't be worth the complexity though and it'd just be easier to allow projects to have a build hint thing that gets printed if the build fails.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From p.f.moore at gmail.com Thu Sep 11 10:37:46 2014
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 11 Sep 2014 09:37:46 +0100
Subject: [Distutils] C extension dependencies
In-Reply-To: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io>
References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io>
Message-ID: 

On 11 September 2014 08:15, Donald Stufft wrote:
> Perhaps the gains wouldn't be worth the complexity though and it'd
> just be easier to allow projects to have a build hint thing that gets
> printed if the build fails.

Putting it in core pip sounds to me like a recipe for endless system-specific hacks, TBH. Having a plugin system that allowed external packages to add (and maintain!) system-specific checks might work, but that's pretty complex.
> Paul

Yes, to be specific, the only thing I would personally be OK with adding to the pip core is something that added the appropriate hooks to let some other thing provide the platform-specific mechanisms. I'm still not sure it's worth the effort over the simpler idea of just providing a build_hint metadata that authors can use to say "Hey you need to install libxml2 for this thing" or whatever.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com Thu Sep 11 14:18:38 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 11 Sep 2014 22:18:38 +1000
Subject: [Distutils] C extension dependencies
In-Reply-To: <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io>
References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io> <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io>
Message-ID: 

On 11 September 2014 18:48, Donald Stufft wrote:
>
> On Sep 11, 2014, at 4:37 AM, Paul Moore wrote:
>
> On 11 September 2014 08:15, Donald Stufft wrote:
>
> Perhaps the gains wouldn't be worth the complexity though and it'd
> just be easier to allow projects to have a build hint thing that gets
> printed if the build fails.
>
> Putting it in core pip sounds to me like a recipe for endless
> system-specific hacks, TBH. Having a plugin system that allowed
> external packages to add (and maintain!) system-specific checks might
> work, but that's pretty complex.
>
> Paul
>
> Yes, to be specific, the only thing I would personally be OK with adding to
> the pip core is something that added the appropriate hooks to let some other
> thing provide the platform-specific mechanisms. I'm still not sure it's worth the
> effort over the simpler idea of just providing a build_hint metadata that
> authors can use to say "Hey you need to install libxml2 for this thing" or
> whatever.
It actually occurs to me that the GNU autoconf directory scheme may be useful here - if people define their dependencies in terms of that scheme, it should be possible for a plugin to figure out how to ask the OS installer for them, or otherwise check for them in a virtualenv or conda environment.

And, if no such plugin is available, the fallback option would be to just tell the user what's missing.

(FWIW, the only major barrier I see to formalising the metadata 2.0 spec at this point is the lack of up to date jsonschema files. Getting that out the door may be something to explore post pip 1.6)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From donald at stufft.io Thu Sep 11 14:20:21 2014
From: donald at stufft.io (Donald Stufft)
Date: Thu, 11 Sep 2014 08:20:21 -0400
Subject: [Distutils] C extension dependencies
In-Reply-To: 
References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io> <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io>
Message-ID: <85448E4E-7E09-4D6C-9D7C-792DC25B7827@stufft.io>

> On Sep 11, 2014, at 8:18 AM, Nick Coghlan wrote:
>
> On 11 September 2014 18:48, Donald Stufft wrote:
>>
>> On Sep 11, 2014, at 4:37 AM, Paul Moore wrote:
>>
>> On 11 September 2014 08:15, Donald Stufft wrote:
>>
>> Perhaps the gains wouldn't be worth the complexity though and it'd
>> just be easier to allow projects to have a build hint thing that gets
>> printed if the build fails.
>>
>> Putting it in core pip sounds to me like a recipe for endless
>> system-specific hacks, TBH. Having a plugin system that allowed
>> external packages to add (and maintain!) system-specific checks might
>> work, but that's pretty complex.
>>
>> Paul
>>
>> Yes, to be specific, the only thing I would personally be OK with adding to
>> the pip core is something that added the appropriate hooks to let some other
>> thing provide the platform-specific mechanisms.
>> I'm still not sure it's worth the effort over the simpler idea of just
>> providing a build_hint metadata that authors can use to say "Hey you
>> need to install libxml2 for this thing" or whatever.
>
> It actually occurs to me that the GNU autoconf directory scheme may be
> useful here - if people define their dependencies in terms of that
> scheme, it should be possible for a plugin to figure out how to ask
> the OS installer for them, or otherwise check for them in a virtualenv
> or conda environment.
>
> And, if no such plugin is available, the fallback option would be to
> just tell the user what's missing.
>
> (FWIW, the only major barrier I see to formalising the metadata 2.0
> spec at this point is the lack of up to date jsonschema files. Getting
> that out the door may be something to explore post pip 1.6)
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

I'd like to take a close look at Metadata 2.0 and see about doing some proof-of-concept implementations before we actually accept that PEP. Basically the same thing I did for PEP 440.
> I think the feedback from actually attempting to use it was invaluable.

Absolutely! As it turns out, I was wrong anyway. Checking the issue list at https://bitbucket.org/pypa/pypi-metadata-formats/issues?status=new&status=open&component=Metadata%202.x shows there is still at least the recommendation to use SPDX tags that I'd like to add to PEP 459.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From wgordonw1 at gmail.com Thu Sep 11 14:28:35 2014
From: wgordonw1 at gmail.com (gordon)
Date: Thu, 11 Sep 2014 08:28:35 -0400
Subject: [Distutils] force static linking
Message-ID: 

Hello,

I am attempting to build statically linked distributions. I am using docker containers to ensure the deployment environment matches the build environment, so there is no compatibility concern. Is there any way to force static linking so that wheels can be installed into a virtual env without requiring specific packages on the host?

Thanks,
Gordon

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bkabrda at redhat.com Thu Sep 11 15:59:40 2014
From: bkabrda at redhat.com (Bohuslav Kabrda)
Date: Thu, 11 Sep 2014 09:59:40 -0400 (EDT)
Subject: [Distutils] C extension dependencies
In-Reply-To: <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io>
References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io> <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io>
Message-ID: <1222283177.11690959.1410443980499.JavaMail.zimbra@redhat.com>

----- Original Message -----
> > On Sep 11, 2014, at 4:37 AM, Paul Moore < p.f.moore at gmail.com > wrote:
> > On 11 September 2014 08:15, Donald Stufft < donald at stufft.io > wrote:
> > > > Perhaps the gains wouldn't be worth the complexity though and it'd
> > > > just be easier to allow projects to have a build hint thing that gets
> > > > printed if the build fails.
> > > Putting it in core pip sounds to me like a recipe for endless
> > > system-specific hacks, TBH.
Having a plugin system that allowed > > > external packages to add (and maintain!) system-specific checks might > > > work, but that's pretty complex. > > > Paul > > Yes to be specific the only thing I would personally be OK with adding to the > pip core is something that added the appropriate hooks to let some other thing > provide the platform specific mechanisms. I'm still not sure it's worth the > effort over the simpler idea of just providing a build_hint metadata that > authors can use to say "Hey you need to install libxml2 for this thing" or > whatever. While working on packaging Ruby and Rubygems for Fedora, we actually used a Rubygems hook to create a plugin that did precisely this; it's called gem-nice-install [1] (we did use it for some time, but I'm not sure whether it's still being actively developed and used). We actually went a step further and implemented the *actual installation* in that plugin (sending the list of packages to install to PackageKit via dbus) and it worked really nicely. I think allowing plugins is much better than providing a build hint, because a build hint may not give you information about *what* (different distros name some packages differently), but most importantly *how* the missing packages should be installed. And IMO upstreams shouldn't care about this (they shouldn't *need* to care), this is the work of distro packagers. So I vote for plugins. Thanks, Slavek > --- > Donald Stufft [1] https://github.com/voxik/gem-nice-install -------------- next part -------------- An HTML attachment was scrubbed...
URL: From qwcode at gmail.com Thu Sep 11 21:36:27 2014 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 11 Sep 2014 12:36:27 -0700 Subject: [Distutils] C extension dependencies In-Reply-To: <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io> References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io> <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io> Message-ID: > > > Yes to be specific the only thing I would personally be OK with adding to > the > pip core is something that added the appropriate hooks to let some other > thing > provide the platform specific mechanisms. I'm still not sure it's worth the > effort over the simpler idea of just providing a build_hint metadata that > authors can use to say "Hey you need to install libxml2 for this thing" or > whatever. > I'd like to see us use PEP459 extensions to hold distro-specific dependencies. (as mentioned here: https://bitbucket.org/pypa/pypi-metadata-formats/issue/16/external-requirements ) I'm imagining a cross-distro community-maintained project that holds all the json dependency data for all of pypi, and then tooling could build up around that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Sep 11 21:40:05 2014 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 11 Sep 2014 12:40:05 -0700 Subject: [Distutils] C extension dependencies In-Reply-To: References: <4478A0E4-ECAD-4711-8F86-BCD9064BFBD4@stufft.io> <9671B2D8-BC36-4394-B6C7-B280F220E8BB@stufft.io> Message-ID: > > > I'd like to see us use PEP459 extensions to hold distro-specific > dependencies. > well, not "PEP459 extensions", just "extensions" generally, since PEP459 is about certain specific extensions -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Sep 11 21:51:04 2014 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 11 Sep 2014 12:51:04 -0700 Subject: [Distutils] Metadata extension discovery?
Message-ID: Is the assumption for extensions in PEP426 that they would be added to the one and only structure (in pydist.json) at the time of building the archive? i.e. something that the project author adds. The reason I ask is related to my comment in the other thread occurring right now about how to handle external dependencies. If the idea of using extensions for external dependencies is going to work (and not require the project authors to solely maintain it), then PEP426 would need to support some system of discovery that can layer on more metadata at the time of install that is not present in the archive. Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Sep 12 08:24:22 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Sep 2014 18:24:22 +1200 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: On 12 September 2014 07:51, Marcus Smith wrote: > Is the assumption for extensions in PEP426 that they would be added to the > one and only structure (in pydist.json) at the time of building the archive? > i.e. something that the project author adds. > > The reason I ask is related to my comment in the other thread occurring right > now about how to handle external dependencies. If the idea of using > extensions for external dependencies is going to work (and not require the > project authors to solely maintain it), then PEP426 would need to support > some system of discovery that can layer on more metadata at the time of > install that is not present in the archive. This is actually an open question. One possible way to go would be to add a "pydist.d" directory to wheels and the installed metadata, where packages can drop arbitrary additional extension files (where the file is called "name.of.extension.json").
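To make the shape of that "pydist.d" idea concrete, here is a minimal sketch of how a metadata reader might overlay such fragments onto pydist.json, assuming the layout suggested above (one "name.of.extension.json" file per extension, in a "pydist.d" directory next to pydist.json). The function name and merge semantics are illustrative assumptions only; no tool actually implements this.

```python
import json
from pathlib import Path

def load_metadata(dist_info):
    """Load pydist.json, then overlay extension fragments from pydist.d/.

    The layout is hypothetical: it assumes the "pydist.d" scheme sketched
    in the thread, with one "name.of.extension.json" file per extension
    sitting alongside the main metadata file.
    """
    dist_info = Path(dist_info)
    metadata = json.loads((dist_info / "pydist.json").read_text())
    extensions = metadata.setdefault("extensions", {})
    fragment_dir = dist_info / "pydist.d"
    if fragment_dir.is_dir():
        for fragment in sorted(fragment_dir.glob("*.json")):
            # The file name minus ".json" is the extension name,
            # e.g. "python.integrator.json" -> "python.integrator".
            ext_name = fragment.name[: -len(".json")]
            extensions[ext_name] = json.loads(fragment.read_text())
    return metadata
```

Under this scheme a distro tool could drop a fragment into an installed distribution's metadata directory without having to rewrite (and re-order) the JSON that the build tool produced.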
A lot of Linux tools with plugin systems or otherwise extensible configuration have switched to that model, since adding files to a directory is much easier than editing an existing config file in a way that can be cleanly reverted. Regards, Nick. P.S. OK, I take back my earlier comment about PEP 426 being almost ready to go :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Fri Sep 12 13:59:21 2014 From: dholth at gmail.com (Daniel Holth) Date: Fri, 12 Sep 2014 07:59:21 -0400 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: I didn't catch what this kind of extension was for. Would you generate a wheel, add extensions to it with a tool, and then install it afterwards? Why would this be any easier than appending to the metadata file? Would anyone understand it? I would prefer that the .dist-info directory stay subdirectory-free. On Fri, Sep 12, 2014 at 2:24 AM, Nick Coghlan wrote: > On 12 September 2014 07:51, Marcus Smith wrote: >> Is the assumption for extensions in PEP426 that they would be added to the >> one an only structure (in pydist.json) at the time of building the archive? >> i.e. something that the project author adds. >> >> The reason I ask is related to my comment in the other thread occuring right >> now about how to handle external dependencies. If the idea of using >> extensions for external dependencies is going to work (and not require the >> project authors to solely maintain it), then PEP426 would need to support >> some system of discovery that can layer on more metadata at the time of >> install that is not present in the archive. > > This is actually an open question. One possible way to go would be to > add a "pydist.d" directory to wheels and the installed metadata, where > packages can drop arbitrary additional extension files (where the file > is called "name.of.extension.json"). 
> > A lot of Linux tools with plugin systems or otherwise extensible > configuration have switched to that model, since adding files to a > directory is much easier than editing an existing config file in a way > that can be cleanly reverted. > > Regards, > Nick. > > P.S. OK, I take back my earlier comment about PEP 426 being almost > ready to go :) > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Fri Sep 12 14:20:55 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 12 Sep 2014 13:20:55 +0100 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: On 12 September 2014 12:59, Daniel Holth wrote: > I didn't catch what this kind of extension was for. Would you generate > a wheel, add extensions to it with a tool, and then install it > afterwards? Why would this be any easier than appending to the > metadata file? Would anyone understand it? I would prefer that the > .dist-info directory stay subdirectory-free. Yes, it sounds like things are getting complex here and I'm not sure I follow why. At the moment, the metadata for a distribution is generated when setup.py is run, and is stored in the wheel and in the installed dist-info directory when the distribution is installed. The proposal here seems to be that something *else* could add metadata to a distribution, from outside of that distribution. I don't honestly see how that would work, and regardless I'd prefer it if we got the metadata standard agreed and out of the door on the basis that the project defines its own metadata all at once, before we start adding features. I can see that being able to have "someone else" manage extended metadata might be useful. But many things *might* be useful, and I'd rather we got PEP 426 out of the door. 
I want to see pip support Metadata 2.0, projects start to be able to define metadata in that format, and support thrashed out for things like postinstall scripts that are needed right now, before we look for another set of problems to solve... ;-) Paul From theller at ctypes.org Fri Sep 12 14:38:52 2014 From: theller at ctypes.org (Thomas Heller) Date: Fri, 12 Sep 2014 14:38:52 +0200 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules Message-ID: While trying to add support for PEP420 implicit namespace packages to py2exe's modulefinder, I noticed something strange. I'm using the wheezy.template and zope.interface packages for my tests, they are installed with pip. pip creates wheezy.template-0.1.155-py3.4-nspkg.pth and zope.interface-4.1.1-py3.4-nspkg.pth files that are not present in the package sources. The contents are like this: """ import sys,types,os; p = os.path.join(sys._getframe(1).f_locals['sitedir'], *('wheezy',)); ie = os.path.exists(os.path.join(p,'__init__.py')); m = not ie and sys.modules.setdefault('wheezy',types.ModuleType('wheezy')); mp = (m or []) and m.__dict__.setdefault('__path__',[]); (p not in mp) and mp.append(p) """ What is the purpose of these files? (The packages seem to work also when I delete these files; which is what I expected from implicit namespace packages...) These .pth-files have the effect that they insert the modules 'wheezy' and 'zope' into sys.modules, even when I don't import anything from the packages. However, these modules are IMO damaged, they only have __doc__, __name__, and __path__ attributes (plus __loader__ and __spec__ in Python 3.4, both set to None).
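The "damaged" stub modules described above have a recognisable attribute shape, so they can be hunted down programmatically. The heuristic below is only a sketch (a bare types.ModuleType stub carries a __path__ but no __file__ and no usable __loader__); it is not an official marker of nspkg.pth involvement:

```python
import sys
import types

def find_pth_stub_modules():
    """Return names of modules that look like setuptools-nspkg .pth stubs.

    Heuristic (an assumption, not an official marker): the stub created by
    types.ModuleType() has a __path__, but no __file__ at all and no usable
    __loader__ -- exactly the shape described above. PEP 420 namespace
    packages may also match on older Pythons, so treat hits as candidates.
    """
    stubs = []
    for name, mod in list(sys.modules.items()):
        if not isinstance(mod, types.ModuleType):
            continue
        has_path = hasattr(mod, "__path__")
        no_file = getattr(mod, "__file__", None) is None
        no_loader = getattr(mod, "__loader__", None) is None
        if has_path and no_file and no_loader:
            stubs.append(name)
    return stubs
```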
Reloading the package in Python 3.3 raises an error, and importlib.find_loader("zope") also: >>> sys.modules["zope"] >>> import zope >>> zope >>> >>> imp.reload(zope) Traceback (most recent call last): File "", line 1, in File "C:\Python33\lib\imp.py", line 276, in reload module.__loader__.load_module(name) AttributeError: 'module' object has no attribute '__loader__' >>> >>> importlib.find_loader("zope", None) Traceback (most recent call last): File "", line 1, in File "C:\Python33\lib\importlib\__init__.py", line 64, in find_loader loader = sys.modules[name].__loader__ AttributeError: 'module' object has no attribute '__loader__' >>> Trying the same in Python 3.4, behaviour is quite similar although imp.reload() 'fixes' that package: >>> import sys >>> sys.modules["zope"] >>> dir(sys.modules["zope"]) ['__doc__', '__loader__', '__name__', '__package__', '__path__', '__spec__'] >>> sys.modules["zope"].__spec__ >>> >>> import zope >>> zope >>> import importlib.util >>> importlib.util.find_spec("zope", None) Traceback (most recent call last): File "", line 1, in File "C:\Python34\lib\importlib\util.py", line 100, in find_spec raise ValueError('{}.__spec__ is None'.format(name)) ValueError: zope.__spec__ is None >>> import imp >>> imp.reload(zope) >>> importlib.util.find_spec("zope", None) ModuleSpec(name='zope', loader=None, origin='namespace', submodule_search_locations=_NamespacePath(['C:\\Python34\\lib\\site-packages\\zope'])) >>> Thanks for any insights, Thomas From ncoghlan at gmail.com Fri Sep 12 17:25:51 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Sep 2014 03:25:51 +1200 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: On 13 Sep 2014 00:20, "Paul Moore" wrote: > > Yes, it sounds like things are getting complex here and I'm not sure I > follow why. 
At the moment, the metadata for a distribution is > generated when setup.py is run, and is stored in the wheel and in the > installed dist-info directory when the distribution is installed. > > The proposal here seems to be that something *else* could add metadata > to a distribution, from outside of that distribution. Correct - for example, a build tool might need to record additional compatibility constraints in a built wheel file. Software distribution is a pipeline, rather than a "once and done" thing, so we need to keep that in mind as we design the metadata. There's at least four phases: - pre-archiving metadata (not currently specified, setup.py/setuptools as the de facto standard for the time being) - archive metadata (metadata 2.0) - build metadata (wheel 2.0?) - install metadata (install DB 2.0?) Redistributors also need a way to inject our metadata additions (like the proposed "python.integrator" extension in PEP 459, which I may rename to "python.redistributor") The unordered-by-default nature of JSON makes it a difficult format to reliably patch (even in "append only" mode), so providing orthogonal files becomes a much easier option. Now, if we're going to have orthogonal files, does it make more sense to organise them by phase of distribution, or by the extension name? I suspect organising by phase does make more sense, but see enough merit in organising by extension to at least consider the idea. Cheers, Nick. P.S. Note that Daniel's old idea of a persistent on disk SQLite metadata cache, as well as updating the import hook system to abstract away the publication of packaging metadata are also both still open design questions. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From erik.m.bray at gmail.com Fri Sep 12 19:15:16 2014 From: erik.m.bray at gmail.com (Erik Bray) Date: Fri, 12 Sep 2014 13:15:16 -0400 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: References: Message-ID: On Fri, Sep 12, 2014 at 8:38 AM, Thomas Heller wrote: > While trying to add support for PEP420 implicit namespace packages > support to py2exe's modulefinder, I noticed something strange. > > I'm using the wheezy.template and zope.interface packages for my tests, > they are installed with pip. > > pip creates wheezy.template-0.1.155-py3.4-nspkg.pth and > zope.interface-4.1.1-py3.4-nspkg.pth files that are not present in the > package sources. The contents is like this: > > """ > import sys,types,os; p = os.path.join(sys._getframe(1).f_locals['sitedir'], > *('wheezy',)); ie = os.path.exists(os.path.join(p,'__init__.py')); m = not > ie and sys.modules.setdefault('wheezy',types.ModuleType('wheezy')); mp = (m > or []) and m.__dict__.setdefault('__path__',[]); (p not in mp) and > mp.append(p) > """ > > What is the purpose of these files? (The packages seem to work also > when I delete these files; which is what I expected from implicit > namespace packages...) > > > These .pth-files have the effect that they insert the modules 'wheezy' > and 'zope' into sys.modules, even when I don't import anything > from the packages. However, these modules are IMO damaged, they > only have __doc__, __name__, and __path__ attributes (plus __loader__ > and __spec__ in Python 3.4, both set to None). 
> > Reloading the package in Python 3.3 raises an error, and > importlib.find_loader("zope") also: > >>>> sys.modules["zope"] > >>>> import zope >>>> zope > >>>> >>>> imp.reload(zope) > Traceback (most recent call last): > File "", line 1, in > File "C:\Python33\lib\imp.py", line 276, in reload > module.__loader__.load_module(name) > AttributeError: 'module' object has no attribute '__loader__' >>>> >>>> importlib.find_loader("zope", None) > Traceback (most recent call last): > File "", line 1, in > File "C:\Python33\lib\importlib\__init__.py", line 64, in find_loader > loader = sys.modules[name].__loader__ > AttributeError: 'module' object has no attribute '__loader__' >>>> > > Trying the same in Python 3.4, behaviour is quite similar although > imp.reload() 'fixes' that package: > >>>> import sys >>>> sys.modules["zope"] > >>>> dir(sys.modules["zope"]) > ['__doc__', '__loader__', '__name__', '__package__', '__path__', '__spec__'] >>>> sys.modules["zope"].__spec__ >>>> >>>> import zope >>>> zope > >>>> import importlib.util >>>> importlib.util.find_spec("zope", None) > Traceback (most recent call last): > File "", line 1, in > File "C:\Python34\lib\importlib\util.py", line 100, in find_spec > raise ValueError('{}.__spec__ is None'.format(name)) > ValueError: zope.__spec__ is None >>>> import imp >>>> imp.reload(zope) > >>>> importlib.util.find_spec("zope", None) > ModuleSpec(name='zope', loader=None, origin='namespace', > submodule_search_locations=_NamespacePath(['C:\\Python34\\lib\\site-packages\\zope'])) >>>> Hi Thomas, I've dealt with this issue myself extensively in the past, but it's been a while so I might have to give it a bit of thought before I comment more on detail (if no one else does first). But I don't think (unless there's something new I don't know about) those come directly from pip. Rather, they are created by setuptools. 
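For readers squinting at the one-liner quoted above: unrolled into ordinary statements, the nspkg.pth line does roughly the following. The one liberty taken here is that sitedir, which the real .pth line fishes out of site.addpackage()'s frame locals via sys._getframe(1), is passed in as an explicit argument:

```python
import os
import sys
import types

def fake_namespace_package(sitedir, name="wheezy"):
    """Reproduce what a setuptools *-nspkg.pth line does for one package.

    In the real .pth file, ``sitedir`` comes from the frame locals of the
    site module code that processes .pth files (via sys._getframe(1));
    here it is an explicit argument so the logic is testable in isolation.
    """
    p = os.path.join(sitedir, name)
    # Only act when there is no real package (no __init__.py) on disk.
    has_init = os.path.exists(os.path.join(p, "__init__.py"))
    if not has_init:
        # Insert a bare module object -- this is the stub with only
        # __name__/__doc__/__path__ that the thread complains about.
        m = sys.modules.setdefault(name, types.ModuleType(name))
        mp = m.__dict__.setdefault("__path__", [])
        if p not in mp:
            mp.append(p)
```

The net effect is exactly what Thomas observed: a bare module object lands in sys.modules whose only interesting attribute is __path__, with no loader or spec behind it.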
The issue here is that before Python had any built-in notion of namespace packages, namespace packages were a feature in setuptools, and this was the hack, I suppose, that allowed it to work. Unfortunately as different versions of Python have grown different levels of support for namespace packages over the years, the setuptools hack is still the implementation of the concept that will work on the broadest range of Python versions (and I definitely know zope.interface has been using it going far back). That said, I've been meaning to try to figure out some way of supporting all three forms of namespace packages in a way that is non-interfering. As I said, I've had problems with this myself :/ Erik From theller at ctypes.org Fri Sep 12 21:14:59 2014 From: theller at ctypes.org (Thomas Heller) Date: Fri, 12 Sep 2014 21:14:59 +0200 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: References: Message-ID: Am 12.09.2014 19:15, schrieb Erik Bray: > On Fri, Sep 12, 2014 at 8:38 AM, Thomas Heller wrote: [snip about strange .pth files installed for namespace packages] > > Hi Thomas, > > I've dealt with this issue myself extensively in the past, but it's > been a while so I might have to give it a bit of thought before I > comment more on detail (if no one else does first). > > But I don't think (unless there's something new I don't know about) > those come directly from pip. Rather, they are created by setuptools. Thanks for the answer, Erik. I've lost understanding what is done by setuptools, distutils, wheels, pip, and whatever other software is working when building distributions or installing them. > The issue here is that before Python had any built-in notion of > namespace packages, namespace packages were a feature in setuptools, > and this was the hack, I suppose, that allowed it to work.
> Unfortunately as different versions of Python have grown different > levels of support for namespace packages over the years, the > setuptools hack is still the implementation of the concept that will > work on the broadest range of Python versions (and I definitely know > zope.interface has been using it going far back). So it seems that it is a bug in setuptools: It must not create or install these pth files when installing in Python 3.3 or newer (which implement PEP 420). > That said, I've been meaning to try to figure out some way of > supporting all three forms of namespace packages in a way that is > interfering. As I said, I've had problems with this myself :/ Thomas From eric at trueblade.com Fri Sep 12 21:24:13 2014 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 12 Sep 2014 15:24:13 -0400 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: References: Message-ID: <5413485D.40207@trueblade.com> [Oops, replying to the list this time. Sorry for the dupe, Thomas.] On 9/12/2014 3:14 PM, Thomas Heller wrote: > So it seems that it is a bug in setuptools: It must not create or > install these pth files when installing in Python 3.3 or newer (which > implement PEP 420). PEP 420 goes out of its way to support pkgutil.extend_path(): http://legacy.python.org/dev/peps/pep-0420/#migrating-from-legacy-namespace-packages So it should be possible for some cross-version code to work. -- Eric. From anthony at xtfx.me Fri Sep 12 20:55:56 2014 From: anthony at xtfx.me (C Anthony Risinger) Date: Fri, 12 Sep 2014 13:55:56 -0500 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: On Fri, Sep 12, 2014 at 10:25 AM, Nick Coghlan wrote: > > On 13 Sep 2014 00:20, "Paul Moore" wrote: > > > > > Yes, it sounds like things are getting complex here and I'm not sure I > > follow why. 
At the moment, the metadata for a distribution is > > generated when setup.py is run, and is stored in the wheel and in the > > installed dist-info directory when the distribution is installed. > > > > The proposal here seems to be that something *else* could add metadata > > to a distribution, from outside of that distribution. > > Correct - for example, a build tool might need to record additional > compatibility constraints in a built wheel file. > > Software distribution is a pipeline, rather than a "once and done" thing, > so we need to keep that in mind as we design the metadata. > > There's at least four phases: > > - pre-archiving metadata (not currently specified, setup.py/setuptools as > the de facto standard for the time being) > - archive metadata (metadata 2.0) > - build metadata (wheel 2.0?) > - install metadata (install DB 2.0?) > > Redistributors also need a way to inject our metadata additions (like the > proposed "python.integrator" extension in PEP 459, which I may rename to > "python.redistributor") > > The unordered-by-default nature of JSON makes it a difficult format to > reliably patch (even in "append only" mode), so providing orthogonal files > becomes a much easier option. > > Now, if we're going to have orthogonal files, does it make more sense to > organise them by phase of distribution, or by the extension name? I suspect > organising by phase does make more sense, but see enough merit in > organising by extension to at least consider the idea. > > Cheers, > Nick. > > P.S. Note that Daniel's old idea of a persistent on disk SQLite metadata > cache, as well as updating the import hook system to abstract away the > publication of packaging metadata are also both still open design questions. > I'm glad this came up because I had been wondering for some time how this would be handled. 
While I have a couple other use cases related to a special installer-generator we use (opting-in to __future__'s, among others) I am specifically curious how pydist.json will handle 'namespace packages'... e.g. I create package `foo.bar`, and from a packaging perspective it's an independent thing, but functionally, it is an extension of `foo`, and *parts* of its metadata may very well apply to the `foo` package as a whole. This wasn't meant to be an [ANN], but my tool: https://github.com/xtfxme/zippy ...uses metadata 2.0 exclusively and is capable of: - compiling *any* python C-extension statically (numpy, whatever) - extends `-m` to support `-m package.module:callable` (NICE) - allows forwarding argv[0] != 'python*' to python.zip/bin/{argv[0]} - (many others) ...not the best examples, but for such things I'd like to allow an opt-in of sorts by the system integrator (note, not related to packager), so they can choose which features to apply to which packages at build-time... the current way I am considering is via `pydist.d` because, as Nick stated, it is familiar and makes sense to most people. Couple questions: - would the extension drop `.d` fragments into its own dist-info, or the parent package? - if the latter, how does this work in the face of `provides` and `replaces` and such? Namespace packages I think are the clearest concept for why this feature is needed. -- C Anthony -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Fri Sep 12 21:34:37 2014 From: theller at ctypes.org (Thomas Heller) Date: Fri, 12 Sep 2014 21:34:37 +0200 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: <5413485D.40207@trueblade.com> References: <5413485D.40207@trueblade.com> Message-ID: Am 12.09.2014 21:24, schrieb Eric V. Smith: > [Oops, replying to the list this time. Sorry for the dupe, Thomas.]
> > On 9/12/2014 3:14 PM, Thomas Heller wrote: >> So it seems that it is a bug in setuptools: It must not create or >> install these pth files when installing in Python 3.3 or newer (which >> implement PEP 420). > > PEP 420 goes out of its way to support pkgutil.extend_path(): > http://legacy.python.org/dev/peps/pep-0420/#migrating-from-legacy-namespace-packages > > So it should be possible for some cross-version code to work. My point is that the source code for zope.interface (for example) has the magic code in the zope/__init__.py file, right beneath the zope/interface/ directory: """ try: import pkg_resources pkg_resources.declare_namespace(__name__) except ImportError: import pkgutil __path__ = pkgutil.extend_path(__path__, __name__) """ IMO this is to support the legacy way. See: https://github.com/zopefoundation/zope.interface/blob/master/src/zope/__init__.py However, when installing it in Python 3.4, this __init__.py file is NOT installed. Instead, the 'zope' directory ONLY contains the 'interface' subdirectory with all the code. But a .pth file is installed that does the strange things that I mentioned in the original post. Thomas From fred at fdrake.net Fri Sep 12 21:44:12 2014 From: fred at fdrake.net (Fred Drake) Date: Fri, 12 Sep 2014 15:44:12 -0400 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: References: <5413485D.40207@trueblade.com> Message-ID: On Fri, Sep 12, 2014 at 3:34 PM, Thomas Heller wrote: > However, when installing it in Python 3.4, this __init__.py file is NOT > installed. Instead, the 'zope' directory ONLY contains the 'interface' > subdirectory with all the code. But a .pth file is installed that does > the strange things that I mentioned in the original post. The horrible .pth file in question definitely comes from setuptools. 
At Zope Corp., we consistently use zc.buildout to package applications, which drives setuptools with some long option whose name I forget that causes the .pth files to be suppressed, and entry-point-based scripts are generated with the right bits to assemble the sys.path needed for the application. For what it's worth. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From qwcode at gmail.com Fri Sep 12 21:53:14 2014 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 12 Sep 2014 12:53:14 -0700 Subject: [Distutils] Metadata extension discovery? In-Reply-To: References: Message-ID: > > > > The proposal here seems to be that something *else* could add metadata > > to a distribution, from outside of that distribution. > My specific interest is to use PEP426 extensions as the standard for defining OS package dependencies for PyPI projects. Having every PyPI project author handle all this information across all the Distros and OSs would never work I think. So the idea is for these extensions to live externally somewhere (maybe one big communal project, or in per-distro projects). And if they are external, the question becomes how and when they get used and combined with the metadata from the distribution. -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.m.bray at gmail.com Fri Sep 12 21:55:20 2014 From: erik.m.bray at gmail.com (Erik Bray) Date: Fri, 12 Sep 2014 15:55:20 -0400 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: <5413485D.40207@trueblade.com> References: <5413485D.40207@trueblade.com> Message-ID: On Fri, Sep 12, 2014 at 3:24 PM, Eric V. Smith wrote: > [Oops, replying to the list this time. Sorry for the dupe, Thomas.] 
> > On 9/12/2014 3:14 PM, Thomas Heller wrote: >> So it seems that it is a bug in setuptools: It must not create or >> install these pth files when installing in Python 3.3 or newer (which >> implement PEP 420). > > PEP 420 goes out of its way to support pkgutil.extend_path(): > http://legacy.python.org/dev/peps/pep-0420/#migrating-from-legacy-namespace-packages > > So it should be possible for some cross-version code to work. The pkgutil.extend_path() way can be made to work with PEP 420 reasonably well, but *not* with the older setuptools approach. Unfortunately there are some dark corners of setuptools I've encountered where namespace packages don't work properly during installation *unless* they were installed in the old-fashioned setuptools way. I'll have to see if I can dig up what those cases are, because they should be fixed. Erik From pje at telecommunity.com Sun Sep 14 18:52:07 2014 From: pje at telecommunity.com (PJ Eby) Date: Sun, 14 Sep 2014 12:52:07 -0400 Subject: [Distutils] Installing namespace packages with pip inserts strange modules into sys.modules In-Reply-To: References: <5413485D.40207@trueblade.com> Message-ID: On Fri, Sep 12, 2014 at 3:55 PM, Erik Bray wrote: > Unfortunately there are some dark corners of setuptools I've > encountered where namespace packages don't work properly during > installation *unless* they were installed in the old-fashioned > setuptools way. I'll have to see if I can dig up what those cases > are, because they should be fixed. I don't know if this is what you had in mind, but, the main problem I know of with changing setuptools to not install the .pth files, is that if you already have any __init__.py's installed (and possibly, any namespace .pth files to go with them), then the newly-installed package isn't going to work. PEP 420 only takes effect if there are no existing __init__.py files for the namespace. 
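That interaction is easy to check for in a given environment: PEP 420 only kicks in for a namespace when no directory on the search path supplies an __init__.py for it. A small helper (a sketch, not part of any packaging tool) that lists such blockers:

```python
import os
import sys

def legacy_namespace_blockers(namespace, paths=None):
    """Find __init__.py files that keep PEP 420 from treating ``namespace``
    as an implicit namespace package.

    This just automates the point above: any pre-existing
    <dir>/<namespace>/__init__.py on the search path wins over the
    implicit-namespace machinery, so the newly installed __init__.py-less
    portion of the namespace would not be found.
    """
    blockers = []
    for entry in (paths if paths is not None else sys.path):
        init = os.path.join(entry, namespace, "__init__.py")
        if os.path.isfile(init):
            blockers.append(init)
    return blockers
```

A non-empty result for, say, legacy_namespace_blockers("zope") would explain why a freshly installed PEP 420-style portion of that namespace still needs the old setuptools machinery on that system.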
In other words, it's not just a matter of changing how things are installed, it's also a matter of upgrading existing environments with things installed by older installation tools. From holger at merlinux.eu Sun Sep 14 19:58:32 2014 From: holger at merlinux.eu (holger krekel) Date: Sun, 14 Sep 2014 17:58:32 +0000 Subject: [Distutils] how can projects with no files have downloads? Message-ID: <20140914175832.GX28217@merlinux.eu> Hi Donald, all, I sometimes have doubts that the download numbers as shown by pypi.python.org are correct. Here is one case where I am pretty sure something is wrong: https://pypi.python.org/pypi/pytes That's a project a friend uploaded after he heard me saying at my devpi talk at EP2014 that I sometimes type "pip install pytes" :) It never had files AFAIK but has "18 downloads". How come? holger From donald at stufft.io Sun Sep 14 22:03:51 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Sep 2014 16:03:51 -0400 Subject: [Distutils] how can projects with no files have downloads? In-Reply-To: <20140914175832.GX28217@merlinux.eu> References: <20140914175832.GX28217@merlinux.eu> Message-ID: <7F8E1882-43BE-490E-B114-CF2A3D47A47B@stufft.io> > On Sep 14, 2014, at 1:58 PM, holger krekel wrote: > > Hi Donald, all, > > I sometimes have doubts that the download numbers as shown by > pypi.python.org are correct. Here is one case where I am pretty sure > something is wrong: > > https://pypi.python.org/pypi/pytes > > That's a project a friend uploaded after he heard me saying at my devpi > talk at EP2014 that I sometimes type "pip install pytes" :) It never had > files AFAIK but has "18 downloads". How come? > > holger > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig I dunno! I don't see any reason why that would be the case, it's possible that there is a bug in the rsyslog-cdn.py script. I'm not sure.
I'm planning on rewriting that soon anyways to be tons better. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Sun Sep 14 22:06:30 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 14 Sep 2014 13:06:30 -0700 Subject: [Distutils] how can projects with no files have downloads? In-Reply-To: <7F8E1882-43BE-490E-B114-CF2A3D47A47B@stufft.io> References: <20140914175832.GX28217@merlinux.eu> <7F8E1882-43BE-490E-B114-CF2A3D47A47B@stufft.io> Message-ID: On Sun, Sep 14, 2014 at 1:03 PM, Donald Stufft wrote: > > On Sep 14, 2014, at 1:58 PM, holger krekel wrote: > > Hi Donald, all, > > I sometimes have doubts that the download numbers as shown by > pypi.python.org are correct. Here is one case where i am pretty sure > something is wrong: > > https://pypi.python.org/pypi/pytes > > That's a project a friend uploaded after he heard me saying at my devpi > talk at EP2014 that i sometimes type "pip install pytes" :) It never had > files AFAIK but has "18 downloads". How come? > > holger > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > I dunno! > > I don't see any reason why that would be the case, it's possible that there > is a bug in the rsyslog-cdn.py script. I'm not sure. I'm planning on > rewriting > that soon anyways to be tons better. Out of curiosity, where in source control is that script located? --Chris From donald at stufft.io Sun Sep 14 22:08:25 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Sep 2014 16:08:25 -0400 Subject: [Distutils] how can projects with no files have downloads?
In-Reply-To: References: <20140914175832.GX28217@merlinux.eu> <7F8E1882-43BE-490E-B114-CF2A3D47A47B@stufft.io> Message-ID: <43D874F0-4CC3-44F5-AD14-CA0364516F1A@stufft.io> > On Sep 14, 2014, at 4:06 PM, Chris Jerdonek wrote: > > On Sun, Sep 14, 2014 at 1:03 PM, Donald Stufft > wrote: >> >> On Sep 14, 2014, at 1:58 PM, holger krekel wrote: >> >> Hi Donald, all, >> >> I sometimes have doubts that the download numbers as shown by >> pypi.python.org are correct. Here is one case where i am pretty sure >> something is wrong: >> >> https://pypi.python.org/pypi/pytes >> >> That's a project a friend uploaded after he heard me saying at my devpi >> talk at EP2014 that i sometimes type "pip install pytes" :) It never had >> files AFAIK but has "18 downloads". How come? >> >> holger >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> I dunno! >> >> I don't see any reason why that would be the case, it's possible that there >> is a bug in the rsyslog-cdn.py script. I'm not sure. I'm planning on >> rewriting >> that soon anyways to be tons better. > > Out of curiosity, where in source control is that script located? > > --Chris https://bitbucket.org/pypa/pypi/src/91c9ddbdf4222d982a01dd09f29ab9e184422e71/tools/rsyslog-cdn.py?at=default That processes incoming logs from Fastly and puts them into redis. The download rollups (last day, week, month) are pulled directly from Fastly. The total download counts come from that and are processed once an hour by: https://bitbucket.org/pypa/pypi/src/91c9ddbdf4222d982a01dd09f29ab9e184422e71/tools/integrate-redis-stats.py?at=default --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From schettino72 at gmail.com Mon Sep 15 05:05:31 2014 From: schettino72 at gmail.com (Eduardo Schettino) Date: Mon, 15 Sep 2014 11:05:31 +0800 Subject: [Distutils] how can projects with no files have downloads? In-Reply-To: <20140914175832.GX28217@merlinux.eu> References: <20140914175832.GX28217@merlinux.eu> Message-ID: On Mon, Sep 15, 2014 at 1:58 AM, holger krekel wrote: > Hi Donald, all, > > I sometimes have doubts that the download numbers as shown by > pypi.python.org are correct. Here is one case where i am pretty sure > something is wrong: > > https://pypi.python.org/pypi/pytes > > That's a project a friend uploaded after he heard me saying at my devpi > talk at EP2014 that i sometimes type "pip install pytes" :) It never had > files AFAIK but has "18 downloads". How come? > > Just guessing ... It counts download REQUESTS, not successful downloads. Whether a download is completed or not (or even whether a download file exists) doesn't matter. cheers -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Mon Sep 15 14:40:45 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 15 Sep 2014 12:40:45 +0000 Subject: [Distutils] how can projects with no files have downloads? In-Reply-To: References: <20140914175832.GX28217@merlinux.eu> Message-ID: <20140915124045.GC28217@merlinux.eu> On Mon, Sep 15, 2014 at 11:05 +0800, Eduardo Schettino wrote: > > > Hi Donald, all, > > > > I sometimes have doubts that the download numbers as shown by > > pypi.python.org are correct. Here is one case where i am pretty sure > > something is wrong: > > > > https://pypi.python.org/pypi/pytes > > > > That's a project a friend uploaded after he heard me saying at my devpi > > talk at EP2014 that i sometimes type "pip install pytes" :) It never had > > files AFAIK but has "18 downloads". How come? > > > > > Just guessing ... > > It counts download REQUESTS, not successful downloads.
> Whether a download is completed or not (or even whether a download file exists) > doesn't matter. very good guess i guess! :) Just did "pip install pytes" again, let's see if it increases in an hour or so. best, holger > cheers > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Tue Sep 16 12:59:40 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Sep 2014 11:59:40 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? Message-ID: I feel as though I must be missing something obvious, but is there an actual specification of the syntax and semantics of a requirement anywhere? I've scanned through the PEP, and while there is a spec for the environment marker mini-language, there isn't one for a requirement. (As a check that I hadn't missed anything obvious, I did a text search for the operator ">=", which *is* a valid operator in a requirement, and it's not present in a syntax definition or anything equivalent.) I ask because I'm looking for a way of matching a set of package/version details against a requirement, and I was coming up blank. So I was going to write my own, and then I found that there's no spec :-( Surely having a spec for a requirement has to be part of the sign-off requirements for Metadata 2.0? Digging a bit further, there is (of course, doh!) the pkg_resources requirement parser. But even if that is definitive, I think it should be integrated into the Metadata 2.0 spec, or at the very least referenced from there (and just having a reference risks the possibility of setuptools accidentally making changes outside of the PEP process). Paul [1] Pip's code is too complex to factor out the bits I want, and distlib's version matcher seems to support a different syntax if the docs are correct.
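As an editorial aside, the pkg_resources requirement parser mentioned above can be sketched in a few lines (assuming setuptools is installed; the version numbers here are illustrative, not from the thread). A parsed Requirement pairs a project name with version specifiers, and a version string can be tested against it with `in`:

```python
# Sketch of matching a version against a requirement with pkg_resources.
from pkg_resources import Requirement

req = Requirement.parse("Django>=1.6,<1.8")

name = req.project_name      # the project part of the requirement
ok = "1.7.1" in req          # satisfies both clauses
too_new = "1.8" in req       # excluded by the upper bound
```

This is the behaviour Paul is looking for: the specifier grammar itself was later formalized in PEP 440, as the replies below note.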
From donald at stufft.io Tue Sep 16 13:02:00 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 07:02:00 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: Message-ID: <0B468AB3-F0DD-47CE-AEDB-0480D7F2F32E@stufft.io> Specifiers are defined in PEP 440 (the >=1.0 parts), however PEP 426 combines those with the package names to get "foo >=1.0". The packaging library implements the specifier part. > On Sep 16, 2014, at 6:59 AM, Paul Moore wrote: > > I feel as though I must be missing something obvious, but is there an > actual specification of the syntax and semantics of a requirement > anywhere? I've scanned through the PEP, and while there is a spec for > the environment marker mini-language, there isn't one for a > requirement. (As a check I hadn't missed anything obvious, I did a > text search for the operator ">=" which *is* a valid operator in a > requirement, and it's not present in a syntax definition or anything > equivalent. > > I ask because I'm looking for a way to find a way of matching a set of > package/version details against a requirement, and I was coming up > blank. So I was going to write my own, and then I found that there's > no spec :-( > > Surely having a spec for a requirement has to be part of the sign-off > requirements for Metadata 2.0? > > Digging a bit further, there is (of course, doh!) the pkg_resources > requirement parser. But even if that is definitive, I think it should > be integrated into the Metadata 2.0 spec, or at the very least > referenced from there (and just having a reference risks the > possibility of setuptools accidentally making changes outside of the > PEP process). > > Paul > > [1] Pip's code is too complex to factor out the bits I want, and > distlib's version matcher seems to support a different syntax if the > docs are correct.
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Sep 16 13:02:35 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 07:02:35 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <0B468AB3-F0DD-47CE-AEDB-0480D7F2F32E@stufft.io> References: <0B468AB3-F0DD-47CE-AEDB-0480D7F2F32E@stufft.io> Message-ID: <72078E69-FE04-4B25-B573-23C77CA49CF6@stufft.io> Forgot the link: https://packaging.pypa.io/en/latest/version/ > On Sep 16, 2014, at 7:02 AM, Donald Stufft wrote: > > Specifiers are defined in PEP 440 (the >=1.0 parts), however PEP 426 combines > those with the package names to get "foo >=1.0". The packaging library implements > the specifier part. > >> On Sep 16, 2014, at 6:59 AM, Paul Moore > wrote: >> >> I feel as though I must be missing something obvious, but is there an >> actual specification of the syntax and semantics of a requirement >> anywhere? I've scanned through the PEP, and while there is a spec for >> the environment marker mini-language, there isn't one for a >> requirement. (As a check I hadn't missed anything obvious, I did a >> text search for the operator ">=" which *is* a valid operator in a >> requirement, and it's not present in a syntax definition or anything >> equivalent. >> >> I ask because I'm looking for a way to find a way of matching a set of >> package/version details against a requirement, and I was coming up >> blank. So I was going to write my own, and then I found that there's >> no spec :-( >> >> Surely having a spec for a requirement has to be part of the sign-off >> requirements for Metadata 2.0? >> >> Digging a bit further, there is (of course, doh!)
the pkg_resources >> requirement parser. But even if that is definitive, I think it should >> be integrated into the Metadata 2.0 spec, or at the very least >> referenced from there (and just having a reference risks the >> possibility of setuptools accidentally making changes outside of the >> PEP process). >> >> Paul >> >> [1] Pip's code is too complex to factor out the bits I want, and >> distlib's version matcher seems to support a different syntax if the >> docs are correct. >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Sep 16 13:39:31 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Sep 2014 23:39:31 +1200 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: Message-ID: On 16 September 2014 22:59, Paul Moore wrote: > I feel as though I must be missing something obvious, but is there an > actual specification of the syntax and semantics of a requirement > anywhere? I've scanned through the PEP, and while there is a spec for > the environment marker mini-language, there isn't one for a > requirement. (As a check I hadn't missed anything obvious, I did a > text search for the operator ">=" which *is* a valid operator in a > requirement, and it's not present in a syntax definition or anything > equivalent. > > I ask because I'm looking for a way to find a way of matching a set of > package/version details against a requirement, and I was coming up > blank. 
So I was going to write my own, and then I found that there's > no spec :-( > > Surely having a spec for a requirement has to be part of the sign-off > requirements for Metadata 2.0? Donald already noted that most of these details were moved to PEP 440, despite being in PEP 426 in earlier drafts. This had two primary benefits: 1. The specifier semantics and the allowed version numbers are closely intertwined, so describing them together made it easier to ensure they were consistent (and sufficiently comprehensive). 2. It also meant they were *approved* together, in advance of the rest of PEP 426. An agreed version numbering scheme on its own isn't particularly useful without a way to use it to improve pkg_resources style dependency declarations. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Sep 16 13:57:31 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Sep 2014 12:57:31 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: Message-ID: On 16 September 2014 12:39, Nick Coghlan wrote: >> Surely having a spec for a requirement has to be part of the sign-off >> requirements for Metadata 2.0? > > Donald already noted that most of these details were moved to PEP 440, > despite being in PEP 426 in earlier drafts. > > This had two primary benefits: Thanks, yes. I knew it must be somewhere, I just couldn't work out where. Sorry for the noise. One thing that might be worth clarifying somewhere/somehow (not particularly in the specs, though) is where the best place is to find the "canonical" implementations of the various metadata specs. At one point, distlib seemed to be taking that role, but I'm not sure it is any more. Is that the role the "packaging" project is now taking on?
This is where I think it's a shame that this infrastructure isn't being added to the stdlib - knowing that if you use "import packaging.pep440" from the stdlib (or a backport of it) you get the official semantics is a major help in writing one-off utility scripts. For many of my scripts, I've spent longer looking for a good implementation of the standard stuff (or writing it myself) than I have writing the application logic :-( Paul From donald at stufft.io Tue Sep 16 14:01:19 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 08:01:19 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: Message-ID: > On Sep 16, 2014, at 7:57 AM, Paul Moore wrote: > > On 16 September 2014 12:39, Nick Coghlan wrote: >>> Surely having a spec for a requirement has to be part of the sign-off >>> requirements for Metadata 2.0? >> >> Donald already noted that most of these details were moved to PEP 440, >> despite being in PEP 426 in earlier drafts. >> >> This had two primary benefits: > > Thanks, yes. I knew it must be somewhere, I just couldn't work out > where. Sorry for the noise. > > One thing that might be worth clarifying somewhere/somehow (not > particularly in the specs, though) is where is the best place to find > the "canonical" implementations of the various metadata specs. At one > point, distlib seemed to be taking that role, but I'm not sure it is > any more. Is that the role the "packaging" project is now taking on? > This is where I think it's a shame that this infrastructure isn't > being added to the stdlib - knowing that if you use "import > packaging.pep440" from the stdlib (or a backport of it) you get the > official semantics is a major help in writing one-off utility scripts. 
> For many of my scripts, I've spent longer looking for a good > implementation of the standard stuff (or writing it myself) than I > have writing the application logic :-( > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig It's in the references section of PEP 440, although that links to the PR that has since been merged, so that can likely be updated. I'm hesitant to include any of this stuff in the stdlib as of right now.
It wouldn't > have helped you here since PEP 440 wasn't approved until after 3.4 was out > so the earliest it would be in is Python 3.5. Perhaps once we have most of > the pieces fitted together and working sanely it would be a good time to > figure out what, if anything, should be moved to the stdlib. I suggest instead to list good PEP-implementing modules/packages on some "pypa"-administered, PEP-independent page and reference that page prominently. Given that we have a mixed py27/py3 python ecosystem and will do so for some more years, and given that things are still moving, focusing on adding such to the stdlib is wasted effort IMHO. best, holger From p.f.moore at gmail.com Tue Sep 16 14:29:52 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Sep 2014 13:29:52 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <20140916121233.GM28217@merlinux.eu> References: <20140916121233.GM28217@merlinux.eu> Message-ID: On 16 September 2014 13:12, holger krekel wrote: >> I'm hesitant to include any of this stuff in the stdlib as of right now. It wouldn't >> have helped you here since PEP 440 wasn't approved until after 3.4 was out >> so the earliest it would be in is Python 3.5. Perhaps once we have most of >> the pieces fitted together and working sanely it would be a good time to >> figure out what, if anything, should be moved to the stdlib. > > I suggest instead to list good PEP-implementing modules/packages > on some "pypa"-administered, PEP-independent page and reference that page > prominently. Given that we have a mixed py27/py3 python ecosystem and > will do so for some more years, and given that things are still moving, > focusing on adding such to the stdlib is wasted effort IMHO. Agreed. The stdlib should be a longer-term goal, but that doesn't mean there shouldn't be reference implementations that are authoritative and documented as such, ideally referenced from the relevant PEPs and documented on a PyPA webpage.
I really shouldn't be looking at a list of version numbers and wanting to get the latest one and be grabbing distutils.version.LooseVersion just because it's good enough and life's too short to go searching for a "proper" answer. And I do, regularly :-( Paul PS This isn't about pressuring anyone to write such modules. If they don't exist and are needed, I'm more than willing to help write them, but only if they are intended as reference implementations, not just as "yet another version module". I've no particular wish to set myself up as a competitor to setuptools and distlib :-) From fred at fdrake.net Tue Sep 16 14:47:19 2014 From: fred at fdrake.net (Fred Drake) Date: Tue, 16 Sep 2014 08:47:19 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: <20140916121233.GM28217@merlinux.eu> Message-ID: On Tue, Sep 16, 2014 at 8:29 AM, Paul Moore wrote: > I've no particular wish to set myself > up as a competitor to setuptools and distlib :-) pip install Paul -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From qwcode at gmail.com Tue Sep 16 17:47:09 2014 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 16 Sep 2014 08:47:09 -0700 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <20140916121233.GM28217@merlinux.eu> References: <20140916121233.GM28217@merlinux.eu> Message-ID: On Tue, Sep 16, 2014 at 5:12 AM, holger krekel wrote: > On Tue, Sep 16, 2014 at 08:01 -0400, Donald Stufft wrote: > > > On Sep 16, 2014, at 7:57 AM, Paul Moore wrote: > > > One thing that might be worth clarifying somewhere/somehow (not > > > particularly in the specs, though) is where is the best place to find > > > the "canonical" implementations of the various metadata specs. At one > > > point, distlib seemed to be taking that role, but I'm not sure it is > > > any more. Is that the role the "packaging" project is now taking on? 
> > > This is where I think it's a shame that this infrastructure isn't > > > being added to the stdlib - ... > > I'm hesitant to include any of this stuff in the stdlib as of right now. > It wouldn't > > have helped you here since PEP 440 wasn't approved until after 3.4 was > out > > so the earliest it would be in is Python 3.5. Perhaps once we have most > of > > the pieces fitted together and working sanely it would be a good time to > > figure out what, if anything, should be moved to the stdlib. > > I suggest instead to list good PEP-implementing modules/packages > on some "pypa"-administered, PEP-independent page and reference that page > prominently. Given that we have a mixed py27/py3 python ecosystem and > will do so for some more years, and given that things are still moving, > focusing on adding such to the stdlib is wasted effort IMHO. > there's a page for that in the Python Packaging User Guide https://packaging.python.org/en/latest/peps.html (each PEP has an "implementation" section) although that page might move to a pypa developer guide that lives at pypa.io (that's still in progress) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Tue Sep 16 19:02:25 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Sep 2014 18:02:25 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: Message-ID: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> From: Paul Moore > One thing that might be worth clarifying somewhere/somehow (not > particularly in the specs, though) is where is the best place to find > the "canonical" implementations of the various metadata specs. At one > point, distlib seemed to be taking that role, but I'm not sure it is > any more. That's more to do with the preferences of, and choices made by, other people.
I don't know who decides what's "canonical" (I suppose Nick, naturally, but I'm not sure if it's *only* Nick) or what criteria they would use, but I've aimed distlib to implement the various PEPs in their various states of completion. Although I have been less active of late than I have previously, that's just down to my current workload: I'm busy with consultancy work :-) Note that I have recently updated distlib to reflect changes in PEP 440, though this functionality has not been officially released (it's available in the public repos, though). On requirements, distlib supports both the setuptools forms "foo>=X.Y" and the PEP forms such as "foo (~ X.Y)". Generally, distlib and distil work well enough [for my needs] for the most part. Where a distribution uses custom code in setup.py or extends distutils/setuptools command classes, I use "distil pip" to convert sdists to wheels, or "distil package" to convert .exe installers (like those on Christoph Gohlke's site) to wheels. When you pin your dependencies, you don't have to do this dance too often. I feel I'm fairly responsive when issues are raised against either distlib or distil. I'm always open to feedback, try to keep the docs up to date, etc. The coverage docs are a little out of date, and coveralls is on my todo list. Not sure what more would be expected from a canonical implementation, other than an official blessing. While on the topic of specs, I'm curious to know what the specification status is for other elements in the packaging landscape, such as Warehouse or Twine - are there any PEPs specifying anything new that they do over existing PyPI/distutils, or is there nothing new over and above existing code other than (no doubt improved) reimplementation of existing functionality? Regards, Vinay Sajip From donald at stufft.io Tue Sep 16 19:07:48 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 13:07:48 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement?
In-Reply-To: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> Message-ID: <85F1D5B6-00AF-4ADA-AF94-611B85ED0A04@stufft.io> > On Sep 16, 2014, at 1:02 PM, Vinay Sajip wrote: > > While on the topic of specs, I'm curious to know what the specification status is for other elements in the packaging landscape, such as Warehouse or Twine - are there any PEPs specifying anything new that they do over existing PyPI/distutils, or is there nothing new over and above existing code other than (no doubt improved) reimplementation of existing functionality? Warehouse is currently focused on reimplementation, with the future being standardization and spec work for new stuff. Twine uses the same APIs as distutils does on PyPI, but it A) Verifies TLS and B) enables uploading an already built distribution instead of mandating that you build it and upload at the same time. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Tue Sep 16 19:10:49 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Sep 2014 18:10:49 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement?
Some parts of distlib (including the version functionality) were blessed by python-dev to the extent that they were expected to be part of Python 3.3 (in the packaging package), though I've changed the implementation considerably, both to allow conformance with setuptools formats and to implement functionality developed in newer PEPs such as 426, 427, and 440. Is there some particular functionality you need that distlib.version doesn't provide? Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Tue Sep 16 23:54:01 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Sep 2014 22:54:01 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <85F1D5B6-00AF-4ADA-AF94-611B85ED0A04@stufft.io> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <85F1D5B6-00AF-4ADA-AF94-611B85ED0A04@stufft.io> Message-ID: <1410904441.21514.YahooMailNeo@web172401.mail.ir2.yahoo.com> > Warehouse is currently focused on reimplementation with the future being standization and spec work for new stuff. > Twine uses the same APIs as distutils does on PyPI, but it A) Verifies TLS and B) enables uploading an already built > distribution instead of mandating that you build it and upload at the same time. Thanks for the update. Regards, Vinay Sajip -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Sep 17 01:01:56 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Sep 2014 11:01:56 +1200 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? 
In-Reply-To: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> Message-ID: On 17 Sep 2014 03:02, "Vinay Sajip" wrote: > > From: Paul Moore > > > > One thing that might be worth clarifying somewhere/somehow (not > > particularly in the specs, though) is where is the best place to find > > the "canonical" implementations of the various metadata specs. At one > > point, distlib seemed to be taking that role, but I'm not sure it is > > any more. > > That's more to do with the preferences of, and choices made by, other people. I don't know who decides what's "canonical" (I suppose Nick, naturally, but I'm not sure if it's *only* Nick) or what criteria they would use, but I've aimed distlib to implement the various PEPs in their various states of completion. My current hope is that we'll end up with a situation where packaging is the minimalist reference implementation that provides the bare minimum amount of functionality necessary to work with versions (including the installation database) at runtime, while distlib becomes the more feature complete library you'd want on a build system. Think pkg_resources vs setuptools, but without the tight coupling that exists in the status quo. Someone using packaging would also *only* get features from approved PEPs, freeing distlib as a vector for publishing draft proposals in a way that folks can more easily experiment with and provide feedback on. Eventually, I believe we should also officially make the "packaging" module a *public* dependency of pip, and update the bundling into CPython accordingly. By default, you would get "pip" and "packaging", with "pip install setuptools" only triggering if you needed to build from source (and no other build dependencies were specified). 
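The runtime role sketched above for a minimalist "packaging" library can be illustrated with a short example (an editorial sketch, assuming the `packaging` distribution is installed; the version list is hypothetical). Unlike distutils.version.LooseVersion, Version implements PEP 440 ordering, and SpecifierSet filters candidates the way an installer would:

```python
# Pick the latest compatible release using PEP 440 semantics.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

candidates = ["0.9", "1.0rc1", "1.0", "1.0.post1", "2.0"]
spec = SpecifierSet(">=1.0,<2.0")

# 1.0rc1 is a pre-release (excluded by default, and below 1.0 anyway);
# 2.0 fails the upper bound; post-releases sort after the final release.
compatible = [c for c in candidates if c in spec]
latest = max(compatible, key=Version)
```

This is exactly the "bare minimum to work with versions at runtime" use case: correct ordering plus specifier matching, with no build-system machinery attached.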
If that long term approach sounds reasonable to folks, we should probably promote packaging from Donald's personal repo, include it in the PyPA project list and tweak the description of distlib accordingly. > Although I have been less active of late than I have previously, that's just down to my current workload: I'm busy with consultancy work :-) Note that I have recently updated distlib to reflect changes in PEP 440, though this functionality has not been officially released (it's available in the public repos, though). On requirements, distlib supports both the setuptools forms "foo>=X.Y" and the PEP forms such as "foo (~ X.Y)". PEP 440 ended up switching to the setuptools forms, to make the pip integration actually practical. > > Generally, distlib and distil work well enough [for my needs] for the most part. Where distributions use custom code in setup.py or extend distutils/setuptools command classes, I use "distil pip" to convert sdists to wheels, or "distil package" to convert .exe installers (like those on Christoph Gohlke's site) to wheels. When you pin your dependencies, you don't have to do this dance too often. > > I feel I'm fairly responsive when issues are raised against either distlib or distil. I'm always open to feedback, try to keep the docs up to date, etc. The coverage docs are a little out of date, and coveralls is on my todo list. Not sure what more would be expected from a canonical implementation, other than an official blessing. A reference implementation needs *less*, rather than more. However, I've come to realise we want both - the minimalist reference implementation, and the broader platform that includes more scope for experimenting with draft standards.
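[Editorial aside: the two requirement syntaxes mentioned above - setuptools-style "foo>=X.Y" and the parenthesised PEP form "foo (>= X.Y)" - share the same structure: a project name followed by comma-separated version clauses. A toy parser for the setuptools-style form might look like the sketch below. This is illustration only, not distlib's or pip's actual code; the grammar is deliberately simplified (no extras, no environment markers).]

```python
import re

# Name, then everything else as the (optional) specifier list.
REQ = re.compile(r"""
    ^\s*(?P<name>[A-Za-z0-9][A-Za-z0-9._-]*)   # project name
    \s*(?P<specs>.*?)\s*$                      # comma-separated version clauses
""", re.VERBOSE)

CLAUSE = re.compile(r"\s*(===|==|!=|<=|>=|<|>|~=)\s*([\w.*+!-]+)\s*$")

def parse_requirement(req):
    """Split 'foo>=1.0,<2.0' into ('foo', [('>=', '1.0'), ('<', '2.0')]).

    Toy illustration of the setuptools-style requirement form; not a
    real implementation (extras and environment markers are omitted).
    """
    m = REQ.match(req)
    if m is None:
        raise ValueError("not a requirement: %r" % req)
    specs = []
    rest = m.group("specs")
    if rest:
        for clause in rest.split(","):
            cm = CLAUSE.match(clause)
            if cm is None:
                raise ValueError("bad specifier clause: %r" % clause)
            specs.append((cm.group(1), cm.group(2)))
    return m.group("name"), specs
```

A bare name like "Django" parses to ("Django", []), i.e. "any version".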
> > While on the topic of specs, I'm curious to know what the specification status is for other elements in the packaging landscape, such as Warehouse or Twine - are there any PEPs specifying anything new that they do over existing PyPI/distutils, or is there nothing new over and above existing code other than (no doubt improved) reimplementation of existing functionality? Largely reimplementation at this point, except for a couple of PEPs like PEP 470 that aim to clean up certain areas. PEP 426 also has a list of other things that will likely need PEPs at some point :) Cheers, Nick. > > Regards, > > Vinay Sajip -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Sep 17 01:28:46 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 19:28:46 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> Message-ID: <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> > On Sep 16, 2014, at 7:01 PM, Nick Coghlan wrote: > > > On 17 Sep 2014 03:02, "Vinay Sajip" > wrote: > > > > From: Paul Moore > > > > > > > > One thing that might be worth clarifying somewhere/somehow (not > > > particularly in the specs, though) is where is the best place to find > > > the "canonical" implementations of the various metadata specs. At one > > > point, distlib seemed to be taking that role, but I'm not sure it is > > > any more. > > > > That's more to do with the preferences of, and choices made by, other people. I don't know who decides what's "canonical" (I suppose Nick, naturally, but I'm not sure if it's *only* Nick) or what criteria they would use, but I've aimed distlib to implement the various PEPs in their various states of completion. 
> > My current hope is that we'll end up with a situation where packaging is the minimalist reference implementation that provides the bare minimum amount of functionality necessary to work with versions (including the installation database) at runtime, while distlib becomes the more feature complete library you'd want on a build system. Think pkg_resources vs setuptools, but without the tight coupling that exists in the status quo. > > Someone using packaging would also *only* get features from approved PEPs, freeing distlib as a vector for publishing draft proposals in a way that folks can more easily experiment with and provide feedback on. > > Eventually, I believe we should also officially make the "packaging" module a *public* dependency of pip, and update the bundling into CPython accordingly. By default, you would get "pip" and "packaging", with "pip install setuptools" only triggering if you needed to build from source (and no other build dependencies were specified). > > If that long term approach sounds reasonable to folks, we should probably promote packaging from Donald's personal repo, include it in the PyPA project list and tweak the description of distlib accordingly. > > Pip does not have runtime dependencies and I am extremely against adding them. They cause nothing but pain for us. Either it's in the stdlib, it's vendored, or it's optional. This doesn't include *non runtime* things like what setuptools is now since we've vendored pkg_resources. It's a build system we call out to, not something that pip itself needs to run. It's actually been moved into the PyPA github account since I planned on using it in pip. > > Although I have been less active of late than I have previously, that's just down to my current workload: I'm busy with consultancy work :-) Note that I have recently updated distlib to reflect changes in PEP 440, though this functionality has not been officially released (it's available in the public repos, though).
On requirements, distlib supports both the setuptools forms "foo>=X.Y" and the PEP forms such as "foo (~ X.Y)". > > PEP 440 ended up switching to the setuptools forms, to make the pip integration actually practical. > > Technically that was a PEP 426 change. > > > > Generally, distlib and distil work well enough [for my needs] for the most part. Where distributions use custom code in setup.py or extend distutils/setuptools command classes, I use "distil pip" to convert sdists to wheels, or "distil package" to convert .exe installers (like those on Christoph Gohlke's site) to wheels. When you pin your dependencies, you don't have to do this dance too often. > > > > I feel I'm fairly responsive when issues are raised against either distlib or distil. I'm always open to feedback, try to keep the docs up to date, etc. The coverage docs are a little out of date, and coveralls is on my todo list. Not sure what more would be expected from a canonical implementation, other than an official blessing. > > A reference implementation needs *less*, rather than more. However, I've come to realise we want both - the minimalist reference implementation, and the broader platform that includes more scope for experimenting with draft standards. > > Yea, my "problem" with distlib was always that I think Vinay and I wanted two different things from it. I wanted a reference implementation that only came with the PEP standardized pieces, Vinay wanted a library that implemented things he could use for distil. > > > > While on the topic of specs, I'm curious to know what the specification status is for other elements in the packaging landscape, such as Warehouse or Twine - are there any PEPs specifying anything new that they do over existing PyPI/distutils, or is there nothing new over and above existing code other than (no doubt improved) reimplementation of existing functionality?
> > Largely reimplementation at this point, except for a couple of PEPs like PEP 470 that aim to clean up certain areas. > > PEP 426 also has a list of other things that will likely need PEPs at some point :) > > Probably once the simple API has been cleaned up as well as we can without breaking too much, it would make sense to write another PEP just locking the API in place instead of having its definition be spread amongst multiple PEPs, the setuptools docs, and the setuptools/pip code bases. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Sep 17 03:16:59 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Sep 2014 02:16:59 +0100 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> Message-ID: <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> From: Donald Stufft > Technically that was a PEP 426 change. Yes, and I haven't yet changed distlib to remove support for the older "foo (>=X.Y)" form in the earlier version of the PEP. > Yea, my "problem" with distlib was always that I think Vinay and I wanted two different things from it. I wanted a > reference implementation that only came with the PEP standardized pieces, Vinay wanted a library that implemented > things he could use for distil. Not quite - it's the other way around: distil is mainly a test bed for distlib, to verify that the latter's functionality is usable in practice. What I want is a rather more modern packaging system than we presently have - for example having to download archives in order to determine dependencies is, shall we say, sub-optimal.
I want to move away from setup.py, towards declarative metadata, while offering a migration path (which 3.3 packaging didn't). While they're not perfect, distlib/distil allow me to install stuff without executing setup.py on target systems a lot of the time, and ISTM that's moving things in the right direction. Regards, Vinay Sajip From donald at stufft.io Wed Sep 17 03:23:44 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Sep 2014 21:23:44 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> Message-ID: <92DB917E-4B22-4BE0-9C59-4D85EC5CD49D@stufft.io> > On Sep 16, 2014, at 9:16 PM, Vinay Sajip wrote: > > From: Donald Stufft > > >> Technically that was a PEP 426 change. > > Yes, and I haven't yet changed distlib to remove support for the older "foo (>=X.Y)" form in the earlier version of the PEP. > > >> Yea, my "problem" with distlib was always that I think Vinay and I wanted two different things from it. I wanted a >> reference implementation that only came with the PEP standardized pieces, Vinay wanted a library that implemented >> things he could use for distil. > > Not quite - it's the other way around: distil is mainly a test bed for distlib, to verify that the latter's functionality is usable in practice. What I want is a rather more modern packaging system than we presently have - for example having to download archives in order to determine dependencies is, shall we say, sub-optimal. I want to move away from setup.py, towards declarative metadata, while offering a migration path (which 3.3 packaging didn't).
While they're not perfect, distlib/distil allow me to install stuff without executing setup.py on target systems a lot of the time, and ISTM that's moving things in the right direction. > > Regards, > > Vinay Sajip I think that's what we all want, the difference is that myself and some others don't think it's acceptable to build on top of things which aren't standardized. We've had ~15 years of implementation defined "standards", I don't think blessing officially something which adds more implementation defined standards is the right path forward. This means that things take longer (It took well over a year for PEP 440, which is just focused around version numbers!) but I think in the end it will end up with a solution that is far more robust and far less likely to end up in a situation where we are today where if you don't use the exact same tooling as everyone else you're likely to have problems. That static metadata is one of the reasons *why* distlib isn't suitable for the reference implementation. I have no idea if your specific implementation is good, bad, or somewhere in between but afaik there isn't even a spec at all much less a general discussion about how it should be structured. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Sep 17 07:05:36 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Sep 2014 15:05:36 +1000 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement?
In-Reply-To: <92DB917E-4B22-4BE0-9C59-4D85EC5CD49D@stufft.io> References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> <92DB917E-4B22-4BE0-9C59-4D85EC5CD49D@stufft.io> Message-ID: On 17 September 2014 11:23, Donald Stufft wrote: > I think that's what we all want, the difference is that myself and some > others don't think it's acceptable to build on top of things which aren't > standardized. We've had ~15 years of implementation defined "standards", I > don't think blessing officially something which adds more implementation > defined standards is the right path forward. This means that things take > longer (It took well over a year for PEP 440, which is just focused around > version numbers!) but I think in the end it will end up with a solution that > is far more robust and far less likely to end up in a situation where we are > today where if you don't use the exact same tooling as everyone else you're > likely to have problems. > > That static metadata is one of the reasons *why* distlib isn't suitable for > the reference implementation. I have no idea if your specific implementation > is good, bad, or somewhere in between but afaik there isn't even a spec at > all much less a general discussion about how it should be structured. Right, and I think that's a good way to *explicitly* position the two levels:

* packaging & pip now aim to be strict implementations of the agreed standards, with only the distutils/setuptools/pkg_resources de facto standards supported for reasons of compatibility

* distlib & distil aim to explore what the current drafts of the standards (perhaps with a few experimental embellishments) make possible

I've come to realise we need both of those capabilities, and that the previous arguments around the appropriate scope of distlib related to trying to get it to serve two fundamentally incompatible use cases.
Perhaps we should make that official policy? Anything in PEP 426 and PEP 459 (and other packaging metadata and installation database related PEPs) needs to be trialled in distlib/distil before the PEPs can be accepted? distlib could operate permanently under a PEP 411 style "provisional API" guideline, and if folks aren't comfortable with "this may break without warning", then they can stick to the stable packaging/pip layer. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From regebro at gmail.com Wed Sep 17 19:47:12 2014 From: regebro at gmail.com (Lennart Regebro) Date: Wed, 17 Sep 2014 19:47:12 +0200 Subject: [Distutils] Buildout setuid Message-ID: While writing a blog post about software configuration management I looked into buildout, and using it as an SCM tool. And it has one big restriction: You can't run certain parts as root. I think adding that would actually not be too hard. Are there any principal arguments against it? I looked at making an extension, but I would need a hook that is run before and after each step in that case. I was thinking that you could define which parts should run as root in one of two ways:

1. A parameter in the part config
2. Having a global configuration with a list of parts. This is for the case where the part's recipe itself has a parameter that clashes with the parameter in 1.

I'm leaning towards having a setuid parameter, so you can set it to IDs other than 0. Technically it would be done by setuid to root for the configured parts, and then back after it has run. You would have to run buildout as a whole with sudo for this to work. It would use the login name as the "normal" setuid, unless configured explicitly with a global setuid parameter. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Thu Sep 18 02:08:35 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Sep 2014 20:08:35 -0400 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> <92DB917E-4B22-4BE0-9C59-4D85EC5CD49D@stufft.io> Message-ID: > On Sep 17, 2014, at 1:05 AM, Nick Coghlan wrote: > > On 17 September 2014 11:23, Donald Stufft wrote: >> I think that's what we all want, the difference is that myself and some >> others don't think it's acceptable to build on top of things which aren't >> standardized. We've had ~15 years of implementation defined "standards", I >> don't think blessing officially something which adds more implementation >> defined standards is the right path forward. This means that things take >> longer (It took well over a year for PEP 440, which is just focused around >> version numbers!) but I think in the end it will end up with a solution that >> is far more robust and far less likely to end up in a situation where we are >> today where if you don't use the exact same tooling as everyone else you're >> likely to have problems. >> >> That static metadata is one of the reasons *why* distlib isn't suitable for >> the reference implementation. I have no idea if your specific implementation >> is good, bad, or somewhere in between but afaik there isn't even a spec at >> all much less a general discussion about how it should be structured.
> > Right, and I think that's a good way to *explicitly* position the two levels: > > * packaging & pip now aim to be strict implementations of the agreed > standards, with only the distutils/setuptools/pkg_resources de facto > standards supported for reasons of compatibility > > * distlib & distil aim to explore what the current drafts of the > standards (perhaps with a few experimental embellishments) make > possible > > I've come to realise we need both of those capabilities, and that the > previous arguments around the appropriate scope of distlib related to > trying to get it to serve two fundamentally incompatible use cases. > > Perhaps we should make that official policy? Anything in PEP 426 and > PEP 459 (and other packaging metadata and installation database > related PEPs) needs to be trialled in distlib/distil before the PEPs > can be accepted? distlib could operate permanently under a PEP 411 > style "provisional API" guideline, and if folks aren't comfortable > with "this may break without warning", then they can stick to the > stable packaging/pip layer. > I'm OK with calling out this relationship though I don't think it should be a mandatory thing. I think we're all adults and able to figure out when it makes sense to trial it in distil/distlib or not. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Sep 18 02:59:41 2014 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Sep 2014 20:59:41 -0400 Subject: [Distutils] The Simple API - What URLs are "supported" Message-ID: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Right now pip (and originally setuptools, which does it as well) will do this sort of dance when looking for things on the PyPI simple index.
This isn't the actual code though:

    thing_to_install = "foo==1.0"
    page = None

    if thing_to_install.contains("=="):  # First look at a versioned url if ==
        page = request_url(
            "https://pypi.python.org/simple/" + thing_to_install.name
            + "/" + thing_to_install.version
        )

    if not page:  # If we don't have something, look for unversioned
        page = request_url(
            "https://pypi.python.org/simple/" + thing_to_install.name
        )

    if not page:  # Finally, look at the /simple/ index itself
        page = request_url("https://pypi.python.org/simple/")

    # From here, look at the page to discover things.

As far as I can tell a lot of this is largely historical.

The /simple/{name}/{version}/ pages come from a time when there wasn't a simple index I think and sometimes packages would need to go to /pypi/foo/version/ in order to actually get a list of things. However we now do have the simple API and AFAICT the /simple/ API does not nor has ever had a response for /simple/{name}/{version}/. This always 404's and falls back to /simple/{name}/. I would like to consider this URL unsupported in pip and remove checking for it. It will reduce the number of needless HTTP requests by one per pinned version.

Does anyone know anything this will break?

The other thing that happens is if the /simple/{name}/ thing 404's it'll fall back to /simple/. This is done so that if someone mistypes a name in a way that is still considered equivalent after normalization is applied, instead of a 404 they get the /simple/ page and the tooling can discover the name from there.

If you remember back a little while ago I changed PyPI so that it considered the normalized form of the name the "canonical" name for the simple index, this means that tooling will be able to know ahead of time if a project called say "Django" should be requested with /simple/Django/ or /simple/django/.

What I would like to do now is remove the fallback to /simple/.
If we fall back to that it is a 2.1MB download that occurs which is a fairly big deal and can slow down a ``pip install`` quite significantly. I have a PR against bandersnatch which will make bandersnatch generate a /simple/{name}/ URL where name is the normalized form, and another PR against pip which will cause it to always request the normalized form. When both of these land it would mean that the only time pip will fall back to /simple/ is:

1. If someone is using a *new* pip (with the PR merged) but with a mirror that doesn't support the normalized form of the URLs (old bandersnatch, pep381client, maybe others?)
2. If someone typed ``pip install <name>`` where <name> is a thing that doesn't actually exist.

Does anyone have any complaints if pip stopped falling back to /simple/ once the bandersnatch PR is merged and released? Furthermore, does anyone have any problems with narrowing the "supported URLs" of the "simple API" to /simple/ and /simple/{name}/ and making fetching /simple/ optional? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Sep 18 08:04:21 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Sep 2014 16:04:21 +1000 Subject: [Distutils] Metadata 2.0: Is there a formal spec for a requirement? In-Reply-To: References: <1410886945.79407.YahooMailNeo@web172405.mail.ir2.yahoo.com> <56D6B9CF-D0EF-4697-9A21-A2BB201D8ABD@stufft.io> <1410916619.17518.YahooMailNeo@web172406.mail.ir2.yahoo.com> <92DB917E-4B22-4BE0-9C59-4D85EC5CD49D@stufft.io> Message-ID: On 18 September 2014 10:08, Donald Stufft wrote: > On Sep 17, 2014, at 1:05 AM, Nick Coghlan wrote: > Perhaps we should make that official policy? Anything in PEP 426 and > PEP 459 (and other packaging metadata and installation database > related PEPs) needs to be trialled in distlib/distil before the PEPs > can be accepted?
distlib could operate permanently under a PEP 411 > style "provisional API" guideline, and if folks aren't comfortable > with "this may break without warning", then they can stick to the > stable packaging/pip layer. > > > I'm OK with calling out this relationship though I don't think it should > be a mandatory thing. I think we're all adults and able to figure out when > it makes sense to trial it in distil/distlib or not. Works for me. I do suspect we're going to want to trial PEP 426 and the PEP 459 metadata extensions :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From holger at merlinux.eu Thu Sep 18 09:27:26 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 18 Sep 2014 07:27:26 +0000 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Message-ID: <20140918072726.GB3692@merlinux.eu> On Wed, Sep 17, 2014 at 20:59 -0400, Donald Stufft wrote:
> Right now pip (and originally setuptools, which does it as well) will do this
> sort of dance when looking for things on the PyPI simple index. This isn't the
> actual code though:
>
> thing_to_install = "foo==1.0"
> page = None
>
> if thing_to_install.contains("=="): # First look at a versioned url if ==
> page = request_url(
> "https://pypi.python.org/simple/" + thing_to_install.name
> + "/" + thing_to_install.version
> )
>
> if not page: # If we don't have something, look for unversioned
> page = request_url(
> "https://pypi.python.org/simple/" + thing_to_install.name
> )
>
> if not page: # Finally, look at the /simple/ index itself
> page = request_url("https://pypi.python.org/simple/")
>
> # From here, look at the page to discover things.
>
>
> As far as I can tell a lot of this is largely historical.
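[Editorial aside: rendered as runnable Python, the lookup dance quoted above might look like the sketch below. The HTTP request is stubbed out with a dict standing in for the index, and the names FAKE_INDEX, request_url and find_page are hypothetical; this is not pip's actual implementation.]

```python
# Toy model of the index: URL -> page content; a missing key simulates a 404.
FAKE_INDEX = {
    "https://pypi.python.org/simple/": "full index page",
    "https://pypi.python.org/simple/foo/": "links for foo",
}

def request_url(url):
    """Stand-in for an HTTP GET: returns the page text, or None on 404."""
    return FAKE_INDEX.get(url)

def find_page(name, version=None):
    """The three-step dance described in the quoted mail, most specific first."""
    base = "https://pypi.python.org/simple/"
    page = None
    if version is not None:            # 1. versioned URL (historical, always 404s on PyPI)
        page = request_url(base + name + "/" + version)
    if page is None:                   # 2. the per-project page
        page = request_url(base + name + "/")
    if page is None:                   # 3. fall back to the full /simple/ index
        page = request_url(base)
    return page
```

With the fake index above, find_page("foo", "1.0") 404s on step 1 and succeeds on step 2, while an unknown name falls all the way through to the full index page - exactly the extra-request behaviour the thread is complaining about.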
> > The /simple/{name}/{version}/ pages come from a time when there wasn't a simple > index I think and sometimes packages would need to go to /pypi/foo/version/ > in order to actually get a list of things. However we now do have the simple > API and AFAICT the /simple/ API does not nor has ever had a response for > /simple/{name}/{version}/. This always 404's and falls back to the > /simple/{name}/. I would like to consider this URL unsupported in pip and remove > checking for it. It will reduce the number of needless HTTP requests by one > per pinned version. > > Does anyone know anything this will break? Sounds fine to me to remove the /{version}/ check. FWIW devpi also doesn't support it. > The other thing that happens is if the /simple/{name}/ thing 404's it'll > fall back to /simple/. This is done so that if someone mistypes a name in a way > that is still considered equivalent after normalization is applied, instead > of a 404 they get the /simple/ page and the tooling can discover the name from > there. > > If you remember back a little while ago I changed PyPI so that it considered > the normalized form of the name the "canonical" name for the simple index, > this means that tooling will be able to know ahead of time if a project called > say "Django" should be requested with /simple/Django/ or /simple/django/. > > What I would like to do now is remove the fallback to /simple/. If we fall back > to that it is a 2.1MB download that occurs which is a fairly big deal and can > slow down a ``pip install`` quite significantly. I have a PR against > bandersnatch which will make bandersnatch generate a /simple/{name}/ URL where > name is the normalized form, and another PR against pip which will cause it > to always request the normalized form. When both of these land it would mean > that the only time pip will fall back to /simple/ is: > > 1.
If someone is using a *new* pip (with the PR merged) but with a mirror that > doesn't support the normalized form of the URLs (old bandersnatch, > pep381client, maybe others?) I think devpi-server would also still redirect on normalized form to the "real name" but we plan to change that to behave as pypi does now (serve everything under the normalized name). How do you plan to detect if the "normalized form" is supported by a server? > 2. If someone typed ``pip install <name>`` where <name> is a thing that doesn't > actually exist. If we had a meta header "pypi-simple-protocol-version" set to "1" by pypi.python.org and compliant servers we could avoid all falling back to simple/ safely. You do a single lookup for each project and are done, safely, no falling back to simple/ at all. FWIW devpi-server on simple/ already returns a "200" with an empty link set for projects that don't exist. I've never had complaints from users about this hack. Note that pip/easy_install don't behave much differently today if they get an empty link set on simple/ or a 404 and then no match on the full simple/ page. > Does anyone have any complaints if pip stopped falling back to /simple/ once > the bandersnatch PR is merged and released? I am fine, preferably along with introducing an indication from pypi.python.org about the simple-api version. > Furthermore, does anyone have any problems with narrowing the "supported URLs" > of the "simple API" to /simple/ and /simple/{name}/ and making > fetching /simple/ optional? Thanks for cleaning up the protocols. holger From ncoghlan at gmail.com Thu Sep 18 09:48:33 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Sep 2014 17:48:33 +1000 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Message-ID: What about an approach where pip first tries the canonical name, and if that fails, tries the exact given name?
Seems to me like that should handle legacy mirrors without the big download. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Sep 18 09:49:34 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Sep 2014 17:49:34 +1000 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Message-ID: On 18 Sep 2014 17:48, "Nick Coghlan" wrote: > > What about an approach where pip first tries the canonical name, and if that fails, tries the exact given name? And by canonical I mean normalised. > > Seems to me like that should handle legacy mirrors without the big download. > > Cheers, > Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Thu Sep 18 09:54:30 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 18 Sep 2014 07:54:30 +0000 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Message-ID: <20140918075430.GD3692@merlinux.eu> On Thu, Sep 18, 2014 at 17:48 +1000, Nick Coghlan wrote: > What about an approach where pip first tries the canonical name, and if > that fails, tries the exact given name? > > Seems to me like that should handle legacy mirrors without the big download. The download could still happen, however, and then you'd have made three requests (especially for non-existing projects). If we introduce a "simple-protocol-api" version, we can avoid all this trying around and provide correct efficient behaviour when using new versions of tools/servers. 
best, holger From donald at stufft.io Thu Sep 18 12:17:55 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Sep 2014 06:17:55 -0400 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> Message-ID: <4880A51C-2BA6-4F7A-B56D-8FD1119114D6@stufft.io> > On Sep 18, 2014, at 3:48 AM, Nick Coghlan wrote: > > What about an approach where pip first tries the canonical name, and if that fails, tries the exact given name? > > Seems to me like that should handle legacy mirrors without the big download. > > Cheers, > Nick. > The exact implementation I had in mind has the /simple/{name}/{version}/ url being removed immediately since it's a cost that's *always* paid if you use == and as far as I know it's not supported/used anywhere. However the fallback to /simple/ would go through the 2 release deprecation cycle since it's both actually useful for use with older mirrors and it's only paid if it needs to be used right now. I think that the deprecation cycle is probably fine to handle that, because even if we do as you suggested we'll still need to fall back to /simple/ because of cases like ``pip install django`` where the "real" name is Django and we have no way of knowing that. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From holger at merlinux.eu Thu Sep 18 12:22:19 2014 From: holger at merlinux.eu (holger krekel) Date: Thu, 18 Sep 2014 10:22:19 +0000 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <4880A51C-2BA6-4F7A-B56D-8FD1119114D6@stufft.io> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> <4880A51C-2BA6-4F7A-B56D-8FD1119114D6@stufft.io> Message-ID: <20140918102219.GG3692@merlinux.eu> On Thu, Sep 18, 2014 at 06:17 -0400, Donald Stufft wrote: > > On Sep 18, 2014, at 3:48 AM, Nick Coghlan wrote: > > > > What about an approach where pip first tries the canonical name, and if that fails, tries the exact given name? > > > > Seems to me like that should handle legacy mirrors without the big download. > > > > Cheers, > > Nick. > > > > The exact implementation I had in mind has the /simple/{name}/{version}/ url > being removed immediately since it's a cost that's *always* paid if you use == > and as far as I know it's not supported/used anywhere. I agree and think that's just fine. > However the fallback to > /simple/ would go through the 2 release deprecation cycle since it's both > actually useful for use with older mirrors and it's only paid if it needs to > be used right now. > > I think that the deprecation cycle is probably fine to handle that, because > even if we do as you suggested we'll still need to fall back to /simple/ > because of cases like ``pip install django`` where the "real" name is Django > and we have no way of knowing that. Not sure I follow. Do you mean "pip install django" needs to fall back to /simple/ when it works against a static file server? What significance does the "real" name have in "pip install" context?
holger From donald at stufft.io Thu Sep 18 12:25:37 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Sep 2014 06:25:37 -0400 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <20140918072726.GB3692@merlinux.eu> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> <20140918072726.GB3692@merlinux.eu> Message-ID: <0A48BE45-9F4C-4CD6-A02B-6F0F916CC1FC@stufft.io> > On Sep 18, 2014, at 3:27 AM, holger krekel wrote: > > On Wed, Sep 17, 2014 at 20:59 -0400, Donald Stufft wrote: >> Right now pip (and originally setuptools, which does it as well) will do this >> sort of dance when looking for things on the PyPI simple index. This isn't the >> actual code though: >> >> thing_to_install = "foo==1.0" >> page = None >> >> if thing_to_install.contains("=="): # First look at a versioned url if == >> page = request_url( >> "https://pypi.python.org/simple/" + thing_to_install.name >> + "/" + thing_to_install.version >> ) >> >> if not page: # If we don't have something, look for unversioned >> page = request_url( >> "https://pypi.python.org/simple/" + thing_to_install.name >> ) >> >> if not page: # Finally, look at the /simple/ index itself >> page = request_url("https://pypi.python.org/simple/") >> >> # From here, look at the page to discover things. >> >> >> As far as I can tell a lot of this is largely historical. >> >> The /simple/{name}/{version}/ pages come from a time when there wasn't a simple >> index I think and sometimes packages would need to go to /pypi/foo/version/ >> in order to actually get a list of things. However we now do have the simple >> API and AFAICT the /simple/ API does not nor has ever had a response for >> /simple/{name}/{version}/. This always 404's and falls back to the >> /simple/{name}/. I would like to consider this URL unsupported in pip and remove >> checking for it. It will reduce the number of needless HTTP requests by one >> per pinned version. >> >> Does anyone know anything this will break?
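[Editor's note: the "dance" in the quoted pseudocode above can be sketched as a small runnable function. This is illustrative only, not pip's actual code; the function name and the URL layout simply mirror the quoted message.]

```python
def candidate_urls(requirement, index="https://pypi.python.org/simple"):
    """Return the /simple/ URLs an installer would try, in fallback order.

    Mirrors the quoted pseudocode: versioned page first (if ``==`` is
    used), then the per-project page, then the full index as last resort.
    """
    urls = []
    if "==" in requirement:
        name, version = requirement.split("==", 1)
        # 1. Versioned page -- on PyPI this always 404s, hence the
        #    proposal to drop it entirely.
        urls.append(f"{index}/{name}/{version}/")
    else:
        name = requirement
    # 2. The per-project page.
    urls.append(f"{index}/{name}/")
    # 3. Last resort: the full /simple/ index (a ~2.1MB download).
    urls.append(f"{index}/")
    return urls
```

Dropping step 1, as proposed, removes one guaranteed-useless request per pinned requirement.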
> > Sounds fine to me to remove the /{version}/ check. FWIW devpi also doesn't > support it. > >> The other thing that happens is if the /simple/{name}/ thing 404's it'll >> fall back to /simple/. This is done so that if someone mistypes a name in a way >> that is still considered equivalent after normalization is applied, instead >> of a 404 they get the /simple/ page and the tooling can discover the name from >> there. >> >> If you remember back a little while ago I changed PyPI so that it considered >> the normalized form of the name the "canonical" name for the simple index, >> this means that tooling will be able to know ahead of time if a project called >> say "Django" should be requested with /simple/Django/ or /simple/django/. >> >> What I would like to do now is remove the fallback to /simple/. If we fall back >> to that it is a 2.1MB download that occurs which is a fairly big deal and can >> slow down a ``pip install`` quite significantly. I have a PR against >> bandersnatch which will make bandersnatch generate a /simple/{name}/ URL where >> name is the normalized form, and another PR against pip which will cause it >> to always request the normalized form. When both of these land it would mean >> that the only time pip will fall back to /simple/ is: >> >> 1. If someone is using a *new* pip (with the PR merged) but with a mirror that >> doesn't support the normalized form of the URLs (old bandersnatch, >> pep381client, maybe others?) > > I think devpi-server would also still redirect on normalized form to > the "real name" but we plan to change that to behave as pypi does now > (serve everything under the normalized name). How/do you plan to detect > if the "normalized form" is supported from a server? Yea I forgot to explicitly mention devpi-server, but I knew it did the redirect so it would be fine there. I don't plan on detecting it, I just plan on sending it through the normal deprecation cycle in pip.
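[Editor's note: for reference, the "normalized form" under discussion can be sketched in a couple of lines of Python. This is the rule later standardized in PEP 503; the function names here are illustrative, not pip's or PyPI's API.]

```python
import re

def normalize(name):
    """Normalize a project name: lowercase it and collapse runs of
    ".", "-", and "_" into a single "-" (the PEP 503 rule)."""
    return re.sub(r"[-_.]+", "-", name).lower()

def simple_url(name, index="https://pypi.python.org/simple"):
    """Compute the /simple/{name}/ URL without knowing the "real"
    registered name -- the point of serving under normalized names."""
    return f"{index}/{normalize(name)}/"
```

With this rule, ``pip install django`` and ``pip install Django`` compute the same URL up front, so no /simple/ fallback is needed to discover the registered spelling.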
Since this doesn't reflect any changes on the PyPI side beyond what's already been done, and the changes to bandersnatch/mirrors can be made in a completely backwards compatible fashion the only problem would be when you use a *new* pip with an older mirroring client or index server. I think a 2 release deprecation cycle (so pip 6.0 is our next release, it would be deprecated there and ultimately removed in pip 8.0) is ample time for folks to see the warning and adjust their own internal mirrors. > >> 2. If someone typed ``pip install `` where is a thing that doesn't >> actually exist. > > If we had a meta header "pypi-simple-protocol-version" set to "1" by > pypi.python.org and compliant servers we could avoid all falling back to > simple/ safely. You do a single lookup for each project and are done, > safely, no falling back to simple/ at all. > > FWIW devpi-server on simple/ already returns a "200" with an empty > link set for projects that don't exist. I've never had complaints from > users about this hack. Note that pip/easy_install don't behave much > differently today if they get an empty link set on simple/ or a > 404 and then no match on the full simple/ page. Correct, if there's no match on the /simple/ page, then it's functionally equivalent to getting a blank /simple/{name}/ except there's one less HTTP request made. I don't think we need to introduce a header, other than the change to serve normalized URLs by default this isn't technically a change in what PyPI serves, just a change in what the installers are expected to go and fetch. > >> Does anyone have any complaints if pip stopped falling back to /simple/ once >> the bandersnatch PR is merged and released? > > I am fine, preferably along with introducing an indication from pypi.python.org > about the simple-api version. > >> Furthermore does anyone have any problems with narrowing the "supported URLS" >> of the "simple API" to /simple/ and /simple// and make >> fetching /simple/ optional?
> > Thanks for cleaning up the protocols. Heh, no worries :) Hopefully we'll end up with this all simplified so that pip/index.py can be cleaned up and simplified and a nice PEP can be written up with a final "this is the Simple API" and no longer need to have it spread over multiple PEPs, setuptools documentation, and pip/setuptools source code. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Sep 18 12:31:34 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Sep 2014 06:31:34 -0400 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <20140918102219.GG3692@merlinux.eu> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> <4880A51C-2BA6-4F7A-B56D-8FD1119114D6@stufft.io> <20140918102219.GG3692@merlinux.eu> Message-ID: <2D842B16-F874-48B0-BC26-DEAD385230D7@stufft.io> > On Sep 18, 2014, at 6:22 AM, holger krekel wrote: > > On Thu, Sep 18, 2014 at 06:17 -0400, Donald Stufft wrote: >>> On Sep 18, 2014, at 3:48 AM, Nick Coghlan wrote: >>> >>> What about an approach where pip first tries the canonical name, and if that fails, tries the exact given name? >>> >>> Seems to me like that should handle legacy mirrors without the big download. >>> >>> Cheers, >>> Nick. >>> >> >> The exact implementation I had in mind has the /simple/{name}/{version}/ url >> being removed immediately since it's a cost that's *always* paid if you use == >> and as far as I know it's not supported/used anywhere. > > I agree and think that's just fine. > >> However the fallback to /simple/ would go through the 2 release deprecation cycle since it's both >> actually useful for use with older mirrors and it's only paid if it needs to >> be used right now.
>> >> I think that the deprecation cycle is probably fine to handle that, because >> even if we do as you suggested we'll still need to fall back to /simple/ >> because of cases like ``pip install django`` where the "real" name is Django >> and we have no way of knowing that. > > Not sure I follow. Do you mean "pip install django" needs to fall back > to /simple/ when it works against a static file server? > What significance does the "real" name have in "pip install" context? As of right now, bandersnatch uses the "real" name of a project as the directory name, so in the case of Django, which capitalizes its "real" name on PyPI, it has URLs like /simple/Django/. Historically this matched what PyPI did. Since bandersnatch is a static mirror it generally couldn't do a redirect from /simple/django/ to /simple/Django/ so if someone typed ``pip install django`` pip would request /simple/django/, get a 404 and then request /simple/, find that the "real" name of "django" is "Django", and then it will request /simple/Django/. The benefit of the change I made to PyPI to prefer the normalized name instead of the "real" name is that tools like pip can know ahead of time what /simple/{name}/ URL to request regardless of what the "real" name is on PyPI. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at zope.com Thu Sep 18 15:57:16 2014 From: jim at zope.com (Jim Fulton) Date: Thu, 18 Sep 2014 09:57:16 -0400 Subject: [Distutils] Buildout setuid In-Reply-To: References: Message-ID: On Wed, Sep 17, 2014 at 1:47 PM, Lennart Regebro wrote: > While writing a blog post about software configuration management I looked > into buildout, and using it as an SCM tool. And it has one big restriction: > > You can't run certain parts as root. > > I think adding that would actually not be too hard. Are there any principal > arguments against it?
I looked at making an extension, but I would need a > hook that is run before and after each step in that case. > > I was thinking that you could define which parts should run as root in one > of two ways: > > 1. A parameter in the part config > 2. Having a global configuration with a list of parts. This for the case > when the parts recipe itself has a parameter that clashes with the parameter > in 1. > > I'm leaning towards having a setuid parameter, so you can set to other id's > than 0. > > Technically it would be done by setuid to root for the configured parts, and > then back after it has run. You would have to run buildout as a whole with > sudo for this to work. It would use the login name as the "normal" setuid, > unless configured explicitly with a global setuid parameter. > > Thoughts? We deploy to production with buildout and have never needed this. Our approach is to have separate buildouts for building software (RPMs currently) and for deploying to production machines. The deployment buildouts are run as root (typically from another process that runs from root, https://bitbucket.org/zc/zkdeployment). These 2 buildouts are run at very different times and situations. A better approach IMO is to deploy with Docker. With Docker, all of your "deployment" is done when you build an image, still as root. Unfortunately, our current scheme is working well enough and we have enough other priorities that I fear I won't find time to dockerfy our processes soon. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From barrycroff1 at gmail.com Thu Sep 18 18:03:33 2014 From: barrycroff1 at gmail.com (barry croff) Date: Fri, 19 Sep 2014 00:03:33 +0800 Subject: [Distutils] PEX at Twitter (re: PEX - Twitter's multi-platform executable archive format for Python) Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pydanny at gmail.com Fri Sep 19 20:47:32 2014 From: pydanny at gmail.com (Daniel Greenfeld) Date: Fri, 19 Sep 2014 11:47:32 -0700 Subject: [Distutils] Create formal process for claiming 'abandoned' packages Message-ID: In order to claim a package as being abandoned it should undergo a formal process that includes: * Placement on a PUBLIC list of packages under review for a grace period to be determined by this discussion * Formal attempts via email and social media (twitter, github, et al) to contact the maintainer. * Investigation of the claimant for the rights to the package. The parties attempting to claim a package may not be the best representatives of the community behind that package, or the Python community in general. Why? * Non-reply does not equal consent. * Access to a commonly (or uncommonly) used package poses security and reliability issues. Why: Scenario 1: I could claim ownership of the redis package, providing a certain-to-fail email for the maintainers of PyPI to investigate? Right now the process leads me to think I would succeed in gaining access. If successful, I would gain complete access to a package used by hundreds of projects for persistence storage. Scenario 2: I could claim ownership of the redis package, while Andy McCurdy (maintainer) was on vacation for two weeks, or sabbatical for six weeks. Again, I would gain access because under the current system non-reply equals consent. Reference: In ticket #407 (https://sourceforge.net/p/pypi/support-requests/407/) someone who does not appear to be vetted managed to gain control of the (arguably) abandoned but still extremely popular django-registration on PyPI. They run one of several HUNDRED forks of django-registration, one that is arguably not the most commonly used. 
My concern is that as django-registration is the leading package for handling system registration for Python's most popular web framework, handing it over without a full investigation of not just the current maintainer but also the candidate maintainer is risky. Regards, Daniel Greenfeld pydanny at gmail.com From richard at python.org Fri Sep 19 23:55:13 2014 From: richard at python.org (Richard Jones) Date: Sat, 20 Sep 2014 07:55:13 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On 20 September 2014 04:47, Daniel Greenfeld wrote: > In order to claim a package as being abandoned it should undergo a > formal process that includes: > > * Placement on a PUBLIC list of packages under review for a grace > period to be determined by this discussion > This is not done at present. Can you suggest a public forum that would reach a useful audience? > * Formal attempts via email and social media (twitter, github, et al) > to contact the maintainer. > This is done at present, using the contact details registered with pypi. Or other contact methods if that fails. I always default to asking the current maintainer of a package to transfer it to a new maintainer. > * Investigation of the claimant for the rights to the package. The > parties attempting to claim a package may not be the best > representatives of the community behind that package, or the Python > community in general. > I'm not sure how I could do this reasonably given the breadth of packages in the index, and the size and number of Python communities. How could I possibly determine this? In the open source world, how do you vet someone, especially when the original maintainer is unresponsive? > Why? > > * Non-reply does not equal consent. > That's a reasonable statement, but if this were to be held then a large number of stagnating package listings would have remained in that state. 
> * Access to a commonly (or uncommonly) used package poses security and > reliability issues. > > Why: > > Scenario 1: > > I could claim ownership of the redis package, providing a > certain-to-fail email for the maintainers of PyPI to investigate? > I attempt contact through other channels. I don't rely just on information provided by the requestor. > Scenario 2: > > I could claim ownership of the redis package, while Andy McCurdy > (maintainer) was on vacation for two weeks, or sabbatical for six > weeks. Again, I would gain access because under the current system > non-reply equals consent. > I tend to wait one month, but yes a six month sabbatical would be a problem. On the other hand, I do make every attempt to contact > Reference: > > In ticket #407 (https://sourceforge.net/p/pypi/support-requests/407/) > someone who does not appear to be vetted managed to gain control of > the (arguably) abandoned but still extremely popular > django-registration on PyPI. They run one of several HUNDRED forks of > django-registration, one that is arguably not the most commonly used. > > My concern is that as django-registration is the leading package for > handling system registration for Python's most popular web framework, > handing it over without a full investigation of not just the current > maintainer but also the candidate maintainer is risky. > And my counter is that I get a lot of these requests, I do my best to try to contact the original maintainer, and in the absence of any other information I need to take the requestor at their word. In the case of the request above, I contacted the original maintainer directly, using an address I knew to work, and received no response. To me that correlated well with the indication that he wanted nothing to do with the package any longer. 
Someone keen enough had come forward to provide updated versions of the package, amongst what you claim are hundreds of such forks (recognising that github forks are a very poor method to judge how engaged someone is with a project). In light of that, I granted that person permission to provide updates for that project. Thanks for your thoughts. The procedure I use should be written down, I guess, but I'm the only person who follows it, so the motivation to do so is very low. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Sep 20 00:22:54 2014 From: donald at stufft.io (Donald Stufft) Date: Fri, 19 Sep 2014 18:22:54 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: <9FA232F5-17C1-40E2-9F20-EAA607BD2BDB@stufft.io> > On Sep 19, 2014, at 5:55 PM, Richard Jones wrote: > > On 20 September 2014 04:47, Daniel Greenfeld > wrote: > In order to claim a package as being abandoned it should undergo a > formal process that includes: > > * Placement on a PUBLIC list of packages under review for a grace > period to be determined by this discussion > > This is not done at present. Can you suggest a public forum that would reach a useful audience? > > > * Formal attempts via email and social media (twitter, github, et al) > to contact the maintainer. > > This is done at present, using the contact details registered with pypi. Or other contact methods if that fails. > > I always default to asking the current maintainer of a package to transfer it to a new maintainer. > > > * Investigation of the claimant for the rights to the package. The > parties attempting to claim a package may not be the best > representatives of the community behind that package, or the Python > community in general. > > I'm not sure how I could do this reasonably given the breadth of packages in the index, and the size and number of Python communities.
How could I possibly determine this? In the open source world, how do you vet someone, especially when the original maintainer is unresponsive? > > > Why? > > * Non-reply does not equal consent. > > That's a reasonable statement, but if this were to be held then a large number of stagnating package listings would have remained in that state. > > > * Access to a commonly (or uncommonly) used package poses security and > reliability issues. > > Why: > > Scenario 1: > > I could claim ownership of the redis package, providing a > certain-to-fail email for the maintainers of PyPI to investigate? > > I attempt contact through other channels. I don't rely just on information provided by the requestor. > > > Scenario 2: > > I could claim ownership of the redis package, while Andy McCurdy > (maintainer) was on vacation for two weeks, or sabbatical for six > weeks. Again, I would gain access because under the current system > non-reply equals consent. > > I tend to wait one month, but yes a six month sabbatical would be a problem. On the other hand, I do make every attempt to contact > > > Reference: > > In ticket #407 (https://sourceforge.net/p/pypi/support-requests/407/ ) > someone who does not appear to be vetted managed to gain control of > the (arguably) abandoned but still extremely popular > django-registration on PyPI. They run one of several HUNDRED forks of > django-registration, one that is arguably not the most commonly used. > > My concern is that as django-registration is the leading package for > handling system registration for Python's most popular web framework, > handing it over without a full investigation of not just the current > maintainer but also the candidate maintainer is risky. > > And my counter is that I get a lot of these requests, I do my best to try to contact the original maintainer, and in the absence of any other information I need to take the requestor at their word. 
In the case of the request above, I contacted the original maintainer directly, using an address I knew to work, and received no response. To me that correlated well with the indication that he wanted nothing to do with the package any longer. Someone keen enough had come forward to provide updated versions of the package, amongst what you claim are hundreds of such forks (recognising that github forks are a very poor method to judge how engaged someone is with a project). In light of that, I granted that person permission to provide updates for that project. > > Thanks for your thoughts. The procedure I use should be written down, I guess, but I'm the only person who follows it, so the motivation to do so is very low. > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Perhaps in Warehouse the procedure can be automated to some degree and a public record of what actions were taken and when? I don't mean like a public log of the actual email address or email content or anything of the sort. Just like an "attempted to contact on X date", "notified X thing on Y", "No response in X time, transferring ownership" kind of things. Maybe we could create something like python-updates which would be a read only mailing list which just posts a thread per request and updates it with the actions taken and stuff. People who care could subscribe to it without having to get all of distutils-sig or whatever. Maybe it could even offer package authors the ability to mark a package as "Request For Adoption" saying that they have a package that they wrote, but that they no longer wish to maintain. I don't know, I'm just tossing out some potential ideas! --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ncoghlan at gmail.com Sat Sep 20 00:34:19 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Sep 2014 08:34:19 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: <9FA232F5-17C1-40E2-9F20-EAA607BD2BDB@stufft.io> References: <9FA232F5-17C1-40E2-9F20-EAA607BD2BDB@stufft.io> Message-ID: On 20 September 2014 08:22, Donald Stufft wrote: > Perhaps in Warehouse the procedure can be automated to some degree > and a public record of what actions were taken and when? I don't mean like > a public log of the actual email address or email content or anything of the > sort. Just like an "attempted to contact on X date", "notified X thing on Y", > "No response in X time, transferring ownership" kind of things. > > Maybe we could create something like python-updates which would be a read > only > mailing list which just posts a thread per request and updates it with the > actions taken and stuff. People who care could subscribe to it without > having > to get all of distutils-sig or whatever. > > Maybe it could even offer package authors the ability to mark a package as > "Request For Adoption" saying that they have a package that they wrote, but > that they no longer wish to maintain. > > I don't know, I'm just tossing out some potential ideas! Yep, for this kind of thing, "automate" can be a better answer than "document" - it's much easier to delegate (or otherwise hand over the reins) when the process is built into the tools. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pydanny at gmail.com Sat Sep 20 00:23:06 2014 From: pydanny at gmail.com (Daniel Greenfeld) Date: Fri, 19 Sep 2014 15:23:06 -0700 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: Hi Richard, On Fri, Sep 19, 2014 at 2:55 PM, Richard Jones wrote: > On 20 September 2014 04:47, Daniel Greenfeld wrote: >> >> In order to claim a package as being abandoned it should undergo a >> formal process that includes: >> >> * Placement on a PUBLIC list of packages under review for a grace >> period to be determined by this discussion > > This is not done at present. Can you suggest a public forum that would reach > a useful audience? What about a page on PyPI that tracks packages undergoing this review? PyPI has a huge audience. "In theory" all this requires is just a few additional fields added. >> * Formal attempts via email and social media (twitter, github, et al) >> to contact the maintainer. > > This is done at present, using the contact details registered with pypi. Or > other contact methods if that fails. > > I always default to asking the current maintainer of a package to transfer > it to a new maintainer. It would be nice to have this documented on PyPI. I would be more than willing to write this down for you. >> * Investigation of the claimant for the rights to the package. The >> parties attempting to claim a package may not be the best >> representatives of the community behind that package, or the Python >> community in general. > > I'm not sure how I could do this reasonably given the breadth of packages in > the index, and the size and number of Python communities. How could I > possibly determine this? In the open source world, how do you vet someone, > especially when the original maintainer is unresponsive? Honestly? I'm not sure either. I know the people that I know, and can research a segment of the community. 
However, I'm well aware that this is a tiny portion of who is actually using python. >> >> Why? >> >> * Non-reply does not equal consent. > > That's a reasonable statement, but if this were to be held then a large > number of stagnating package listings would have remained in that state I concur. Which is why I suggested creating a page that tracks packages undergoing the transfer-of-ownership grace period. That would mean more eyes on the issue, as well as provide a means for eventually automating things in order to relieve you of the workload of maintenance. >> * Access to a commonly (or uncommonly) used package poses security and >> reliability issues. >> >> Why: >> >> Scenario 1: >> >> I could claim ownership of the redis package, providing a >> certain-to-fail email for the maintainers of PyPI to investigate? > > I attempt contact through other channels. I don't rely just on information > provided by the requestor. Knowing you, I would be surprised if it were any other way. ;) I believe documenting this process of communication would cast light on the process. And would mean that you could more easily enlist others to help you. I would be honored to document this or any other part of this system. >> Reference: >> >> In ticket #407 (https://sourceforge.net/p/pypi/support-requests/407/) >> someone who does not appear to be vetted managed to gain control of >> the (arguably) abandoned but still extremely popular >> django-registration on PyPI. They run one of several HUNDRED forks of >> django-registration, one that is arguably not the most commonly used. >> >> My concern is that as django-registration is the leading package for >> handling system registration for Python's most popular web framework, >> handing it over without a full investigation of not just the current >> maintainer but also the candidate maintainer is risky. 
> > > And my counter is that I get a lot of these requests, I do my best to try to > contact the original maintainer, and in the absence of any other information > I need to take the requestor at their word. In the case of the request > above, I contacted the original maintainer directly, using an address I knew > to work, and received no response. To me that correlated well with the > indication that he wanted nothing to do with the package any longer. Someone > keen enough had come forward to provide updated versions of the package, > amongst what you claim are hundreds of such forks (recognising that github > forks are a very poor method to judge how engaged someone is with a > project). In light of that, I granted that person permission to provided > updates for that project. > > Thanks for your thoughts. The procedure I use should be written down, I > guess, but I'm the only person who follows it, so the motivation to do so is > very low. Having maintained enough projects of my own, I really do understand your point of view. People ask for things, but it's rare that they will actually provide assistance. It's tiring and frustrating, since they always want you to put in more time, usually without offering to help in any way. So let me say right now that I want to help: * I will help with documenting the process. You can tell it to me in any format you want, written or verbal, and then I'll write it up. * I would like to help with modifying PyPI to create a tracking process for transfer-of-ownership. * I would be honored to pitch in for maintenance of this part of things, and can also issue a call for assistance for more help. I know you do a lot of work on PyPI. I can't begin to tell you how much that is appreciated. Let me help you. 
Sincerely, Danny From ubernostrum at gmail.com Sat Sep 20 00:52:05 2014 From: ubernostrum at gmail.com (James Bennett) Date: Fri, 19 Sep 2014 17:52:05 -0500 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On Fri, Sep 19, 2014 at 4:55 PM, Richard Jones wrote: > This is done at present, using the contact details registered with pypi. > Or other contact methods if that fails. > I always default to asking the current maintainer of a package to transfer > it to a new maintainer. > Could you clarify when and how you attempted that contact in this case? At the email address on file for me at PyPI, I have received one email from you regarding PyPI, and it was the automated message regarding the Python wiki password breach. Additionally, the requesting party had contacted me, and we had a brief but inconclusive discussion regarding whether it would be a good idea for the package to be resurrected under a new maintainer. The fact that I literally woke up from a nap to find someone else had been assigned as an owner of one of my packages -- even one I've publicly stepped down as maintainer of -- without any notice to me that I can find from the PyPI side (I found out from seeing my name mentioned on Twitter, then saw this email thread), has placed me in a position where my faith in PyPI's security is now exactly zero, and I'm forced to consider whether I want to continue hosting packages there. For now I have removed user 'macropin' from django-registration on PyPI. Do not make any further changes to the package's records/roles/etc. on PyPI unless I request it of you, via GPG-signed mail (my key is available quite publicly courtesy of Django releases). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ubernostrum at gmail.com Sat Sep 20 00:56:08 2014 From: ubernostrum at gmail.com (James Bennett) Date: Fri, 19 Sep 2014 17:56:08 -0500 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On further information, it seems the contact attempt was a message to my gmail address, which is not the contact information I have on file for PyPI, and is the address I use for bulk things like mailing lists. I am now more frightened that missing an email to an alternate address (that address gets literally hundreds of messages/day, and I do not attempt to read all of them, hence I don't use it as a contact address for important things) can result in this type of action. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Sat Sep 20 01:13:49 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 19 Sep 2014 23:13:49 +0000 (UTC) Subject: [Distutils] =?utf-8?q?Create_formal_process_for_claiming_=27aband?= =?utf-8?q?oned=27=09packages?= References: Message-ID: I **strongly** concur with James here. This has flagrantly violated my trust in PyPI. I would much rather packages not be reclaimed than need to think about whether I trust the PyPI maintainers to do it. Alex From ethan at stoneleaf.us Sat Sep 20 01:28:50 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 19 Sep 2014 16:28:50 -0700 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: <541CBC32.2000404@stoneleaf.us> On 09/19/2014 04:13 PM, Alex Gaynor wrote: > > I **strongly** concur with James here. This has flagrantly violated my trust in > PyPI. > > I would much rather packages not be reclaimed than need to think about whether > I trust the PyPI maintainers to do it. Having PyPI become full of cruft is not a tenable situation. People make mistakes. 
Instead of flaming Richard we should figure out how to make the process better for everyone, Richard included. Thank you, Daniel, for taking the lead on that. -- ~Ethan~ From richard at python.org Sat Sep 20 03:26:14 2014 From: richard at python.org (Richard Jones) Date: Sat, 20 Sep 2014 11:26:14 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: Hi all, Having had some time to think this over, I will attempt to explain what the current process is, and how I believe I should change it. It's worth noting that I'm the only person who handles support issues for PyPI (years ago Martin von Lowis also did this, and Donald Stufft has handled one or two cases over the years). There are various reasons for this, not the least of which is that direct ssh/database access is often required to investigate ownership issues. When someone requests to take over a listing on PyPI, the process is: * If the request comes in through some means other than the sf.net support tracker, I require the requestor to make the request through that tracker so there is a record, * I ask whether they have contacted the current owner, * I personally contact the owner through whatever means I have (sometimes this means using the address listed for the user in PyPI, sometimes that address is not valid so I use other means where possible), * If contact is made, I ask the current owner to grant the requestor ownership if they feel it is appropriate, * If contact is not made after one month, I add the requestor as an owner. Between the support tracker and PyPI's audit log, all those actions are recorded. However, in this instance, two things did not happen: * I did not record that I had attempted to contact James in the tracker, and * I did not use the listed contact address for James in my attempt to contact him, rather using the address I had in my personal address book. I cannot definitively explain why I didn't do the first step.
On the second count though, I can only claim laziness combined with my usually handling these requests in a bunch at 5pm or later after a work day (basically, when I can find a few moments to deal with the backlog). Actually, I think I might have been in an airport transit lounge in that particular instance. It was just easier to use the address I knew than to go through the hoops to find out the correct address to use. Not trying to excuse myself, just explain. There's been some suggestions made: * Publicly announcing the intention to make the change is a good one, though again finding an appropriate forum that enough people would actually read is tricky. * Implement some sort of automated process. Given that we've struggled to produce Warehouse over *years* of development, I don't see this happening any time soon. In light of this specific case, I have an additional change that I think I'll implement to attempt to prevent it again: In the instances where the current owner is unresponsive to my attempts to contact them, *and* the project has releases in the index, I will not transfer ownership. In the cases where no releases have been made I will continue to transfer ownership. Your thoughts, as always, are welcome. Thanks to Danny for bringing the issue up, and to James and Alex for presenting their security concerns so clearly. Richard On 20 September 2014 04:47, Daniel Greenfeld wrote: > In order to claim a package as being abandoned it should undergo a > formal process that includes: > > * Placement on a PUBLIC list of packages under review for a grace > period to be determined by this discussion > * Formal attempts via email and social media (twitter, github, et al) > to contact the maintainer. > * Investigation of the claimant for the rights to the package. The > parties attempting to claim a package may not be the best > representatives of the community behind that package, or the Python > community in general. > > Why? 
> > * Non-reply does not equal consent. > * Access to a commonly (or uncommonly) used package poses security and > reliability issues. > > Why: > > Scenario 1: > > I could claim ownership of the redis package, providing a > certain-to-fail email for the maintainers of PyPI to investigate? > Right now the process leads me to think I would succeed in gaining > access. If successful, I would gain complete access to a package used > by hundreds of projects for persistence storage. > > Scenario 2: > > I could claim ownership of the redis package, while Andy McCurdy > (maintainer) was on vacation for two weeks, or sabbatical for six > weeks. Again, I would gain access because under the current system > non-reply equals consent. > > Reference: > > In ticket #407 (https://sourceforge.net/p/pypi/support-requests/407/) > someone who does not appear to be vetted managed to gain control of > the (arguably) abandoned but still extremely popular > django-registration on PyPI. They run one of several HUNDRED forks of > django-registration, one that is arguably not the most commonly used. > > My concern is that as django-registration is the leading package for > handling system registration for Python's most popular web framework, > handing it over without a full investigation of not just the current > maintainer but also the candidate maintainer is risky. > > > Regards, > > Daniel Greenfeld > pydanny at gmail.com > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Steve.Dower at microsoft.com Sat Sep 20 01:09:03 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Fri, 19 Sep 2014 23:09:03 +0000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: <9FA232F5-17C1-40E2-9F20-EAA607BD2BDB@stufft.io> References: <9FA232F5-17C1-40E2-9F20-EAA607BD2BDB@stufft.io> Message-ID: Donald Stufft wrote: > Perhaps in Warehouse the procedure can be automated to some degree > and a public record of what actions were taken and when? I don't mean like > a public log of the actual email address or email content or anything of the > sort. Just like an "attempted to contact on X date", "notified X thing on Y", > "No response in X time, transferring ownership" kind of thing. > > Maybe we could create something like python-updates which would be a read only > mailing list which just posts a thread per request and updates it with the > actions taken and stuff. People who care could subscribe to it without having > to get all of distutils-sig or whatever. > > Maybe it could even offer package authors the ability to mark a package as > "Request For Adoption" saying that they have a package that they wrote, but > that they no longer wish to maintain. > > I don't know, I'm just tossing out some potential ideas! In the same vein, but at the more annoying end of the scale (I don't know how big a problem this really is) - build in an expiry date for every package. Every upload or ping from the maintainer extends it by 6-12 months, then automate a couple of emails in the last month before it expires. Nothing more has to happen, except that then it may be claimed by someone else and the communication has already been done. The manual process becomes checking a flag and the rest is automated. Regular emails also help make sure that you track email address changes better. I also like the "Request for Adoption" idea.
Plenty of projects get wide use without developing a community where it's easy to find someone to take over. Cheers, Steve From a.badger at gmail.com Sat Sep 20 05:43:45 2014 From: a.badger at gmail.com (Toshio Kuratomi) Date: Fri, 19 Sep 2014 23:43:45 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones wrote: > When someone requests to take over a listing on PyPI, the process is: > > * If the request comes in through some means other than the sf.net support > tracker, I require the requestor to make the request through that tracker so > there is a record, > * I ask whether they have contacted the current owner, > * I personally contact the owner through whatever means I have (sometimes > this means using the address listed for the user in PyPI, sometimes that > address is not valid so I use other means where possible), This seems like the step where change would be most fruitful. The idea of a public list mentioned before allows a variety of feedback: 1) The maintainer themselves 2) People who know the maintainer and have an alternate method to contact them 3) Other people who know the project and can raise an objection to the exact person who is being added as a new owner Another thought here is that it's often best to use every means of contacting someone that you reasonably have available. So if there's a valid email in pypi and a valid email in your contacts, use both. The public list idea essentially lets you crowdsource additional methods of contacting the maintainer. > There's been some suggestions made: > > * Publicly announcing the intention to make the change is a good one, though > again finding an appropriate forum that enough people would actually read is > tricky. If there's no appropriate forum, starting a new one might be the best.
"Uploaders to pypi" could certainly be seen as an audience that doesn't match well with any other existing mailing list. > > In light of this specific case, I have an additional change that I think > I'll implement to attempt to prevent it again: In the instances where the > current owner is unresponsive to my attempts to contact them, *and* the > project has releases in the index, I will not transfer ownership. In the > cases where no releases have been made I will continue to transfer > ownership. > This is tricky. There are certainly security issues with allowing just anyone to take over a popular package at any time. But there are also security concerns with letting a package bitrot on pypi. Say that the 4 pypi maintainers of Django or the 6 pypi maintainers of pip became unresponsive (it doesn't even have to be forever... that 6 month sabbatical could correspond with something happening to your co-maintainers as well). And the still active upstream project makes a new security fix that they need to get into the hands of their users ASAP. We don't want pypi to block that update from going out. Even if the project creates a new pypi package name and uploads there, would we really want the last package on pypi that all sorts of old documentation and blog posts on the internet are pointing to, to be the insecure one? So I don't think an absolute "we will never transfer ownership once code is released" is a good idea here. It's a good idea to increase the means used to determine if the current maintainer can be reached and it's a good idea to throw extra eyes at vetting whether a transfer is warranted. It may be a good idea to add more criteria around what makes for an allowable transfer (for instance, in my examples, there's still a large, well known canonical upstream even though the specific members of that upstream responsible for uploading to pypi have gone unresponsive.
That might be a valid criteria whereas one-coder projects being replaced by other one-coder forks might be a case where you simply say "rename please"). It could help to have other people involved in the decision making for this. At the least, having other people involved will spread responsibility. At best it gives the group additional man-hours to research the facts in the case. One final thought in regards to ticket 407. My impression from reading the notes is that this was not a complete invalidation of the current process. In the end, the current owner was alerted to the takeover attempt and also was in a position to do something about it since they disagreed with what was happening. Those are both points in favor of some pieces of the process (adding the new owner instead of replacing the owner). This might not be sufficient for a malicious attack on a project but it does show that the process does have some good features in terms of dealing with mistakes in communication. -Toshio From gokoproject at gmail.com Sat Sep 20 07:30:04 2014 From: gokoproject at gmail.com (John Wong) Date: Sat, 20 Sep 2014 01:30:04 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: Hi all. TL;DR version: I think * an option to enroll in automatic ownership transfer * an option to promote Request for Adoption * don't transfer unless there are no releases on the index will be reasonable to me. On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones wrote: > > In light of this specific case, I have an additional change that I think > I'll implement to attempt to prevent it again: In the instances where the > current owner is unresponsive to my attempts to contact them, *and* the > project has releases in the index, I will not transfer ownership. In the > cases where no releases have been made I will continue to transfer > ownership. 
> > I believe this is the best solution, and frankly, people in the OSS world have been forking all these years should someone disagree with the upstream or just believe they are better off with the fork. I am not a lawyer, but one has to look at any legal issue with ownership transfer. I am not trying to scare anyone, but the way I see ownership transfer (or even modifying the index on behalf of me) is the same as asking Twitter or Github to grant me a username simply because the account has zero activity. Between transferring ownership automatically after N trials and the above, I choose the above. Note not everyone is on Github, twitter. Email, er, email send/receive can go wrong. As a somewhat extreme but not entirely rare example, Satoshi Nakamoto and Bitcoin would be an interesting argument. If Bitcoin was published as a package on PyPI, should someone just go and ask for transfer? We know this person shared his code and the person disappeared. Is Bitcoin mission-critical? People downloaded the code, forked it and started building a community on their own. What I am illustrating here is that not everyone can be in touch. There are people who choose to remain anonymous, or away from popular social networks. Toshio Kuratomi wrote: > But there are > also security concerns with letting a package bitrot on pypi. > Again, I think that people should simply fork. The best we can do is simply prevent the packages from being downloaded again. Basically, shield all the packages from the public. We preserve what people did and had. We can post a notice so the public knows what is going on. Surely it sucks to have to use a fork when Django or Requests are forked and now everyone has to call it something different and rewrite their code. But that's the beginning of a new chapter. The community has to be reformed. It sucks but I think it is better in the long run. You don't have to argue with the original owner anymore in theory.
Last, I think it is reasonable to add Request for Adoption to PyPI. Owners who feel obligated to step down can promote the intent over PyPI John -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Sat Sep 20 17:26:14 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Sat, 20 Sep 2014 10:26:14 -0500 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On Sat, Sep 20, 2014 at 12:30 AM, John Wong wrote: > Hi all. > > TL;DR version: I think > > * an option to enroll in automatic ownership transfer > * an option to promote Request for Adoption > * don't transfer unless there are no releases on the index > > will be reasonable to me. > > On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones wrote: >> >> >> In light of this specific case, I have an additional change that I think >> I'll implement to attempt to prevent it again: In the instances where the >> current owner is unresponsive to my attempts to contact them, *and* the >> project has releases in the index, I will not transfer ownership. In the >> cases where no releases have been made I will continue to transfer >> ownership. >> > > I believe this is the best solution, and frankly, people in the OSS world > has been forking all these years > should someone disagree with the upstream or just believe they are better > off with the fork. I am not > a lawyer, but one has to look at any legal issue with ownership transfer. I > am not trying to scare > anyone, but the way I see ownership transfer (or even modifying the index on > behalf of me) is the same > as asking Twitter or Github to grant me a username simply because the > account has zero activity. > > Between transferring ownership automatically after N trials and the above, I > choose the above. > Note not everyone is on Github, twitter. Email, er, email send/receive can > go wrong. 
> > As a somewhat extreme but not entirely rare example, Satoshi Nakamoto and > Bitcoin would > be an interesting argument. If Bitcoin was published as a package on PyPI, > should someone > just go and ask for transfer? We know this person shared his codes and the > person disappeared. > Is Bitcoin mission-critical? People downloaded the code, fork it and started > building a community > on their own. What I am illustrating here is that not everyone can be in > touch. There are people > who choose to remain anonymous, or away from popular social network. > > Toshio Kuratomi wrote: >> >> But there are >> also security concerns with letting a package bitrot on pypi. > > > Again, I think that people should simply fork. The best we can do is simply > prevent > the packages from being downloaded again. Basically, shield all the packages > from public. We preserve what people did and had. We can post a notice > so the public knows what is going on. > > Surely it sucks to have to use a fork when Django or Requests are forked and > now everyone has to call it something different and rewrite their code. > But that's the beginning of a new chapter. The community has to be reformed. > It sucks but I think it is better in the long run. You don't have to argue > with the > original owner anymore in theory. > > Last, I think it is reasonable to add Request for Adoption to PyPI. > Owners who feel obligated to step down can promote the intent over PyPI > > John > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > I, for one, am happy that this conversation is happening because I wasn't aware of other communities that did this (but was aware that it happened on PyPI). That said, I would really appreciate people's suggestions be contained to improving the process, not towards modifying PyPI. 
At this point, as I understand it, PyPI is incredibly hard to modify safely for a number of reasons that others are likely better to speak to. Warehouse has a clear definition, design, and goals and I don't know if adding these on after-the-fact in a semi-haphazard way will improve anything. The more useful discussion right now will be to talk about process and how we can improve it and help Richard with it. Cheers From donald at stufft.io Sat Sep 20 17:34:23 2014 From: donald at stufft.io (Donald Stufft) Date: Sat, 20 Sep 2014 11:34:23 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> > On Sep 20, 2014, at 11:26 AM, Ian Cordasco wrote: > > I, for one, am happy that this conversation is happening because I > wasn't aware of other communities that did this (but was aware that it > happened on PyPI). That said, I would really appreciate people's > suggestions be contained to improving the process, not towards > modifying PyPI. At this point, as I understand it, PyPI is incredibly > hard to modify safely for a number of reasons that others are likely > better to speak to. Warehouse has a clear definition, design, and > goals and I don't know if adding these on after-the-fact in a > semi-haphazard way will improve anything. The more useful discussion > right now will be to talk about process and how we can improve it and > help Richard with it. > > Cheers > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig For the record, CPAN and npm both have similar things allowing someone to take over an abandoned project. I don't believe ruby gems has an official policy and it appears that they are hesitant to do this from the threads I've seen (Though they mentioned doing it for _why).
Most of the Linux distros have some mechanism for someone to claim that a particular package in the distro is no longer maintained and to attempt to take it over, though that is somewhat different. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Sep 20 23:55:16 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 21 Sep 2014 07:55:16 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> References: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> Message-ID: On 21 September 2014 01:34, Donald Stufft wrote: > > Most of the Linux distros have some mechanism for someone to claim that a > particular > package in the distro is no longer maintained and to attempt to take it > over, though > that is somewhat different. For reference, Fedora's non-responsive maintainer policy: https://fedoraproject.org/wiki/Policy_for_nonresponsive_package_maintainers "Just fork it" usually isn't an acceptable solution in the distro case, as it creates an incredible amount of rework in other packages for the distro maintainers *and* for distro users as they also need to adjust to the new name. Even when *we're* the ones creating the new tool (e.g. the currently ongoing yum -> dnf transition), it's entirely possible that at the end of the process, the new tool (or a compatibility shim based on it) will take over the original name to minimise the ripple effects on the rest of the ecosystem. When forks of major upstream packages happen (ranging from relatively small ones like PIL -> Pillow to core dependency changes and community splits like XFree86 -> x.org), the amount of rework generated just on a per-distro basis can be significant, let alone after you account for folks consuming upstream directly.
(Triggering external rework is also one of the largest costs associated with major platform upgrades, like Python 2 -> 3, Gnome 2->3, KDE 3->4, the Windows 8-bit -> UTF-16-LE transition, Windows security hardening, RHEL/CentOS major version number bumps, etc. In those cases, the main mitigation effort is to maintain the old platform for an extended period, to allow the cost to be spread out over several years, rather than having to be addressed in a small time window). I do like the preferred contact mechanism listed in the Fedora policy guide though: filing a bug against the package in the Fedora issue tracker. Including the relevant project issue tracker in the required points of contact for PyPI ownership transfer could be a good enhancement over doing it via email, and providing a public record of the interaction between the two communities by cross-linking the PyPI bug requesting the transfer, and the project bug notifying them of the project transfer request. The usage of the project issue tracker also makes it clear that going to the PyPI admins *won't help* if you're having a dispute with currently active project maintainers. Adapting Fedora's process wholesale might look something like: 1. Requestor starts with a ticket on the PyPI issue tracker 2. If there isn't already one on the project issue tracker, the requestor is instructed to file it and link it from the PyPI issue 3. The PyPI admins will notify the registered package maintainer of the transfer request (and both tickets) via all available contact mechanisms 4. The requestor must ping the project ticket and the PyPI ticket after 7 (or more) days, and then again after another 7 (or more) days 5. If there's no response 7 (or more) days after the second ping, the requestor must notify distutils-sig in addition to pinging both tickets a third time 6. 
If there's no response 7 (or more) days after notifying distutils-sig, then the requestor is *added* to the maintainer list for the project. Exception: I'm not sure how best to handle cases where the affected project has no public bug tracker. However, the requestor should definitely be required to *set up* a bug tracker for the project in such cases as part of the handover of responsibility. Some advantages of such an approach: 1. It's mostly public - the only "offline" part is the notification from the PyPI admins directly to the existing PyPI package maintainers 2. It should be scalable, as the folks making the transfer request are responsible for doing most of the work. If they don't do the work, the transfer simply doesn't happen 3. The minimum transfer time is 4 weeks, which will cover many instances of vacation (etc), and provide plenty of time for folks to notice the request (including folks that aren't the package maintainer, but may be aware if they're away for an extended period, and haven't arranged for anyone else to handle bug reports in the meantime) I had a harder time finding the relevant Debian policy info, but this seems to be it: https://www.debian.org/doc/manuals/developers-reference/beyond-pkging.html#mia-qa One common element to both systems: both Fedora and Debian have the capacity for maintainers to officially declare a package an "orphan". While I don't think it's urgent, I'd actually really like to see such a feature for PyPI at some point in the future, independently of the ownership transfer questions. A library like my own shell-command, for example, is usable as is, but there's zero chance I'll ever go back to working on it myself, as I no longer like the basic design concept. If I could flag that directly in PyPI, that would be useful in a few ways: 1. Folks that are *considering* using it would get a clear notice that it's unmaintained and perhaps move on to something with an active maintainer 2.
Tools like "pip list" could grow a "--orphaned" option to show unmaintained dependencies 3. A simpler "fast path" could be added to the ownership transfer process for already orphaned packages One possible approach to handling cases where folks are out of touch for extended periods without arranging for someone else to handle responding to upstream maintenance requests is to offer a calendar application where highly engaged package maintainers could explicitly notify the PyPI admins of an extended lack of availability. We probably don't need to go that far (folks that would be engaged enough to do that are likely to also be engaged enough to have other folks participating on their issue tracker that would be aware of their extended absence), but if we ever decided such a service would be useful, Fedora also maintain a calendar app specifically to allow folks to indicate lack of availability: https://apps.fedoraproject.org/calendar/list/vacation/ (Project page for the Flask app itself: http://fedocal.readthedocs.org) We probably don't need to go that far, but it would be one possible way for project maintainers to explicitly tell the PyPI admins about lack of availability, without needing to change PyPI itself, and without trying to manage such notifications manually via email. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From a.badger at gmail.com Sun Sep 21 00:10:11 2014 From: a.badger at gmail.com (Toshio Kuratomi) Date: Sat, 20 Sep 2014 18:10:11 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> References: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> Message-ID: On Sat, Sep 20, 2014 at 11:34 AM, Donald Stufft wrote: > > > For the record, CPAN and npm both have similar things allowing someone to > take > over an abandoned project.
> > I don't believe ruby gems has an official policy and it appears that they > are hesitant > to do this from the threads I've seen (Though they mentioned doing it for > _why). Good information. > > Most of the Linux distros have some mechanism for someone to claim that a > particular > package in the distro is no longer maintained and to attempt to take it > over, though > that is somewhat different. > yeah, I come from distro land but I'm hesitant to point directly at any of our documented policies on this because there are some differences between being a bunch of people working together to make a set of curated and integrated packages vs a loosely associated group of developers who happen to use a shared namespace within a popular service. All distros I can think of have some sort of self-governance whereas pypi is more akin to a bunch of customers making use of a service. Some of the distro policies don't apply very well in this space. Some do, however, so I hope other people who are familiar with their distros will also filter the relevant policy ideas from their realms and put them forward. -Toshio From barry at python.org Sun Sep 21 01:06:16 2014 From: barry at python.org (Barry Warsaw) Date: Sat, 20 Sep 2014 19:06:16 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages References: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> Message-ID: <20140920190616.6319c5d0@anarchist.wooz.org> On Sep 20, 2014, at 06:10 PM, Toshio Kuratomi wrote: >All distros I can think of have some sort of self-governance whereas pypi is >more akin to a bunch of customers making use of a service. Some of the >distro policies don't apply very well in this space. Some do, however, so I >hope other people who are familiar with their distros will also filter the >relevant policy ideas from their realms and put them forward. Debian and Ubuntu have very interesting differences, especially given that one is derived from the other.
Debian has a very strong personal maintainership culture, where often one person maintains a package alone. Debian Developers have upload rights for any package, and they can do non-maintainer uploads. Debian also has various policies related to orphaning and adopting packages, but those are mostly for cooperative package ownership transfers. When a maintainer cannot be contacted, there is a missing-in-action process that can be used to wrest ownership from a non-responsive maintainer. Many packages are team maintained, and I personally find these much more productive and enjoyable to work with. A team maintained package doesn't have to worry about ownership transfers because any team member with general upload rights can upload the package, and even non-uploading team members can do everything short of that. Primarily that means preparing the package's vcs so that it's ready to be sponsored by an uploading developer.

Ubuntu is different in that no package is maintained by a single person. Essentially they are all team maintained. Rights to upload packages are conferred on the basis of "pockets" and package-sets. So for example, if someone wants to be involved in Zope, they could join the ~ubuntu-zope-dev team and once approved, they'd have upload rights to any package in the Zope package set. There are also pockets such as universe (packages which are available in Ubuntu but without security and other distro-level guarantees), and there is a MOTU (masters of the universe) team that can upload there. At the top of the ladder, core-devs can upload anything.

In the context of PyPI, I tend to think that teams can be an answer to a lot of the problem. I'm looking for example at one of the lazr.* packages I co-maintain on PyPI. The package has a number of individual owner roles, but I know that there are probably only two of those people who still care enough about the package to maintain it upstream, or would ever likely upload new versions to PyPI.
Handling roles in this way is pretty inconvenient because there might be dozens of packages that some combination of that group of people would be responsible for. If I could create a LAZR team and manage roles within that team, and then assign ownership of a package to that team, it would be easier to administer. That doesn't solve the problem where individuals have a strong preference for personal ownership of PyPI entries, but given that upstreams often are a team effort, I think it would go a long way toward helping ease transition efforts for PyPI ownership. It might even allow for better handling of transitions. For example, if a package owner is not reachable for some period of time, and someone steps up to take it over, you could create a team and put both people in it, then transfer the ownership to that team. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL:

From a.badger at gmail.com Sun Sep 21 01:08:08 2014 From: a.badger at gmail.com (Toshio Kuratomi) Date: Sat, 20 Sep 2014 19:08:08 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: Message-ID: On Sat, Sep 20, 2014 at 1:30 AM, John Wong wrote: > Hi all. > > TL;DR version: I think > > * an option to enroll in automatic ownership transfer > * an option to promote Request for Adoption > * don't transfer unless there are no releases on the index > > will be reasonable to me. > > On Fri, Sep 19, 2014 at 9:26 PM, Richard Jones wrote: >> >> >> In light of this specific case, I have an additional change that I think >> I'll implement to attempt to prevent it again: In the instances where the >> current owner is unresponsive to my attempts to contact them, *and* the >> project has releases in the index, I will not transfer ownership.
In the >> cases where no releases have been made I will continue to transfer >> ownership. >> > > I believe this is the best solution, and frankly, people in the OSS world > have been forking all these years > should someone disagree with the upstream or just believe they are better > off with the fork. I am not > a lawyer, but one has to look at any legal issue with ownership transfer. I > am not trying to scare > anyone, but the way I see ownership transfer (or even modifying the index on > behalf of me) is the same > as asking Twitter or Github to grant me a username simply because the > account has zero activity. > This is a great example; however, I think that you're assuming that the answer to the question of whether services like twitter and github (and facebook and email service providers and many other service-customer relationships) should sometimes grant username takeovers is 100% "no", and I don't believe that's the case. I mean, in the past year there was a loud outcry about facebook not being willing to grant access to an account where the user had died and their family wanted access to the data so as to preserve it. Facebook eventually granted access in that case. Email has historically been transferred quite frequently. When you quit a job or leave a university your email address is often taken from you and, when someone with a similar name or inclination arrives, that address can be given to someone else. > Toshio Kuratomi wrote: >> >> But there are >> also security concerns with letting a package bitrot on pypi. > > > Again, I think that people should simply fork. The best we can do is simply > prevent > the packages from being downloaded again. Basically, shield all the packages > from public. We preserve what people did and had. We can post a notice > so the public knows what is going on. > > Surely it sucks to have to use a fork when Django or Requests are forked and > now everyone has to call it something different and rewrite their code.
> But that's the beginning of a new chapter. The community has to be reformed. > It sucks but I think it is better in the long run. You don't have to argue > with the > original owner anymore in theory. > I'm on the fence over the model that I think you've got in your head here but I think it's more important to talk about why I think demanding people fork is the wrong path to take in my example which I think is much more cut and dried. Let's say you belong to a large project with 50 committers and a user's mailing list that numbers in the thousands of subscribers. The project owns a domain name with a large website and shoot, maybe they even have a legal body that serves as a place to take donations, register trademarks and so forth. You happen to be the release manager. You've been with the project since it was a small 5 person endeavour. While everyone else was busy coding, you specialized in deployment, installation, and, of course, creating a tarball to upload to pypi on every release. People may occasionally think that they should help you out but hey, you've always done it, no one has reason to complain, and besides, there's this really important bug that they should be working on fixing instead.... So then you die. It's unexpected. Hit by a bus. Eaten by a velociraptor. You know the various hypothetical scenarios. Well, the project is still vibrant. It still has 49 committers. It's still the owner of a trademark and a domain name. It still has thousands of users. Now that it's a necessity, it can even find other people to volunteer to replace you as release manager. What it doesn't have is permission to upload to pypi anymore. I think that anyone asked to transfer ownership to another member of the upstream project, given the time to research this, would have no trouble at all deciding that the right course of action would be to transfer ownership.
In this scenario all of the facts point towards the upstream being the people who should have rights to upload to pypi and they simply didn't have the foresight to assure that they wouldn't lose that right through an accident. Now what if we start taking some of the features of the scenario away? What if there wasn't a foundation? A trademark? A domain name? What if the release manager disappeared from the internetz and no one knew if he was alive or not? What if he was on a two week vacation but the project found that they had to make an unexpected release to fix a horrendous security exploit ASAP? I think that pypi works best if we can be accommodating to these circumstances when there's a clear story of whether the requestor is a valid person to take ownership. We can quibble about the criteria for determining that validity but I think we should keep in the forefront of our minds that the goal for all of us is to give users the code they are searching for and not some other code. In some cases this will mean that we shouldn't transfer ownership when the new owner is a random fork but in others it will mean that we should transfer ownership because everyone expects the code to be this specific fork. -Toshio

From ncoghlan at gmail.com Sun Sep 21 01:20:47 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 21 Sep 2014 09:20:47 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: <20140920190616.6319c5d0@anarchist.wooz.org> References: <3D3AFFBF-66A3-43B7-921A-80BFC9B439BC@stufft.io> <20140920190616.6319c5d0@anarchist.wooz.org> Message-ID: On 21 September 2014 09:06, Barry Warsaw wrote: > In the context of PyPI, I tend to think that teams can be an answer to a lot > of the problem. I'm looking for example at one of the lazr.* packages I > co-maintain on PyPI.
The package has a number of individual owner roles, but > I know that there are probably only two of those people who still care enough > about the package to maintain it upstream, or would ever likely upload new > versions to PyPI. Handling roles in this way is pretty inconvenient because > there might be dozens of packages that some combination of those group of > people would be responsible for. If I could create a LAZR team and manage > roles within that team, and then assign ownership of a package to that team, > it would be easier to administer. > > That doesn't solve the problem where individuals have a strong preference for > personal ownership of PyPI entries, but given that upstreams often are a team > effort, I think it would go a long way toward helping easy transition efforts > for PyPI ownership. Right, it also better reflects the way a lot of folks are organising their work on GitHub/BitBucket/GitLab/RhodeCode/etc these days - you have a team of folks with relatively broad permissions as part of an "organisation" (e.g. PyPA) and then multiple projects within those teams. In theory, anyone on the developer list for the organisation can commit directly to any of the repos, but in practice we don't. However, adding teams support would mean a *lot* more engineering work on the PyPI/Warehouse side, so it won't be practical until after the Warehouse development effort comes to fruition and takes over the pypi.python.org domain, allowing the legacy PyPI application to be retired. A major part of that effort is actually cleaning up the PyPI APIs to separate "useful and necessary public interface" from "legacy PyPI implementation detail", so that those legacy features can simply never be implemented in Warehouse at all - that's fairly slow going, since it means a fair bit of negotiation with the client tool developers, as folks figure out what is needed and what isn't. > It might even allow for better handling of transitions. 
For example, if a > package owner is not reachable for some period of time, and someone steps up > to take it over, you could create a team and put both people in it, then > transfer the ownership to that team. That's sort of what happens now - the requestor is *added* to the admin list, but the previous maintainer remains as co-owner. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From antoine at python.org Mon Sep 22 01:06:14 2014 From: antoine at python.org (Antoine Pitrou) Date: Sun, 21 Sep 2014 23:06:14 +0000 (UTC) Subject: [Distutils] Create formal process for claiming 'abandoned' packages References: <541CBC32.2000404@stoneleaf.us> Message-ID: Ethan Furman stoneleaf.us> writes: > On 09/19/2014 04:13 PM, Alex Gaynor wrote: > > > > I **strongly** concur with James here. This has flagrantly violated my trust in > > PyPI. > > > > I would much rather packages not be reclaimed than need to think about whether > > I trust the PyPI maintainers to do it. > > Having PyPI become full of cruft is not a tenable situation. What is the problem with "cruft" exactly? I'm opposed to the idea that a package may be "transferred" to someone else without any explicit authorization from the original maintainer(s) (or author(s)). It is not a matter of waiting time or tools. It is morally unacceptable. If you want to maintain a package and the maintainer doesn't reply, there is a solution: fork the package (under a new name). If you don't want to see "stale" packages, there is a solution: build an optional filter into PyPI that only shows packages which have received an update in the last 12/24/36 months. Regards Antoine. 
From holger at merlinux.eu Mon Sep 22 12:15:35 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 22 Sep 2014 10:15:35 +0000 Subject: [Distutils] devpi-{server-2.1, web-2.2}: upload history, deploy status, groups Message-ID: <20140922101535.GJ3692@merlinux.eu>

devpi-{server-2.1,web-2.2}: upload history, deploy status, groups
==================================================================

With devpi-server-2.1 and devpi-web-2.2 you'll get a host of fixes and improvements as well as some major new features for the private pypi system:

- upload history: release/tests/doc files now carry metadata about by whom, when and to which index something was uploaded

- deployment status: the new json /+status URL gives detailed information about a replica or master's internal state

- a new authentication hook supports arbitrary external authentication systems which can also return "group" membership information. An initial separately released "devpi-ldap" plugin implements verification accordingly. You can specify groups in ACLs with the ":GROUPNAME" syntax.

- a new "--restrict-modify=ACL" option to start devpi-server such that only select accounts can create new or modify users or indexes

For many more changes and fixes, please see the CHANGELOG information below.

UPGRADE note: devpi-server-2.1 requires you to ``--export`` your 2.0 server state and then ``--import`` it with the new version before you can serve your private packages through devpi-server-2.1.
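The UPGRADE note above boils down to a dump-upgrade-reload cycle. As a rough sketch (the dump directory is a placeholder, and only ``--export``/``--import`` are taken from the announcement; adjust paths and any ``--serverdir`` setting to your own installation):

```shell
# Hypothetical 2.0 -> 2.1 upgrade path per the UPGRADE note above.
# /tmp/devpi-dump is an illustrative placeholder, not a required location.
devpi-server --export /tmp/devpi-dump      # dump state while still on 2.0
pip install -U "devpi-server>=2.1,<2.2"    # upgrade the server package
devpi-server --import /tmp/devpi-dump      # rebuild the state for 2.1
```

After the import completes, the server can be started normally and will serve the previously uploaded private packages again.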
Please check out the web plugin if you want to have a web interface:: http://doc.devpi.net/2.1/web.html

Here is a Quickstart tutorial for efficient pypi-mirroring on your laptop:: http://doc.devpi.net/2.1/quickstart-pypimirror.html

And if you want to manage your releases or implement staging as an individual or within an organisation:: http://doc.devpi.net/2.1/quickstart-releaseprocess.html

If you want to host a devpi-server installation with nginx/supervisor and access it from clients from different hosts:: http://doc.devpi.net/2.1/quickstart-server.html

More documentation here:: http://doc.devpi.net/2.1/

Many thanks to Florian Schulze who co-implemented many of the new features. And special thanks go to the two companies who funded major parts of the above work.

have fun, Holger Krekel, merlinux GmbH

devpi-server-2.1.0 (compared to 2.0.6)
----------------------------------------

- make replication more precise: if a file cannot be replicated, fail with an error log and try again in a few seconds. This helps to maintain a consistent replica and discover the potential remaining bugs in the replication code.

- add who/when metadata to release files, doczips and test results and preserve it during push operations so that any such file provides some history which can be visualized via the web-plugin. The metadata is also exposed via the json API (/USER/INDEX/PROJECTNAME[/VERSION])

- fix issue113: provide json status information at /+status including roles and replica polling status, UUIDs of the repository. See new server status docs for more info.

- support for external authentication plugins: new devpiserver_auth_user hook which plugins can implement for user/password validation and for providing group membership.

- support groups for acl_upload via the ":GROUPNAME" syntax. This requires an external authentication plugin that provides group information.

- on replicas return auth status for "+api" requests by relaying to the master instead of using own key.
- add "--restrict-modify" option to specify users/groups which can create, delete and modify users and indices.

- make master/replica configuration more permanent and a bit safer against accidental errors: introduce "--role=auto" option, defaulting to determine the role from a previous invocation or the presence of the "--master-url" option if there was no previous invocation. Also verify that a replica talks to the same master UUID as with previous requests.

- replaced hack from nginx template which abused "try_files" in "location /" with the recommended "error_page"/"return" combo. Thanks Jürgen Hermann

- change command line option "--master" to "--master-url"

- fix issue97: remove already deprecated --upgrade option in favor of just using --export/--import

- actually store UTC in last_modified attribute of release files instead of the local time disguising as UTC. preserve last_modified when pushing a release.

- fix exception when a static resource can't be found.

- address issue152: return a proper 400 "not registered" message instead of 500 when a doczip is uploaded without prior registration.

- add OSX/launchd example configuration when "--gen-config" is issued. thanks Sean Fisk.

- fix replica proxying: don't pass original host header when relaying a modifying request from replica to master.

- fix export error when a private project doesn't exist on pypi

- fix pushing of a release when it contains multiple tox results.

- fix "refresh" button on simple pages on replica sites

- fix an internal link code issue possibly affecting strangeness or exceptions with test result links

- be more tolerant when different indexes have different project names all mapping to the same canonical project name.
- fix issue161: allow "{pkgversion}" to be part of a jenkins url

devpi-web-2.2.0 (compared to 2.1.6)
----------------------------------------

- require devpi-server >= 2.1.0

- static resources now have a plus in front to avoid clashes with usernames and be consistent with how other urls work: "+static/..." and "+theme-static/..."

- adjusted font-sizes and cut-off width of content.

- only show underline on links when hovering.

- make the "description hasn't been rendered" warning stand out.

- version.pt: moved md5 sum from its own column to the file column below the download link

- version.pt: added "history" column showing last modified time and infos about uploads and pushes.

- fix issue153: friendly error messages on upstream errors.

- index.pt: show permissions on index page

devpi-client-2.0.3 (compared to 2.0.2)
----------------------------------------

- use default "https://www.python.org/pypi" when no repository is set in .pypirc see https://docs.python.org/2/distutils/packageindex.html#the-pypirc-file

- fix issue152: when --upload-docs is given, make sure to first register and upload the release file before attempting to upload docs (the latter requires prior registration)

- fix issue75: add info about basic auth to "url" option help of "devpi use".

- fix issue154: fix handling of vcs-exporting when unicode filenames are present. This is done by striking our own code in favor of Marius Gedminas' vcs exporting functions from his check-manifest project which devpi-client now depends on. This also adds in support for svn and bazaar in addition to the already supported git/hg.

- devpi list: if a tox result does not contain basic information (probably a bug in tox) show a red error instead of crashing out with a traceback.

- fix issue157: filtering of tox results started with the oldest ones and didn't show newer results if the host, platform and environment were the same.
From stefan at bytereef.org Mon Sep 22 14:06:44 2014 From: stefan at bytereef.org (Stefan Krah) Date: Mon, 22 Sep 2014 14:06:44 +0200 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: <541CBC32.2000404@stoneleaf.us> Message-ID: <20140922120644.GA19974@sleipnir.bytereef.org> Antoine Pitrou wrote: > > Having PyPI become full of cruft is not a tenable situation. > > What is the problem with "cruft" exactly? > > I'm opposed to the idea that a package may be "transferred" to someone > else without any explicit authorization from the original maintainer(s) > (or author(s)). It is not a matter of waiting time or tools. It is morally > unacceptable. I agree. The situation here is totally different from Debian or Fedora, where the maintainers are just repackaging upstream most of the time. Stefan Krah From donald at stufft.io Mon Sep 22 14:40:46 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 22 Sep 2014 08:40:46 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: <541CBC32.2000404@stoneleaf.us> Message-ID: > On Sep 21, 2014, at 7:06 PM, Antoine Pitrou wrote: > > Ethan Furman stoneleaf.us> writes: >> On 09/19/2014 04:13 PM, Alex Gaynor wrote: >>> >>> I **strongly** concur with James here. This has flagrantly violated my > trust in >>> PyPI. >>> >>> I would much rather packages not be reclaimed than need to think about > whether >>> I trust the PyPI maintainers to do it. >> >> Having PyPI become full of cruft is not a tenable situation. > > What is the problem with "cruft" exactly? > > I'm opposed to the idea that a package may be "transferred" to someone > else without any explicit authorization from the original maintainer(s) > (or author(s)). It is not a matter of waiting time or tools. It is morally > unacceptable. > > If you want to maintain a package and the maintainer doesn't reply, > there is a solution: fork the package (under a new name). 
> > If you don't want to see "stale" packages, there is a solution: > build an optional filter into PyPI that only shows packages > which have received an update in the last 12/24/36 months. > > Regards > > Antoine. The problem with cruft is that it makes it more difficult to find things for end users who oftentimes don't know what they are looking for. This is especially bad when you have a once popular library/tool for which the maintainer is no longer available. It's already a daunting task for someone to select a library that does something they need to do if they aren't already familiar with the ecosystem. Adding "landmines" in the form of projects which look to solve their problem but where there is no-one to help them if they run into a bug or who can release a bug fix is fairly unfriendly. Circling back to django-registration, we can see the extra confusion this can cause when a maintainer stops maintaining a popular package. You end up with a multitude of forks, each slightly incompatible and with different features, bugs, etc. Now in the case of django-registration the author *is* available and wishes to retain control of django-registration, so that's fine, but you can hopefully see the sort of confusion that a maintainer going missing can cause? This isn't a problem that can be automatically solved because there's no programmatic way to differentiate between "stable" and "abandoned", especially when you want to consider "abandoned but the author wants to maintain control", "abandoned but the author is willing to give it up", and "abandoned and the author is no longer available" differently. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From antoine at python.org Mon Sep 22 15:25:50 2014 From: antoine at python.org (Antoine Pitrou) Date: Mon, 22 Sep 2014 13:25:50 +0000 (UTC) Subject: [Distutils] Create formal process for claiming 'abandoned' packages References: <541CBC32.2000404@stoneleaf.us> Message-ID: Donald Stufft stufft.io> writes: > > The problem with cruft is they make it more difficult to find things for end > users who often times don't know what they are looking for. This is especially > bad when you have a once popular library/tool for which the maintainer is no > longer available. It's already a daunting task for someone to select a library > that does something they need to do if they aren't already familiar with the > ecosystem. Adding "landmines" in the form of projects which look to solve their > problem but where there is no-one to help them if they run into a bug or who > can release a bug fix is fairly unfriendly. It's unfriendly if you consider that it's PyPI's job to select packages for users. But it does not seem to be going in that direction (see e.g. the absence of ratings or comments, after a brief appearance). Usually people get their recommendations through the community. If you want to help people through PyPI, you may want to add a friendly, non-scary warning to the PyPI pages of projects which haven't been updated for 24+ months. > Circling back to django-registration, we can see the extra confusion this can > cause when a maintainer stops maintaining a popular package. You end up with > a multitude of forks, each slightly incompatible and with different features, > bugs, etc. It's inherent to the problem of unmaintained packages. But why would PyPI have any authority over who takes over? PyPI does not have any legitimacy to steer those projects. It's not even a controlled software distribution; it's just an index. Regards Antoine.
From donald at stufft.io Mon Sep 22 15:50:23 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 22 Sep 2014 09:50:23 -0400 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: <541CBC32.2000404@stoneleaf.us> Message-ID: <88CAA918-E4D2-428C-ACA1-55219DF6AB0E@stufft.io> > On Sep 22, 2014, at 9:25 AM, Antoine Pitrou wrote: > > Donald Stufft stufft.io> writes: >> >> The problem with cruft is they make it more difficult to find things for end >> users who often times don't know what they are looking for. This is especially >> bad when you have a once popular library/tool for which the maintainer is no >> longer available. It's already a daunting task for someone to select a library >> that does something they need to do if they aren't already familiar with the >> ecosystem. Adding "landmines" in the form of projects which look to solve > their >> problem but where there is no-one to help them if they run into a bug or who >> can release a bug fix is fairly unfriendly. > > It's unfriendly if you consider that it's PyPI's job to select packages for > users. But it does not seem to be going in that direction (see e.g. the absence > of ratings or comments, after a brief appearance). It's not PyPI's job to "select packages for users", but it is PyPI's job to make it as easy as possible to discover relevant packages and provide them with as much information as is reasonable about that package so that the user can select for themselves which packages to use. I'm not going to get into the debacle that was the ratings system but the lack or existence of such a feature does not change the role of PyPI in any way. > > Usually people get their recommendations through the community. > If you want to help people through PyPI, you may want to add a friendly, > non-scary warning to the PyPI pages of projects which haven't been updated > for 24+ months.
Sure, I never stated that transferring ownership was the only possible or even the best way to handle cruft. Personally I'm on the fence about what the best way to handle it is; there are benefits to transferring ownership, and downsides. I was only stating what the problem with cruft is. > >> Circling back to django-registration, we can see the extra confusion this can >> cause when a maintainer stops maintaining a popular package. You end up with >> a multitude of forks, each slightly incompatible and with different features, >> bugs, etc. > > It's inherent to the problem of unmaintained packages. But why would PyPI have > any authority over who steps over? PyPI does not have any legitimity to steer > those projects. It's not even a controlled software distribution; it's just > an index. PyPI inherently has complete control over who owns what name on PyPI. What limits are put on that control are a matter of policy. It is no less valid for PyPI to transfer ownership than it is for PyPI not to transfer ownership as a matter of policy. As I said earlier, PyPI is hardly the only system of its kind that exercises authority over the project names in the case of abandoned projects. As Toshio said, there are situations where it makes *obvious* sense to transfer ownership of a project. Using Django as a pretty good example here: there are four people able to make releases there, until fairly recently there were only two if I recall. I don't think anyone would be against PyPI transferring ownership of Django to another active core developer of Django in the event that all of the people with permissions on PyPI were gone in some fashion. There are also cases where it makes *obvious* sense not to do a transfer, such as if some random person asked to transfer pip to their name because we hadn't had a release in a little under 6 months.
Given that there are cases where it makes sense to do a transfer, and cases where it doesn't make sense to do a transfer, the important part is to figure out, as a matter of policy, where those lines are. To that end, I think it'd be great to have a documented procedure for doing transfers and even rough guidelines as to when it makes sense to do it, but that ultimately it would be a decision up to the maintainers of PyPI (currently Richard and myself, though I rarely get involved in that side of things). This sort of "BDFL"-esque arrangement is primarily because edge cases happen often, although the common cases should be able to be handled by the rough guidelines, and because ultimately you have to trust the PyPI maintainers anyway for any reasonable use of PyPI. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL:

From antoine at python.org Mon Sep 22 16:16:09 2014 From: antoine at python.org (Antoine Pitrou) Date: Mon, 22 Sep 2014 14:16:09 +0000 (UTC) Subject: [Distutils] Create formal process for claiming 'abandoned' packages References: <541CBC32.2000404@stoneleaf.us> <88CAA918-E4D2-428C-ACA1-55219DF6AB0E@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > PyPI inherently has complete control over who owns what name on PyPI. Political authority does not derive from technical control, though. > As Toshio said, there are situations where it makes *obvious* sense to transfer > ownership of a project. Using Django as a pretty good example here: there are > four people able to make releases there, until fairly recently there were only > two if I recall. I don't think anyone would be against PyPI transferring > ownership of Django to another active core developer of Django in the event > that all of the people with permissions on PyPI were gone in some fashion. Assuming the remaining Django core developers agree on it, then, yes, that can make sense.
That's because they are the primary authors of the project (even though they might not have been listed as such on PyPI). The case people are worried about is whether someone who is not part of the original project author(s) or maintainer(s) can get assigned the PyPI project. In that case people should use one of the forks; there's no reason for PyPI to crown a successor. > To that I think it'd be > great to have a documented procedure for doing transfers and even rough > guidelines as to when it makes sense to do it, but that ultimately it would be > a decision up to the maintainers of PyPI (currently Richard and myself, though > I rarely get involved in that side of things). I think the "rough guidelines" should actually be quite strict about it (see above). Also, publicity is important (see Daniel's original post). Regards Antoine. From holger at merlinux.eu Mon Sep 22 16:41:39 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 22 Sep 2014 14:41:39 +0000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: <541CBC32.2000404@stoneleaf.us> <88CAA918-E4D2-428C-ACA1-55219DF6AB0E@stufft.io> Message-ID: <20140922144139.GR3692@merlinux.eu> On Mon, Sep 22, 2014 at 14:16 +0000, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: > > PyPI inherinently has complete control over who owns what name on PyPI. > > Political authority does not derive from technical control, though. valid point IMO. > > As Toshio said that are situations where it makes *obvious* sense to transfer > > ownership of a project. Using Django as an pretty good example here, There are > > four people able to make releases there, until fairly recently there were only > > two if I recall. I don't think anyone would be against PyPI transfering > > ownership of Django to another active core developer of Django in the event > > that all of the people with permissions on PyPI were gone in some fashion. 
> > Assuming the remaining Django core developers agree on it, then, yes, that > can make sense. That's because they are the primary authors of the project > (even though they might not have been listed as such on PyPI). > > The case people are worried about is whether someone who is not part of the > original project author(s) or maintainer(s) can get assigned the PyPI project. > In that case people should use one of the forks; there's no reason for PyPI > to crown a successor. It can take considerable effort to communicate new names throughout the community. FWIW I am pretty happy that e.g. for pytest-cov we got a new maintainer, because people use that module heavily, it's advertised in many places, part of tox.ini files etc, yet the original author did not respond and did not release updates for years. > > To that I think it'd be > great to have a documented procedure for doing transfers and even rough > guidelines as to when it makes sense to do it, but that ultimately it would be > a decision up to the maintainers of PyPI (currently Richard and myself, though > I rarely get involved in that side of things). > > I think the "rough guidelines" should actually be quite strict about it > (see above). Also, publicity is important (see Daniel's original post). I think a documented procedure makes sense. FWIW I haven't heard of any complaints from original authors/maintainers about the way ownership was transferred, in the last 10 years or so. Many people are not aware that transferring ownership is possible. best, holger > Regards > > Antoine. 
> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From ncoghlan at gmail.com Mon Sep 22 23:49:20 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Sep 2014 07:49:20 +1000 Subject: [Distutils] Create formal process for claiming 'abandoned' packages In-Reply-To: References: <541CBC32.2000404@stoneleaf.us> <88CAA918-E4D2-428C-ACA1-55219DF6AB0E@stufft.io> Message-ID: On 23 Sep 2014 00:19, "Antoine Pitrou" wrote: > > Donald Stufft stufft.io> writes: > > > > PyPI inherinently has complete control over who owns what name on PyPI. > > Political authority does not derive from technical control, though. > > > As Toshio said that are situations where it makes *obvious* sense to transfer > > ownership of a project. Using Django as an pretty good example here, There are > > four people able to make releases there, until fairly recently there were only > > two if I recall. I don't think anyone would be against PyPI transfering > > ownership of Django to another active core developer of Django in the event > > that all of the people with permissions on PyPI were gone in some fashion. > > Assuming the remaining Django core developers agree on it, then, yes, that > can make sense. That's because they are the primary authors of the project > (even though they might not have been listed as such on PyPI). > > The case people are worried about is whether someone who is not part of the > original project author(s) or maintainer(s) can get assigned the PyPI project. > In that case people should use one of the forks; there's no reason for PyPI > to crown a successor. That's why I consider it important to get the original project's issue tracker involved in the transfer process. I'd also be OK with a process that required an affirmative "Yes" from the project community, defaulting to "No transfer" in the case of a lack of response. 
Transfers are most needed for highly active projects where a fork could have a lot of ripple effects. I think it's reasonable to interpret "nobody cared enough to say yes or no" as "nobody cares enough for a transfer to be needed - just fork it rather than claiming the name". Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Sep 24 00:42:15 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 23 Sep 2014 23:42:15 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers Message-ID: Can anyone give me some advice, please? I am trying to build extensions on Windows 64-bit, using the free Windows SDK compilers. But I can't find any official documentation on how to do this, and everything I have tried so far has failed in frustrating ways. I'm now at the point where I appear to be hitting the following bug - http://bugs.python.org/issue7511 which has stumped me completely. Sadly, as is typical with distutils issues, this one seems to have been round for years and there is little or no sign that anyone is willing to fix it. Two questions, really: * Is there any intention that building extensions with the SDK compilers is supported? * How do I do it, if so? Personally, this is of limited relevance, as I have the full version of MSVC available. But I'm trying to put together some documentation for package developers on how to build Windows wheels, in particular using Appveyor to automate the process, with the intention that people shouldn't have to jump through hoops to provide wheels, but should rather be able to simply use a prebuilt recipe to automate the process. As an alternative, I wonder whether Microsoft would be willing to support Appveyor by providing them with access to the full version of MSVC (2008 and 2010) for the build workers? Steve - do you know if there's any possibility of something like that? 
Paul From Steve.Dower at microsoft.com Wed Sep 24 02:05:22 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 24 Sep 2014 00:05:22 +0000 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: Message-ID: <12f92eaf7cf44fce9aa291ebdbe4855a@DM2PR0301MB0734.namprd03.prod.outlook.com> We're very close to having some good news, but unfortunately, that's all I can say right now. Expect a more significant email/announcement from me in the next couple of weeks. (Distutils will hear it first and get the most detailed info.) Sent from my Windows Phone ________________________________ From: Paul Moore Sent: ?9/?23/?2014 15:42 To: Distutils; Steve Dower Subject: Building Python extensions on 64-bit Windows using the SDK compilers Can anyone give me some advice, please? I am trying to build extensions on Windows 64-bit, using the free Windows SDK compilers. But I can't find any official documentation on how to do this, and everything I have tried so far has failed in frustrating ways. I'm now at the point where I appear to be hitting the following bug - http://bugs.python.org/issue7511 which has stumped me completely. Sadly, as is typical with distutils issues, this one seems to have been round for years and there is little or no sign that anyone is willing to fix it. Two questions, really: * Is there any intention that building extensions with the SDK compilers is supported? * How do I do it, if so? Personally, this is of limited relevance, as I have the full version of MSVC available. But I'm trying to put together some documentation for package developers on how to build Windows wheels, in particular using Appveyor to automate the process, with the intention that people shouldn't have to jump through hoops to provide wheels, but should rather be able to simply use a prebuilt recipe to automate the process. 
As an alternative, I wonder whether Microsoft would be willing to support Appveyor by providing them with access to the full version of MSVC (2008 and 2010) for the build workers? Steve - do you know if there's any possibility of something like that? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Wed Sep 24 08:14:59 2014 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 24 Sep 2014 08:14:59 +0200 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: Message-ID: <20140924081459.Horde.zPlj5WKZwLyVutQntQqEEQ1@webmail.df.eu> Zitat von Paul Moore : > Can anyone give me some advice, please? I am trying to build > extensions on Windows 64-bit, using the free Windows SDK compilers. Can you please be more specific? What SDK, and what free compilers? The bug report is about VS Express, not the SDK compilers. > Two questions, really: > > * Is there any intention that building extensions with the SDK > compilers is supported? As long as the SDK does include compilers: yes. > * How do I do it, if so? Open a command window with the SDK environment variables set, then also set DISTUTILS_USE_SDK, and invoke setup.py > As an alternative, I wonder whether Microsoft would be willing to > support Appveyor by providing them with access to the full version of > MSVC (2008 and 2010) for the build workers? Steve - do you know if > there's any possibility of something like that? Not sure who is "they" and the "build workers". If you are talking about the Python core developers - we can already have MSDN access if we want. 
Regards, Martin From p.f.moore at gmail.com Wed Sep 24 08:41:36 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 07:41:36 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: <20140924081459.Horde.zPlj5WKZwLyVutQntQqEEQ1@webmail.df.eu> References: <20140924081459.Horde.zPlj5WKZwLyVutQntQqEEQ1@webmail.df.eu> Message-ID: On 24 September 2014 07:14, wrote: > Zitat von Paul Moore : > >> Can anyone give me some advice, please? I am trying to build >> extensions on Windows 64-bit, using the free Windows SDK compilers. > > Can you please be more specific? What SDK, and what free compilers? > The bug report is about VS Express, not the SDK compilers. For Python 2.7, I was using the "Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 SP1" x64 version. I set DISTUTILS_USE_SDK, and got exactly the symptom mentioned in the bug. Hence my comment that I was hitting "the same" issue with a different environment. I don't know whether it is relevant, but Visual Studio 2008 Express was also installed in the environment. >> Two questions, really: >> >> * Is there any intention that building extensions with the SDK >> compilers is supported? > > As long as the SDK does include compilers: yes. > >> * How do I do it, if so? > > Open a command window with the SDK environment variables set, > then also set DISTUTILS_USE_SDK, and invoke setup.py I'm scripting the build, which means I can't use the "Open SDK command line environment" start menu item. But what I do is SetEnv.cmd /x64 /release SET DISTUTILS_USE_SDK=1 SET MSSdk=1 I then run setup.py and get the error ValueError: [u'path'] I have not yet been able to successfully set up the SDK environment locally (the above was done on a remote machine hosted by appveyor.com) but am close to doing so. My normal build machine includes the full Visual Studio (via the MSDN access you mentioned). 
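[Archive editor's note: the mechanism Martin describes and Paul is scripting reduces to distutils trusting a pre-configured compiler environment when two variables are set in the same shell that runs setup.py. A minimal Python sketch of that check follows; it is a simplified illustration of the documented DISTUTILS_USE_SDK/MSSdk behaviour, not the actual msvc9compiler source.]

```python
def trusts_preset_env(environ):
    # Sketch of the check Python 2.7's distutils performs: when both
    # DISTUTILS_USE_SDK and MSSdk are set, it uses the compiler environment
    # already configured by the SDK's SetEnv.cmd instead of locating
    # vcvarsall.bat itself. (Simplified illustration, not the real source.)
    return bool(environ.get("DISTUTILS_USE_SDK")) and bool(environ.get("MSSdk"))

# Environment as left by: SetEnv.cmd /x64 /release, then the two SET commands
print(trusts_preset_env({"DISTUTILS_USE_SDK": "1", "MSSdk": "1"}))  # True
print(trusts_preset_env({}))  # False: distutils falls back to its own detection
```

The only point of the sketch is that both variables must be set in the shell that later invokes setup.py, which is why a scripted build has to do its own SET commands after SetEnv.cmd.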
Once I have that environment I'll be in a position to reproduce the error on my local machine and do further testing much more conveniently. Paul From abr at ariddell.org Wed Sep 24 03:19:25 2014 From: abr at ariddell.org (Allen Riddell) Date: Tue, 23 Sep 2014 21:19:25 -0400 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: Message-ID: <1411521565.125496.171037381.7209B875@webmail.messagingengine.com> Hi Paul, I don't know how Olivier Grisel did it, but I can testify it does build extensions for Windows (32bit and 64bit) using appveyor. https://github.com/ogrisel/python-appveyor-demo/ Best wishes, Allen On Tue, Sep 23, 2014, at 06:42 PM, Paul Moore wrote: > Can anyone give me some advice, please? I am trying to build > extensions on Windows 64-bit, using the free Windows SDK compilers. > But I can't find any official documentation on how to do this, and > everything I have tried so far has failed in frustrating ways. I'm now > at the point where I appear to be hitting the following bug - > http://bugs.python.org/issue7511 which has stumped me completely. > Sadly, as is typical with distutils issues, this one seems to have > been round for years and there is little or no sign that anyone is > willing to fix it. > > Two questions, really: > > * Is there any intention that building extensions with the SDK > compilers is supported? > * How do I do it, if so? > > Personally, this is of limited relevance, as I have the full version > of MSVC available. But I'm trying to put together some documentation > for package developers on how to build Windows wheels, in particular > using Appveyor to automate the process, with the intention that people > shouldn't have to jump through hoops to provide wheels, but should > rather be able to simply use a prebuilt recipe to automate the > process. 
> > As an alternative, I wonder whether Microsoft would be willing to > support Appveyor by providing them with access to the full version of > MSVC (2008 and 2010) for the build workers? Steve - do you know if > there's any possibility of something like that? > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From jjhelmus at gmail.com Wed Sep 24 04:45:16 2014 From: jjhelmus at gmail.com (Jonathan J. Helmus) Date: Tue, 23 Sep 2014 21:45:16 -0500 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: Message-ID: <5422303C.7080005@gmail.com> On 9/23/2014 5:42 PM, Paul Moore wrote: > Can anyone give me some advice, please? I am trying to build > extensions on Windows 64-bit, using the free Windows SDK compilers. > But I can't find any official documentation on how to do this, and > everything I have tried so far has failed in frustrating ways. I'm now > at the point where I appear to be hitting the following bug - > http://bugs.python.org/issue7511 which has stumped me completely. > Sadly, as is typical with distutils issues, this one seems to have > been round for years and there is little or no sign that anyone is > willing to fix it. > > Two questions, really: > > * Is there any intention that building extensions with the SDK > compilers is supported? > * How do I do it, if so? > > Personally, this is of limited relevance, as I have the full version > of MSVC available. But I'm trying to put together some documentation > for package developers on how to build Windows wheels, in particular > using Appveyor to automate the process, with the intention that people > shouldn't have to jump through hoops to provide wheels, but should > rather be able to simply use a prebuilt recipe to automate the > process. 
> > As an alternative, I wonder whether Microsoft would be willing to > support Appveyor by providing them with access to the full version of > MSVC (2008 and 2010) for the build workers? Steve - do you know if > there's any possibility of something like that? > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Paul, Some of us from the Scientific Python side of development have been using appveyor to build Windows wheels for a few projects. A demo from one of the developers of scikit-learn gives a good overview of the process we have been using [1]. The Cython wiki also has some information on getting the Windows SDK set up correctly for 64-bit compiling [2]. Personally I was able to get the packages I was working on to compile on a Windows host using only the Windows SDK compilers, following the hints available in those two links and a Stack Overflow answer on the topic [3]. It has been a few months since then, but I can try to reproduce my work if those links don't provide the answers. Cheers, - Jonathan Helmus [1] https://github.com/ogrisel/python-appveyor-demo [2] https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows [3] http://stackoverflow.com/questions/11267463/compiling-python-modules-on-win-x64/13751649#13751649 From p.f.moore at gmail.com Wed Sep 24 15:55:06 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 14:55:06 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: <5422303C.7080005@gmail.com> References: <5422303C.7080005@gmail.com> Message-ID: On 24 September 2014 03:45, Jonathan J. Helmus wrote: > Some of us from the Scientific Python side of development have been > using appveyor to build Windows wheels for a few projects. A demo from one > of developers of scikit-learn gives a good overview of the process we have > been using [1]. 
Thanks for the pointer. (Also thanks to Allen Riddell). I'll take a look. Ideally, what I'd like to do is write something up to help non-Windows experts get things up and running, so this will be very useful. Cheers. Paul From chris.barker at noaa.gov Wed Sep 24 18:24:21 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 24 Sep 2014 09:24:21 -0700 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On Wed, Sep 24, 2014 at 6:55 AM, Paul Moore wrote: > Thanks for the pointer. (Also thanks to Allen Riddell). I'll take a > look. Ideally, what I'd like to do is write something up to help > non-Windows experts get things up and running, so this will be very > useful. > Thanks -- that would be great. But really, why is this so hard? Win64 is essentially One platform, and the freely available SDK is ONE compiler environment. surely it's possible to write a batch script of some sort that you could put somewhere (or even deliver with python! ) so this would be: 1) download and install THIS (the sdk from MS) 2) run: set_up_win_complier.py 3) build the package: python setup.py build without needing to do multiple step, without needing to be in the special set-up command Window, etc. In fact, even better would be for distutils to run the mythical "set_up_win_complier.py" script for you. distutils does work "out of the box" with the VS2008 Express for 32 bit -- I'm still confused why this is so much harder for 64 bit. *sigh* -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Wed Sep 24 20:49:21 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 19:49:21 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On 24 September 2014 17:24, Chris Barker wrote: > Thanks -- that would be great. But really, why is this so hard? Win64 is > essentially One platform, and the freely available SDK is ONE compiler > environment. If only that were true :-) What I've found is: 1. Different SDKs are needed for Python 2.7 and 3.3+ (the VS2008/VS2010 split) 2. The v7.0 SDK (Python 2.7) is a bit of a beast to install correctly - I managed to trash a VM by installing the x86 one when I should have installed the x64 one. 3. There are bugs in the SDK - the setenv script for v7.0 needs fixes or it fails. Agreed, it should be easy. And indeed, it is if you have the full Visual Studio. But when Python 2.7 came out, the freely available MS tools were distinctly less convenient to use, and that shows. It's getting a lot better, and once we start using MSVC 2012 or later (i.e., Python 3.5+), the express editions include 64-bit support out of the box, which makes most of the problems go away. Paul From p.f.moore at gmail.com Wed Sep 24 21:28:38 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 20:28:38 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On 24 September 2014 20:02, Chris Barker wrote: >> t's getting a lot better, and once we start using MSVC 2012 or later >> (i.e., Python 3.5+), the express editions include 64-bit support out >> of the box, which makes most of the problems go away. > > Sure, but is there something we can do with the old stuff -- some of us will > be ruing 2.7 for a good while yet! 
> >> Steve wrote: > As I mentioned at the start of this thread - hold your frustration and wait > for a little while :) > > It wasn't clear -- will things get better for 2.7 ? OR just the new stuff? > > i.e. frustration aside, should I not bother to wrangle this now for my > projects if I can hold off a bit? I can't speak for Steve, but personally, I do intend to work through these issues and write up a "how to set things up to build wheels for your Python projects" (probably using appveyor, as it has the environment already there, but likely also for a local setup) document. It'll be far from the first such document, but I'd like to see my version published under the PyPA banner and as such have the status of "the official answer". My advice would be not to rush. If the currently available information is enough for you, by all means go for it, but if you're hitting difficulties (or just don't want to risk doing so) I'm hoping things will be improved[1] in the relatively short term, so it might be worth waiting. Paul [1] Where it's possible the only improvement is that you've got me as a specific target for your complaints about the lousy documentation, but I'm hoping I can do a *bit* better than that :-) From chris.barker at noaa.gov Wed Sep 24 21:02:23 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 24 Sep 2014 12:02:23 -0700 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On Wed, Sep 24, 2014 at 11:49 AM, Paul Moore wrote: > > essentially One platform, and the freely available SDK is ONE compiler > > environment. > > If only that were true :-) > > What I've found is: > > 1. Different SDKs are needed for Python 2.7 and 3.3+ (the VS2008/VS2010 > split) > well, yeah, but that's not the problem at hand -- that one is ugly and painful and always has been :-( > 2. 
The v7.0 SDK (Python 2.7) is a bit of a beast to install correctly > - I managed to trash a VM by installing the x86 one when I should have > installed the x64 one. > Ah, what fun -- though if you DO install the right one, hopefully it will work, at least if it's installed with defaults, which most folks can do. > 3. There are bugs in the SDK - the setenv script for v7.0 needs fixes > or it fails. > OK -- that sucks and is simply going to make this painful -- darn it! Agreed, it should be easy. And indeed, it is if you have the full > Visual Studio. But when Python 2.7 came out, the freely available MS > tools were distinctly less convenient to use, and that shows. > and they still are, too. It's getting a lot better, and once we start using MSVC 2012 or later > (i.e., Python 3.5+), the express editions include 64-bit support out > of the box, which makes most of the problems go away. > Sure, but is there something we can do with the old stuff -- some of us will be running 2.7 for a good while yet! > Steve wrote: As I mentioned at the start of this thread - hold your frustration and wait for a little while :) It wasn't clear -- will things get better for 2.7? OR just the new stuff? i.e. frustration aside, should I not bother to wrangle this now for my projects if I can hold off a bit? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Steve.Dower at microsoft.com Wed Sep 24 19:03:23 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 24 Sep 2014 17:03:23 +0000 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: <7d0b49d665e54a56944ac10ac76ab4f8@DM2PR0301MB0734.namprd03.prod.outlook.com> Chris Barker wrote: > On Wed, Sep 24, 2014 at 6:55 AM, Paul Moore wrote: >> Thanks for the pointer. (Also thanks to Allen Riddell). I'll take a >> look. Ideally, what I'd like to do is write something up to help >> non-Windows experts get things up and running, so this will be very >> useful. > > Thanks -- that would be great. But really, why is this so hard? Win64 is > essentially One platform, and the freely available SDK is ONE compiler > environment. > > surely it's possible to write a batch script of some sort that you could put > somewhere (or even deliver with python! ) so this would be: > > 1) download and install THIS (the sdk from MS) > > 2) run: > set_up_win_complier.py > > 3) build the package: > python setup.py build > > without needing to do multiple step, without needing to be in the special set-up > command Window, etc. > > In fact, even better would be for distutils to run the mythical > "set_up_win_complier.py" script for you. > > distutils does work "out of the box" with the VS2008 Express for 32 bit -- I'm > still confused why this is so much harder for 64 bit. Someone made a decision back when that express edition was released that people who _needed_ 64-bit compilers could justify paying for them. At the time (pre-Windows 7, which was the first usable 64-bit Windows), this made sense, but the world has changed since then and so have the later versions of VC++ Express/Express for Desktop, which now include all the compilers. 
> *sigh* As I mentioned at the start of this thread - hold your frustration and wait for a little while :) Cheers, Steve > -Chris > From martin at v.loewis.de Wed Sep 24 22:04:01 2014 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 24 Sep 2014 22:04:01 +0200 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <20140924081459.Horde.zPlj5WKZwLyVutQntQqEEQ1@webmail.df.eu> Message-ID: <542323B1.3020405@v.loewis.de> Am 24.09.14 08:41, schrieb Paul Moore: > On 24 September 2014 07:14, wrote: >> Zitat von Paul Moore : >> >>> Can anyone give me some advice, please? I am trying to build >>> extensions on Windows 64-bit, using the free Windows SDK compilers. >> >> Can you please be more specific? What SDK, and what free compilers? >> The bug report is about VS Express, not the SDK compilers. > > For Python 2.7, I was using the "Microsoft Windows SDK for Windows 7 > and .NET Framework 3.5 SP1" x64 version. I set DISTUTILS_USE_SDK, and > got exactly the symptom mentioned in the bug. So what is the value of your vcvarsall.bat? Why could it not find the other interesting variables? If it's really the same issue: does any of the proposed patches help? Regards, Martin From p.f.moore at gmail.com Wed Sep 24 22:22:11 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 21:22:11 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: <542323B1.3020405@v.loewis.de> References: <20140924081459.Horde.zPlj5WKZwLyVutQntQqEEQ1@webmail.df.eu> <542323B1.3020405@v.loewis.de> Message-ID: On 24 September 2014 21:04, "Martin v. L?wis" wrote: > Am 24.09.14 08:41, schrieb Paul Moore: >> On 24 September 2014 07:14, wrote: >>> Zitat von Paul Moore : >>> >>>> Can anyone give me some advice, please? I am trying to build >>>> extensions on Windows 64-bit, using the free Windows SDK compilers. >>> >>> Can you please be more specific? 
What SDK, and what free compilers? >>> The bug report is about VS Express, not the SDK compilers. >> >> For Python 2.7, I was using the "Microsoft Windows SDK for Windows 7 >> and .NET Framework 3.5 SP1" x64 version. I set DISTUTILS_USE_SDK, and >> got exactly the symptom mentioned in the bug. > > So what is the value of your vcvarsall.bat? Why could it not find the > other interesting variables? I'm using setenv.cmd, not vcvarsall.bat (because that's the advice in the scattered documents I found). I'm not even sure I have a vcvarsall.bat that I can call (the only working environment I currently have access to with an SDK installed is only accessible in a convoluted manner which makes investigation painful (Appveyor, if you know the system)). > If it's really the same issue: does any of the proposed patches help? I am still in the process of trying to get a usable local environment with the SDK installed. Once I do, I'll report back. Paul From p.f.moore at gmail.com Wed Sep 24 23:52:30 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 22:52:30 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: <5422303C.7080005@gmail.com> References: <5422303C.7080005@gmail.com> Message-ID: On 24 September 2014 03:45, Jonathan J. Helmus wrote: > Some of us from the Scientific Python side of development have been > using appveyor to build Windows wheels for a few projects. A demo from one > of developers of scikit-learn gives a good overview of the process we have > been using [1]. This is excellent. Many thanks for the pointer - you've clearly managed to solve some of the more annoying problems that I have been hitting. 
(I'd claim that I was getting there, but you've saved me the effort :-)) One thing I have done is request the Appveyor team to add 64-bit Pythons to their build environments, which they have done, so that now there should be no need to install your own copy of Python (at least for 2.7, 3.3 and 3.4). I've copied Olivier in here as the author of the demo project, but would you mind if I used this as the basis of a document covering how to build wheels for your project using Appveyor? Obviously, I'd give you full credit. I'm thinking of including it as a section in the Python packaging guide, or maybe as a separate HOWTO document. Paul From p.f.moore at gmail.com Thu Sep 25 00:09:46 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 24 Sep 2014 23:09:46 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On 24 September 2014 22:58, Olivier Grisel wrote: > Under which path? It's now documented in http://www.appveyor.com/docs/installed-software, but C:\PythonXY and C:\PythonXY-x64. > Could you please issue a PR to: > https://github.com/ogrisel/python-appveyor-demo > > to show how to leverage pre-installed versions of Python? Will do (although it might be a few days, I'm pretty snowed under at work right now). Paul From cournape at gmail.com Thu Sep 25 17:15:25 2014 From: cournape at gmail.com (David Cournapeau) Date: Thu, 25 Sep 2014 16:15:25 +0100 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On Wed, Sep 24, 2014 at 7:49 PM, Paul Moore wrote: > On 24 September 2014 17:24, Chris Barker wrote: > > Thanks -- that would be great. But really, why is this so hard? Win64 is > > essentially One platform, and the freely available SDK is ONE compiler > > environment. > > If only that were true :-) > > What I've found is: > > 1. 
Different SDKs are needed for Python 2.7 and 3.3+ (the VS2008/VS2010 > split) > 2. The v7.0 SDK (Python 2.7) is a bit of a beast to install correctly > - I managed to trash a VM by installing the x86 one when I should have > installed the x64 one. > 3. There are bugs in the SDK - the setenv script for v7.0 needs fixes > or it fails. > > Agreed, it should be easy. And indeed, it is if you have the full > Visual Studio. But when Python 2.7 came out, the freely available MS > tools were distinctly less convenient to use, and that shows. > The SDK scripts are indeed a bit broken, but it is possible to detect them automatically in a way that is similar to what was done for MSVC 2008. I know that for a fact because I ported the Python distutils MSVC detection to scons, and added support for the SDK there: https://bitbucket.org/scons/scons/annotate/b43c04896075c3392818e07ce472e73cd6a9aca5/src/engine/SCons/Tool/MSCommon/sdk.py?at=default (the code has changed since then). Is that the kind of thing that falls under long-term support for 2.7? If so, I would be willing to work it out to put in distutils. David > It's getting a lot better, and once we start using MSVC 2012 or later > (i.e., Python 3.5+), the express editions include 64-bit support out > of the box, which makes most of the problems go away. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Thu Sep 25 23:35:01 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 26 Sep 2014 07:35:01 +1000 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On 26 Sep 2014 01:15, "David Cournapeau" wrote: > > > > On Wed, Sep 24, 2014 at 7:49 PM, Paul Moore wrote: >> >> On 24 September 2014 17:24, Chris Barker wrote: >> > Thanks -- that would be great. But really, why is this so hard? Win64 is >> > essentially One platform, and the freely available SDK is ONE compiler >> > environment. >> >> If only that were true :-) >> >> What I've found is: >> >> 1. Different SDKs are needed for Python 2.7 and 3.3+ (the VS2008/VS2010 split) >> 2. The v7.0 SDK (Python 2.7) is a bit of a beast to install correctly >> - I managed to trash a VM by installing the x86 one when I should have >> installed the x64 one. >> 3. There are bugs in the SDK - the setenv script for v7.0 needs fixes >> or it fails. >> >> Agreed, it should be easy. And indeed, it is if you have the full >> Visual Studio. But when Python 2.7 came out, the freely available MS >> tools were distinctly less convenient to use, and that shows. > > > The SDK scripts are indeed a bit broken, but it is possible to detect them automatically in a way that is similar to what was done for MSVC 2008. > > I know that for a fact because I ported the python distutils MSVC detection to scons, and added support for the SDK there: https://bitbucket.org/scons/scons/annotate/b43c04896075c3392818e07ce472e73cd6a9aca5/src/engine/SCons/Tool/MSCommon/sdk.py?at=default (the code has changed since then). > > Is that the kind of thing that falls onto long term support for 2.7 ? If so, I would be willing to work it out to put in distutils. Yes, better handling of interoperability issues with the underlying platform is in scope for 2.7 distutils - it effectively counts as a bug. Cheers, Nick. 
> David > >> >> It's getting a lot better, and once we start using MSVC 2012 or later >> (i.e., Python 3.5+), the express editions include 64-bit support out >> of the box, which makes most of the problems go away. >> >> Paul >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Sep 26 04:11:22 2014 From: donald at stufft.io (Donald Stufft) Date: Thu, 25 Sep 2014 22:11:22 -0400 Subject: [Distutils] The Simple API - What URLs are "supported" In-Reply-To: <2D842B16-F874-48B0-BC26-DEAD385230D7@stufft.io> References: <8C1E02C3-B237-4EE6-8658-F43011DDF54B@stufft.io> <4880A51C-2BA6-4F7A-B56D-8FD1119114D6@stufft.io> <20140918102219.GG3692@merlinux.eu> <2D842B16-F874-48B0-BC26-DEAD385230D7@stufft.io> Message-ID: <1C7F8B35-370A-4580-812A-FCB7D7F27200@stufft.io> Given limited feedback and none negative I?m going to move ahead with this. Thanks All! --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From olivier.grisel at ensta.org Wed Sep 24 23:58:47 2014 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Wed, 24 Sep 2014 23:58:47 +0200 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: 2014-09-24 23:52 GMT+02:00 Paul Moore : > On 24 September 2014 03:45, Jonathan J. Helmus wrote: >> Some of us from the Scientific Python side of development have been >> using appveyor to build Windows wheels for a few projects. 
A demo from one >> of developers of scikit-learn gives a good overview of the process we have >> been using [1]. > > This is excellent. Many thanks for the pointer - you've clearly > managed to solve some of the more annoying problems that I have been > hitting. (I'd claim that I was getting there, but you've saved me the > effort :-)) > > One thing I have done is request the Appveyor team to add 64-bit > Pythons to their build environments, which they have done, so that now > there should be no need to install your own copy of Python (at least > for 2.7, 3.3 and 3.4). Under which path? Could you please issue a PR to: https://github.com/ogrisel/python-appveyor-demo to show how to leverage pre-installed versions of Python? > I've copied Olivier in here as the author of the demo project, but > would you mind if I used this as the basis of a document covering how > to build wheels for your project using Appveyor? Obviously, I'd give > you full credit. I'm thinking of including it as a section in the > Python packaging guide, or maybe as a separate HOWTO document. Feel free to reuse any of my work for your document. The license of the scripts in python-appveyor-demo is CC0, no attribution required. -- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From olivier.grisel at ensta.org Thu Sep 25 00:10:38 2014 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 25 Sep 2014 00:10:38 +0200 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: 2014-09-25 0:09 GMT+02:00 Paul Moore : > On 24 September 2014 22:58, Olivier Grisel wrote: >> Under which path? > > It's now documented in > http://www.appveyor.com/docs/installed-software, but C:\PythonXY and > C:\PythonXY-x64. Nice, thanks: I will try it now. 
-- Olivier http://twitter.com/ogrisel - http://github.com/ogrisel From olivier.grisel at ensta.org Thu Sep 25 00:29:57 2014 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Thu, 25 Sep 2014 00:29:57 +0200 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: It seems to work, I merged the change in the master of python-appveyor-demo. Thanks! -- Olivier From peke at iki.fi Fri Sep 26 10:27:07 2014 From: peke at iki.fi (=?UTF-8?Q?Pekka_Kl=C3=A4rck?=) Date: Fri, 26 Sep 2014 11:27:07 +0300 Subject: [Distutils] Fwd: New classifiers In-Reply-To: References: Message-ID: Forwarding the mail here because catalog-sig has apparently been retired. You might want to update references to it at https://wiki.python.org/moin/CheeseShopTutorial ---------- Forwarded message ---------- From: Pekka Kl?rck Date: 2014-09-26 11:22 GMT+03:00 Subject: New classifiers To: catalog-sig at python.org Hello, Robot Framework is a generic acceptance level test automation framework implemented with Python. The core framework and quite a few test libraries and other tools related to it are available on PyPI: https://pypi.python.org/pypi?%3Aaction=search&term=robotframework&submit=search Would it be possible to get a separate classifier for Robot Framework? Probably something like `Framework :: Robot Framework`. 
It would also be convenient to have these two new sub-classifiers: Topic :: Software Development :: Testing :: Unit Topic :: Software Development :: Testing :: Acceptance Cheers, .peke -- Agile Tester/Developer/Consultant :: http://eliga.fi Lead Developer of Robot Framework :: http://robotframework.org From chris.barker at noaa.gov Fri Sep 26 18:53:14 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 26 Sep 2014 09:53:14 -0700 Subject: [Distutils] Building Python extensions on 64-bit Windows using the SDK compilers In-Reply-To: References: <5422303C.7080005@gmail.com> Message-ID: On Thu, Sep 25, 2014 at 8:15 AM, David Cournapeau wrote: > The SDK scripts are indeed a bit broken, but it is possible to detect them > automatically in a way that is similar to what was done for MSVC 2008. > > I know that for a fact because I ported the python distutils MSVC > detection to scons, and added support for the SDK there: > https://bitbucket.org/scons/scons/annotate/b43c04896075c3392818e07ce472e73cd6a9aca5/src/engine/SCons/Tool/MSCommon/sdk.py?at=default > (the code has changed since then). > > Is that the kind of thing that falls onto long term support for 2.7 ? If > so, I would be willing to work it out to put in distutils. > Yes please! That would be great. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Steve.Dower at microsoft.com Fri Sep 26 19:59:03 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Fri, 26 Sep 2014 17:59:03 +0000 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 Message-ID: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> I'll post this on the various other lists later, but I promised distutils-sig first taste, especially since the discussion has been raging for a few days (if you're following the setuptools repo, you may already know, but let me take the podium for a few minutes anyway :) ) Microsoft has released a compiler package for Python 2.7 to make it easier for people to build and distribute their C extension modules on Windows. The Microsoft Visual C++ Compiler for Python 2.7 (a.k.a. VC9) is available from: http://aka.ms/vcpython27 This package contains all the tools and headers required to build C extension modules for Python 2.7 32-bit and 64-bit (note that some extension modules require 3rd party dependencies such as OpenSSL or libxml2 that are not included). Other versions of Python built with Visual C++ 2008 are also supported, so "Python 2.7" is just advertising - it'll work fine with 2.6 and 3.2. You can install the package without requiring administrative privileges and, with the latest version of setuptools (from the source repo - there's no release yet), use tools such as pip, wheel, or a setup.py file to produce binaries on Windows. The license prevents redistribution of the package itself (obviously you can do what you like with the binaries you produce) and IANAL but there should be no restriction on using this package on automated build systems under the usual one-developer rule (http://stackoverflow.com/a/779631/891 - in effect, the compilers are licensed to one user who happens to be using it on a remote machine). My plan is to keep the download link stable so that automated scripts can reference and install the package. 
I have no idea how long that will last... :) Our intent is to heavily focus on people using this package to produce wheels rather than trying to get this onto every user machine. Binary distribution is the way Windows has always worked and we want to encourage that, though we do also want people to be able to unblock themselves with these compilers. I should also point out that VC9 is no longer supported by Microsoft. This means there won't be any improvements or bug fixes coming, and there's no official support offered. Feel free to contact me directly if there are issues with the package. Cheers, Steve From ncoghlan at gmail.com Sat Sep 27 00:10:33 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 27 Sep 2014 08:10:33 +1000 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: On 27 September 2014 03:59, Steve Dower wrote: > I'll post this on the various other lists later, but I promised distutils-sig first taste, especially since the discussion has been raging for a few days (if you're following the setuptools repo, you may already know, but let me take the podium for a few minutes anyway :) ) > > Microsoft has released a compiler package for Python 2.7 to make it easier for people to build and distribute their C extension modules on Windows. The Microsoft Visual C++ Compiler for Python 2.7 (a.k.a. VC9) is available from: http://aka.ms/vcpython27 Wonderful news Steve, thanks! Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sat Sep 27 13:16:05 2014 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 27 Sep 2014 12:16:05 +0100 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: <1411816565.4424.YahooMailNeo@web172401.mail.ir2.yahoo.com> > From: Steve Dower > Microsoft has released a compiler package for Python 2.7 Great. Thank you very much! Downloading it now :-) Regards, Vinay Sajip From Steve.Dower at microsoft.com Sat Sep 27 17:32:45 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 27 Sep 2014 15:32:45 +0000 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com>, Message-ID: <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> It's free (VC Express 2008 is behind a pay wall these days) It's small (85 MB download, 300 MB installed) It's a per-user install with no reboot required If you have the permissions, time, and access for VC Express 2008, it gains you nothing. You're not the intended target audience (I thought I had that wording in the announcement, but I guess not). Most people don't have or want Visual Studio installed on their machine, or need to install on a machine where they're not admin (think university student on a lab machine who needs Cython). 
Cheers, Steve Top-posted from my Windows Phone ________________________________ From: Piotr Dobrogost Sent: ?9/?27/?2014 3:34 To: Steve Dower Cc: distutils sig Subject: Re: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 On Sep 27, 2014 12:32 AM, "Steve Dower" > wrote: > > I'll post this on the various other lists later, but I promised distutils-sig first taste, especially since the discussion has been raging for a few days (if you're following the setuptools repo, you may already know, but let me take the podium for a few minutes anyway :) ) > > Microsoft has released a compiler package for Python 2.7 to make it easier for people to build and distribute their C extension modules on Windows. The Microsoft Visual C++ Compiler for Python 2.7 (a.k.a. VC9) is available from: http://aka.ms/vcpython27 > > This package contains all the tools and headers required to build C extension modules for Python 2.7 32-bit and 64-bit (note that some extension modules require 3rd party dependencies such as OpenSSL or libxml2 that are not included). Other versions of Python built with Visual C++ 2008 are also supported, so "Python 2.7" is just advertising - it'll work fine with 2.6 and 3.2 What that buys us in comparision to simply using VC 2008 Express? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p at 2014.dobrogost.net Sat Sep 27 12:34:24 2014 From: p at 2014.dobrogost.net (Piotr Dobrogost) Date: Sat, 27 Sep 2014 12:34:24 +0200 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: On Sep 27, 2014 12:32 AM, "Steve Dower" wrote: > > I'll post this on the various other lists later, but I promised distutils-sig first taste, especially since the discussion has been raging for a few days (if you're following the setuptools repo, you may already know, but let me take the podium for a few minutes anyway :) ) > > Microsoft has released a compiler package for Python 2.7 to make it easier for people to build and distribute their C extension modules on Windows. The Microsoft Visual C++ Compiler for Python 2.7 (a.k.a. VC9) is available from: http://aka.ms/vcpython27 > > This package contains all the tools and headers required to build C extension modules for Python 2.7 32-bit and 64-bit (note that some extension modules require 3rd party dependencies such as OpenSSL or libxml2 that are not included). Other versions of Python built with Visual C++ 2008 are also supported, so "Python 2.7" is just advertising - it'll work fine with 2.6 and 3.2. What does that buy us in comparison to simply using VC 2008 Express? -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.stufft at RACKSPACE.COM Sun Sep 28 21:31:31 2014 From: donald.stufft at RACKSPACE.COM (Donald Stufft) Date: Sun, 28 Sep 2014 19:31:31 +0000 Subject: [Distutils] Immutable Files on PyPI Message-ID: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Hello All! I'd like to discuss the idea of moving PyPI to having immutable files. This would mean that once you publish a particular file you can never reupload that file again with different contents. 
This would still allow deleting the file or reuploading it if the checksums match what was there prior. This would be good for a few reasons: * It represents "best practices" for version numbers. Ideally if two people have version "2.1" of a project, they'll have the same code, however as it stands two people installing at two different times could have two very different versions. * This will make improving the PyPI infrastructure easier, in particular it will make it simpler to move away from using a glusterfs storage array and switch to a redundant set of cloud object stores. In the past this was brought up and a few points were brought against it, those were: 1. That authors could simply change files that were not hosted on PyPI anyway, so it didn't really do much. 2. That it was too hard to test a release prior to uploading it due to the nature of distutils requiring you to build the release in the same command as the upload. With the fact that pip no longer hits external URLs by default, I believe that the first item is no longer that large of a factor. People can do whatever they want on external URLs of course, however if something is coming from PyPI, as end users should now be aware, they can know it is immutable. Now that there is twine, which allows uploading already created packages, I also believe that the second item is no longer a concern. People can easily create a distribution using ``setup.py sdist``, test it, and then upload that exact thing they tested using ``twine upload ``. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
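The proposed rule — a file may be deleted, or re-uploaded only when its checksum matches what was there prior — amounts to a check like the sketch below (function and storage names are hypothetical; PyPI's real implementation would of course differ):

```python
import hashlib


def _sha256(data):
    """Hex digest used to decide whether two uploads are the same file."""
    return hashlib.sha256(data).hexdigest()


def store_file(index, filename, data):
    """Immutable-files policy: accept an upload only if the filename is
    new, or the contents are byte-for-byte identical to what was
    published before.  Because the checksum of every name ever
    published is remembered, deleting and re-uploading different bytes
    under the same name is rejected too."""
    checksums = index.setdefault("checksums", {})
    digest = _sha256(data)
    if filename in checksums and checksums[filename] != digest:
        raise ValueError(
            "%s was already published with different contents" % filename)
    checksums[filename] = digest
    index.setdefault("files", {})[filename] = data
```

Under this scheme a hotfix still gets a new version number (and hence a new filename), which is exactly the "best practices" point made above.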
URL: From chris.jerdonek at gmail.com Sun Sep 28 23:21:11 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 28 Sep 2014 14:21:11 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: Would this also affect the ability to update the "readme" information for a version on PyPI (i.e. the information displayed on the default home page generated by PyPI for the release) once the version has already been uploaded to PyPI? There are sometimes issues encountered with the display of that information that can't easily be checked without doing an actual version upload. I haven't recently tried reuploading the metadata for a version, mainly because of uncertainty around how PyPI would handle it. Thanks, --Chris On Sun, Sep 28, 2014 at 12:31 PM, Donald Stufft wrote: > Hello All! > > I'd like to discuss the idea of moving PyPI to having immutable files. This > would mean that once you publish a particular file you can never reupload > that > file again with different contents. This would still allow deleting the file > or > reuploading it if the checksums match what was there prior. > > This would be good for a few reasons: > > * It represents "best practices" for version numbers. Ideally if two people > have version "2.1" of a project, they'll have the same code, however as it > stands two people installing at two different times could have two very > different versions. > > * This will make improving the PyPI infrastructure easier, in particular it > will make it simpler to move away from using a glusterfs storage array and > switch to a redudant set of cloud object stores. > > > In the past this was brought up and a few points were brought against it, > those > were: > > 1. That authors could simply change files that were hosted on not PyPI > anyways > so it didn't really do much. > > 2. 
That it was too hard to test a release prior to uploading it due to the > nature of distutils requiring you to build the release in the same > command > as the upload. > > With the fact that pip no longer hits external URLs by default, I believe > that > the first item is no longer that large of a factor. People can do whatever > they > want on external URLs of course, however if something is coming from PyPI as > end users should now be aware of, they can know it is immutable. > > Now that there is twine, which allows uploading already created packages, I > also believe that the second item is no longer a concern. People can easily > create a distribution using ``setup.py sdist``, test it, and then upload > that > exact thing they tested using ``twine upload ``. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From ethan at stoneleaf.us Sun Sep 28 23:21:45 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 28 Sep 2014 14:21:45 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: <54287BE9.8090101@stoneleaf.us> On 09/28/2014 12:31 PM, Donald Stufft wrote: > > I'd like to discuss the idea of moving PyPI to having immutable files. Perhaps I'm missing something, but I already get errors if I try to reupload a package with the same version number. 
-- ~Ethan~ From donald at stufft.io Sun Sep 28 23:22:50 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 28 Sep 2014 17:22:50 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: <00A5FFD1-7C47-4D4A-BE42-C0F43C86A3C7@stufft.io> > On Sep 28, 2014, at 5:21 PM, Chris Jerdonek wrote: > > Would this also affect the ability to update the "readme" information > for a version on PyPI (i.e. the information displayed on the default > home page generated by PyPI for the release) once the version has > already been uploaded to PyPI? > > There are sometimes issues encountered with the display of that > information that can't easily be checked without doing an actual > version upload. > > I haven't recently tried reuploading the metadata for a version, > mainly because of uncertainty around how PyPI would handle it. > No, metadata can be edited at any time currently and even with this proposal. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Sep 28 23:23:10 2014 From: donald at stufft.io (Donald Stufft) Date: Sun, 28 Sep 2014 17:23:10 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287BE9.8090101@stoneleaf.us> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> Message-ID: <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> > On Sep 28, 2014, at 5:21 PM, Ethan Furman wrote: > > On 09/28/2014 12:31 PM, Donald Stufft wrote: >> >> I'd like to discuss the idea of moving PyPI to having immutable files. > > Perhaps I'm missing something, but I already get errors if I try to reupload a package with the same version number. You can delete them and then reupload the same file with different contents. 
--- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Sun Sep 28 23:29:14 2014 From: richard at python.org (Richard Jones) Date: Mon, 29 Sep 2014 07:29:14 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> Message-ID: The intent was always that files were immutable. The deleting loophole is just something that I never got around to fixing. +1 to fix that bug :) On 29 September 2014 07:23, Donald Stufft wrote: > > On Sep 28, 2014, at 5:21 PM, Ethan Furman wrote: > > On 09/28/2014 12:31 PM, Donald Stufft wrote: > > > I'd like to discuss the idea of moving PyPI to having immutable files. > > > Perhaps I'm missing something, but I already get errors if I try to > reupload a package with the same version number. > > > You can delete them and then reupload the same file with different > contents. > > --- > Donald Stufft > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Sun Sep 28 23:33:28 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 28 Sep 2014 14:33:28 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> Message-ID: <54287EA8.6040108@stoneleaf.us> On 09/28/2014 02:29 PM, Richard Jones wrote: > The intent was always that files were immutable. 
The deleting loophole is just something that I never got around to fixing. > > +1 to fix that bug :) Agreed! :) -- ~Ethan~ From mal at egenix.com Sun Sep 28 23:36:45 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 28 Sep 2014 23:36:45 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: <54287F6D.4010901@egenix.com> On 28.09.2014 21:31, Donald Stufft wrote: > Hello All! > > I'd like to discuss the idea of moving PyPI to having immutable files. This > would mean that once you publish a particular file you can never reupload that > file again with different contents. This would still allow deleting the file or > reuploading it if the checksums match what was there prior. > > This would be good for a few reasons: > > * It represents "best practices" for version numbers. Ideally if two people > have version "2.1" of a project, they'll have the same code, however as it > stands two people installing at two different times could have two very > different versions. > > * This will make improving the PyPI infrastructure easier, in particular it > will make it simpler to move away from using a glusterfs storage array and > switch to a redudant set of cloud object stores. > > > In the past this was brought up and a few points were brought against it, those > were: > > 1. That authors could simply change files that were hosted on not PyPI anyways > so it didn't really do much. > > 2. That it was too hard to test a release prior to uploading it due to the > nature of distutils requiring you to build the release in the same command > as the upload. > > With the fact that pip no longer hits external URLs by default, I believe that > the first item is no longer that large of a factor. 
People can do whatever they > want on external URLs of course, however if something is coming from PyPI as > end users should now be aware of, they can know it is immutable. > > Now that there is twine, which allows uploading already created packages, I > also believe that the second item is no longer a concern. People can easily > create a distribution using ``setup.py sdist``, test it, and then upload that > exact thing they tested using ``twine upload ``. -1. It does happen that files need to be reuploaded because of a bug in the release process and how people manage their code is really *their* business, not that of PyPI. FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate the way package authors are supposed to build, manage and host their Python packages and release process. Can we please stop this ? -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Sep 28 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-09-30: Python Meeting Duesseldorf ... 2 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From richard at python.org Sun Sep 28 23:54:11 2014 From: richard at python.org (Richard Jones) Date: Mon, 29 Sep 2014 07:54:11 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: Just to reiterate: from the beginning uploaded files were immutable, but the later addition of deletion gave uploaders the loophole through which they could not confuse downloaders of their packages. On 29 September 2014 07:36, M.-A. Lemburg wrote: > On 28.09.2014 21:31, Donald Stufft wrote: > > Hello All! > > > > I'd like to discuss the idea of moving PyPI to having immutable files. > This > > would mean that once you publish a particular file you can never > reupload that > > file again with different contents. This would still allow deleting the > file or > > reuploading it if the checksums match what was there prior. > > > > This would be good for a few reasons: > > > > * It represents "best practices" for version numbers. Ideally if two > people > > have version "2.1" of a project, they'll have the same code, however > as it > > stands two people installing at two different times could have two very > > different versions. > > > > * This will make improving the PyPI infrastructure easier, in particular > it > > will make it simpler to move away from using a glusterfs storage array > and > > switch to a redudant set of cloud object stores. > > > > > > In the past this was brought up and a few points were brought against > it, those > > were: > > > > 1. That authors could simply change files that were hosted on not PyPI > anyways > > so it didn't really do much. > > > > 2. 
That it was too hard to test a release prior to uploading it due to > the > > nature of distutils requiring you to build the release in the same > command > > as the upload. > > > > With the fact that pip no longer hits external URLs by default, I > believe that > > the first item is no longer that large of a factor. People can do > whatever they > > want on external URLs of course, however if something is coming from > PyPI as > > end users should now be aware of, they can know it is immutable. > > > > Now that there is twine, which allows uploading already created > packages, I > > also believe that the second item is no longer a concern. People can > easily > > create a distribution using ``setup.py sdist``, test it, and then upload > that > > exact thing they tested using ``twine upload ``. > > -1. > > It does happen that files need to be reuploaded because of a bug > in the release process and how people manage their code is really > *their* business, not that of PyPI. > > FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate > the way package authors are supposed to build, manage and host their > Python packages and release process. Can we please stop this ? > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Sep 28 2014) > >>> Python Projects, Consulting and Support ... http://www.egenix.com/ > >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2014-09-30: Python Meeting Duesseldorf ... 2 days to go > > ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokoproject at gmail.com Sun Sep 28 23:54:51 2014 From: gokoproject at gmail.com (John Yeuk Hon Wong) Date: Sun, 28 Sep 2014 17:54:51 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> Message-ID: <542883AB.4050107@gmail.com> On 9/28/14 5:23 PM, Donald Stufft wrote: > You can delete them and then reupload the same file with different > contents. > Sorry, but I am confused: what does different content mean in contrast to "same file"? -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Sun Sep 28 23:58:29 2014 From: richard at python.org (Richard Jones) Date: Mon, 29 Sep 2014 07:58:29 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: And here's a version run through a "for other people to read" filter ;) From the beginning uploaded files were immutable, but later we added the ability to delete releases, which gave uploaders the loophole through which they could confuse downloaders of their packages. On 29 September 2014 07:54, Richard Jones wrote: > Just to reiterate: from the beginning uploaded files were immutable, but > the later addition of deletion gave uploaders the loophole through which > they could confuse downloaders of their packages. > > On 29 September 2014 07:36, M.-A.
Lemburg wrote: > >> On 28.09.2014 21:31, Donald Stufft wrote: >> > Hello All! >> > >> > I'd like to discuss the idea of moving PyPI to having immutable files. >> This >> > would mean that once you publish a particular file you can never >> reupload that >> > file again with different contents. This would still allow deleting the >> file or >> > reuploading it if the checksums match what was there prior. >> > >> > This would be good for a few reasons: >> > >> > * It represents "best practices" for version numbers. Ideally if two >> people >> > have version "2.1" of a project, they'll have the same code, however >> as it >> > stands two people installing at two different times could have two >> very >> > different versions. >> > >> > * This will make improving the PyPI infrastructure easier, in >> particular it >> > will make it simpler to move away from using a glusterfs storage >> array and >> > switch to a redudant set of cloud object stores. >> > >> > >> > In the past this was brought up and a few points were brought against >> it, those >> > were: >> > >> > 1. That authors could simply change files that were hosted on not PyPI >> anyways >> > so it didn't really do much. >> > >> > 2. That it was too hard to test a release prior to uploading it due to >> the >> > nature of distutils requiring you to build the release in the same >> command >> > as the upload. >> > >> > With the fact that pip no longer hits external URLs by default, I >> believe that >> > the first item is no longer that large of a factor. People can do >> whatever they >> > want on external URLs of course, however if something is coming from >> PyPI as >> > end users should now be aware of, they can know it is immutable. >> > >> > Now that there is twine, which allows uploading already created >> packages, I >> > also believe that the second item is no longer a concern. 
People can >> easily >> > create a distribution using ``setup.py sdist``, test it, and then >> upload that >> > exact thing they tested using ``twine upload ``. >> >> -1. >> >> It does happen that files need to be reuploaded because of a bug >> in the release process and how people manage their code is really >> *their* business, not that of PyPI. >> >> FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate >> the way package authors are supposed to build, manage and host their >> Python packages and release process. Can we please stop this ? >> >> -- >> Marc-Andre Lemburg >> eGenix.com >> >> Professional Python Services directly from the Source (#1, Sep 28 2014) >> >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >> >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >> >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ >> ________________________________________________________________________ >> 2014-09-30: Python Meeting Duesseldorf ... 2 days to go >> >> ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: >> >> eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 >> D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg >> Registered at Amtsgericht Duesseldorf: HRB 46611 >> http://www.egenix.com/company/contact/ >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gaynor at gmail.com Sun Sep 28 23:58:20 2014 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Sun, 28 Sep 2014 21:58:20 +0000 (UTC) Subject: [Distutils] Immutable Files on PyPI References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: M.-A. Lemburg egenix.com> writes: > > -1. 
> > It does happen that files need to be reuploaded because of a bug > in the release process and how people manage their code is really > *their* business, not that of PyPI. It's not just the business of the package authors, because as soon as it's uploaded it's visible to users, and swapping it out from under their feet is a crummy thing to do. > > FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate > the way package authors are supposed to build, manage and host their > Python packages and release process. Can we please stop this ? > I want to specifically reply to this: Over the past 6-12 months, the quality of my experience using PyPI and pip has increased so dramatically, it leaves me wondering how I ever used Python before. I used to, on a regular basis, see pip randomly hang trying to spider external stuff, have my downloads silently exposed to MITM attacks via HTTP, and randomly start getting alphas of packages people uploaded without realizing that the machinery didn't know about pre-release vs. release packages. The changes to pip and PyPI have resolved these issues, and dozens of others. Yes, we've constrained PyPI, but across the board we've almost exclusively constrained things that are nearly universally agreed to be a bad idea. To quote Glyph, "Constraints make the medium". PyPI is a medium, a canvas for us to paint a user experience on. Having it be a simple "index" as it was originally conceived gives package authors a nearly unlimited ability to create bad, misleading, and insecure experiences for users. By constraining what the medium of PyPI is, we make it SO much easier for users and package authors to be a part of a good ecosystem. So I say: Carry on Donald and others, keep pushing for the only user experience to be a great one.
+1 on this proposal, Alex From richard at python.org Sun Sep 28 23:58:58 2014 From: richard at python.org (Richard Jones) Date: Mon, 29 Sep 2014 07:58:58 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542883AB.4050107@gmail.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> <542883AB.4050107@gmail.com> Message-ID: He means a file with the same file name, but not necessarily the same content. On 29 September 2014 07:54, John Yeuk Hon Wong wrote: > On 9/28/14 5:23 PM, Donald Stufft wrote: > > > You can delete them and then reupload the same file with different > contents. > > Sorry, but I am confused: what does different content mean in contrast to > "same file"? > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sun Sep 28 23:51:13 2014 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 29 Sep 2014 10:51:13 +1300 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: On 29 September 2014 10:36, M.-A. Lemburg wrote: > -1. > > It does happen that files need to be reuploaded because of a bug > in the release process and how people manage their code is really > *their* business, not that of PyPI. > > FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate > the way package authors are supposed to build, manage and host their > Python packages and release process. Can we please stop this ? PyPI is mirrored by many people, most hopefully using bandersnatch. If you change the contents of a release, that will usually break someone somewhere. 
Places I've seen it break:

- BSD ports trees [sha1sum no longer matches]
- Dpkg and rpm source builds [content no longer matches upstream, doesn't break hash because those projects cache the source code themselves]
- Non-bandersnatch mirrors (such as devpi, or pypi-mirror) which assume files are immutable and don't cross-check once a file is successfully downloaded.

PEP-440 provides the postN version suffix *specifically* to allow folk to fix a release without running into these issues. Is that something you can use? I don't see the work being done on PyPI as dictating how code is managed: you can delete things, you can upload new things. What it's doing with this specific change is enforcing immutability of *public artifacts* which most of the software ecosystem already depends on. +1 from me. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ethan at stoneleaf.us Sun Sep 28 23:59:42 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 28 Sep 2014 14:59:42 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542883AB.4050107@gmail.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287BE9.8090101@stoneleaf.us> <5755CF17-1023-4AD3-8A87-CAB88F824BE2@stufft.io> <542883AB.4050107@gmail.com> Message-ID: <542884CE.7080201@stoneleaf.us> On 09/28/2014 02:54 PM, John Yeuk Hon Wong wrote: > On 9/28/14 5:23 PM, Donald Stufft wrote: >> >> You can delete them and then reupload the same file with different contents. > > Sorry, but I am confused: what does different content mean in contrast to "same file"? Same file name, but different stuff in that file.
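The "same file name, but different stuff in that file" distinction, and Donald's rule that a re-upload is allowed only "if the checksums match what was there prior", both reduce to a single digest comparison. A minimal sketch of that policy in Python follows; the class and file names are invented for illustration and this is not PyPI's actual implementation:

```python
import hashlib


class ImmutableIndex:
    """Toy model of the proposed policy: a filename, once uploaded, is
    permanently bound to the sha256 of its contents; deleting the file
    does not forget that binding."""

    def __init__(self):
        self.digests = {}  # filename -> sha256 recorded at first upload
        self.files = {}    # filename -> bytes currently hosted

    def upload(self, filename, data):
        digest = hashlib.sha256(data).hexdigest()
        known = self.digests.get(filename)
        if known is not None and known != digest:
            # Re-upload is only allowed when the checksums match.
            raise ValueError(filename + ": contents differ from original upload")
        self.digests[filename] = digest
        self.files[filename] = data

    def delete(self, filename):
        # The file goes away, but the digest is remembered, so a later
        # re-upload must byte-for-byte match what was there prior.
        self.files.pop(filename, None)


index = ImmutableIndex()
index.upload("demo-1.0.tar.gz", b"original bytes")
index.delete("demo-1.0.tar.gz")
index.upload("demo-1.0.tar.gz", b"original bytes")  # same checksum: allowed
try:
    index.upload("demo-1.0.tar.gz", b"patched bytes")  # different: rejected
except ValueError as exc:
    print("rejected:", exc)
```

Mirrors can run the same comparison from the other side: recompute the digest of a downloaded file and refuse to overwrite a cached copy whose hash differs, which is exactly the failure mode the breakage list above describes.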
-- ~Ethan~ From donald.stufft at RACKSPACE.COM Sun Sep 28 23:59:01 2014 From: donald.stufft at RACKSPACE.COM (Donald Stufft) Date: Sun, 28 Sep 2014 21:59:01 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> On Sep 28, 2014, at 5:36 PM, M.-A. Lemburg > wrote: On 28.09.2014 21:31, Donald Stufft wrote: Hello All! I'd like to discuss the idea of moving PyPI to having immutable files. This would mean that once you publish a particular file you can never reupload that file again with different contents. This would still allow deleting the file or reuploading it if the checksums match what was there prior. This would be good for a few reasons: * It represents "best practices" for version numbers. Ideally if two people have version "2.1" of a project, they'll have the same code, however as it stands two people installing at two different times could have two very different versions. * This will make improving the PyPI infrastructure easier, in particular it will make it simpler to move away from using a glusterfs storage array and switch to a redudant set of cloud object stores. In the past this was brought up and a few points were brought against it, those were: 1. That authors could simply change files that were hosted on not PyPI anyways so it didn't really do much. 2. That it was too hard to test a release prior to uploading it due to the nature of distutils requiring you to build the release in the same command as the upload. With the fact that pip no longer hits external URLs by default, I believe that the first item is no longer that large of a factor. People can do whatever they want on external URLs of course, however if something is coming from PyPI as end users should now be aware of, they can know it is immutable. 
Now that there is twine, which allows uploading already created packages, I also believe that the second item is no longer a concern. People can easily create a distribution using ``setup.py sdist``, test it, and then upload that exact thing they tested using ``twine upload ``. -1. It does happen that files need to be reuploaded because of a bug in the release process and how people manage their code is really *their* business, not that of PyPI. Can you describe a reasonable hypothetical situation where this would occur often enough as to be something that is likely to happen on a consistent basis? Originally the problem was there was little ability to easily upload pre-created files so there was a reasonable chance that there may be a packaging bug that didn't get exposed until you actually packaged + released. With the advent of twine though it's now possible to test the exact bits that get uploaded to PyPI, making that particular issue no longer a problem. However, the fact that the files are not immutable *does* cause a number of problems that need to be worked around in the mirroring infrastructure, the CDN, and for scaling PyPI out and removing the glusterfs component. FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate the way package authors are supposed to build, manage and host their Python packages and release process. Can we please stop this ? I recognize your annoyance, however I think that the changes that have been made are overall good changes that negatively affect a minor subset of people and positively affect a much wider group of people. Speaking as one of the people who are pushing the hardest for the kinds of changes that I assume you're talking about, I do try and figure out ways to continue to enable the "alternative" methods of doing things while still allowing forward progress on making things better for the masses.
If there's something I could have done more to ease that pain other than *not* making changes at all then I would be grateful to hear it! I don't want to make these changes painful for people where that can be helped. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.stufft at RACKSPACE.COM Mon Sep 29 00:02:51 2014 From: donald.stufft at RACKSPACE.COM (Donald Stufft) Date: Sun, 28 Sep 2014 22:02:51 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: On Sep 28, 2014, at 5:36 PM, M.-A. Lemburg > wrote: On 28.09.2014 21:31, Donald Stufft wrote: Hello All! I'd like to discuss the idea of moving PyPI to having immutable files. This would mean that once you publish a particular file you can never reupload that file again with different contents. This would still allow deleting the file or reuploading it if the checksums match what was there prior. This would be good for a few reasons: * It represents "best practices" for version numbers. Ideally if two people have version "2.1" of a project, they'll have the same code, however as it stands two people installing at two different times could have two very different versions. * This will make improving the PyPI infrastructure easier, in particular it will make it simpler to move away from using a glusterfs storage array and switch to a redundant set of cloud object stores. In the past this was brought up and a few points were brought against it, those were: 1. That authors could simply change files that were hosted on not PyPI anyways so it didn't really do much. 2. That it was too hard to test a release prior to uploading it due to the nature of distutils requiring you to build the release in the same command as the upload.
With the fact that pip no longer hits external URLs by default, I believe that the first item is no longer that large of a factor. People can do whatever they want on external URLs of course, however if something is coming from PyPI as end users should now be aware of, they can know it is immutable. Now that there is twine, which allows uploading already created packages, I also believe that the second item is no longer a concern. People can easily create a distribution using ``setup.py sdist``, test it, and then upload that exact thing they tested using ``twine upload ``. -1. It does happen that files need to be reuploaded because of a bug in the release process and how people manage their code is really *their* business, not that of PyPI. FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate the way package authors are supposed to build, manage and host their Python packages and release process. Can we please stop this ? I forgot to mention, there is also testpypi.python.org which can be used to test builds prior to publishing them to PyPI. There is also devpi which I believe has the option to push from one of the devpi indexes straight to PyPI as well (I could be wrong about that though?). In general the tooling has gotten *a lot* better at making it possible to test things thoroughly before uploading them to PyPI, where uploading them to PyPI is the very last piece in the puzzle. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon Sep 29 00:04:35 2014 From: ethan at stoneleaf.us (Ethan Furman) Date: Sun, 28 Sep 2014 15:04:35 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: <542885F3.80307@stoneleaf.us> On 09/28/2014 02:36 PM, M.-A. Lemburg wrote: > > -1. 
> > It does happen that files need to be reuploaded because of a bug > in the release process and how people manage their code is really > *their* business, not that of PyPI. There exist problems in the ecosystem now with pylockfile because two different files with the same version number were released, and one works and one doesn't. If you screw up your build, bump the version and rerelease it. -- ~Ethan~ From richard at python.org Mon Sep 29 00:06:47 2014 From: richard at python.org (Richard Jones) Date: Mon, 29 Sep 2014 08:06:47 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: On 29 September 2014 08:02, Donald Stufft wrote: > > I forgot to mention, there is also testpypi.python.org which can be used > to test > builds prior to publishing them to PyPI. There is also devpi which I > believe has > the option to push from one of the devpi indexes straight to PyPI as well > (I could > be wrong about that though?). > Yes, and devpi has integrated testing of release files with recording of test runs built in. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Sep 29 00:51:12 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Sep 2014 08:51:12 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54287F6D.4010901@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: On 29 Sep 2014 07:37, "M.-A. Lemburg" wrote: > > -1. > > It does happen that files need to be reuploaded because of a bug > in the release process and how people manage their code is really > *their* business, not that of PyPI. > > FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate > the way package authors are supposed to build, manage and host their > Python packages and release process. Can we please stop this ?
As others have noted, these changes represent the PyPA being opinionated on behalf of the user community, to provide the best possible user experience for the overall Python ecosystem. We'll accommodate the existing publisher community as far as is feasible (that's why PEP 440 is as complicated as it is, for example), but there are going to be times where improving the end user experience means adding new constraints on publishers. External hosting (using PyPI as an index, without also using it for release file hosting) is the primary "escape clause" for software publishers that prefer to do things differently from the way PyPI does them. That's a user experience we'll also continue to work to improve, to ensure it is clear that it's a fully supported part of the distribution model. Regards, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Mon Sep 29 02:23:52 2014 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 28 Sep 2014 17:23:52 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: > > It does happen that files need to be reuploaded because of a bug > > in the release process and how people manage their code is really > > *their* business, not that of PyPI. > > It's not just the business of the package authors, because as soon as it's > uploaded it's visible to uesrs, and swapping it out from under their feet > is a > crummy thing to do. > agreed, +1 to the proposal. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From graffatcolmingov at gmail.com Mon Sep 29 04:26:41 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Sun, 28 Sep 2014 21:26:41 -0500 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: +1 I know I abused this a couple times a couple years ago, but it bothered me that I could. It also worried me because if my account were ever compromised, someone could release malware under files named exactly the same as my real released software. This won't prevent them from deleting those other versions and uploading something new, but it will provide a small bit of extra assurance. On Sun, Sep 28, 2014 at 7:23 PM, Marcus Smith wrote: > >> >> > It does happen that files need to be reuploaded because of a bug >> > in the release process and how people manage their code is really >> > *their* business, not that of PyPI. >> >> It's not just the business of the package authors, because as soon as it's >> uploaded it's visible to uesrs, and swapping it out from under their feet >> is a >> crummy thing to do. > > > agreed, +1 to the proposal. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From yasumoto7 at gmail.com Mon Sep 29 04:06:41 2014 From: yasumoto7 at gmail.com (Joe Smith) Date: Sun, 28 Sep 2014 19:06:41 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: Thanks for the well-reasoned response Nick. Donald: +1 to your proposal. This will increase stability for the main package repository, which is a good direction to move toward. On Sun, Sep 28, 2014 at 3:51 PM, Nick Coghlan wrote: > > On 29 Sep 2014 07:37, "M.-A. Lemburg" wrote: > > > > -1. 
> > > > It does happen that files need to be reuploaded because of a bug > > in the release process and how people manage their code is really > > *their* business, not that of PyPI. > > > > FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate > > the way package authors are supposed to build, manage and host their > > Python packages and release process. Can we please stop this ? > > As others have noted, these changes represent the PyPA being opinionated > on behalf of the user community, to provide the best possible user > experience for the overall Python ecosystem. > > We'll accommodate the existing publisher community as far as is feasible > (that's why PEP 440 is as complicated as it is, for example), but there are > going to be times where improving the end user experience means adding new > constraints on publishers. > > External hosting (using PyPI as an index, without also using it for > release file hosting) is the primary "escape clause" for software > publishers that prefer to do things differently from the way PyPI does > them. That's a user experience we'll also continue to work to improve, to > ensure it is clear that it's a fully supported part of the distribution > model. > > Regards, > Nick. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at pov.lt Mon Sep 29 07:28:04 2014 From: marius at pov.lt (Marius Gedminas) Date: Mon, 29 Sep 2014 08:28:04 +0300 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: <20140929052804.GA8453@fridge.pov.lt> On Sun, Sep 28, 2014 at 02:21:11PM -0700, Chris Jerdonek wrote: > Would this also affect the ability to update the "readme" information > for a version on PyPI (i.e. 
the information displayed on the default > home page generated by PyPI for the release) once the version has > already been uploaded to PyPI? > > There are sometimes issues encountered with the display of that > information that can't easily be checked without doing an actual > version upload. restview has a --pypi-strict mode for this use-case #shamelessplug https://pypi.python.org/pypi/restview > I haven't recently tried reuploading the metadata for a version, > mainly because of uncertainty around how PyPI would handle it. In the past when I needed to fix stupid mistakes in my long_description, I'd edit it using the web interface. Marius Gedminas -- if (DefRel.empty() == false) -- apt-pkg/policy.cc (apt 0.5.23) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From noah at coderanger.net Mon Sep 29 07:40:38 2014 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 28 Sep 2014 22:40:38 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> Message-ID: On Sep 28, 2014, at 12:31 PM, Donald Stufft wrote: > Hello All! > > I'd like to discuss the idea of moving PyPI to having immutable files. This > would mean that once you publish a particular file you can never reupload that > file again with different contents. This would still allow deleting the file or > reuploading it if the checksums match what was there prior. > +1. Would vastly simplify the infra side! --Noah -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.jerdonek at gmail.com Mon Sep 29 09:06:38 2014 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 29 Sep 2014 00:06:38 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140929052804.GA8453@fridge.pov.lt> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929052804.GA8453@fridge.pov.lt> Message-ID: On Sun, Sep 28, 2014 at 10:28 PM, Marius Gedminas wrote: > On Sun, Sep 28, 2014 at 02:21:11PM -0700, Chris Jerdonek wrote: >> Would this also affect the ability to update the "readme" information >> for a version on PyPI (i.e. the information displayed on the default >> home page generated by PyPI for the release) once the version has >> already been uploaded to PyPI? >> >> There are sometimes issues encountered with the display of that >> information that can't easily be checked without doing an actual >> version upload. > > restview has a --pypi-strict mode for this use-case #shamelessplug > https://pypi.python.org/pypi/restview > >> I haven't recently tried reuploading the metadata for a version, >> mainly because of uncertainty around how PyPI would handle it. > > In the past when I needed to fix stupid mistakes in my long_description, > I'd edit it using the web interface. Thanks for the plug, Marius. But I should clarify that the metadata issues aren't restricted to the long_description. For example, there was a case where a co-maintainer of a project uploaded a new version of the project; and for reasons still unknown to me, the Trove classifiers did not render on the PyPI home page for the project, even though they rendered as expected for previous releases (and there was no change to the setup.py aside from the version). 
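Metadata surprises like the classifier rendering issue described here are hard to see before release, but the metadata PyPI reads lives in the sdist's PKG-INFO file, which can be inspected locally before uploading. A minimal, self-contained sketch of such a pre-upload check; the "demo" package and its contents are invented for illustration, and with a real release you would open the dist/*.tar.gz that twine is about to upload:

```python
import io
import tarfile
from email.parser import HeaderParser

# Build a tiny stand-in sdist in memory so the example is self-contained.
pkg_info = (
    "Metadata-Version: 1.1\n"
    "Name: demo\n"
    "Version: 1.0\n"
    "Classifier: Development Status :: 4 - Beta\n"
    "Classifier: Programming Language :: Python\n"
)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = pkg_info.encode()
    info = tarfile.TarInfo("demo-1.0/PKG-INFO")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

# Inspect the metadata PyPI will actually see: PKG-INFO is an
# RFC 822-style header block, so the stdlib email parser reads it.
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = next(name for name in tar.getnames() if name.endswith("PKG-INFO"))
    headers = HeaderParser().parsestr(tar.extractfile(member).read().decode())

classifiers = headers.get_all("Classifier") or []
assert classifiers, "no Trove classifiers in PKG-INFO"
print(headers["Name"], headers["Version"], len(classifiers), "classifiers")
```

This does not catch rendering quirks on the PyPI side, but it does confirm that the classifiers and other fields actually made it into the file being uploaded, which is the part the publisher controls.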
--Chris > > Marius Gedminas > -- > if (DefRel.empty() == false) > -- apt-pkg/policy.cc (apt 0.5.23) > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > From mal at egenix.com Mon Sep 29 10:46:26 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 29 Sep 2014 10:46:26 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> Message-ID: <54291C62.8050405@egenix.com> On 28.09.2014 23:59, Donald Stufft wrote: > >> On Sep 28, 2014, at 5:36 PM, M.-A. Lemburg > wrote: >> >> On 28.09.2014 21:31, Donald Stufft wrote: >>> Hello All! >>> >>> I'd like to discuss the idea of moving PyPI to having immutable files. This >>> would mean that once you publish a particular file you can never reupload that >>> file again with different contents. This would still allow deleting the file or >>> reuploading it if the checksums match what was there prior. >>> >>> This would be good for a few reasons: >>> >>> * It represents "best practices" for version numbers. Ideally if two people >>> have version "2.1" of a project, they'll have the same code, however as it >>> stands two people installing at two different times could have two very >>> different versions. >>> >>> * This will make improving the PyPI infrastructure easier, in particular it >>> will make it simpler to move away from using a glusterfs storage array and >>> switch to a redundant set of cloud object stores. >>> >>> >>> In the past this was brought up and a few points were brought against it, those >>> were: >>> >>> 1. That authors could simply change files that were hosted on not PyPI anyways >>> so it didn't really do much. >>> >>> 2.
That it was too hard to test a release prior to uploading it due to the >>> nature of distutils requiring you to build the release in the same command >>> as the upload. >>> >>> With the fact that pip no longer hits external URLs by default, I believe that >>> the first item is no longer that large of a factor. People can do whatever they >>> want on external URLs of course, however if something is coming from PyPI as >>> end users should now be aware of, they can know it is immutable. >>> >>> Now that there is twine, which allows uploading already created packages, I >>> also believe that the second item is no longer a concern. People can easily >>> create a distribution using ``setup.py sdist``, test it, and then upload that >>> exact thing they tested using ``twine upload ``. >> >> -1. >> >> It does happen that files need to be reuploaded because of a bug >> in the release process and how people manage their code is really >> *their* business, not that of PyPI. > > Can you describe a reasonable hypothetical situation where this would occur > often enough as to be something that is likely to happen on a consistent > basis? Originally the problem was there was little ability to easily upload > pre-created files so there was a reasonable chance that there may be a > packaging bug that didn?t get exposed until you actually packaged + released. > > With the advent of twine though it?s now possible to test the exact bits that > get uploaded to PyPI making that particular issue no longer a problem. > > However, the fact that the files are not immutable *do* cause a number of > problems that need to be worked around in the mirroring infrastructure, the > CDN, and for scaling PyPI out and removing the glusterfs component. You are missing out on cases, where the release process causes files to be omitted, human errors where packagers forget to apply changes to e.g. 
documentation files, version files, change logs, etc., where packagers want to add information that doesn't affect the software itself, but meta information included in the distribution files. Such changes often do not affect the software itself, and so are not detected by software tests. If I understand you correctly, you are essentially suggesting that it becomes impossible to ever delete anything uploaded to PyPI, i.e. turning PyPI into a WORM. This would mean that package authors could never correct mistakes, remove broken packages distribution files, ones which they may be forced to remove for legal reasons, ones which they find are infected with a virus or trojan, ones which they uploaded for fun or by mistake. This doesn't have anything to do with making the user experience a better one. It is ignorant to assume that package authors who sometimes delete distribution files, or at least want to have the possibility to do so, don't care for their users. We are in Python land, so most authors will know what they are doing and do care for their users. After all: Why do you think I'm arguing against this proposal ? Because I want users of our packages to get the best experience they can get, by downloading complete, correct and working distribution files. This whole idea also has another angle, namely a legal one: the PSF doesn't own the distribution files it hosts on PyPI. So far, the argument to not fix the much too broad license on PyPI was that authors were able to delete files on PyPI to work around the unneeded "irrevocable" part of that license. With the suggested change, authors would have to give up complete control over their distribution files to the PSF in order for their packages to be installable by pip using its default settings. This kind of lock-in and removal of author rights is not something I can support as PSF director. 
Those authors are the ones that have created a large part of our Python eco system and they are the ones that have put in work to get Python to where it is now: one of the best integrated programming languages you can find. We owe a lot to those authors and need to care for them. Finally, changes such as the above will result in more authors to switch to alternative hosting platforms such as conda/binstar.org or plain github clone + setup.py install (which is becoming increasingly popular). Do you really believe that this will make the user experience a better one in the long run ? If we want to make it attractive for package authors to host their packages on PyPI, we have to give them flexibility, respect their rights and be welcoming. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Sep 29 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-09-30: Python Meeting Duesseldorf ... tomorrow ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From p.f.moore at gmail.com Mon Sep 29 11:02:34 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 29 Sep 2014 10:02:34 +0100 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54291C62.8050405@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> Message-ID: On 29 September 2014 09:46, M.-A. 
Lemburg wrote: > If I understand you correctly, you are essentially suggesting that it > becomes impossible to ever delete anything uploaded to PyPI, i.e. > turning PyPI into a WORM. My understanding (I'm sure Donald will correct me if I'm wrong) is that it will still be possible to delete files. What will *not* be possible will be to later upload a replacement for a deleted file which contains different content. From a user perspective this means that it's possible for a release to be withdrawn, but replacements will always be new versions, and if I download a particular version, what I get won't depend on when I do the download. This seems like a good thing to me. Paul From mal at egenix.com Mon Sep 29 11:04:17 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 29 Sep 2014 11:04:17 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> Message-ID: <54292091.5010507@egenix.com> On 29.09.2014 00:51, Nick Coghlan wrote: > On 29 Sep 2014 07:37, "M.-A. Lemburg" wrote: >> >> -1. >> >> It does happen that files need to be reuploaded because of a bug >> in the release process and how people manage their code is really >> *their* business, not that of PyPI. >> >> FWIW, I am getting increasingly annoyed how PyPI and pip try to dictate >> the way package authors are supposed to build, manage and host their >> Python packages and release process. Can we please stop this ? > > As others have noted, these changes represent the PyPA being opinionated on > behalf of the user community, to provide the best possible user experience > for the overall Python ecosystem. See my reply to Donald. I find this wrong on several different levels. PyPI is run by the PSF, it's a community resource we provide for package authors and downloaders. We (the PSF) don't take sides.
Instead, we want to help everyone feel at home: the package authors who provide the Python eco system with fresh software, as well as the users who greatly benefit from this software. The PyPA takes care of the technical aspects of this, but not the ethical and community building aspects. > We'll accommodate the existing publisher community as far as is feasible > (that's why PEP 440 is as complicated as it is, for example), but there are > going to be times where improving the end user experience means adding new > constraints on publishers. > > External hosting (using PyPI as an index, without also using it for release > file hosting) is the primary "escape clause" for software publishers that > prefer to do things differently from the way PyPI does them. That's a user > experience we'll also continue to work to improve, to ensure it is clear > that it's a fully supported part of the distribution model. Right, so authors will move away from PyPI and put their stuff up elsewhere. Now, how does this help our community ? What if people find that they can only get packages using conda instead of pip, or only by cloning from github, because package authors don't want to bother cutting distribution files anymore ? Do you seriously want to force package authors to cut a new release just because a single uploaded distribution file is broken for some reason and then ask all users who have already installed one of the non-broken ones to upgrade again, even though they are not affected ? Please repeat with me: Package authors care for their users :-) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Sep 29 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-09-30: Python Meeting Duesseldorf ... 
tomorrow ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From holger at merlinux.eu Mon Sep 29 11:04:55 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 29 Sep 2014 09:04:55 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54291C62.8050405@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> Message-ID: <20140929090455.GQ7954@merlinux.eu> On Mon, Sep 29, 2014 at 10:46 +0200, M.-A. Lemburg wrote: > On 28.09.2014 23:59, Donald Stufft wrote: > > > >> On Sep 28, 2014, at 5:36 PM, M.-A. Lemburg > wrote: > >> > >> On 28.09.2014 21:31, Donald Stufft wrote: > >>> Hello All! > >>> > >>> I'd like to discuss the idea of moving PyPI to having immutable files. This > >>> would mean that once you publish a particular file you can never reupload that > >>> file again with different contents. This > >>> would still allow deleting the file or > >>> reuploading it if the checksums match what was there prior. > >>> > >>> This would be good for a few reasons: > >>> > >>> * It represents "best practices" for version numbers. Ideally if two people > >>> have version "2.1" of a project, they'll have the same code, however as it > >>> stands two people installing at two different times could have two very > >>> different versions. > >>> > >>> * This will make improving the PyPI infrastructure easier, in particular it > >>> will make it simpler to move away from using a glusterfs storage array and > >>> switch to a redundant set of cloud object stores. > >>> > >>> > >>> In the past this was brought up and a few points were brought against it, those > >>> were: > >>> > >>> 1.
That authors could simply change files that were hosted on not PyPI anyways > >>> so it didn't really do much. > >>> > >>> 2. That it was too hard to test a release prior to uploading it due to the > >>> nature of distutils requiring you to build the release in the same command > >>> as the upload. > >>> > >>> With the fact that pip no longer hits external URLs by default, I believe that > >>> the first item is no longer that large of a factor. People can do whatever they > >>> want on external URLs of course, however if something is coming from PyPI as > >>> end users should now be aware of, they can know it is immutable. > >>> > >>> Now that there is twine, which allows uploading already created packages, I > >>> also believe that the second item is no longer a concern. People can easily > >>> create a distribution using ``setup.py sdist``, test it, and then upload that > >>> exact thing they tested using ``twine upload ``. > >> > >> -1. > >> > >> It does happen that files need to be reuploaded because of a bug > >> in the release process and how people manage their code is really > >> *their* business, not that of PyPI. > > > > Can you describe a reasonable hypothetical situation where this would occur > > often enough as to be something that is likely to happen on a consistent > > basis? Originally the problem was there was little ability to easily upload > > pre-created files so there was a reasonable chance that there may be a > > packaging bug that didn't get exposed until you actually packaged + released. > > > > With the advent of twine though it's now possible to test the exact bits that > > get uploaded to PyPI making that particular issue no longer a problem. > > > > However, the fact that the files are not immutable *do* cause a number of > > problems that need to be worked around in the mirroring infrastructure, the > > CDN, and for scaling PyPI out and removing the glusterfs component.
> > You are missing out on cases, where the release process causes files to > be omitted, human errors where packagers forget to apply changes to > e.g. documentation files, version files, change logs, etc., where > packagers want to add information that doesn't affect the software > itself, but meta information included in the distribution files. I've had such cases myself. That's the only real caveat I see with the proposal. Then again, wheels don't allow uploading docs/changelogs today. And pypi would continue to allow changing metadata [*]. I also see the advantage of immutability of the (filename->content) relation so I am +0 on the proposal currently. > Such changes often do not affect the software itself, and so are not > detected by software tests. > > If I understand you correctly, you are essentially suggesting that it > becomes impossible to ever delete anything uploaded to PyPI, i.e. > turning PyPI into a WORM. No, Donald said deleting would be fine. But you couldn't then re-upload to the same filename with a different checksum because pypi would memorize those properties. > This would mean that package authors could never correct mistakes, > remove broken packages distribution files, ones which they may be > forced to remove for legal reasons, ones which they find are infected > with a virus or trojan, ones which they uploaded for fun or > by mistake. In this case you would just delete the release under Donald's proposal. best, holger [*] In some way, retro-actively changing the license in release metadata is also questionable. Maybe it should just be made clear that the "license" pypi metadata is not reliable and one needs to check with the release file itself. I've had a number of companies contact me over related licensing issues of my pypi published software. > This doesn't have anything to do with making the user experience > a better one.
It is ignorant to assume that package authors who > sometimes delete distribution files, or at least want to have the > possibility to do so, don't care for their users. We are in > Python land, so most authors will know what they are doing and > do care for their users. > > After all: Why do you think I'm arguing against this proposal ? > Because I want users of our packages to get the best experience > they can get, by downloading complete, correct and working > distribution files. > > This whole idea also has another angle, namely a legal one: > the PSF doesn't own the distribution files it hosts on PyPI. > > So far, the argument to not fix the much too broad license on PyPI > was that authors were able to delete files on PyPI to work around > the unneeded "irrevocable" part of that license. > > With the suggested change, authors would have to give up complete > control over their distribution files to the PSF in order for their > packages to be installable by pip using its default settings. > > This kind of lock-in and removal of author rights is not something > I can support as PSF director. Those authors are the ones that have > created a large part of our Python eco system and they are the ones that > have put in work to get Python to where it is now: one of the best > integrated programming languages you can find. We owe a lot to those > authors and need to care for them. > > Finally, changes such as the above will result in more authors > to switch to alternative hosting platforms such as conda/binstar.org > or plain github clone + setup.py install (which is becoming increasingly > popular). Do you really believe that this will make the user experience > a better one in the long run ? > > If we want to make it attractive for package authors to host their > packages on PyPI, we have to give them flexibility, respect their > rights and be welcoming. 
> > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Sep 29 2014) > >>> Python Projects, Consulting and Support ... http://www.egenix.com/ > >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2014-09-30: Python Meeting Duesseldorf ... tomorrow > > ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Mon Sep 29 11:39:14 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Sep 2014 19:39:14 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54291C62.8050405@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> Message-ID: On 29 Sep 2014 18:49, "M.-A. Lemburg" wrote: > > You are missing out on cases, where the release process causes files to > be omitted, human errors where packagers forget to apply changes to > e.g. documentation files, version files, change logs, etc., where > packagers want to add information that doesn't affect the software > itself, but meta information included in the distribution files. > > Such changes often do not affect the software itself, and so are not > detected by software tests. Fixing such packaging errors is the primary intended use case of the "post" field in PEP 440. 
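The PEP 440 "post" ordering Nick refers to can be sketched with a small stdlib-only helper. This is a deliberately simplified, hypothetical parser for plain `N(.N)*` versions with an optional `.postN` suffix, not the full PEP 440 grammar that real tools implement; the point is only that a packaging fix such as `3.1.4.post1` sorts after `3.1.4` but before `3.1.5`:

```python
import re

def sort_key(version):
    """Sortable key for a plain 'N(.N)*' version with an optional
    '.postN' suffix (simplified subset of PEP 440)."""
    m = re.fullmatch(r"(\d+(?:\.\d+)*)(?:\.post(\d+))?", version)
    if m is None:
        raise ValueError("unsupported version: %r" % version)
    release = tuple(int(part) for part in m.group(1).split("."))
    # A version without a post segment sorts before any post-release
    # of the same release number, so use -1 as its post value.
    post = -1 if m.group(2) is None else int(m.group(2))
    return release, post

versions = ["3.1.5", "3.1.4.post1", "3.1.4"]
print(sorted(versions, key=sort_key))
# -> ['3.1.4', '3.1.4.post1', '3.1.5']
```

So users (and mirrors) can always tell the fixed upload apart from the original one by its version string alone.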
Alternatively, many projects will just spin a new release that addresses those issues, just as they would for any other bug. Both of those approaches have the advantage of letting users (especially those operating mirrors, or downloading tarballs and feeding them into a separate redistribution system) easily tell whether or not they have the fixed version. > If I understand you correctly, you are essentially suggesting that it > becomes impossible to ever delete anything uploaded to PyPI, i.e. > turning PyPI into a WORM. No, deletion remains supported. The only capability being removed is silent substitution of hosted files with different ones bearing the same name. So if, for example, release "3.1.4" had a packaging error, then deleting it would still be possible, but the replacement would need to be something like "3.1.4.post1" or "3.1.5", rather than being permitted to reuse the "3.1.4" name. > This would mean that package authors could never correct mistakes, > remove broken packages distribution files, ones which they may be > forced to remove for legal reasons, ones which they find are infected > with a virus or trojan, ones which they uploaded for fun or > by mistake. Removal will still be supported, for these kinds of reasons. Only the silent file substitution loophole is being eliminated - packaging fixes are still fully supported, they're just going to be required to be done in a way that is visible to end users. > This doesn't have anything to do with making the user experience > a better one. It is ignorant to assume that package authors who > sometimes delete distribution files, or at least want to have the > possibility to do so, don't care for their users. We are in > Python land, so most authors will know what they are doing and > do care for their users. Again, only the silent file substitution loophole is being removed. Outright deletion remains a fully supported feature. > After all: Why do you think I'm arguing against this proposal ? 
> Because I want users of our packages to get the best experience > they can get, by downloading complete, correct and working > distribution files. You are entirely correct that removing the ability to delete hosted files would be a bad idea (even though we can't guarantee deletion from third party mirrors). However, there is nothing user friendly about retaining the ability for software publishers to silently replace the contents of a PyPI hosted file without bumping the version number. In particular, that practice actively risks breaking deployments for anyone using the peep installer to automatically cache the first seen hash for each released file (since peep can't tell the difference between the package author doing it and a malicious attacker doing it). > This whole idea also has another angle, namely a legal one: > the PSF doesn't own the distribution files it hosts on PyPI. > > So far, the argument to not fix the much too broad license on PyPI > was that authors were able to delete files on PyPI to work around > the unneeded "irrevocable" part of that license. > > With the suggested change, authors would have to give up complete > control over their distribution files to the PSF in order for their > packages to be installable by pip using its default settings. We are not removing the ability to delete files. > This kind of lock-in and removal of author rights is not something > I can support as PSF director. Those authors are the ones that have > created a large part of our Python eco system and they are the ones that > have put in work to get Python to where it is now: one of the best > integrated programming languages you can find. We owe a lot to those > authors and need to care for them. Yes, we do, but requiring them to bump their version numbers when changing the contents of published files seems like an entirely reasonable constraint to me. If the proposal was to remove the ability to delete files entirely, I would be on your side. 
Fortunately, that is not the proposal - the proposal is solely to prevent silently modifying their contents without renaming them. > Finally, changes such as the above will result in more authors > to switch to alternative hosting platforms such as conda/binstar.org > or plain github clone + setup.py install (which is becoming increasingly > popular). Do you really believe that this will make the user experience > a better one in the long run ? Other release hosting services do not, as far as I am aware, typically allow silently replacing a previously published file with a new file containing different contents. Hash based version control systems in particular prevent it by design. Linux distributions *certainly* disallow reusing the same "epoch, name, version, release" details for different software releases - if a release contains a packaging error, addressing that is one of the things bumping the distro supplied "release" field covers. > If we want to make it attractive for package authors to host their > packages on PyPI, we have to give them flexibility, respect their > rights and be welcoming. And we do. We just impose some constraints (like "deleting files is OK, silently replacing them with something else is not") on behalf of end users. The external hosting support is then the mechanism by which authors can retain complete and total control over their release process. That approach avoids all licensing concerns (including those related to US export controls), as well as ensuring they have the ability to silently change the contents of previously released files if they choose to do so (although, as noted above, actually doing so may break installation for anyone installing with peep, which checks file hashes based on the first version downloaded). Regards, Nick. > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Sep 29 2014) > >>>
http://www.egenix.com/ > >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2014-09-30: Python Meeting Duesseldorf ... tomorrow > > ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Sep 29 11:50:45 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Sep 2014 19:50:45 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54292091.5010507@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> Message-ID: On 29 Sep 2014 19:04, "M.-A. Lemburg" wrote: > > Do you seriously want to force package authors to cut a new release > just because a single uploaded distribution file is broken for > some reason and then ask all users who have already installed one > of the non-broken ones to upgrade again, even though they are not > affected ? Yes, I do. Silently changing released artefacts is actively user hostile. It breaks mirroring, it breaks redistribution, it breaks security audits, and it can even break installation for security conscious users that are using peep rather than pip. > > Please repeat with me: Package authors care for their users :-) If that's the case, then checking releases on devpi or the PyPI test instance shouldn't be a problem. 
I am personally quite open to suggestions for making such checks easier to automate in a consistent way. I am thoroughly *against* retaining a general capability to silently substitute the contents of previously released files with a different payload solely to handle the case of packaging errors that aren't otherwise severe enough to warrant bumping the version number - if they're that insignificant that users that installed the "broken" one don't need to update, then there doesn't seem to be any urgency in getting the fix published at all, so the package author may even decide to wait until their next release, rather than pushing out an immediate fix. Regards, Nick. > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Sep 29 2014) > >>> Python Projects, Consulting and Support ... http://www.egenix.com/ > >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2014-09-30: Python Meeting Duesseldorf ... tomorrow > > ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Sep 29 12:01:36 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Sep 2014 20:01:36 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> Message-ID: On 29 Sep 2014 19:50, "Nick Coghlan" wrote: > > > On 29 Sep 2014 19:04, "M.-A. 
Lemburg" wrote: > > > > Do you seriously want to force package authors to cut a new release > > just because a single uploaded distribution file is broken for > > some reason and then ask all users who have already installed one > > of the non-broken ones to upgrade again, even though they are not > > affected ? > > Yes, I do. Silently changing released artefacts is actively user hostile. It breaks mirroring, it breaks redistribution, it breaks security audits, and it can even break installation for security conscious users that are using peep rather than pip. One caveat on this: it would potentially be convenient to have a "release" field in the wheel naming scheme, and adopt a similar approach for other binary formats like Windows installers, specifically to allow those to be updated without needing to do a full source version update. It's the silent substitution of file contents I have a fundamental problem with, not the notion of being able to publish an updated platform specific build artefact without having to bump the source release version. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.stufft at RACKSPACE.COM Mon Sep 29 13:04:43 2014 From: donald.stufft at RACKSPACE.COM (Donald Stufft) Date: Mon, 29 Sep 2014 11:04:43 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> Message-ID: <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> On Sep 29, 2014, at 6:01 AM, Nick Coghlan > wrote: On 29 Sep 2014 19:50, "Nick Coghlan" > wrote: > > > On 29 Sep 2014 19:04, "M.-A. 
Lemburg" > wrote: > > > > Do you seriously want to force package authors to cut a new release > > just because a single uploaded distribution file is broken for > > some reason and then ask all users who have already installed one > > of the non-broken ones to upgrade again, even though they are not > > affected ? > > Yes, I do. Silently changing released artefacts is actively user hostile. It breaks mirroring, it breaks redistribution, it breaks security audits, and it can even break installation for security conscious users that are using peep rather than pip. One caveat on this: it would potentially be convenient to have a "release" field in the wheel naming scheme, and adopt a similar approach for other binary formats like Windows installers, specifically to allow those to be updated without needing to do a full source version update. It's the silent substitution of file contents I have a fundamental problem with, not the notion of being able to publish an updated platform specific build artefact without having to bump the source release version. Wheel files already include the idea of a build number baked into the filename. That would be a different filename and thus would be allowed to be uploaded even if you deleted the original Wheel. Is there something about that which wouldn't work or did it just slip your mind? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
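The build number Donald mentions is the optional build tag in the PEP 427 wheel filename convention, `{dist}-{version}[-{build}]-{python}-{abi}-{platform}.whl`. A minimal sketch of how an index could distinguish the two artefacts (a hypothetical helper, with a made-up project name; real tools handle more edge cases):

```python
def parse_wheel_name(filename):
    """Split a wheel filename into its PEP 427 components.

    Simplified: assumes no '-' inside individual components."""
    if not filename.endswith(".whl"):
        raise ValueError("not a wheel filename: %r" % filename)
    parts = filename[:-4].split("-")
    if len(parts) == 5:
        dist, version, python, abi, platform = parts
        build = None  # no build tag present
    elif len(parts) == 6:
        dist, version, build, python, abi, platform = parts
    else:
        raise ValueError("unexpected component count: %r" % filename)
    return {"dist": dist, "version": version, "build": build,
            "python": python, "abi": abi, "platform": platform}

# A rebuilt artefact with a build tag gets a *different* filename,
# so it is a new upload even under an immutable-files policy.
print(parse_wheel_name("example-3.1.4-py2.py3-none-any.whl")["build"])    # None
print(parse_wheel_name("example-3.1.4-1-py2.py3-none-any.whl")["build"])  # 1
```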
URL: From holger at merlinux.eu Mon Sep 29 13:20:30 2014 From: holger at merlinux.eu (holger krekel) Date: Mon, 29 Sep 2014 11:20:30 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> Message-ID: <20140929112030.GT7954@merlinux.eu> (Fixed quoting indent + some own comments) On Mon, Sep 29, 2014 at 11:04 +0000, Donald Stufft wrote: > On Sep 29, 2014, at 6:01 AM, Nick Coghlan > wrote: > > On 29 Sep 2014 19:50, "Nick Coghlan" > wrote: > > > > > > On 29 Sep 2014 19:04, "M.-A. Lemburg" > wrote: > > > > > > Do you seriously want to force package authors to cut a new release > > > just because a single uploaded distribution file is broken for > > > some reason and then ask all users who have already installed one > > > of the non-broken ones to upgrade again, even though they are not > > > affected ? > > > > Yes, I do. Silently changing released artefacts is actively user hostile. It breaks mirroring, it breaks redistribution, it breaks security audits, and it can even break installation for security conscious users that are using peep rather than pip. > >> One caveat on this: it would potentially be convenient to have a >> "release" field in the wheel naming scheme, and adopt a similar >> approach for other binary formats like Windows installers, >> specifically to allow those to be updated without needing to do a >> full source version update. > >> It's the silent substitution of file contents I have a fundamental >> problem with, not the notion of being able to publish an updated >> platform specific build artefact without having to bump the source >> release version. > > Wheel files already include the idea of a build number baked into the filename. 
That would be a different filename and thus would be allowed to be uploaded even if you deleted the original Wheel. Is there something about that which wouldn't work or did it just slip your mind?

FWIW I'd prefer to go with the "each filename maps to one binary content or was deleted" guarantee irrespective of whether it's a wheel, tar, egg, or zip file. Besides, the cited mirroring/distribution simplifications wouldn't otherwise materialize, I guess.

holger

> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

From donald.stufft at RACKSPACE.COM Mon Sep 29 13:22:18 2014
From: donald.stufft at RACKSPACE.COM (Donald Stufft)
Date: Mon, 29 Sep 2014 11:22:18 +0000
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <54291C62.8050405@egenix.com>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com>
Message-ID: <6ABC621E-BC68-47A7-B617-2FE0B2CCD551@rackspace.com>

On Sep 29, 2014, at 4:46 AM, M.-A. Lemburg wrote:

You are missing out on cases where the release process causes files to be omitted, human errors where packagers forget to apply changes to e.g. documentation files, version files, change logs, etc., and cases where packagers want to add information that doesn't affect the software itself but is meta information included in the distribution files. Such changes often do not affect the software itself, and so are not detected by software tests.

If I understand you correctly, you are essentially suggesting that it becomes impossible to ever delete anything uploaded to PyPI, i.e. turning PyPI into a WORM.
This would mean that package authors could never correct mistakes, remove broken packages distribution files, ones which they may be forced to remove for legal reasons, ones which they find are infected with a virus or trojan, ones which they uploaded for fun or by mistake. This doesn't have anything to do with making the user experience a better one. It is ignorant to assume that package authors who sometimes delete distribution files, or at least want to have the possibility to do so, don't care for their users. We are in Python land, so most authors will know what they are doing and do care for their users. After all: Why do you think I'm arguing against this proposal ? Because I want users of our packages to get the best experience they can get, by downloading complete, correct and working distribution files. This whole idea also has another angle, namely a legal one: the PSF doesn't own the distribution files it hosts on PyPI. So far, the argument to not fix the much too broad license on PyPI was that authors were able to delete files on PyPI to work around the unneeded "irrevocable" part of that license. With the suggested change, authors would have to give up complete control over their distribution files to the PSF in order for their packages to be installable by pip using its default settings. Others already said it, but let me be clear about it, this proposal does not in any way seek to prevent authors from being able to delete files from PyPI. It still allows them to delete anything at anytime and it still publishes that information for mirrors (although mirrors are certainly under no obligation to respect it if they desire not to). I completely agree with you that disallowing authors to *delete* files would be incredibly short sighted and wrong and I would be one of the people against such a change. 
This proposal is strictly limited to the ability to delete a particular file, let's say "foobar-1.0.tar.gz", and then reupload a different "foobar-1.0.tar.gz" in its place. If a mistake is made in the release, that's *ok*, it can be deleted; the only constraint is that with this change you'd need to increment the version in some way, likely with a .postN or just bumping the last digit, to signal to users that the bits in this file have changed in some way.

This kind of lock-in and removal of author rights is not something I can support as PSF director. Those authors are the ones that have created a large part of our Python ecosystem and they are the ones that have put in work to get Python to where it is now: one of the best integrated programming languages you can find. We owe a lot to those authors and need to care for them.

I *do* deeply care for the experience as an author as well as someone installing from PyPI. After all, I use PyPI in both capacities on a regular basis.

Finally, changes such as the above will result in more authors switching to alternative hosting platforms such as conda/binstar.org or plain github clone + setup.py install (which is becoming increasingly popular). Do you really believe that this will make the user experience a better one in the long run? If we want to make it attractive for package authors to host their packages on PyPI, we have to give them flexibility, respect their rights and be welcoming.

I don't believe it's accurate to say people are switching away from PyPI in any sort of relevant numbers. PyPI's usage is increasing, both in the number of people releasing packages and in the number of people consuming packages. Particularly the number of people consuming packages has risen massively. Do you have any numbers or proof to back up the claim that people are switching away? To be completely honest, the feedback that I get and see is overwhelmingly positive for every change that has been made so far.
That's not to say there haven't been those who have been against some or all of the changes, but those people are generally in an extreme minority. This isn't to say that the changes have been globally liked, but that it would be very surprising to me that people are moving away from PyPI, and if you have numbers/proof of that I would love to see it.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ncoghlan at gmail.com Mon Sep 29 13:48:38 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 29 Sep 2014 21:48:38 +1000
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com>
Message-ID:

On 29 Sep 2014 21:04, "Donald Stufft" wrote:
>> On Sep 29, 2014, at 6:01 AM, Nick Coghlan wrote:
>>
>> One caveat on this: it would potentially be convenient to have a "release" field in the wheel naming scheme, and adopt a similar approach for other binary formats like Windows installers, specifically to allow those to be updated without needing to do a full source version update.
>>
>> It's the silent substitution of file contents I have a fundamental problem with, not the notion of being able to publish an updated platform specific build artefact without having to bump the source release version.
>
> Wheel files already include the idea of a build number baked into the filename. That would be a different filename and thus would be allowed to be uploaded even if you deleted the original Wheel. Is there something about that which wouldn't work or did it just slip your mind?

Slipped my mind - we generally leave it out of examples, so I managed to forget the capability was already part of the spec.
That means this case should already be fully covered then, even if it requires a pre-upload file renaming. It does suggest that a short section on "Dealing with release errors" might need to find a home somewhere in PyPUG, though.

Cheers,
Nick.

> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Mon Sep 29 13:58:57 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 29 Sep 2014 21:58:57 +1000
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <20140929112030.GT7954@merlinux.eu>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu>
Message-ID:

On 29 Sep 2014 21:20, "holger krekel" wrote:
>
> (Fixed quoting indent + some own comments)
>
> On Mon, Sep 29, 2014 at 11:04 +0000, Donald Stufft wrote:
> > On Sep 29, 2014, at 6:01 AM, Nick Coghlan wrote:
> >
> >> It's the silent substitution of file contents I have a fundamental
> >> problem with, not the notion of being able to publish an updated
> >> platform specific build artefact without having to bump the source
> >> release version.
> >
> > Wheel files already include the idea of a build number baked into the filename. That would be
> > a different filename and thus would be allowed to be uploaded even if you deleted the original
> > Wheel. Is there something about that which wouldn't work or did it just slip your mind?
>
> FWIW I'd prefer to go with the "each filename maps to one binary content
> or was deleted" guarantee irrespective of whether it's a wheel, tar,
> egg or zip file. Besides, the cited mirroring/distribution simplifications
> wouldn't otherwise materialize, I guess.

Right, this is my perspective as well.
The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification. Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped. Cheers, Nick. > > holger > > --- > > Donald Stufft > > PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wichert at wiggy.net Mon Sep 29 14:09:02 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Mon, 29 Sep 2014 14:09:02 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu> Message-ID: <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net> On 29 Sep 2014, at 13:58, Nick Coghlan wrote: > Right, this is my perspective as well. The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification. > > Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped. > This is just as relevant for sdists as well. 
It is quite common to see a broken release due to a missing or wrong MANIFEST.in. Wichert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Sep 29 15:21:26 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Sep 2014 09:21:26 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu> <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net> Message-ID: On September 29, 2014 at 8:54:26 AM, Wichert Akkerman (wichert at wiggy.net) wrote: On 29 Sep 2014, at 13:58, Nick Coghlan wrote: Right, this is my perspective as well. The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification. Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped. This is just as relevant for sdists as well. It is quite common to see a broken release due to a missing or wrong MANIFEST.in. Test them prior to uploading them. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From wichert at wiggy.net Mon Sep 29 15:25:28 2014
From: wichert at wiggy.net (Wichert Akkerman)
Date: Mon, 29 Sep 2014 15:25:28 +0200
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To:
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu> <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net>
Message-ID:

On 29 Sep 2014, at 15:21, Donald Stufft wrote:
> On September 29, 2014 at 8:54:26 AM, Wichert Akkerman (wichert at wiggy.net) wrote:
>> On 29 Sep 2014, at 13:58, Nick Coghlan wrote:
>>> Right, this is my perspective as well. The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification.
>>>
>>> Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped.
>>>
>> This is just as relevant for sdists as well. It is quite common to see a broken release due to a missing or wrong MANIFEST.in.
>>
> Test them prior to uploading them.

You can make the exact same argument about binary distributions, so I don't understand what you're trying to say here? Mistakes are made everywhere - I'm just trying to point out that a packaging error is not unique to binary distributions.

Wichert.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Mon Sep 29 15:38:21 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 29 Sep 2014 09:38:21 -0400
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To:
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu> <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net>
Message-ID:

On September 29, 2014 at 9:25:37 AM, Wichert Akkerman (wichert at wiggy.net) wrote: On 29 Sep 2014, at 15:21, Donald Stufft wrote: On September 29, 2014 at 8:54:26 AM, Wichert Akkerman (wichert at wiggy.net) wrote: On 29 Sep 2014, at 13:58, Nick Coghlan wrote: Right, this is my perspective as well. The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification. Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped. This is just as relevant for sdists as well. It is quite common to see a broken release due to a missing or wrong MANIFEST.in. Test them prior to uploading them. You can make the exact same argument about binary distributions, so I don't understand what you're trying to say here? Mistakes are made everywhere - I'm just trying to point out that a packaging error is not unique to binary distributions. Wichert.

The difference is that a binary distribution is produced *from* a source distribution. This means that if a source distribution is incorrect then any binary distributions built from that source distribution need to be regenerated. This affects the Wheels which are placed on PyPI, Wheels created "downstream" (e.g.
a Wheel cache on a user's machine, a shared wheel builder via devpi + devpi-builder), and it affects other packaging systems such as the OS package tools like apt-get, FreeBSD ports, Homebrew etc. However, if a Wheel is incorrect, that really only affects direct consumers of that Wheel. It's not generally supported to take a binary distribution and repack it into another form; however, even if someone was doing so, the "build number" metadata in the Wheel tag signifies that a new version *of that wheel* has been created.

Another way of looking at it is that the version number identifies the source distribution, while the version number, build number, and platform tags identify the wheel files. This is reflected in their respective filenames, since this proposal boils down to "there should exist, at all points in time, only one set of bytes per uniquely identified file".

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From jim at zope.com Mon Sep 29 16:28:04 2014
From: jim at zope.com (Jim Fulton)
Date: Mon, 29 Sep 2014 10:28:04 -0400
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com>
Message-ID:

On Sun, Sep 28, 2014 at 3:31 PM, Donald Stufft wrote:
> Hello All!
>
> I'd like to discuss the idea of moving PyPI to having immutable files. ...

+1

Jim

--
Jim Fulton
http://www.linkedin.com/in/jimfulton

From barry at python.org Mon Sep 29 16:36:09 2014
From: barry at python.org (Barry Warsaw)
Date: Mon, 29 Sep 2014 10:36:09 -0400
Subject: [Distutils] Immutable Files on PyPI
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com>
Message-ID: <20140929103609.6a418fe5@anarchist.wooz.org>

On Sep 28, 2014, at 07:31 PM, Donald Stufft wrote:
>I'd like to discuss the idea of moving PyPI to having immutable files.
This would mean that once you publish a particular file you can never reupload that file again with different contents. This would still allow deleting the file or reuploading it if the checksums match what was there prior.

As others have pointed out, I have abused this in the past: once uploaded, I realize there is a bug in the package. There's a certain class of such bugs that prompt a quick re-upload rather than a version rev, such as some display problem on PyPI (because of package metadata), or some follow-on packaging bug, such as a missing MANIFEST.in causing a Debian package build to fail. Yes, the latter is more easily checked before upload, but sometimes you feel optimistic. ;)

This won't make your lives easier, but I'd like to propose some support for "embargoed" uploads. These would be normal uploads except that they wouldn't be publicly available until a 'publish' button were pushed. Such embargoed uploads wouldn't be subject to the checksum limitation, and we'd have to figure out exactly how such packages would be available (certainly to a logged-in owner of the project via the web, but perhaps through an authenticated scriptable interface).

Even if you decide against supporting something like this, I'd still be okay with the checksum restriction. You never run out of version numbers.

-Barry

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From graffatcolmingov at gmail.com Mon Sep 29 16:40:37 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 29 Sep 2014 09:40:37 -0500 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140929103609.6a418fe5@anarchist.wooz.org> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> Message-ID: On Mon, Sep 29, 2014 at 9:36 AM, Barry Warsaw wrote: > On Sep 28, 2014, at 07:31 PM, Donald Stufft wrote: > >>I'd like to discuss the idea of moving PyPI to having immutable files. This >>would mean that once you publish a particular file you can never reupload >>that file again with different contents. This would still allow deleting the >>file or reuploading it if the checksums match what was there prior. > > Although I have abused this in the past, as others have pointed out, because > once uploaded I realize there is a bug in the package. There's a certain > class of such bugs that prompt a quick re-upload rather than a version rev, > such as some display problem on PyPI (because of package metadata), or some > follow on packaging bug, such as a missing MANIFEST.in causing Debian package > build to fail. Yes, the latter is more easily checked before upload, but > sometimes you feel optimistic. ;) > > This won't make your lives easier, but I'd like to propose some support for > "embargoed" uploads. These would be normal uploads except that they wouldn't > be publicly available until a 'publish' button were pushed. Such embargoed > uploads wouldn't be subject to the checksum limitation, and we'd have to > figure out exactly how such packages would be available (certainly to a logged > in owner of the project via the web, but perhaps through an authenticated > scriptable interface). > > Even if you decide against supporting something like this, I'd still be okay > with the checksum restriction. 
You never run out of version numbers. > > -Barry That's essentially what I see as the chief use-case for testpypi.python.org. I don't think pypi.python.org needs to support this as well. Simple is better than complex after all :) Cheers, Ian From donald at stufft.io Mon Sep 29 16:43:11 2014 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Sep 2014 10:43:11 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> Message-ID: On September 29, 2014 at 10:41:07 AM, Ian Cordasco (graffatcolmingov at gmail.com) wrote: On Mon, Sep 29, 2014 at 9:36 AM, Barry Warsaw wrote: > On Sep 28, 2014, at 07:31 PM, Donald Stufft wrote: > >>I'd like to discuss the idea of moving PyPI to having immutable files. This >>would mean that once you publish a particular file you can never reupload >>that file again with different contents. This would still allow deleting the >>file or reuploading it if the checksums match what was there prior. > > Although I have abused this in the past, as others have pointed out, because > once uploaded I realize there is a bug in the package. There's a certain > class of such bugs that prompt a quick re-upload rather than a version rev, > such as some display problem on PyPI (because of package metadata), or some > follow on packaging bug, such as a missing MANIFEST.in causing Debian package > build to fail. Yes, the latter is more easily checked before upload, but > sometimes you feel optimistic. ;) > > This won't make your lives easier, but I'd like to propose some support for > "embargoed" uploads. These would be normal uploads except that they wouldn't > be publicly available until a 'publish' button were pushed. 
Such embargoed uploads wouldn't be subject to the checksum limitation, and we'd have to figure out exactly how such packages would be available (certainly to a logged in owner of the project via the web, but perhaps through an authenticated scriptable interface).

Even if you decide against supporting something like this, I'd still be okay with the checksum restriction. You never run out of version numbers.

-Barry

That's essentially what I see as the chief use-case for testpypi.python.org. I don't think pypi.python.org needs to support this as well. Simple is better than complex after all :)

Cheers,
Ian

_______________________________________________
Distutils-SIG maillist - Distutils-SIG at python.org
https://mail.python.org/mailman/listinfo/distutils-sig

Yea I don't think PyPI needs anything for this, if someone wants to do it they can use testpypi.python.org, or they can stand up a devpi instance which offers a similar thing plus a lot more for a release process.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From barry at python.org Mon Sep 29 17:03:09 2014 From: barry at python.org (Barry Warsaw) Date: Mon, 29 Sep 2014 11:03:09 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> Message-ID: <20140929110309.2cc5c23e@anarchist.wooz.org> On Sep 29, 2014, at 09:40 AM, Ian Cordasco wrote: >That's essentially what I see as the chief use-case for >testpypi.python.org. I don't think pypi.python.org needs to support >this as well. Simple is better than complex after all :) Can we then make it easy to upload to testpypi via the cli? Maybe it already is or I'm using the wrong tools. (I'm just a dumb `setup.py upload` monkey.) Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From graffatcolmingov at gmail.com Mon Sep 29 17:10:49 2014 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 29 Sep 2014 10:10:49 -0500 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140929110309.2cc5c23e@anarchist.wooz.org> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> <20140929110309.2cc5c23e@anarchist.wooz.org> Message-ID: On Mon, Sep 29, 2014 at 10:03 AM, Barry Warsaw wrote: > On Sep 29, 2014, at 09:40 AM, Ian Cordasco wrote: > >>That's essentially what I see as the chief use-case for >>testpypi.python.org. I don't think pypi.python.org needs to support >>this as well. Simple is better than complex after all :) > > Can we then make it easy to upload to testpypi via the cli? Maybe it already > is or I'm using the wrong tools. (I'm just a dumb `setup.py upload` monkey.) > > Cheers, > -Barry There are some easy to follow instructions linked from testpypi's homepage: https://wiki.python.org/moin/TestPyPI Did you want more than that? 
From donald at stufft.io Mon Sep 29 17:11:27 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 29 Sep 2014 11:11:27 -0400
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <20140929110309.2cc5c23e@anarchist.wooz.org>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> <20140929110309.2cc5c23e@anarchist.wooz.org>
Message-ID:

On September 29, 2014 at 11:04:42 AM, Barry Warsaw (barry at python.org) wrote:
On Sep 29, 2014, at 09:40 AM, Ian Cordasco wrote:
>That's essentially what I see as the chief use-case for
>testpypi.python.org. I don't think pypi.python.org needs to support
>this as well. Simple is better than complex after all :)

Can we then make it easy to upload to testpypi via the cli? Maybe it already is or I'm using the wrong tools. (I'm just a dumb `setup.py upload` monkey.)

Cheers,
-Barry

I think you can do it with setup.py upload -r https://testpypi.python.org/; you might have to add it to your ~/.pypirc first, though, like: https://bpaste.net/show/d3a0edff41b4 and then do setup.py upload -r testpypi. Though you should go change your password and never type setup.py upload again. It is unsafe; use twine instead (twine upload -r testpypi).

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
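[The bpaste link above has since expired. For context, a ~/.pypirc of the kind Donald describes looked roughly like the following sketch; the usernames are placeholders and the repository URLs are as they stood in 2014.]

```ini
[distutils]
index-servers =
    pypi
    testpypi

[pypi]
repository = https://pypi.python.org/pypi
username = <your-username>

[testpypi]
repository = https://testpypi.python.org/pypi
username = <your-username>
```

[With that in place, `twine upload -r testpypi dist/*` targets the test index by section name rather than by URL.]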
URL:

From ncoghlan at gmail.com Mon Sep 29 23:33:35 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 30 Sep 2014 07:33:35 +1000
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To:
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org>
Message-ID:

On 30 Sep 2014 00:43, "Donald Stufft" wrote:
>
> Yea I don't think PyPI needs anything for this, if someone wants to do it they can use testpypi.python.org, or they can stand up a devpi instance which offers a similar thing plus a lot more for a release process.

It occurs to me that a devpi quickstart for OpenShift (or another PaaS's) free tier could be useful - if a devpi instance is just for pre-release testing of packages, then the free tier should accommodate it comfortably.

Cheers,
Nick.

> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com Mon Sep 29 23:36:48 2014
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 30 Sep 2014 07:36:48 +1000
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To: <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net>
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <2551EE75-E659-447E-BA9C-89876432BAF9@rackspace.com> <20140929112030.GT7954@merlinux.eu> <54B44D33-08E0-4365-8E6B-B58ECCE8935E@wiggy.net>
Message-ID:

On 29 Sep 2014 22:09, "Wichert Akkerman" wrote:
>
> On 29 Sep 2014, at 13:58, Nick Coghlan wrote:
>>
>> Right, this is my perspective as well. The point that the wheel format already includes a build ordering field was significant because that file naming scheme has an official specification.
>>
>> Other commands like bdist_egg, bdist_dumb and bdist_wininst aren't as strict about the expected file names, although it would be good to define a suggested optional build numbering convention at least for bdist_egg, such that easy_install will do the right thing, even if the full source level version number isn't bumped.
>
> This is just as relevant for sdists as well. It is quite common to see a broken release due to a missing or wrong MANIFEST.in.

As Donald noted, if the source package contents change, the public version should really change, with all binary artefacts being updated accordingly. I didn't make that clear though - it was an unstated assumption.

Cheers,
Nick.

> Wichert.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Tue Sep 30 00:16:21 2014
From: donald at stufft.io (Donald Stufft)
Date: Mon, 29 Sep 2014 18:16:21 -0400
Subject: [Distutils] Immutable Files on PyPI
In-Reply-To:
References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org>
Message-ID:

On September 29, 2014 at 5:33:38 PM, Nick Coghlan (ncoghlan at gmail.com) wrote:
> On 30 Sep 2014 00:43, "Donald Stufft" wrote:
> >
> > Yea I don't think PyPI needs anything for this, if someone wants to do it
> they can use testpypi.python.org, or they can stand up a devpi instance
> which offers a similar thing plus a lot more for a release process.
>
> It occurs to me that a devpi quickstart for OpenShift (or another PaaS's)
> free tier could be useful - if a devpi instance is just for pre-release
> testing of packages, then the free tier should accommodate it comfortably.

I'm not familiar with what is available on OpenShift, however this sounds like something that would be great for Heroku's Deploy button[1].
[1] https://devcenter.heroku.com/articles/heroku-button --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Tue Sep 30 00:26:34 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 29 Sep 2014 23:26:34 +0100 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> Message-ID: On 29 September 2014 23:16, Donald Stufft wrote: >> It occurs to me that a devpi quickstart for OpenShift (or another PaaS's) >> free tier could be useful - if a devpi instance is just for pre-release >> testing of packages, then the free tier should accommodate it comfortably. >> >> > > I'm not familiar with what is available on OpenShift, however this sounds > like something that would be great for Heroku's Deploy button[1]. > > [1] https://devcenter.heroku.com/articles/heroku-button I've no experience with either of those services, but something like that sounds awesome! (I've toyed with setting up my own local devpi instance, but never got round to it...) Paul From mal at egenix.com Tue Sep 30 11:06:40 2014 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 30 Sep 2014 11:06:40 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> Message-ID: <542A72A0.3070006@egenix.com> Thanks for the confirmation that my interpretation was wrong. This makes things a lot better :-) More below... On 29.09.2014 11:39, Nick Coghlan wrote: > On 29 Sep 2014 18:49, "M.-A. Lemburg" wrote: >> >> You are missing out on cases, where the release process causes files to >> be omitted, human errors where packagers forget to apply changes to >> e.g.
documentation files, version files, change logs, etc., where >> packagers want to add information that doesn't affect the software >> itself, but meta information included in the distribution files. >> >> Such changes often do not affect the software itself, and so are not >> detected by software tests. > > Fixing such packaging errors is the primary intended use case of the "post" > field in PEP 440. Alternatively, many projects will just spin a new release > that addresses those issues, just as they would for any other bug. > > Both of those approaches have the advantage of letting users (especially > those operating mirrors, or downloading tarballs and feeding them into a > separate redistribution system) easily tell whether or not they have the > fixed version. I don't see how that would help. AFAIU, PyPI would regard a "3.1.4.post1" as a new release and so invalidate all other already uploaded distribution files rather than just regard the fixed upload as an update to the "3.1.4" release. If we could get a widely adopted notion of build numbers into the tools that would help a lot. Installers and PyPI would then regard "3.1.4-1" as belonging to release "3.1.4", but as a more current build than a distribution file carrying "3.1.4" in its file name. This would also have to work for all files uploaded for a release, not only for some distribution formats, of course, including source release files, Windows installers, egg files, etc. I'd have to run some experiments, but don't believe that setuptools and associated tools support this at the moment. >> If I understand you correctly, you are essentially suggesting that it >> becomes impossible to ever delete anything uploaded to PyPI, i.e. >> turning PyPI into a WORM. > > No, deletion remains supported. The only capability being removed is silent > substitution of hosted files with different ones bearing the same name.
> > So if, for example, release "3.1.4" had a packaging error, then deleting it > would still be possible, but the replacement would need to be something > like "3.1.4.post1" or "3.1.5", rather than being permitted to reuse the > "3.1.4" name. So just to summarize: the proposal is to turn PyPI into a WORM, but at least it's still possible to remove distribution files. > The external hosting support is then the mechanism by which authors can > retain complete and total control over their release process. That approach > avoids all licensing concerns (including those related to US export > controls), as well as ensuring they have the ability to silently change the > contents of previously released files if they choose to do so (although, as > noted above, actually doing so may break installation for anyone installing > with peep, which checks file hashes based on the first version downloaded). You're regularly bringing up this argument. Let's just be fair here: external hosting of packages has been made so user unfriendly in recent pip releases, that this has pretty much become a non-option for anyone who wants to create a user friendly package installation environment. pip is unfortunately using the same kind of --no-one-will-want-to-use-this-option-because-its-too-long approach as setuptools/easy_install has done in the past to force people into installing packages as eggs rather than installing the packages in the standard write to site-packages dir way. The end result is the same: users will not want to go through those extra hoops and thus packages not hosted on PyPI itself will be regarded as broken (because they don't install using the standard method; not because they are really broken). This is what I'm trying to address in discussions like these all along. PyPI has a responsibility not only for the consuming part of the Python community, but also for the creating part of it.
While PyPI is great for indexing packages, it's not necessarily the best answer for hosting the distribution files and I believe we should open up some more to allow for making it possible to offer the same kind of user experience while not making pypi.python.org the only source of distribution files. In the Linux world, this already works great by having multiple repos which you can switch on/off easily. For eGenix, we've found a way to work around the PyPI constraints: http://www.egenix.com/library/presentations/PyCon-UK-2014-Python-Web-Installer/ which addresses our user's problems. Still, we'd much rather use standard ways of working *with* PyPI rather than work around it. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Sep 30 2014) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2014-09-30: Python Meeting Duesseldorf ... today ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From ncoghlan at gmail.com Tue Sep 30 11:19:23 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 30 Sep 2014 19:19:23 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542A72A0.3070006@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: On 30 Sep 2014 19:06, "M.-A. Lemburg" wrote: > You're regularly bringing up this argument. 
> > Let's just be fair here: external hosting of packages has been made so > user unfriendly in recent pip releases, that this has pretty much > become a non-option for anyone who wants to create a user friendly > package installation environment. > > pip is unfortunately using the same kind of > --no-one-will-want-to-use-this-option-because-its-too-long > approach as setuptools/easy_install has done in the past to force > people into installing packages as eggs rather than installing > the packages in the standard write to site-packages dir way. > > The end result is the same: users will not want to go > through those extra hoops and thus packages not hosted on PyPI > itself will be regarded as broken (because they don't install using > the standard method; not because they are really broken). I personally think pip needs a virtualenv friendly equivalent to yum.repos.d or conda channels to improve the multi-index experience. This is a solved problem from my perspective, we just need folks willing to spend the time making the case for the proven solution, and adapting it to the PyPI ecosystem. > This is what I'm trying to address in discussions like these all > along. PyPI has a responsibility not only for the part consuming > part of the Python community, but also for the creating part of it. > > While PyPI is great for indexing packages, it's not necessarily > the best answer for hosting the distribution files and I believe > we should open up some more to allow for making it possible to > offer the same kind of user experience while not making pypi.python.org > the only source of distribution files. > > In the Linux world, this already works great by having multiple > repos which you can switch on/off easily. And I've said all along that's the experience I would like to see in upstream Python as well. But the code isn't going to write itself, and there's also the political work needed to persuade folks that it's the right path to go down. 
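A yum.repos.d-style arrangement for pip, as Nick describes it, might look something like the following. To be clear, nothing like this exists in pip today; the directory, file name, and keys below are all invented purely to illustrate the idea:

```ini
; Hypothetical drop-in file, e.g. <venv>/pip/index.d/vendor.conf.
; pip has no such mechanism; every name and key here is illustrative only.
[vendor-index]
enabled = 1
url = https://packages.example.com/simple/
```

The appeal of the yum/conda model is that enabling or disabling a repository becomes a one-line edit to a per-environment file, rather than a flag repeated on every install command.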
> For eGenix, we've found a way to work around the PyPI constraints: > > http://www.egenix.com/library/presentations/PyCon-UK-2014-Python-Web-Installer/ > > which addresses our user's problems. Still, we'd much rather use > standard ways of working *with* PyPI rather than work around it. Sure, I'd like to see that too. It's only one problem amongst a great many of them, and people's upstream work is driven by their own balancing of a complex set of priorities. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Sep 30 12:23:59 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 11:23:59 +0100 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542A72A0.3070006@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: On 30 September 2014 10:06, M.-A.
Lemburg wrote: > Let's just be fair here: external hosting of packages has been made so > user unfriendly in recent pip releases, that this has pretty much > become a non-option for anyone who wants to create a user friendly > package installation environment. This is a fair point. But it is an acknowledged problem in pip and we're working on resolving it. See http://legacy.python.org/dev/peps/pep-0470/. The problem (as with anything else) is that someone needs to write the code, and that hasn't happened yet. We'll get to it, but the usual "PRs welcome" comment applies here :-) Paul From donald at stufft.io Tue Sep 30 13:26:32 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Sep 2014 07:26:32 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542A72A0.3070006@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: On September 30, 2014 at 5:07:06 AM, M.-A. Lemburg (mal at egenix.com) wrote: > Thanks for the confirmation that my interpretation was wrong. This > makes things a lot better :-) > > More below... > > On 29.09.2014 11:39, Nick Coghlan wrote: > > On 29 Sep 2014 18:49, "M.-A. Lemburg" wrote: > >> > >> You are missing out on cases, where the release process causes files to > >> be omitted, human errors where packagers forget to apply changes to > >> e.g. documentation files, version files, change logs, etc., where > >> packagers want to add information that doesn't affect the software > >> itself, but meta information included in the distribution files. > >> > >> Such changes often do not affect the software itself, and so are not > >> detected by software tests. > > > > Fixing such packaging errors is the primary intended use case of the "post" > > field in PEP 440. 
Alternatively, many projects will just spin a new release > > that addresses those issues, just as they would for any other bug. > > > > Both of those approaches have the advantage of letting users (especially > > those operating mirrors, or downloading tarballs and feeding them into a > > separate redistribution system) easily tell whether or not they have the > > fixed version. > > I don't see how that would help. AFAIU, PyPI would regard a "3.1.4.post1" > as a new release and so invalidate all other already uploaded distribution > files rather than just regard the fixed upload as an update > to the "3.1.4" release. > > If we could get a widely adopted notion of build numbers into the > tools that would help a lot. > > Installers and PyPI would then regard "3.1.4-1" as belonging to > release "3.1.4", but as a more current build than a distribution > file carrying "3.1.4" in its file name. > > This would also have to work for all files uploaded for a release, > not only for some distribution formats, of course, including source > release files, Windows installers, egg files, etc. > > I'd have to run some experiments, but don't believe that setuptools > and associated tools support this at the moment. I don't personally believe it makes sense for a source distribution to have a build number. Yes it would invalidate all other uploads because it should. They take the source distribution as an input; if you change the input then in all likelihood you change the output. This won't be true in every case, but to be able to determine when and where it won't be true requires intimate knowledge of the package formats as well as the project in question. On the other hand Wheel files already support the concept of build numbers, and I don't see any technical reason why Windows installers can't support them given that nothing automated pulls them down.
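For concreteness, the build numbers Donald refers to already have a defined slot in wheel filenames: PEP 427 puts an optional build tag, which must start with a digit, between the version and the python tag. A rough sketch of splitting a wheel filename into its fields (the filenames below are invented examples, and spec corner cases are ignored):

```python
import re

# PEP 427 wheel filename: {name}-{version}[-{build}]-{python}-{abi}-{platform}.whl
# The optional build tag must start with a digit; installers use it as a
# tiebreaker between files sharing the same version. Hyphens in project
# names are assumed to have been escaped to underscores, per the spec.
WHEEL_NAME = re.compile(
    r"^(?P<name>.+?)-(?P<version>[^-]+)"
    r"(?:-(?P<build>\d[^-]*))?"
    r"-(?P<python>[^-]+)-(?P<abi>[^-]+)-(?P<platform>[^-]+)\.whl$"
)

def wheel_fields(filename):
    """Split a wheel filename into its tagged fields (build may be None)."""
    match = WHEEL_NAME.match(filename)
    if match is None:
        raise ValueError("not a wheel filename: %r" % filename)
    return match.groupdict()
```

An installer choosing between ``example_pkg-3.1.4-py2.py3-none-any.whl`` and ``example_pkg-3.1.4-1-py2.py3-none-any.whl`` can then prefer the latter without the project's public version ever changing.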
Getting Egg files to support them may be a lost cause though since I'm not sure there's a way to add that information to the Egg spec without mandating a new setuptools. > > >> If I understand you correctly, you are essentially suggesting that it > >> becomes impossible to ever delete anything uploaded to PyPI, i.e. > >> turning PyPI into a WORM. > > > > No, deletion remains supported. The only capability being removed is silent > > substitution of hosted files with different ones bearing the same name. > > > > So if, for example, release "3.1.4" had a packaging error, then deleting it > > would still be possible, but the replacement would need to be something > > like "3.1.4.post1" or "3.1.5", rather than being permitted to reuse the > > "3.1.4" name. > > So just to summarize: the proposal is to turn PyPI into a WORM, > but at least it's still possible to remove distribution files. > > > The external hosting support is then the mechanism by which authors can > > retain complete and total control over their release process. That approach > > avoids all licensing concerns (including those related to US export > > controls), as well as ensuring they have the ability to silently change the > > contents of previously released files if they choose to do so (although, as > > noted above, actually doing so may break installation for anyone installing > > with peep, which checks file hashes based on the first version downloaded). > > You're regularly bringing up this argument. > > Let's just be fair here: external hosting of packages has been made so > user unfriendly in recent pip releases, that this has pretty much > become a non-option for anyone who wants to create a user friendly > package installation environment.
> > pip is unfortunately using the same kind of > --no-one-will-want-to-use-this-option-because-its-too-long > approach as setuptools/easy_install has done in the past to force > people into installing packages as eggs rather than installing > the packages in the standard write to site-packages dir way. > > The end result is the same: users will not want to go > through those extra hoops and thus packages not hosted on PyPI > itself will be regarded as broken (because they don't install using > the standard method; not because they are really broken). > > This is what I'm trying to address in discussions like these all > along. PyPI has a responsibility not only for the consuming > part of the Python community, but also for the creating part of it. > > While PyPI is great for indexing packages, it's not necessarily > the best answer for hosting the distribution files and I believe > we should open up some more to allow for making it possible to > offer the same kind of user experience while not making pypi.python.org > the only source of distribution files. > > In the Linux world, this already works great by having multiple > repos which you can switch on/off easily. This is already 100% possible, in the same fashion as the Linux repos. You can add extra indexes to pip and have been able to for a long time. This almost exactly matches the behavior of the Linux repositories where ``apt-get install thing-that-isnt-in-a-repo-i-have-added`` throws an error that it can't find something. However you can then add an external index either via command line flag, environment variable, or config file (currently only per user, pip 6.0 will have per machine and per virtual env files as well). In fact the only difference between not using PyPI as the package repository and using PyPI as the package repository is that pip and setuptools ship with the PyPI URL out of the box.
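For reference, the per-user mechanism Donald describes looks like this in practice; the index URL below is a placeholder, not a real server:

```ini
; ~/.pip/pip.conf on Linux (pip.ini under %APPDATA%\pip on Windows).
; The URL is a placeholder for whatever external index you actually use.
[global]
extra-index-url = https://packages.example.com/simple/
```

The same setting is available per invocation as ``--extra-index-url`` on the pip command line, or via the ``PIP_EXTRA_INDEX_URL`` environment variable; replacing PyPI outright uses ``index-url``/``--index-url`` instead.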
Then PEP 470, which Paul mentioned, does one better and enables a discovery mechanism so that, if you register your external index with PyPI for your project, someone doing ``pip install a-thing-which-requires-an-external-index`` will be prompted to add the registered external index. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From donald at stufft.io Tue Sep 30 13:30:56 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Sep 2014 07:30:56 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: On September 30, 2014 at 5:19:50 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 30 Sep 2014 19:06, "M.-A. Lemburg" wrote: > > You're regularly bringing up this argument. > > > > Let's just be fair here: external hosting of packages has been made so > > user unfriendly in recent pip releases, that this has pretty much > > become a non-option for anyone who wants to create a user friendly > > package installation environment. > > > > pip is unfortunately using the same kind of > > --no-one-will-want-to-use-this-option-because-its-too-long > > approach as setuptools/easy_install has done in the past to force > > people into installing packages as eggs rather than installing > > the packages in the standard write to site-packages dir way. > > > > The end result is the same: users will not want to go > > through those extra hoops and thus packages not hosted on PyPI > > itself will be regarded as broken (because they don't install using > > the standard method; not because they are really broken). > > I personally think pip needs a virtualenv friendly equivalent to > yum.repos.d or conda channels to improve the multi-index experience.
> > This is a solved problem from my perspective, we just need folks willing to > spend the time making the case for the proven solution, and adapting it to > the PyPI ecosystem. In pip 1.5.x (current latest release) people can add additional indexes (or replace PyPI altogether) on a per user basis. In pip 6.0 (next release) that still exists, and in addition it also includes a global configuration file for the entire machine and a per virtual env configuration file. About the only thing it doesn't support is a directory of files a la apt.sources.d/*.list. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Tue Sep 30 14:58:09 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 30 Sep 2014 22:58:09 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: On 30 September 2014 21:30, Donald Stufft wrote: > In pip 1.5.x (current latest release) people can add additional indexes (or > replace PyPI altogether) on a per user basis. In pip 6.0 (next release) that > still exists, and in addition it also includes a global configuration file > for the entire machine and a per virtual env configuration file. > > About the only thing it doesn't support is a directory of files a la > apt.sources.d/*.list. Donald & I chatted about this off-list, and the base URL is currently the only configuration setting pip retains per index. While we may still want to explore named indices at some point in the future, at this point in time, they wouldn't add a lot over just using the index URL directly. So I think this is basically already covered on the pip side (once 6.0 goes out), and then PEP 470 aims to improve the user experience for external index discovery. Cheers, Nick.
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From david.genest at ubisoft.com Tue Sep 30 15:32:14 2014 From: david.genest at ubisoft.com (David Genest) Date: Tue, 30 Sep 2014 13:32:14 +0000 Subject: [Distutils] Wheels and dependent third party dlls on windows Message-ID: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> Hi, I was wondering what is the recommended approach to bundling runtime dll dependencies when using wheels. We are migrating from eggs to wheels for environment installation of our various python dependencies. Some of those have extension modules, and some have extension modules that depend on the presence of a third party dll (in our situation, libzmq-v100-mt-4_0_3.dll). Up to now, these dlls have been installed by the use of the scripts parameter in the setup command of setup.py, but https://mail.python.org/pipermail/distutils-sig/2014-July/024554.html points to it as not being a good idea. But the only way to get a dependent dll found on windows is to have it on PATH, and the scripts directory on windows is on path when a virtualenv is activated. I have observed two situations: 1) If we use pip wheel to build the wheel, the scripts parameter is ignored and the dlls do not even get to the archive. 2) If we use setup.py bdist_wheel, the dll gets into the archive, but this relies on the non-documented feature of packaging scripts-as-data of dlls. What is the correct approach at this time?
Thanks, David From p.f.moore at gmail.com Tue Sep 30 16:17:49 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 15:17:49 +0100 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 30 September 2014 14:32, David Genest wrote: > But the only way to get a dependent dll found on windows is to have it on PATH, and the scripts directory on > windows is on path when a virtualenv is activated. This is not true. Python loads DLLs with LOAD_WITH_ALTERED_SEARCH_PATH, to allow them to be located alongside the pyd file. You should therefore be able to ship the dependent dll in the package directory (which wheels support fine). Paul From p.f.moore at gmail.com Tue Sep 30 16:25:47 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 15:25:47 +0100 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> Message-ID: On 28 August 2014 19:58, Donald Stufft wrote: > To fix this I'm going to modify PyPI so that it uses the normalized name in > the /simple/ URL and redirects everything else to the non-normalized name. > I'm also going to submit a PR to bandersnatch so that it will use normalized > names for its directories and such as well. These two changes will make it so > that the client side will know ahead of time exactly what form the server expects > any given name to be in. This will allow a change in pip to happen which > will pre-normalize all names which will make the interaction with mirrors > better and will reduce the number of HTTP requests that a single ``pip install`` > needs to make.
Just to clarify, this means that if I want to find the simple index page for a distribution, without hitting redirects, I should first normalise the project name (so "Django" becomes "django") and then request https://pypi.python.org/simple/<normalised-name>/ (with a slash on the end). Is that correct? It seems to match what I see in practice (in particular, the version without a terminating slash redirects to the version with a terminating slash). The JSON API has the opposite behaviour - https://pypi.python.org/pypi/Django/json redirects to https://pypi.python.org/pypi/django/json. Should that not be changed to match? Will it be? Paul From david.genest at ubisoft.com Tue Sep 30 16:31:55 2014 From: david.genest at ubisoft.com (David Genest) Date: Tue, 30 Sep 2014 14:31:55 +0000 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> > This is not true. Python loads DLLs with LOAD_WITH_ALTERED_SEARCH_PATH, to allow them to be located alongside the pyd file. You should therefore be able to ship the > dependent dll in the package directory (which wheels support fine). > Paul Ok, so what if the dll is shared in a given environment (multiple extensions use it)?, the shared dll should be copied to every package? Won't that cause multiple loads by the system? Thanks for your response, D. From fungi at yuggoth.org Tue Sep 30 16:34:59 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Sep 2014 14:34:59 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: <20140930143459.GJ9816@yuggoth.org> On 2014-09-30 07:26:32 -0400 (-0400), Donald Stufft wrote: [...]
> I don't personally believe it makes sense for a source > distribution to have a build number. [...] I'm becoming less and less convinced it actually *is* a source distribution any more. My constant interaction with downstream Linux distro packagers shows a growing disinterest in consuming release "tarballs" of software; they would generally prefer to pull releases directly from tags in the project's revision control systems instead. Couple this with the fact that setup.py sdist can (and often does) include autogenerated content in its output which the packagers would rather strip or regenerate themselves, and I'm of the opinion that the tarballs I create are only for PyPI/pip consumption any longer. This effectively makes them a channel-specific packaging format rather than a generally reusable release source artifact. -- Jeremy Stanley From p.f.moore at gmail.com Tue Sep 30 16:37:16 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 15:37:16 +0100 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 30 September 2014 15:31, David Genest wrote: > Ok, so what if the dll is shared in a given environment (multiple extensions use it)?, the shared dll should be copied to every package? Won't that cause multiple loads by the system?
Paul From ncoghlan at gmail.com Tue Sep 30 16:41:38 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 1 Oct 2014 00:41:38 +1000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140930143459.GJ9816@yuggoth.org> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> Message-ID: On 1 October 2014 00:34, Jeremy Stanley wrote: > On 2014-09-30 07:26:32 -0400 (-0400), Donald Stufft wrote: > [...] >> I don?t personally believe it makes sense for a source >> distribution to have a build number. > [...] > > I'm becoming less and less convinced it actually *is* a source > distribution any more. My constant interaction with downstream Linux > distro packagers shows a growing disinterest in consuming release > "tarballs" of software, that they would generally prefer to pull > releases directly from tags in the project's revision control > systems instead. Which distro packagers? For Fedora, even if we pull from an upstream source control system, we'll still wrap it as a tarball inside an SRPM in order to feed it into the buld system. > Couple this with the fact that setup.py sdist can > (and often does) include autogenerated content in its output which > the packagers would rather strip or regenerate themselves, and I'm > of the opinion that the tarballs I create are only for PyPI/pip > consumption any longer. This effectively makes them a > channel-specific packaging format rather than a generally reusable > release source artifact. Why is your setup.py sdist including autogenerated content? It shouldn't be doing that. Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fungi at yuggoth.org Tue Sep 30 16:24:35 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Sep 2014 14:24:35 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542A72A0.3070006@egenix.com> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: <20140930142434.GI9816@yuggoth.org> On 2014-09-30 11:06:40 +0200 (+0200), M.-A. Lemburg wrote: [...] > You're regularly bringing up this argument. > > Let's just be fair here: external hosting of packages has been > made so user unfriendly in recent pip releases, that this has > pretty much become a non-option for anyone who wants to create a > user friendly package installation environment. [...] And I'm seeing this argument regularly brought up as well. As a heavy user of Python packages and someone who maintains very large systems depending on them, I had a hard time trusting pip back when it still would spontaneously grab software from wherever the upstream author had decided to stick it that day (with whatever hosting stability issues that implied). Our projects would regularly audit our hundreds of requirements just to make sure nobody *accidentally* added one which was hosted off PyPI, and that our dependencies hadn't suddenly decided to start sticking new releases off PyPI. The suggestion that some developers want to control their release process *so* tightly that they host their software in random places without uploading them to the community package system or quietly replace broken releases with new packages using the same version numbers is a non-argument as far as I'm concerned. 
The software authors I've talked to in these cases are pretty much always easily convinced that those are troublesome behaviors (once it's pointed out) and readily adjust their release processes to a more user-friendly end result. If there are a few who are so completely insistent on continuing in this manner the projects I work on will not use them (for our own sanity), and if pip and PyPI implement assurances against these which have a side effect of driving *those particular* development teams off of the community packaging channel then that can only be a positive net effect in my opinion. -- Jeremy Stanley From ncoghlan at gmail.com Tue Sep 30 16:44:18 2014 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 1 Oct 2014 00:44:18 +1000 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 1 October 2014 00:37, Paul Moore wrote: > On 30 September 2014 15:31, David Genest wrote: >> Ok, so what if the dll is shared in a given environment (multiple extensions use it)?, the shared dll should be copied to every package? Won't that cause multiple loads by the system? > > I honestly don't know in that case, sorry. You might get a better > answer on python-list for that, if no-one here can help. > > Presumably the usage is all within one distribution, otherwise the > question would have to be, which distribution ships the DLL? But that > question ends up leading onto the sort of discussion that starts > "well, I wouldn't design your system the way you have", which isn't > likely to be of much help to you :-( > > Sorry I can't offer any more help. Note that this is the external binary dependency problem that the scientific folks are currently using conda to address. 
It's basically the point where you cross the line from "language specific packaging system" to "multi-language cross-platform platform". That said, pip/wheel *may* get some capabilities along these lines in the future, it just isn't a high priority at this point. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Sep 30 16:45:22 2014 From: dholth at gmail.com (Daniel Holth) Date: Tue, 30 Sep 2014 10:45:22 -0400 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: Or you could just create a Python package that only contains the dll, and depend on it from your others. On Tue, Sep 30, 2014 at 10:44 AM, Nick Coghlan wrote: > On 1 October 2014 00:37, Paul Moore wrote: >> On 30 September 2014 15:31, David Genest wrote: >>> Ok, so what if the dll is shared in a given environment (multiple extensions use it)?, the shared dll should be copied to every package? Won't that cause multiple loads by the system? >> >> I honestly don't know in that case, sorry. You might get a better >> answer on python-list for that, if no-one here can help. >> >> Presumably the usage is all within one distribution, otherwise the >> question would have to be, which distribution ships the DLL? But that >> question ends up leading onto the sort of discussion that starts >> "well, I wouldn't design your system the way you have", which isn't >> likely to be of much help to you :-( >> >> Sorry I can't offer any more help. > > Note that this is the external binary dependency problem that the > scientific folks are currently using conda to address. It's basically > the point where you cross the line from "language specific packaging > system" to "multi-language cross-platform platform". 
> > That said, pip/wheel *may* get some capabilities along these lines in > the future, it just isn't a high priority at this point. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From robin at reportlab.com Tue Sep 30 12:35:15 2014 From: robin at reportlab.com (Robin Becker) Date: Tue, 30 Sep 2014 11:35:15 +0100 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> Message-ID: <542A8763.3020506@chamonix.reportlab.co.uk> On 29/09/2014 10:50, Nick Coghlan wrote: > On 29 Sep 2014 19:04, "M.-A. Lemburg" wrote: >> >> Do you seriously want to force package authors to cut a new release >> just because a single uploaded distribution file is broken for >> some reason and then ask all users who have already installed one >> of the non-broken ones to upgrade again, even though they are not >> affected ? > > Yes, I do. Silently changing released artefacts is actively user hostile. > It breaks mirroring, it breaks redistribution, it breaks security audits, > and it can even break installation for security conscious users that are > using peep rather than pip. > >> > ....... What would be the objection to removing or nulling a release package that had actual malware embedded in it somehow. It seems reasonable to have some last resort take down mechanism.
-- Robin Becker From yasumoto7 at gmail.com Tue Sep 30 00:42:08 2014 From: yasumoto7 at gmail.com (Joe Smith) Date: Mon, 29 Sep 2014 15:42:08 -0700 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <20140929103609.6a418fe5@anarchist.wooz.org> Message-ID: To add slightly to the bikeshed (sorry, since I'm not going to implement this and have no horse in the race) you could also take a look at spinning up a VM using Vagrant - it's worked out pretty well for Apache Aurora as a testing environment. Although you have the high upfront-cost of downloading the image, it's slightly offset by not requiring an internet connection for subsequent testing. On Mon, Sep 29, 2014 at 3:26 PM, Paul Moore wrote: > On 29 September 2014 23:16, Donald Stufft wrote: > >> It occurs to me that a devpi quickstart for OpenShift (or another > PaaS's) > >> free tier could be useful - if a devpi instance is just for pre-release > >> testing of packages, then the free tier should accommodate it > comfortably. > >> > >> > > > > I'm not familiar with what is available on OpenShift, however this > sounds > > like something that would be great for Heroku's Deploy button[1]. > > > > [1] https://devcenter.heroku.com/articles/heroku-button > > I've no experience with either of those services, but something like > that sounds awesome! (I've toyed with setting up my own local devpi > instance, but never got round to it...) > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From p.f.moore at gmail.com Tue Sep 30 17:03:58 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 16:03:58 +0100 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On 30 September 2014 15:45, Daniel Holth wrote: > Or you could just create a Python package that only contains the dll, > and depend on it from your others. The problem is getting the DLL on PATH. What you could do is distribute a package containing: 1. The dll 2. An __init__.py that adds the package directory (where the DLL is) to os.environ['PATH']. If you import this package before any of the ones that depend on the DLL (or even in the __init__ of those packages) then you should have PATH set up correctly, and things will work as you need. I think this works, although I will admit it feels like a bit of a hack to me. Paul From p.f.moore at gmail.com Tue Sep 30 17:04:45 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 16:04:45 +0100 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542A8763.3020506@chamonix.reportlab.co.uk> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <54292091.5010507@egenix.com> <542A8763.3020506@chamonix.reportlab.co.uk> Message-ID: On 30 September 2014 11:35, Robin Becker wrote: > What would be the objection to removing or nulling a release package that > had actual malware embedded in it some how. It seems reasonable to have some > last resort take down mechanism. None at all. Removal is specifically still allowed. 
Paul From p.f.moore at gmail.com Tue Sep 30 17:14:20 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 16:14:20 +0100 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> Message-ID: On 30 September 2014 15:25, Paul Moore wrote: > On 28 August 2014 19:58, Donald Stufft wrote: >> To fix this I'm going to modify PyPI so that it uses the normalized name in >> the /simple/ URL and redirects everything else to the non-normalized name. >> I'm also going to submit a PR to bandersnatch so that it will use normalized >> names for its directories and such as well. These two changes will make it so >> that the client side will know ahead of time exactly what form the server expects >> any given name to be in. This will allow a change in pip to happen which >> will pre-normalize all names which will make the interaction with mirrors >> better and will reduce the number of HTTP requests that a single ``pip install`` >> needs to make. > > Just to clarify, this means that if I want to find the simple index > page for a distribution, without hitting redirects, I should first > normalise the project name (so "Django" becomes "django") and then > request https://pypi.python.org/simple/<project-name>/ (with a > slash on the end). Is that correct? It seems to match what I see in > practice (in particular, the version without a terminating slash > redirects to the version with a terminating slash). > > The JSON API has the opposite behaviour - > https://pypi.python.org/pypi/Django/json redirects to > https://pypi.python.org/pypi/django/json. Should that not be changed > to match? Will it be? One further thought. Where is the definition of how to normalise a name? I could probably dig through the pip sources and find it, but it would be nice if it were documented somewhere. From experiment, it seems like lowercase, and with hyphens rather than underscores, is the definition.
Does PyPI allow names not allowed by http://legacy.python.org/dev/peps/pep-0426/#name and if it does, how are they normalised? In case it's not obvious, I'm writing a client for the PyPI API, and these questions are coming out of that process. Paul. PS The Python wiki has pages for the XMLRPC and JSON API. Any objections to me adding a page for the simple API? (The obvious objection being that it's documented somewhere else, and I should just put a pointer to the real documentation...) Paul From carl at oddbird.net Tue Sep 30 17:22:29 2014 From: carl at oddbird.net (Carl Meyer) Date: Tue, 30 Sep 2014 09:22:29 -0600 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> Message-ID: <542ACAB5.6010705@oddbird.net> On 09/30/2014 08:41 AM, Nick Coghlan wrote: > Why is your setup.py sdist including autogenerated content? It > shouldn't be doing that. Don't almost all sdists? At the very least egg-info is autogenerated, MANIFEST usually is too. Carl From barry at python.org Tue Sep 30 17:35:10 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 30 Sep 2014 11:35:10 -0400 Subject: [Distutils] Immutable Files on PyPI References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> Message-ID: <20140930113510.34bf128e@anarchist.wooz.org> On Sep 30, 2014, at 11:06 AM, M.-A. Lemburg wrote: >Installers and PyPI would then regard "3.1.4-1" as belonging to >release "3.1.4", but being a more current build as a distribution >file carrying "3.1.4" in its file name. Please don't literally use "3.1.4-1". That will cause all kinds of havoc with the Debian ecosystem. 
There we use a dash to separate upstream version numbers from Debian version numbers. Thus 3.1.4-1 means the first Debian upload of upstream's 3.1.4. 3.1.4-2 is the second, etc. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From wichert at wiggy.net Tue Sep 30 17:40:17 2014 From: wichert at wiggy.net (Wichert Akkerman) Date: Tue, 30 Sep 2014 17:40:17 +0200 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140930113510.34bf128e@anarchist.wooz.org> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930113510.34bf128e@anarchist.wooz.org> Message-ID: <97F49C9B-7786-40ED-8DDE-86F2A6E33EAA@wiggy.net> > On 30 Sep 2014, at 17:35, Barry Warsaw wrote: > > On Sep 30, 2014, at 11:06 AM, M.-A. Lemburg wrote: > >> Installers and PyPI would then regard "3.1.4-1" as belonging to >> release "3.1.4", but being a more current build as a distribution >> file carrying "3.1.4" in its file name. > > Please don't literally use "3.1.4-1". That will cause all kinds of havoc with > the Debian ecosystem. There we use a dash to separate upstream version > numbers from Debian version numbers. Thus 3.1.4-1 means the first Debian > upload of upstream's 3.1.4. 3.1.4-2 is the second, etc. Debian does allow 3.1.4-1-1. I forgot the exact rules, but I seem to remember the package version is considered to start after the last dash. Debian will also sort 3.1.4a after 3.1.4 unlike Python rules, so version "massaging" might be necessary in other situations as well. Wichert.
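Wichert's ordering remark can be illustrated with a rough sketch of dpkg's comparison rule for upstream version strings. This is heavily simplified (no epochs, no splitting off the Debian revision at the last dash) and is an illustration only, not a replacement for dpkg --compare-versions; the key property is that "~" sorts before everything, even the end of the string:

```python
import re

def _order(c):
    # dpkg character weights: '~' sorts before everything (including
    # the end of the string), letters sort before other non-digits.
    if c == "~":
        return -1
    return ord(c) if c.isalpha() else ord(c) + 256

def _cmp_nondigits(a, b):
    # Compare two non-digit runs character by character, padding the
    # shorter run with the end-of-string weight 0.
    for i in range(max(len(a), len(b))):
        wa = _order(a[i]) if i < len(a) else 0
        wb = _order(b[i]) if i < len(b) else 0
        if wa != wb:
            return wa - wb
    return 0

def deb_cmp(a, b):
    """Compare two upstream version strings, dpkg-style (simplified).

    Returns <0, 0, or >0, like the classic cmp()."""
    while a or b:
        # Compare the leading run of non-digit characters...
        na = re.match(r"\D*", a).group()
        nb = re.match(r"\D*", b).group()
        c = _cmp_nondigits(na, nb)
        if c:
            return c
        a, b = a[len(na):], b[len(nb):]
        # ...then the leading run of digits, numerically.
        da = re.match(r"\d*", a).group()
        db = re.match(r"\d*", b).group()
        c = int(da or "0") - int(db or "0")
        if c:
            return c
        a, b = a[len(da):], b[len(db):]
    return 0
```

Under these rules 3.1.4a sorts *after* 3.1.4 (as Wichert notes), while 3.1.4~a sorts *before* it, which is why a PyPI pre-release like 3.1.4a is typically repackaged as 3.1.4~a in Debian.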
From fungi at yuggoth.org Tue Sep 30 17:44:00 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Sep 2014 15:44:00 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <542ACAB5.6010705@oddbird.net> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> <542ACAB5.6010705@oddbird.net> Message-ID: <20140930154359.GL9816@yuggoth.org> On 2014-09-30 09:22:29 -0600 (-0600), Carl Meyer wrote: > On 09/30/2014 08:41 AM, Nick Coghlan wrote: > > Why is your setup.py sdist including autogenerated content? It > > shouldn't be doing that. > > Don't almost all sdists? At the very least egg-info is > autogenerated, MANIFEST usually is too. Bingo. Also in some cases I see files autogenerated from VCS metadata to avoid double-entry... things like change histories, authors lists, documentation indices, and even version numbers. -- Jeremy Stanley From fungi at yuggoth.org Tue Sep 30 17:55:30 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Sep 2014 15:55:30 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> Message-ID: <20140930155530.GM9816@yuggoth.org> On 2014-10-01 00:41:38 +1000 (+1000), Nick Coghlan wrote: [...] > Which distro packagers? For Fedora, even if we pull from an > upstream source control system, we'll still wrap it as a tarball > inside an SRPM in order to feed it into the build system. [...] Precisely that. Also a lot of Debian, Ubuntu, SuSE, et cetera packagers are following that same pattern.
Even when there is an upstream release tarball available, many prefer to create one from the VCS themselves and use that as the basis for their packaging. What I've seen suggests the increase in (not necessarily Python-based) projects who don't bother to create tarballs and simply "release" from their version control systems has resulted in a proliferation of distro packager countermeasures/immune responses. They're beginning to rely on tools which automate dealing with the fact that there may be no initial tarball (Debian's git-buildpackage for example), and in the end it becomes easier for them to just assume there is never an initial tarball and always create their own anyway. So while their packaging formats use tarballs internally, a lot of them are no longer using *upstream-provided* tarballs in source packages and the existence of tarballs in their source packages has instead become a mere implementation detail. -- Jeremy Stanley From olivier.grisel at ensta.org Tue Sep 30 17:56:40 2014 From: olivier.grisel at ensta.org (Olivier Grisel) Date: Tue, 30 Sep 2014 17:56:40 +0200 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: Thank you very much Steve for pushing that installer out, this is very appreciated. What is the story for project maintainers who want to also support Python 3.3+ (for 32 bit and 64 bit python) for their project with binary wheels for windows? At the moment it's possible to use the Windows SDK as documented here: http://scikit-learn.org/dev/install.html#building-on-windows However getting VC Express + Windows SDK is hard and slow to setup and cannot be scripted in a CI environment.
In the meantime, it's possible to use CI environments that already feature all the necessary versions of the VC compilers and libraries such as appveyor.com, see this demo project: https://github.com/ogrisel/python-appveyor-demo https://ci.appveyor.com/project/ogrisel/python-appveyor-demo -- Olivier From p.f.moore at gmail.com Tue Sep 30 18:03:42 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 17:03:42 +0100 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: On 30 September 2014 16:56, Olivier Grisel wrote: > What is the story for project maintainers who want to also support > Python 3.3+ (for 32 bit and 64 bit python) for their project with > binary wheels for windows? It would be so easy at this point to ask "What's the chance of a similarly packaged version of VS2010 for Python 3.2/3.3/3.4?" But I really don't want Steve to get into any trouble with people saying "now look what you've started" :-) Paul From Steve.Dower at microsoft.com Tue Sep 30 18:04:43 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 30 Sep 2014 16:04:43 +0000 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: <5e3f2a37096540b1af22ba767904f398@BN1PR0301MB0723.namprd03.prod.outlook.com> Olivier Grisel wrote: > Thank you very much Steve for pushing that installer out, this is very appreciated. > > What is the story for project maintainers who want to also support Python 3.3+ > (for 32 bit and 64 bit python) for their project with binary wheels for windows?
> At the moment it's possible to use the Windows SDK as documented here: > > http://scikit-learn.org/dev/install.html#building-on-windows > > However getting VC Express + Windows SDK is hard and slow to setup and cannot be > scripted in a CI environment. It can be, but there are a few tricks involved... > In the mean time, it's possible to use CI environments that already feature all > the necessary versions of the VC compilers and libraries such as appveyor.com, > see this demo project: > > https://github.com/ogrisel/python-appveyor-demo > https://ci.appveyor.com/project/ogrisel/python-appveyor-demo This is the best way to have it set up - create a base VM image for your CI environment manually and clone it. I believe all the major cloud providers support this, though using a CI specialist like Appveyor makes it even easier. As far as the future story, it will probably be "move to 3.5 on VC14 as soon as possible". Internally, I'll be pushing for a CI-compatible installer for our build tools, which I expect will actually get quite a bit of traction right now. Unfortunately, going back in time to do it for both VC9 and VC10 was not an option. We chose VC9 because 2.7 is where people are stuck, while migrating from 3.3->3.5 should not be as big an issue. Cheers, Steve > -- > Olivier From barry at python.org Tue Sep 30 18:07:23 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 30 Sep 2014 12:07:23 -0400 Subject: [Distutils] Immutable Files on PyPI References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> Message-ID: <20140930120723.44c48bca@anarchist.wooz.org> On Sep 30, 2014, at 02:34 PM, Jeremy Stanley wrote: >I'm becoming less and less convinced it actually *is* a source >distribution any more. 
My constant interaction with downstream Linux >distro packagers shows a growing disinterest in consuming release >"tarballs" of software, that they would generally prefer to pull >releases directly from tags in the project's revision control >systems instead. This is not a universally held consensus. We had a discussion about this at the recently concluded Debian conference. There are folks who only want to use git tags as the consumption point for Debian packages, but this opinion was not the majority opinion. There's no guarantee that what you get from a tagged upstream source revision will match what comes in the sdist tarball. Plus, the greater Debian ecosystem is firmly camped in the tarball world, so even if you do checkout from a tag, you still have to build a tarball for uploads, *and* you have to do it in a binary exact copy reproducible way. Thus, in the Debian Python team our policy is that if upstream produces tarballs (as is the case for the vast majority of our packages, which are sourced from PyPI), then we want the Debian package to use tarballs. There can be exceptions to the rule, but still today they are exceptions. I don't think the tarball format is dead yet. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From barry at python.org Tue Sep 30 18:11:50 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 30 Sep 2014 12:11:50 -0400 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <97F49C9B-7786-40ED-8DDE-86F2A6E33EAA@wiggy.net> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930113510.34bf128e@anarchist.wooz.org> <97F49C9B-7786-40ED-8DDE-86F2A6E33EAA@wiggy.net> Message-ID: <20140930121150.0afb65e6@anarchist.wooz.org> On Sep 30, 2014, at 05:40 PM, Wichert Akkerman wrote: >Debian does allow 3.1.4-1-1. I forgot the exact rules, but I seem to remember >the package version is considered to start after the last dash. Debian will >also sort 3.1.4a after 3.1.4 unlike Python rules, so version ?massaging? >might be necessary in other situations as well. Yeah, but it's maddeningly confusing. The havoc I mention is human based[*] if not tools based. ;) -Barry [*] at least to *this* human. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From Steve.Dower at microsoft.com Tue Sep 30 18:07:49 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 30 Sep 2014 16:07:49 +0000 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> Message-ID: <1607e5f43fc545f9bc48d35d576b6243@BN1PR0301MB0723.namprd03.prod.outlook.com> Paul Moore wrote: > On 30 September 2014 16:56, Olivier Grisel wrote: >> What is the story for project maintainers who want to also support >> Python 3.3+ (for 32 bit and 64 bit python) for their project with >> binary wheels for windows? > > It would be so easy at this point to ask "What's the chance of a similarly packaged > version of VS2010 for Python 3.2/3.3/3.4?" But I really don't want Steve to get into > any trouble with people saying "now look what you've started" :-) :-) The answer is basically no chance - the slippery slope was considered and shut down. If VC14 slips significantly and we have to stick with VC10 for Python 3.5, then I'll make the case again and see what we get, but for now the future story is to upgrade. Luckily, 3.3->3.5 is not going to be as hard as 2.7->3.5. 
Cheers, Steve > >Paul From fungi at yuggoth.org Tue Sep 30 18:25:31 2014 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 30 Sep 2014 16:25:31 +0000 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <20140930120723.44c48bca@anarchist.wooz.org> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> <20140930120723.44c48bca@anarchist.wooz.org> Message-ID: <20140930162531.GN9816@yuggoth.org> On 2014-09-30 12:07:23 -0400 (-0400), Barry Warsaw wrote: > We had a discussion about this at the recently concluded Debian > conference. There are folks who only want to use git tags as the > consumption point for Debian packages, but this opinion was not > the majority opinion. Good to know. The Debian Developer packaging the majority of the projects I work on must be in that minority. > There's no guarantee that what you get from a tagged upstream > source revision will match what comes in the sdist tarball. [...] Indeed, we've implemented quite a few workarounds specifically requested by distro packagers who want to be able to ignore our tarballs and use their own tools/workflow to generate them without ever even running sdist. It seems backwards to me, but I'm not the one doing their packaging work. > Thus, in the Debian Python team our policy is that if upstream produces > tarballs (as is the case for the vast majority of our packages, which are > sourced from PyPI), then we want the Debian package to use tarballs. [...] Refreshing to hear! 
-- Jeremy Stanley From p.f.moore at gmail.com Tue Sep 30 18:54:01 2014 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Sep 2014 17:54:01 +0100 Subject: [Distutils] Microsoft Visual C++ Compiler for Python 2.7 In-Reply-To: <1607e5f43fc545f9bc48d35d576b6243@BN1PR0301MB0723.namprd03.prod.outlook.com> References: <416b70639b2c41a1a109fd14623e762b@DM2PR0301MB0734.namprd03.prod.outlook.com> <3d976f18b62d4db1a9b9da30976cdd85@DM2PR0301MB0734.namprd03.prod.outlook.com> <1607e5f43fc545f9bc48d35d576b6243@BN1PR0301MB0723.namprd03.prod.outlook.com> Message-ID: On 30 September 2014 17:07, Steve Dower wrote: > The answer is basically no chance - the slippery slope was considered and shut down. Fair enough. Actually, it's good to know that this sort of thing was thought through. > If VC14 slips significantly and we have to stick with VC10 for Python 3.5, then I'll make the case again and see what we get, but for now the future story is to upgrade. Luckily, 3.3->3.5 is not going to be as hard as 2.7->3.5. Agreed - and given that VC14 Express will include 32- and 64-bit compilers, the whole SDK rigmarole is avoided which was the key pain point (well, that and the fact that things went out of support, but the forward compatibility of VC14 addresses that one too). Paul From barry at python.org Tue Sep 30 20:07:31 2014 From: barry at python.org (Barry Warsaw) Date: Tue, 30 Sep 2014 14:07:31 -0400 Subject: [Distutils] Immutable Files on PyPI References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930143459.GJ9816@yuggoth.org> <20140930120723.44c48bca@anarchist.wooz.org> <20140930162531.GN9816@yuggoth.org> Message-ID: <20140930140731.62d3a5d1@anarchist.wooz.org> On Sep 30, 2014, at 04:25 PM, Jeremy Stanley wrote: >Good to know. The Debian Developer packaging the majority of the >projects I work on must be in that minority. 
IIRC, the OpenStack packagers were probably the most prominent proponents of release-from-tag. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From robertc at robertcollins.net Tue Sep 30 21:17:45 2014 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 1 Oct 2014 08:17:45 +1300 Subject: [Distutils] Immutable Files on PyPI In-Reply-To: <97F49C9B-7786-40ED-8DDE-86F2A6E33EAA@wiggy.net> References: <0F84196B-1546-416D-B2A7-0099DF82089B@rackspace.com> <54287F6D.4010901@egenix.com> <1B83C5C2-373C-4C85-9306-4429C5BD5250@rackspace.com> <54291C62.8050405@egenix.com> <542A72A0.3070006@egenix.com> <20140930113510.34bf128e@anarchist.wooz.org> <97F49C9B-7786-40ED-8DDE-86F2A6E33EAA@wiggy.net> Message-ID: On 1 October 2014 04:40, Wichert Akkerman wrote: > >> On 30 Sep 2014, at 17:35, Barry Warsaw wrote: >> >> On Sep 30, 2014, at 11:06 AM, M.-A. Lemburg wrote: >> >>> Installers and PyPI would then regard "3.1.4-1" as belonging to >>> release "3.1.4", but being a more current build as a distribution >>> file carrying "3.1.4" in its file name. >> >> Please don't literally use "3.1.4-1". That will cause all kinds of havoc with >> the Debian ecosystem. There we use a dash to separate upstream version >> numbers from Debian version numbers. Thus 3.1.4-1 means the first Debian >> upload of upstream's 3.1.4. 3.1.4-2 is the second, etc. > > Debian does allow 3.1.4-1-1. I forgot the exact rules, but I seem to remember the package version is considered to start after the last dash. Debian will also sort 3.1.4a after 3.1.4 unlike Python rules, so version "massaging" might be necessary in other situations as well. It's all in policy :) PyPI 3.1.4a should be 3.1.4~a in Debian.
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From chris.barker at noaa.gov Tue Sep 30 21:30:12 2014 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 30 Sep 2014 12:30:12 -0700 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On Tue, Sep 30, 2014 at 7:45 AM, Daniel Holth wrote: > Or you could just create a Python package that only contains the dll, > and depend on it from your others. but it won't be on the dll search path. Paul Moore wrote: > What you could do is distribute a package containing: > 1. The dll > 2. An __init__.py that adds the package directory (where the DLL is) > to os.environ['PATH']. > If you import this package before any of the ones that depend on the > DLL (or even in the __init__ of those packages) then you should have > PATH set up correctly, and things will work as you need. > I think this works, although I will admit it feels like a bit of a hack to > me. > -- I'm not sure it will -- I tried something similar -- where I compiled some code into an extension, hoping that importing that extension would make that code available to other extensions -- works fine on OS-X and Linux, but no dice on Windows. So we build a dll of the code we need to share, and link all the extensions that need it to it. In our case, all the extensions are part of the same package, so we can put the dll in with the extensions and we're set. It seems the "right" thing to do here is put the dll in with the dlls provided by python (can't remember that path right now -- no Windows box running) -- but I don't know that you can do that with wheel -- and it would make it easy for different packages to stomp on each-other.
I actually think the thing to do here is either statically link it to each extension that needs it, or deliver it with each of them, in the package dir. Or, if you don't want duplicates, then use conda -- it's designed just for this. I'd also look at what Christoph Gohlke does with his MSI installers: http://www.lfd.uci.edu/~gohlke/pythonlibs/ maybe he's doing something you can do with MSI that you can't do with wheels, but worth a look. -Chris Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Sep 30 21:45:23 2014 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Sep 2014 15:45:23 -0400 Subject: [Distutils] Handling Case/Normalization Differences In-Reply-To: References: <26B92C5C-FE81-4D7D-BEC1-7E89D7791451@stufft.io> Message-ID: > On Sep 30, 2014, at 11:14 AM, Paul Moore wrote: > > On 30 September 2014 15:25, Paul Moore wrote: >> On 28 August 2014 19:58, Donald Stufft wrote: >>> To fix this I'm going to modify PyPI so that it uses the normalized name in >>> the /simple/ URL and redirects everything else to the non-normalized name. >>> I'm also going to submit a PR to bandersnatch so that it will use normalized >>> names for it's directories and such as well. These two changes will make it so >>> that the client side will know ahead of time exactly what form the server expects >>> any given name to be in. This will allow a change in pip to happen which >>> will pre-normalize all names which will make the interaction with mirrors >>> better and will reduce the number of HTTP requests that a single ``pip install`` >>> needs to make.
>> >> Just to clarify, this means that if I want to find the simple index >> page for a distribution, without hitting redirects, I should first >> normalise the project name (so "Django" becomes "django") and then >> request https://pypi.python.org/simple// (with a >> slash on the end). Is that correct? It seems to match what I see in >> practice (in particular, the version without a terminating slash >> redirects to the version with a terminating slash). >> >> The JSON API has the opposite behaviour - >> https://pypi.python.org/pypi/Django/json redirects to >> https://pypi.python.org/pypi/django/json. Should that not be changed >> to match? Will it be? > > One further thought. Where is the definition of how to normalise a > name? I could probably dig through the pip sources and find it, but it > would be nice if it were documented somewhere. From experiment, it > seems like lowercase, and with hyphens rather than underscores, is the > definition. Does PyPI allow names not allowed by > http://legacy.python.org/dev/peps/pep-0426/#name and if it does, how > are they normalised? > > In case it's not obvious, I'm writing a client for the PyPI API, and > these questions are coming out of that process. > > Paul. > > PS The Python wiki has pages for the XMLRPC and JSON API. Any > objections to me adding a page for the simple API? (The obvious > objection being that it's documented somewhere else, and I should just > put a pointer to the real documentation...) > > Paul PyPI follows PEP 426, I think we even include the confusables support. Generally the normalization is done with pkg_resources.safe_name(...).lower(). I don't think there's any reason not to document it, setuptools has its routine documented but that doesn't have everything that the /simple/ API supports documented since it's really documentation for what setuptools does. The URL redirect for the json endpoint was made to match what happens with /pypi/django/.
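The normalization Donald describes (pkg_resources.safe_name followed by lower-casing) can be reproduced without importing setuptools; the regex below mirrors safe_name's behaviour of collapsing every run of characters outside letters, digits, and dots to a single hyphen:

```python
import re

def normalized_name(name):
    """Approximate pkg_resources.safe_name(name).lower(): collapse runs
    of characters outside [A-Za-z0-9.] to a single '-' and lowercase."""
    return re.sub(r"[^A-Za-z0-9.]+", "-", name).lower()
```

This matches what Paul observed experimentally: "Django" becomes "django", and underscores become hyphens, so "my_package" becomes "my-package", while dotted names like "zope.interface" pass through unchanged.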
Lately I've been thinking that maybe we should just use the normalized form in URLs always and use the author provided name for display purposes. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From Steve.Dower at microsoft.com Tue Sep 30 22:28:44 2014 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 30 Sep 2014 20:28:44 +0000 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: <7e2ea5d99be446b1ac7d6b4b263fbc96@DM2PR0301MB0734.namprd03.prod.outlook.com> David Genest wrote: > Subject: Re: [Distutils] Wheels and dependent third party dlls on windows > > >> This is not true. Python loads DLLs with >> LOAD_WITH_ALTERED_SEARCH_PATH, to allow them to be located alongside the pyd > file. You should therefore be able to ship the dependent dll in the package > directory (which wheels support fine). > >> Paul > > Ok, so what if the dll is shared in a given environment (multiple extensions use > it)?, the shared dll should be copied to every package? Won't that cause > multiple loads by the system? A DLL can only be loaded once per process (python.exe, in this case) and it will be loaded based on its file name (not the full path). Whoever loads first will win every future load for the same filename. If you're loading it directly, it's fairly easy to rename a DLL to something likely to be unique to your project (or at least to put a version number in it) so that other packages won't use it. There are more complicated approaches using manifests and activation contexts (this is how different .pyd files with the same name can be correctly loaded), but ensuring a unique name is much easier.
If the DLL is loaded implicitly by a .pyd, then as Paul says it should be loaded correctly if it is alongside the .pyd. Dependency Walker from www.dependencywalker.com is a great tool for checking what DLLs will be loaded by an executable or DLL. I recommend enabling profiling of your python.exe process when you try and import your packages to see where it is looking for its dependencies. Hope that helps, Steve > Thanks for your response, > > D. > From cournape at gmail.com Tue Sep 30 22:47:59 2014 From: cournape at gmail.com (David Cournapeau) Date: Tue, 30 Sep 2014 21:47:59 +0100 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On Tue, Sep 30, 2014 at 3:31 PM, David Genest wrote: > > > This is not true. Python loads DLLs with LOAD_WITH_ALTERED_SEARCH_PATH, > to allow them to be located alongside the pyd file. You should therefore be > able to ship the > > dependent dll in the package directory (which wheels support fine). > > > Paul > > Ok, so what if the dll is shared in a given environment (multiple > extensions use it)?, the shared dll should be copied to every package? > Won't that cause multiple loads by the system? > Yes it will, and it is indeed problematic. There are no great solutions: - bundle it in your package - have a separate wheel and then put it in PATH - have a separate wheel but use preload tricks (e.g. using ctypes) to avoid using PATH There are better solutions if one can patch python itself, though that's obviously not practical in most cases. David > Thanks for your response, > > D. 
> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Tue Sep 30 22:49:36 2014 From: cournape at gmail.com (David Cournapeau) Date: Tue, 30 Sep 2014 21:49:36 +0100 Subject: [Distutils] Wheels and dependent third party dlls on windows In-Reply-To: References: <469c55682b3d415e8a0bc272be536d2b@MSR-MAIL-EXCH01.ubisoft.org> <390b0a534a3042e79a74affb90f3cafe@MSR-MAIL-EXCH01.ubisoft.org> Message-ID: On Tue, Sep 30, 2014 at 3:44 PM, Nick Coghlan wrote: > On 1 October 2014 00:37, Paul Moore wrote: > > On 30 September 2014 15:31, David Genest > wrote: > >> Ok, so what if the dll is shared in a given environment (multiple > extensions use it)?, the shared dll should be copied to every package? > Won't that cause multiple loads by the system? > > > > I honestly don't know in that case, sorry. You might get a better > > answer on python-list for that, if no-one here can help. > > > > Presumably the usage is all within one distribution, otherwise the > > question would have to be, which distribution ships the DLL? But that > > question ends up leading onto the sort of discussion that starts > > "well, I wouldn't design your system the way you have", which isn't > > likely to be of much help to you :-( > > > > Sorry I can't offer any more help. > > Note that this is the external binary dependency problem that the > scientific folks are currently using conda to address. It's basically > the point where you cross the line from "language specific packaging > system" to "multi-language cross-platform platform". > Conda is one such solution, not the solution ;) I don't know any "sumo" distribution which solves this problem correctly ATM, and windows makes this rather difficult to solve. 
David > That said, pip/wheel *may* get some capabilities along these lines in > the future, it just isn't a high priority at this point. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL:
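The ctypes preload trick David mentions above could be sketched as follows. The DLL path is hypothetical; the idea, per Steve's explanation earlier in the thread, is that the copy loaded first by full path wins every later load of the same file name, so dependent extensions resolve to it without touching PATH:

```python
import ctypes
import os

def preload_shared_library(path):
    """Load a shared library by absolute path before importing any
    extension module that links against it by bare file name.
    Returns the library handle, or None if the file does not exist.
    (The path argument is hypothetical/site-specific.)"""
    if not os.path.exists(path):
        return None
    return ctypes.CDLL(path)

# e.g., near the top of a package __init__.py, before the extensions:
# preload_shared_library(os.path.join(os.path.dirname(__file__), "shared.dll"))
```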