From g2p.code at gmail.com Mon Jul 1 00:39:33 2013 From: g2p.code at gmail.com (Gabriel de Perthuis) Date: Sun, 30 Jun 2013 22:39:33 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: Message-ID: On Sun, 30 Jun 2013 21:21:54 +1000, Nick Coghlan wrote: > On 30 June 2013 18:53, Vinay Sajip wrote: >> Nick Coghlan gmail.com> writes: >> >>> No, because the semantic dependencies form a Cartesian product with >>> the extras. You can define :meta:, :run:, :test:, :build: and :dev: >>> dependencies for any extra. So if, for example, you have a "localweb" >>> extra, then you can define extra test dependencies for that. >>> >>> The semantic specifiers determine *which sets of dependencies* you're >>> interested in, while the explicit extras define optional subsets of >>> those. >> >> Isn't that the same as having an additional field in the dependency mapping? >> It seems like that's how one would organise it at the RDBMS level, anyway. >> >> { >> "install": "localweb-test-util [win] (>= 1.0)", >> "extra": "localweb", >> "environment": "sys_platform == 'win32'", >> "kind": ":test:" >> } > > You certainly *could* define it like that, but no existing dependency > system I'm aware of does it that way. If they allow for anything other > than runtime dependencies in the first place, they define a different > top level field: > > * setuptools has requires and install_requires > * PEP 346 has Requires-Dist and Setup-Requires-Dist > * RPM has Requires and BuildRequires > * npm has dependencies and devDependencies At least for Debian, and probably RPM, source dependencies have a different field name because they are carried by a source package rather than a binary one. The nature of the dependencies isn't different, the required packages are binary in both cases. The cartesian product might be overkill. If someone elects to install development dependencies I don't see a point in picking and choosing. There's enough support noise when people fail to build from source, and while an author is knowledgeable and might conceive more than one way to set things up, publishing them would cause more trouble than it's worth. So it would prefer that dev and test be extras with well known names, so that dev, test, and any other extras define dependencies with a minimum of ambiguity and without the need for a second level of qualifiers. From donald at stufft.io Mon Jul 1 00:46:51 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Jun 2013 18:46:51 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: Message-ID: <3EB91E50-ECCD-4397-A25A-2D4300BE7F93@stufft.io> On Jun 30, 2013, at 6:39 PM, Gabriel de Perthuis wrote: > So it would prefer that dev and test be extras with well known names, > so that dev, test, and any other extras define dependencies with a > minimum of ambiguity and without the need for a second level of > qualifiers. "Well known names" is way more ambiguous than a top level field. It's easy to have minor variances across various packages, "test" vs "tests", "docs", "doc", "documentation". Both top level and "kind" share the fact that there is a limited number of allowed names, which makes it simple to validate that the same name is being used everywhere (because anything outside of those limited numbers are rejected). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From g2p.code at gmail.com Mon Jul 1 00:51:24 2013 From: g2p.code at gmail.com (Gabriel de Perthuis) Date: Sun, 30 Jun 2013 22:51:24 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <3EB91E50-ECCD-4397-A25A-2D4300BE7F93@stufft.io> Message-ID: On Sun, 30 Jun 2013 18:46:51 -0400, Donald Stufft wrote: > On Jun 30, 2013, at 6:39 PM, Gabriel de Perthuis wrote: > >> So it would prefer that dev and test be extras with well known names, >> so that dev, test, and any other extras define dependencies with a >> minimum of ambiguity and without the need for a second level of >> qualifiers. > > "Well known names" is way more ambiguous than a top level field. > It's easy to have minor variances across various packages, "test" vs > "tests", "docs", "doc", "documentation". Both top level and "kind" > share the fact that there is a limited number of allowed names, > which makes it simple to validate that the same name is being used > everywhere (because anything outside of those limited numbers are > rejected). These well-known names would also have some tool support. Something like `pip install-dev` would be sufficient. From donald at stufft.io Mon Jul 1 00:52:46 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Jun 2013 18:52:46 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <3EB91E50-ECCD-4397-A25A-2D4300BE7F93@stufft.io> Message-ID: On Jun 30, 2013, at 6:51 PM, Gabriel de Perthuis wrote: > On Sun, 30 Jun 2013 18:46:51 -0400, Donald Stufft wrote: >> On Jun 30, 2013, at 6:39 PM, Gabriel de Perthuis wrote: >> >>> So it would prefer that dev and test be extras with well known names, >>> so that dev, test, and any other extras define dependencies with a >>> minimum of ambiguity and without the need for a second level of >>> qualifiers. >> >> "Well known names" is way more ambiguous than a top level field. >> It's easy to have minor variances across various packages, "test" vs >> "tests", "docs", "doc", "documentation". Both top level and "kind" >> share the fact that there is a limited number of allowed names, >> which makes it simple to validate that the same name is being used >> everywhere (because anything outside of those limited numbers are >> rejected). > > These well-known names would also have some tool support. > Something like `pip install-dev` would be sufficient. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig But when defining them, it's very easy to accidentally use "tests" instead of "test". ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From g2p.code at gmail.com Mon Jul 1 00:58:15 2013 From: g2p.code at gmail.com (Gabriel de Perthuis) Date: Sun, 30 Jun 2013 22:58:15 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <3EB91E50-ECCD-4397-A25A-2D4300BE7F93@stufft.io> Message-ID: On Sun, 30 Jun 2013 18:52:46 -0400, Donald Stufft wrote: > On Jun 30, 2013, at 6:51 PM, Gabriel de Perthuis wrote: >> On Sun, 30 Jun 2013 18:46:51 -0400, Donald Stufft wrote: >>> On Jun 30, 2013, at 6:39 PM, Gabriel de Perthuis wrote: >>> >>>> So it would prefer that dev and test be extras with well known names, >>>> so that dev, test, and any other extras define dependencies with a >>>> minimum of ambiguity and without the need for a second level of >>>> qualifiers. >>> >>> "Well known names" is way more ambiguous than a top level field. >>> It's easy to have minor variances across various packages, "test" vs >>> "tests", "docs", "doc", "documentation". Both top level and "kind" >>> share the fact that there is a limited number of allowed names, >>> which makes it simple to validate that the same name is being used >>> everywhere (because anything outside of those limited numbers are >>> rejected). >> >> These well-known names would also have some tool support. >> Something like `pip install-dev` would be sufficient. >> > But when defining them, it's very easy to accidentally use "tests" > instead of "test". A lint tool can warn about these names, and a PyPI server could even block them for new-style packages. From donald at stufft.io Mon Jul 1 01:00:31 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 30 Jun 2013 19:00:31 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <3EB91E50-ECCD-4397-A25A-2D4300BE7F93@stufft.io> Message-ID: <531E0B11-62E9-4806-ADFB-6F84BDA1DC99@stufft.io> On Jun 30, 2013, at 6:58 PM, Gabriel de Perthuis wrote: > On Sun, 30 Jun 2013 18:52:46 -0400, Donald Stufft wrote: >> On Jun 30, 2013, at 6:51 PM, Gabriel de Perthuis wrote: >>> On Sun, 30 Jun 2013 18:46:51 -0400, Donald Stufft wrote: >>>> On Jun 30, 2013, at 6:39 PM, Gabriel de Perthuis wrote: >>>> >>>>> So it would prefer that dev and test be extras with well known names, >>>>> so that dev, test, and any other extras define dependencies with a >>>>> minimum of ambiguity and without the need for a second level of >>>>> qualifiers. >>>> >>>> "Well known names" is way more ambiguous than a top level field. >>>> It's easy to have minor variances across various packages, "test" vs >>>> "tests", "docs", "doc", "documentation". Both top level and "kind" >>>> share the fact that there is a limited number of allowed names, >>>> which makes it simple to validate that the same name is being used >>>> everywhere (because anything outside of those limited numbers are >>>> rejected). >>> >>> These well-known names would also have some tool support. >>> Something like `pip install-dev` would be sufficient. >>> >> But when defining them, it's very easy to accidentally use "tests" >> instead of "test". > > A lint tool can warn about these names, and a PyPI server could even > block them for new-style packages. 
> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Or use a separate field (either the name, or the aforementioned "kind" field) and remove all ambiguity from the concept and remove the need to have a lint tool or guess what the person might mean. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Mon Jul 1 01:04:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 1 Jul 2013 09:04:58 +1000 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: Message-ID: On 1 Jul 2013 08:40, "Gabriel de Perthuis" wrote: > > On Sun, 30 Jun 2013 21:21:54 +1000, Nick Coghlan wrote: > > On 30 June 2013 18:53, Vinay Sajip wrote: > >> Nick Coghlan gmail.com> writes: > >> > >>> No, because the semantic dependencies form a Cartesian product with > >>> the extras. You can define :meta:, :run:, :test:, :build: and :dev: > >>> dependencies for any extra. So if, for example, you have a "localweb" > >>> extra, then you can define extra test dependencies for that. > >>> > >>> The semantic specifiers determine *which sets of dependencies* you're > >>> interested in, while the explicit extras define optional subsets of > >>> those. > >> > >> Isn't that the same as having an additional field in the dependency mapping? > >> It seems like that's how one would organise it at the RDBMS level, anyway. > >> > >> { > >> "install": "localweb-test-util [win] (>= 1.0)", > >> "extra": "localweb", > >> "environment": "sys_platform == 'win32'", > >> "kind": ":test:" > >> } > > > > You certainly *could* define it like that, but no existing dependency > > system I'm aware of does it that way. If they allow for anything other > > than runtime dependencies in the first place, they define a different > > top level field: > > > > * setuptools has requires and install_requires > > * PEP 346 has Requires-Dist and Setup-Requires-Dist > > * RPM has Requires and BuildRequires > > * npm has dependencies and devDependencies > > At least for Debian, and probably RPM, source dependencies have a > different field name because they are carried by a source package > rather than a binary one. The nature of the dependencies isn't > different, the required packages are binary in both cases. > > The cartesian product might be overkill. If someone elects to install > development dependencies I don't see a point in picking and choosing. > There's enough support noise when people fail to build from source, > and while an author is knowledgeable and might conceive more than one > way to set things up, publishing them would cause more trouble than > it's worth. I've had to port stuff to build on s390s - it would have made my life much easier if the dependencies that were only needed for optional x86_64 specific C accelerators had been clearly marked, rather than my having to weed them out through trial and error. What you're talking about is a rationale for sensible defaults and helper commands in tools (and the PEP does go into that), but it's not a good reason to limit the expressiveness of the format itself. 
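For concreteness, a rough sketch of what that product can look like in the draft JSON metadata. The field names follow the current draft (run_may_require, test_may_require, and so on), but treat both them and the package names as illustrative only, since the draft is still changing:

    "run_may_require": [
        {"dependencies": ["requests"]},
        {"dependencies": ["lxml"], "extra": "localweb"}
    ],
    "test_may_require": [
        {"dependencies": ["pytest"]},
        {"dependencies": ["webtest"], "extra": "localweb"}
    ]

Asking for the test dependencies of the "localweb" extra then roughly means taking the test_may_require entries that have no "extra" key plus the ones marked "localweb".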
> > So it would prefer that dev and test be extras with well known names, > so that dev, test, an any other extras define dependencies with a > minimum of ambiguity and without the need for a second level of > qualifiers. How would you express an optional dependency on Cython for optional C accelerators in such a system? The PEP is as it is because I think the payoff in expressiveness is worth the increase in complexity. Saying "You shouldn't want to describe such situations clearly and succinctly" is not a compelling argument. Cheers, Nick. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Jul 1 15:11:47 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Jul 2013 09:11:47 -0400 Subject: [Distutils] PyPI Hosting Mode Migration Message-ID: <787B20BB-1369-4CCE-974D-AE7715FC5282@stufft.io> Hello! I've just migrated PyPI to mostly no external url hosting modes as per PEP438. This should result in a huge speedup when installing a wide number of packages. Some stats: 18,753 packages migrated to pypi-explicit (First option) 1,022 packages migrated to pypi-scrape (Second option) 25,759 total packages using pypi-explicit 1,046 total packages using pypi-scrape 5,384 total packages using pypi-scrape-crawl The scripts that discovered what version a project could be are located at https://github.com/dstufft/pypi.linkcheck/tree/migration. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Mon Jul 1 15:14:57 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 1 Jul 2013 09:14:57 -0400 Subject: [Distutils] PyPI Hosting Mode Migration In-Reply-To: <787B20BB-1369-4CCE-974D-AE7715FC5282@stufft.io> References: <787B20BB-1369-4CCE-974D-AE7715FC5282@stufft.io> Message-ID: On Jul 1, 2013, at 9:11 AM, Donald Stufft wrote: > Hello! > > I've just migrated PyPI to mostly no external url hosting modes as per PEP438. This should result in a huge speedup when installing a wide number of packages. > > Some stats: > > 18,753 packages migrated to pypi-explicit (First option) > 1,022 packages migrated to pypi-scrape (Second option) > > 25,759 total packages using pypi-explicit > 1,046 total packages using pypi-scrape > 5,384 total packages using pypi-scrape-crawl > > The scripts that discovered what version a project could be are located at https://github.com/dstufft/pypi.linkcheck/tree/migration. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig s/version a project/mode a project/ ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ethan at stoneleaf.us Mon Jul 1 20:19:01 2013 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 01 Jul 2013 11:19:01 -0700 Subject: [Distutils] PyPI upload: zip okay, tar fails with "invalid distribution file" Message-ID: <51D1C815.6030303@stoneleaf.us> I migrated my split py2/py3 code base into one as I couldn't get setup to do what I wanted, and now I am having this problem. Here's my setup.py: ======================================================================== import sys, os from distutils.core import setup long_desc = open('enum/doc/enum.rst').read() setup( name='enum34', version='0.9.1', url='https://pypi.python.org/pypi/enum34', packages=['enum'], package_data={ 'enum' : [ 'LICENSE', 'README', 'doc/enum.rst', 'doc/enum.pdf', 'test/test_enum.py', ] }, license='BSD License', description='Python 3.4 Enum backported to 3.3, 3.2, 3.1, 2.7, 2.6, 2.5, and 2.4', long_description=long_desc, provides=['enum'], author='Ethan Furman', author_email='ethan at stoneleaf.us', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Programming Language :: Python', 'Topic :: Software Development' ], ) ======================================================================== and an example run: ======================================================================== ethan at media:~/source/enum$ python setup.py sdist --formats tar upload running sdist running check reading manifest template 'MANIFEST.in' writing manifest file 'MANIFEST' creating enum34-0.9.1 creating enum34-0.9.1/enum creating enum34-0.9.1/enum/doc creating enum34-0.9.1/enum/test making hard links in enum34-0.9.1... hard linking LICENSE -> enum34-0.9.1 hard linking README -> enum34-0.9.1 hard linking setup.py -> enum34-0.9.1 hard linking enum/__init__.py -> enum34-0.9.1/enum hard linking enum/doc/enum.pdf -> enum34-0.9.1/enum/doc hard linking enum/doc/enum.rst -> enum34-0.9.1/enum/doc hard linking enum/test/test_enum.py -> enum34-0.9.1/enum/test Creating tar archive removing 'enum34-0.9.1' (and everything under it) running upload Submitting dist/enum34-0.9.1.tar to http://pypi.python.org/pypi Upload failed (400): invalid distribution file ======================================================================== Any ideas? -- ~Ethan~ From p.f.moore at gmail.com Mon Jul 1 21:11:26 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 1 Jul 2013 20:11:26 +0100 Subject: [Distutils] PyPI upload: zip okay, tar fails with "invalid distribution file" In-Reply-To: <51D1C815.6030303@stoneleaf.us> References: <51D1C815.6030303@stoneleaf.us> Message-ID: On 1 July 2013 19:19, Ethan Furman wrote: > ethan at media:~/source/enum$ python setup.py sdist --formats tar upload > running sdist > running check > reading manifest template 'MANIFEST.in' > writing manifest file 'MANIFEST' > creating enum34-0.9.1 > creating enum34-0.9.1/enum > creating enum34-0.9.1/enum/doc > creating enum34-0.9.1/enum/test > making hard links in enum34-0.9.1... 
> hard linking LICENSE -> enum34-0.9.1 > hard linking README -> enum34-0.9.1 > hard linking setup.py -> enum34-0.9.1 > hard linking enum/__init__.py -> enum34-0.9.1/enum > hard linking enum/doc/enum.pdf -> enum34-0.9.1/enum/doc > hard linking enum/doc/enum.rst -> enum34-0.9.1/enum/doc > hard linking enum/test/test_enum.py -> enum34-0.9.1/enum/test > Creating tar archive > removing 'enum34-0.9.1' (and everything under it) > running upload > Submitting dist/enum34-0.9.1.tar to http://pypi.python.org/pypi > Upload failed (400): invalid distribution file > ==============================**==============================** > ============ > > Any ideas? > You probably want format gztar rather than tar. I don't think I've ever seen an uncompressed tar on PyPI - they probably aren't allowed... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon Jul 1 21:12:51 2013 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 01 Jul 2013 12:12:51 -0700 Subject: [Distutils] PyPI upload: zip okay, tar fails with "invalid distribution file" In-Reply-To: References: <51D1C815.6030303@stoneleaf.us> Message-ID: <51D1D4B3.2090901@stoneleaf.us> On 07/01/2013 12:11 PM, Paul Moore wrote: > On 1 July 2013 19:19, Ethan Furman > wrote: > > ethan at media:~/source/enum$ python setup.py sdist --formats tar upload > running sdist > running check > reading manifest template 'MANIFEST.in' > writing manifest file 'MANIFEST' > creating enum34-0.9.1 > creating enum34-0.9.1/enum > creating enum34-0.9.1/enum/doc > creating enum34-0.9.1/enum/test > making hard links in enum34-0.9.1... > hard linking LICENSE -> enum34-0.9.1 > hard linking README -> enum34-0.9.1 > hard linking setup.py -> enum34-0.9.1 > hard linking enum/__init__.py -> enum34-0.9.1/enum > hard linking enum/doc/enum.pdf -> enum34-0.9.1/enum/doc > hard linking enum/doc/enum.rst -> enum34-0.9.1/enum/doc > hard linking enum/test/test_enum.py -> enum34-0.9.1/enum/test > Creating tar archive > removing 'enum34-0.9.1' (and everything under it) > running upload > Submitting dist/enum34-0.9.1.tar to http://pypi.python.org/pypi > Upload failed (400): invalid distribution file > ==============================__==============================__============ > > Any ideas? > > > You probably want format gztar rather than tar. I don't think I've ever seen an uncompressed tar on PyPI - they probably > aren't allowed... I could've sworn I've used tar before, but at any rate going with gztar did the trick. Thanks! -- ~Ethan~ From iwan at reahl.org Tue Jul 2 12:59:52 2013 From: iwan at reahl.org (Iwan Vosloo) Date: Tue, 02 Jul 2013 12:59:52 +0200 Subject: [Distutils] Finding modules in an egg / distribution Message-ID: <51D2B2A8.5060404@reahl.org> Hi there, We have been struggling to find a nice way to list all the modules (or packages) that are part of a particular Distribution (or egg). Nice should also mean that it works when the egg is installed. We have a need to do some introspection on the code shipped as an egg. Any ideas? 
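To make the question concrete, here is a rough sketch of the sort of helper being asked about. It assumes setuptools-style egg-info metadata that includes a top_level.txt file, which (as the replies below discuss) is not always enough on its own:

    import pkgutil
    import pkg_resources

    def modules_in_distribution(name):
        # List the modules shipped by an installed distribution.
        # Assumes egg-info metadata including top_level.txt; for namespace
        # packages this will also pick up modules belonging to other
        # distributions that share the namespace.
        dist = pkg_resources.get_distribution(name)
        modules = []
        if not dist.has_metadata('top_level.txt'):
            return modules
        for top in dist.get_metadata_lines('top_level.txt'):
            modules.append(top)
            try:
                pkg = __import__(top)
            except ImportError:
                continue
            if hasattr(pkg, '__path__'):
                for _, modname, _ in pkgutil.walk_packages(pkg.__path__, top + '.'):
                    modules.append(modname)
        return modules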
Regards - Iwan From m.van.rees at zestsoftware.nl Tue Jul 2 13:32:58 2013 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Tue, 02 Jul 2013 13:32:58 +0200 Subject: [Distutils] PyPI upload: zip okay, tar fails with "invalid distribution file" In-Reply-To: <51D1D4B3.2090901@stoneleaf.us> References: <51D1C815.6030303@stoneleaf.us> <51D1D4B3.2090901@stoneleaf.us> Message-ID: Op 01-07-13 21:12, Ethan Furman schreef: > On 07/01/2013 12:11 PM, Paul Moore wrote: >> You probably want format gztar rather than tar. I don't think I've >> ever seen an uncompressed tar on PyPI - they probably >> aren't allowed... > > I could've sworn I've used tar before, but at any rate going with gztar > did the trick. I always use --formats=zip, because in some corner cases Python2.4 has problems with the gztar format, not while creating a distribution file but when installing it. -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl From davbo at davbo.org Tue Jul 2 11:33:47 2013 From: davbo at davbo.org (David King) Date: Tue, 02 Jul 2013 10:33:47 +0100 Subject: [Distutils] PyPI mirrors Message-ID: <1372757627.28766.140661250871401.45F5CCD5@webmail.messagingengine.com> Hi all, Has the relationship between PyPI mirrors changed since PyPI has started being served behind a CDN? I know people have been recommending against using "--use-mirrors" with pip since it doesn't take advantage of the CDN. I've been considering trying to get a public PyPI mirror setup and wanted to know how/if they're still being used. Thanks, Dave, From wking at tremily.us Tue Jul 2 13:47:32 2013 From: wking at tremily.us (W. Trevor King) Date: Tue, 02 Jul 2013 07:47:32 -0400 Subject: [Distutils] PyPI upload failed (302): Moved Temporarily Message-ID: <20130702114732.GC8649@odin.tremily.us> Hello list! I'm having a bit of trouble getting a new package setup on PyPI. I've done this a few times in the past, but maybe not since the wiki hacking that inspired the HTTPS transition back in February. I did change my password back then, so I don't think that's the problem. Anyhow, with the old URL (still mentioned in the distutils docs [1]): http://www.python.org/pypi in my ~/.pypirc, I get: $ python setup.py register -r pypi running register running check Registering pycalendar to http://www.python.org/pypi Server response (403): You are not allowed to store 'pycalendar' package information With https://pypi.python.org/ in my ~/.pypirc, I get: $ python setup.py register -r pypi running register running check Registering pycalendar to https://pypi.python.org/ Server response (200): OK Ok, how about uploading a tarball? With the https:// URL: $ python setup.py sdist upload -r pypi ? running upload Submitting dist/pycalendar-0.1.tar.gz to https://pypi.python.org/ Upload failed (302): Moved Temporarily With the http:// URL: $ python setup.py sdist upload -r pypi ? running upload Submitting dist/pycalendar-0.1.tar.gz to http://www.python.org/pypi Upload failed (403): You are not allowed to edit 'pycalendar' package information What am I doing wrong? Thanks, Trevor [1]: http://docs.python.org/3/distutils/packageindex.html#the-pypirc-file -- This email may be signed or encrypted with GnuPG (http://www.gnupg.org). For more information, see http://en.wikipedia.org/wiki/Pretty_Good_Privacy -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From pje at telecommunity.com Tue Jul 2 17:08:13 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 2 Jul 2013 11:08:13 -0400 Subject: [Distutils] Finding modules in an egg / distribution In-Reply-To: <51D2B2A8.5060404@reahl.org> References: <51D2B2A8.5060404@reahl.org> Message-ID: On Tue, Jul 2, 2013 at 6:59 AM, Iwan Vosloo wrote: > Hi there, > > We have been struggling to find a nice way to list all the modules (or > packages) that are part of a particular Distribution (or egg). Nice should > also mean that it works when the egg is installed. We have a need to do some > introspection on the code shipped as an egg. > > Any ideas? If you are targeting at least Python 2.5, see: http://docs.python.org/2/library/pkgutil.html#pkgutil.walk_packages From noah at coderanger.net Tue Jul 2 19:14:42 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 2 Jul 2013 10:14:42 -0700 Subject: [Distutils] PyPI mirrors In-Reply-To: <1372757627.28766.140661250871401.45F5CCD5@webmail.messagingengine.com> References: <1372757627.28766.140661250871401.45F5CCD5@webmail.messagingengine.com> Message-ID: On Jul 2, 2013, at 2:33 AM, David King wrote: > Hi all, > > Has the relationship between PyPI mirrors changed since PyPI has started > being served behind a CDN? > > I know people have been recommending against using "--use-mirrors" with > pip since it doesn't take advantage of the CDN. I've been considering > trying to get a public PyPI mirror setup and wanted to know how/if > they're still being used. Yes, the use of public mirrors is no longer recommended as a best practice. The idea is that mirrors will continue to be an important part of the ecosystem for things like deploy caching, internal company mirrors, etc, but the federated, public mirror network concept is being retired. Several of the public mirrors have already shut down and just point back at PyPI, but others are still available if you want to use them. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Tue Jul 2 19:25:41 2013 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 2 Jul 2013 10:25:41 -0700 Subject: [Distutils] "Python Packaging User Guide" Message-ID: Everyone: Soon after pycon, the "The Hitchhiker?s Guide to Packaging" (HHGTP) was forked into the "Python Packaging User Guide". src: https://bitbucket.org/pypa/python-packaging-user-guide built: https://python-packaging-user-guide.readthedocs.org/en/latest/ Here's the original discussion on distutils-sig: http://mail.python.org/pipermail/distutils-sig/2013-March/020215.html It's a PyPA project, and the goal is for it to become the de facto place for tutorials and where to find the current state of PEPs and development efforts. The idea is that once it's worthy, the python core docs would re-link from the HHGTP to the new guide, but it **needs work.** The TOC is sound and some of the sections are OK, but the two main tutorials are sorely lacking. I intend to keep working on it when I can, but help is appreciated. The project is open for issue logging and pull requests. Thanks, Marcus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iwan at reahl.org Tue Jul 2 19:30:35 2013 From: iwan at reahl.org (Iwan Vosloo) Date: Tue, 02 Jul 2013 19:30:35 +0200 Subject: [Distutils] Finding modules in an egg / distribution In-Reply-To: References: <51D2B2A8.5060404@reahl.org> Message-ID: <51D30E3B.4060608@reahl.org> On 02/07/2013 17:08, PJ Eby wrote: > If you are targeting at least Python 2.5, see: > http://docs.python.org/2/library/pkgutil.html#pkgutil.walk_packages We're targeting Python 2.7. Trouble is that pkgutil.walk_packages needs a path to search from. Distribution.location is always your site-packages directory once a Distribution is installed, so walking that just gives ALL installed packages. If your Distribution contains some sort of main package that contains everything in it, you can use that package's .__path__, but then you'd need to discover what that package is. Distributions could also contain more than one package next to each other, and top-level modules. (The __path__ of a top-level module is also simply the site-packages directory.) There is a metadata file top_level.txt which one could use to get the names of top-level packages/modules in the Distribution. This can however contain a namespace package too - and you don't want all the packages inside the namespace package - just the bits inside the chosen Distribution... Regards - Iwan From merwok at netwok.org Tue Jul 2 19:37:07 2013 From: merwok at netwok.org (Éric Araujo) Date: Tue, 02 Jul 2013 13:37:07 -0400 Subject: [Distutils] Finding modules in an egg / distribution In-Reply-To: <51D30E3B.4060608@reahl.org> References: <51D2B2A8.5060404@reahl.org> <51D30E3B.4060608@reahl.org> Message-ID: <51D30FC3.7010702@netwok.org> Hello Iwan, This project can answer your questions: https://pypi.python.org/pypi/pkginfo FTR we planned to include something like it when the distutils2 project was active; distlib or the variant of distlib that will end up in the 3.4 standard library could include similar features. Regards From donald at stufft.io Tue Jul 2 19:52:15 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 2 Jul 2013 13:52:15 -0400 Subject: [Distutils] PyPI mirrors In-Reply-To: References: <1372757627.28766.140661250871401.45F5CCD5@webmail.messagingengine.com> Message-ID: <60EA19B6-3F4C-4CB9-B0AB-85B2F3853B53@stufft.io> Of course as I look it appears all of the official mirrors are out of date again. On Jul 2, 2013, at 1:14 PM, Noah Kantrowitz wrote: > > On Jul 2, 2013, at 2:33 AM, David King wrote: > >> Hi all, >> >> Has the relationship between PyPI mirrors changed since PyPI has started >> being served behind a CDN? >> >> I know people have been recommending against using "--use-mirrors" with >> pip since it doesn't take advantage of the CDN. I've been considering >> trying to get a public PyPI mirror setup and wanted to know how/if >> they're still being used. > > Yes, the use of public mirrors is no longer recommended as a best practice. The idea is that mirrors will continue to be an important part of the ecosystem for things like deploy caching, internal company mirrors, etc, but the federated, public mirror network concept is being retired. Several of the public mirrors have already shut down and just point back at PyPI, but others are still available if you want to use them.
> > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From pje at telecommunity.com Tue Jul 2 20:27:43 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 2 Jul 2013 14:27:43 -0400 Subject: [Distutils] Finding modules in an egg / distribution In-Reply-To: <51D30E3B.4060608@reahl.org> References: <51D2B2A8.5060404@reahl.org> <51D30E3B.4060608@reahl.org> Message-ID: On Tue, Jul 2, 2013 at 1:30 PM, Iwan Vosloo wrote: > On 02/07/2013 17:08, PJ Eby wrote: >> >> If you are targeting at least Python 2.5, see: >> http://docs.python.org/2/library/pkgutil.html#pkgutil.walk_packages > > > We're targeting Python 2.7. > > Trouble is that pkgutil.walk_packages needs a path to search from. > Distribution.location is always your site-packages directory once a > Distribution is installed, so walking that just gives ALL installed > packages. If your Distribution contains some sort of main package that > contains everything in it, you can use that package's .__path__, but then > you'd need to discover what that package is. > > Distributions could also contain more than one package next to each other, > and top-level modules. (The __path__ of a top-level module is also simply > the site-packages directory.) > > There is a metadata file top_level.txt which one could use to get the names > of top-level packages/modules in the Distribution. This can however contain > a namespace package too - and you don't want all the packages inside the > namespace package - just the bits inside the chosen Distribution... Ah, well in that case you'll have to inspect either .egg-info/SOURCES.txt or the PEP 376 installation manifest. I don't know of any reliable way to do what you want for system-installed packages at the moment. From alexjeffburke at gmail.com Wed Jul 3 01:18:16 2013 From: alexjeffburke at gmail.com (Alex Burke) Date: Wed, 3 Jul 2013 01:18:16 +0200 Subject: [Distutils] "Python Packaging User Guide" Message-ID: Hi Marcus, I would very much like to get involved and help with this effort - in fact I was looking to package up some of my own work recently and stumbled on The Hitchhiker?s Guide to Packaging but could tell some of the advice was out of date having followed the discussions for a while now. What I was thinking is it could be useful my documenting exactly what I'm having to do, things to remember etc and it'd help me learn things much more deeply on the way too. My only concern however would be where to go to find the right 'new world' answers. Aside from reading the PEPs etc, are there any other points of contact? Is it ok to ask such questions on this list? Thanks, Alex J Burke. From dholth at gmail.com Wed Jul 3 02:57:20 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 2 Jul 2013 20:57:20 -0400 Subject: [Distutils] "Python Packaging User Guide" In-Reply-To: References: Message-ID: On Tue, Jul 2, 2013 at 7:18 PM, Alex Burke wrote: > Hi Marcus, > > I would very much like to get involved and help with this effort - in fact I was looking to package up some of my own work recently and stumbled on The Hitchhiker?s Guide to Packaging but could tell some of the advice was out of date having followed the discussions for a while now. > > What I was thinking is it could be useful my documenting exactly what I'm having to do, things to remember etc and it'd help me learn things much more deeply on the way too. 
My only concern however would be where to go to find the right 'new world' answers. Aside from reading the PEPs etc, are there any other points of contact? Is it ok to ask such questions on this list? Sounds like a great idea, assemble your FAQ, and we will remember what's confusing about packaging when you haven't been embroiled in it for years. From qwcode at gmail.com Wed Jul 3 07:28:37 2013 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 2 Jul 2013 22:28:37 -0700 Subject: [Distutils] pip and virtualenv release candidates Message-ID: pip-1.4rc2 and virtualenv-1.10rc3 are available for testing from github. A few highlights: - pip added support for installing and building wheel archives. ( http://www.pip-installer.org/en/latest/cookbook.html#building-and-installing-wheels ) - virtualenv is now using the new merged setuptools, and no longer supports distribute. - pip now only installs stable versions by default, and offers a new --pre option to also find pre-releases. - Dropped support for Python 2.5. Changelogs: pip: http://www.pip-installer.org/en/release-1.4/news.html virtualenv: http://www.virtualenv.org/en/release-1.10/news.html Download Links: pip: gz: https://github.com/pypa/pip/archive/1.4rc2.tar.gz md5=0426430fc8a261c83bcd083fb03fb7d6 zip: https://github.com/pypa/pip/archive/1.4rc2.zip md5=c86dc0d94ed787eadba6dceb06f1676f virtualenv: gz: https://github.com/pypa/virtualenv/archive/1.10rc3.tar.gz md5=b24cdf59b561acf26ae3f639098d5a34 zip: https://github.com/pypa/virtualenv/archive/1.10rc3.zip md5=a6ee1a1570a751aa50f95833d9898649 Installation: The easiest way to try them both and *not* affect your current system, is like so: e.g. on Linux: $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc3.tar.gz $ echo "b24cdf59b561acf26ae3f639098d5a34 1.10rc3.tar.gz" | md5sum -c 1.10rc3.tar.gz: OK $ tar zxf 1.10rc3.tar.gz $ python virtualenv-1.10rc3/virtualenv.py myVE $ myVE/bin/pip --version pip 1.4rc2 *Note*: If instead, you choose to upgrade an existing pip (and setuptools), know this: 1) pip's wheel support requires setuptools>=0.8b2 (this will become final before pip is released final) 2) setuptools-0.8bx is not on pypi and can be found here: https://bitbucket.org/pypa/setuptools/downloads 3) Older pip's can not currently upgrade distribute to setuptools (until distribute-0.7.3 is released on ~July-7th) (for more upgrade details: http://www.pip-installer.org/en/latest/installing.html#requirements) Offering Feedback: You can respond to this email, or log issues in our tracker: https://github.com/pypa/pip/issues?state=open -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Jul 3 10:03:21 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 3 Jul 2013 18:03:21 +1000 Subject: [Distutils] "Python Packaging User Guide" In-Reply-To: References: Message-ID: On 3 July 2013 09:18, Alex Burke wrote: > Hi Marcus, > > I would very much like to get involved and help with this effort - in fact > I was looking to package up some of my own work recently and stumbled on > The Hitchhiker?s Guide to Packaging but could tell some of the advice was > out of date having followed the discussions for a while now. > > What I was thinking is it could be useful my documenting exactly what I'm > having to do, things to remember etc and it'd help me learn things much > more deeply on the way too. My only concern however would be where to go to > find the right 'new world' answers. 
Aside from reading the PEPs etc, are > there any other points of contact? Is it ok to ask such questions on this > list? > Asking here, or filing tracker issues on https://bitbucket.org/pypa/python-packaging-user-guide would be good. Unless things have changed in the past couple of weeks, the aim is for people starting from scratch now to grab setuptools 0.7 (or later) and pip 1.4 (or later & once it is released) I'm not sure on the specific plan for wheel uploads to PyPI myself - I *believe* we're going to recommend using pip to retrieve http://wheel.readthedocs.org/en/latest/ to obtain the bdist_wheel command. There's also the problem of hosting for the setuptools & pip bootstrap scripts. I believe the plan is to host it at a canonical URL on pypi.python.org, but I'm not sure of the current status of that. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alexander.Schneider at uni-duesseldorf.de Wed Jul 3 11:17:51 2013 From: Alexander.Schneider at uni-duesseldorf.de (Alexander Schneider) Date: Wed, 03 Jul 2013 11:17:51 +0200 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: References: Message-ID: <51D3EC3F.8040405@uni-duesseldorf.de> Hello and sorry if I am on the wrong mailing list. I'm working on a dependency resolution resolver and wanted to implement support for the new Metadata format. (As of now i'm parsing the setup.py for dependency information and am dependent on a self build metadata database of all PyPI packages) Will there be build-in Metadata in the new PEP 426 format online on PyPI for all packages? If yes, are there already specifications on how they will be retreavable? Thanks, Alexander Schneider From ncoghlan at gmail.com Wed Jul 3 13:58:21 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 3 Jul 2013 21:58:21 +1000 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: <51D3EC3F.8040405@uni-duesseldorf.de> References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: On 3 July 2013 19:17, Alexander Schneider < Alexander.Schneider at uni-duesseldorf.de> wrote: > Hello and sorry if I am on the wrong mailing list. > > I'm working on a dependency resolution resolver and wanted to implement > support for the new Metadata format. (As of now i'm parsing the setup.py > for dependency information and am dependent on a self build metadata > database of all PyPI packages) > > Will there be build-in Metadata in the new PEP 426 format online on PyPI > for all packages? If yes, are there already specifications on how they > will be retreavable? > Yes there will, but actually figuring out the details of those APIs is some time away. Note that the version currently referenced from the PEP is a little out of date ( http://mail.python.org/pipermail/distutils-sig/2013-June/021357.html). I will hopefully get it updated at the PyCon AU sprints next week. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Jul 3 16:51:02 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 3 Jul 2013 14:51:02 +0000 (UTC) Subject: [Distutils] Metadataformat PEP 426 on PyPI? References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: Alexander Schneider uni-duesseldorf.de> writes: > Will there be build-in Metadata in the new PEP 426 format online on PyPI > for all packages? 
If yes, are there already specifications on how they > will be retreavable? I have experimental support for PEP 426 metadata available, which is up-to-date with the spec apart from the changes Nick linked to. For any given package, you can access some JSON at an URL based on the project name. For example, setuptools 0.7.5 metadata is available at http://www.red-dove.com/pypi/projects/S/setuptools/package-0.7.5.json If you deserialize the JSON at an URL like the above into a dict, the PEP 426 metadata is available in the subdict at key "index-metadata" in the top-level dict. Example from setuptools 0.7.5: "index-metadata": { "description": "omitted for brevity", "license": "PSF or ZPL", "contacts": [ { "role": "author", "name": "The fellowship of the packaging", "email": "distutils-sig at python.org" } ], "summary": "Easily download, build, install, upgrade, and uninstall Python packages", "project_urls": { "Home": "https://pypi.python.org/pypi/setuptools" }, "keywords": [ "CPAN", "PyPI", "distutils", "eggs", "package", "management" ], "metadata_version": "2.0", "extras": [ "certs", "ssl" ], "version": "0.7.5", "run_may_require": [ { "environment": "sys_platform=='win32'", "dependencies": [ "wincertstore (== 0.1)" ], "extra": "ssl" }, { "environment": "sys_platform=='win32' and python_version=='2.4'", "dependencies": [ "ctypes (== 1.0.2)" ], "extra": "ssl" }, { "dependencies": [ "certifi (== 0.0.8)" ], "extra": "certs" }, { "environment": "python_version in '2.4, 2.5'", "dependencies": [ "ssl (== 1.16)" ], "extra": "ssl" } ], "classifiers": [ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: Python Software Foundation License", "License :: OSI Approved :: Zope Public License", "Operating System :: OS Independent", "Programming Language :: Python :: 2.4", "Programming Language :: Python :: 2.5", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.1", "Programming Language :: Python :: 3.2", "Programming Language :: Python :: 3.3", "Topic :: Software Development :: Libraries :: Python Modules", "Topic :: System :: Archiving :: Packaging", "Topic :: System :: Systems Administration", "Topic :: Utilities" ], "name": "setuptools" }, I expect this metadata to track the PEP as changes to it are published. Currently, the top-level dict contains some legacy representations of the metadata which will be removed in due course. I would hope that once the dust settles on the PEP, this metadata (the PEP 426 part) can be migrated to PyPI. Regards, Vinay Sajip From alanwilter at gmail.com Wed Jul 3 15:55:17 2013 From: alanwilter at gmail.com (Alan) Date: Wed, 3 Jul 2013 14:55:17 +0100 Subject: [Distutils] $MACOSX_DEPLOYMENT_TARGET mismatch Message-ID: Hi there, I am trying to install readline and cx_Oracle on my Mac OSX 10.8.4. I've just installed python 2.7.3 standard in my $HOME dir and then I did: export PATH=$HOME/bin:$PATH ~/setuptools-0.7.7 python setup.py install --prefix=$HOME which worked. I tried to installed ipython 0.13.2 which seems to have worked but it fails to start because of readline. 
easy_install readline Searching for readline Reading https://pypi.python.org/simple/readline/ Best match: readline 6.2.4.1 Downloading https://pypi.python.org/packages/source/r/readline/readline-6.2.4.1.tar.gz#md5=578237939c81fdbc2c8334d168b17907 Processing readline-6.2.4.1.tar.gz Writing /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/setup.cfg Running readline-6.2.4.1/setup.py -q bdist_egg --dist-dir /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/egg-dist-tmp-eyGcUE error: Setup script exited with error: $MACOSX_DEPLOYMENT_TARGET mismatch: now "10.3" but "10.4" during configure How can solve this problem please? Thanks in advance, Alan -- Alan Wilter SOUSA da SILVA, DSc Bioinformatician, UniProt - PANDA, EMBL-EBI CB10 1SD, Hinxton, Cambridge, UK +44 1223 49 4588 -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Wed Jul 3 18:26:19 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 3 Jul 2013 09:26:19 -0700 Subject: [Distutils] [venv] pip and virtualenv release candidates In-Reply-To: <51D44736.7050502@gmail.com> References: <51D44736.7050502@gmail.com> Message-ID: > FWIW, the following works to update a distribute-baesd virtualenv to the > new setuptools / vr: > > $ bin/easy_install \ > --find-links https://bitbucket.org/pypa/setuptools/downloads/ \ > -U distribute > > good pt. if you use the unofficial release location, you can do it with pip as well. pip install -U --find-links=https://bitbucket.org/pypa/setuptools/downloads/distribute P.S. if you already have pip-1.4, add "--pre" so it can find the setuptools-0.8 betas (that pip also needs to make this work) Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From tres.seaver at gmail.com Wed Jul 3 17:45:58 2013 From: tres.seaver at gmail.com (Tres Seaver) Date: Wed, 03 Jul 2013 11:45:58 -0400 Subject: [Distutils] [venv] pip and virtualenv release candidates In-Reply-To: References: Message-ID: <51D44736.7050502@gmail.com> On 07/03/2013 01:28 AM, Marcus Smith wrote: > 3) Older pip's can not currently upgrade distribute to setuptools > (until distribute-0.7.3 is released on ~July-7th) > (for more upgrade details: > http://www.pip-installer.org/en/latest/installing.html#requirements) FWIW, the following works to update a distribute-baesd virtualenv to the new setuptools / vr: $ bin/easy_install \ --find-links https://bitbucket.org/pypa/setuptools/downloads/ \ -U distribute No-looking-back'ly, Tres. -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com From pje at telecommunity.com Wed Jul 3 20:19:16 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 3 Jul 2013 14:19:16 -0400 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: On Wed, Jul 3, 2013 at 10:51 AM, Vinay Sajip wrote: > If you deserialize the JSON at an URL like the above into a dict, the PEP > 426 metadata is available in the subdict at key "index-metadata" in the > top-level dict. Example from setuptools 0.7.5: > > "index-metadata": { > .... > "name": "setuptools" > }, > > I expect this metadata to track the PEP as changes to it are published. > Currently, the top-level dict contains some legacy representations of the > metadata which will be removed in due course. 
Just an FYI, not sure if this is an issue with your converter or with the new spec, but the metadata shown for setuptools is missing something important: 0.7.x pins specific distributions of its dependencies using dependency_links URLs with #md5 hashes, so that SSL support can be installed in a reasonably secure manner, as long as you're starting from a trusted copy of the distribution. The converted metadata you show lacks this pinning. Granted, the pinning is somewhat kludged, and the specific need is perhaps a rare use case outside of installer tools themselves. But I thought it worth pointing out as a limitation of either the converter or with the spec itself in relation to version support. From donald at stufft.io Wed Jul 3 20:34:55 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 3 Jul 2013 14:34:55 -0400 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: On Jul 3, 2013, at 2:19 PM, PJ Eby wrote: > On Wed, Jul 3, 2013 at 10:51 AM, Vinay Sajip wrote: >> If you deserialize the JSON at an URL like the above into a dict, the PEP >> 426 metadata is available in the subdict at key "index-metadata" in the >> top-level dict. Example from setuptools 0.7.5: >> >> "index-metadata": { >> .... >> "name": "setuptools" >> }, >> >> I expect this metadata to track the PEP as changes to it are published. >> Currently, the top-level dict contains some legacy representations of the >> metadata which will be removed in due course. > > Just an FYI, not sure if this is an issue with your converter or with > the new spec, but the metadata shown for setuptools is missing > something important: 0.7.x pins specific distributions of its > dependencies using dependency_links URLs with #md5 hashes, so that SSL > support can be installed in a reasonably secure manner, as long as > you're starting from a trusted copy of the distribution. The > converted metadata you show lacks this pinning. > > Granted, the pinning is somewhat kludged, and the specific need is > perhaps a rare use case outside of installer tools themselves. But I > thought it worth pointing out as a limitation of either the converter > or with the spec itself in relation to version support. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig PEP426 does not support dependency_links. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Wed Jul 3 20:50:28 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 3 Jul 2013 11:50:28 -0700 Subject: [Distutils] $MACOSX_DEPLOYMENT_TARGET mismatch In-Reply-To: References: Message-ID: This is really a build question, rather than a distributuion question -- I"d try the pythonmac list: http://mail.python.org/mailman/listinfo/pythonmac-sig I recall that readline is a bit of a pain on the Mac, but don't recall the solution (nor am I running 10.8 yet). Good luck, -Chris On Wed, Jul 3, 2013 at 6:55 AM, Alan wrote: > Hi there, > > I am trying to install readline and cx_Oracle on my Mac OSX 10.8.4. 
I've > just installed python 2.7.3 standard in my $HOME dir and then I did: > > export PATH=$HOME/bin:$PATH > ~/setuptools-0.7.7 > python setup.py install --prefix=$HOME > > which worked. > > I tried to installed ipython 0.13.2 which seems to have worked but it fails > to start because of readline. > > easy_install readline > Searching for readline > Reading https://pypi.python.org/simple/readline/ > Best match: readline 6.2.4.1 > Downloading > https://pypi.python.org/packages/source/r/readline/readline-6.2.4.1.tar.gz#md5=578237939c81fdbc2c8334d168b17907 > Processing readline-6.2.4.1.tar.gz > Writing > /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/setup.cfg > Running readline-6.2.4.1/setup.py -q bdist_egg --dist-dir > /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/egg-dist-tmp-eyGcUE > error: Setup script exited with error: $MACOSX_DEPLOYMENT_TARGET mismatch: > now "10.3" but "10.4" during configure > > How can solve this problem please? > > Thanks in advance, > > Alan > > -- > Alan Wilter SOUSA da SILVA, DSc > Bioinformatician, UniProt - PANDA, EMBL-EBI > CB10 1SD, Hinxton, Cambridge, UK > +44 1223 49 4588 > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From m.van.rees at zestsoftware.nl Wed Jul 3 21:39:25 2013 From: m.van.rees at zestsoftware.nl (Maurits van Rees) Date: Wed, 03 Jul 2013 21:39:25 +0200 Subject: [Distutils] PyPI upload failed (302): Moved Temporarily In-Reply-To: <20130702114732.GC8649@odin.tremily.us> References: <20130702114732.GC8649@odin.tremily.us> Message-ID: Op 02-07-13 13:47, W. Trevor King schreef: > Hello list! > > I'm having a bit of trouble getting a new package setup on PyPI. I've > done this a few times in the past, but maybe not since the wiki > hacking that inspired the HTTPS transition back in February. I did > change my password back then, so I don't think that's the problem. > Anyhow, with the old URL (still mentioned in the distutils docs [1]): > > http://www.python.org/pypi > > in my ~/.pypirc, I get: > > $ python setup.py register -r pypi > running register > running check > Registering pycalendar to http://www.python.org/pypi > Server response (403): You are not allowed to store 'pycalendar' package information > > With https://pypi.python.org/ in my ~/.pypirc, I get: > > $ python setup.py register -r pypi > running register > running check > Registering pycalendar to https://pypi.python.org/ > Server response (200): OK > > Ok, how about uploading a tarball? With the https:// URL: > > $ python setup.py sdist upload -r pypi > ? > running upload > Submitting dist/pycalendar-0.1.tar.gz to https://pypi.python.org/ > Upload failed (302): Moved Temporarily > > With the http:// URL: > > $ python setup.py sdist upload -r pypi > ? > running upload > Submitting dist/pycalendar-0.1.tar.gz to http://www.python.org/pypi > Upload failed (403): You are not allowed to edit 'pycalendar' package information > > What am I doing wrong? > > Thanks, > Trevor > > [1]: http://docs.python.org/3/distutils/packageindex.html#the-pypirc-file I'll post my ~/.pypirc below for comparison. I have a second index-server, which you should not need. 
Note that I did not enter a repository url for pypi, so apparently that is not necessary. [distutils] index-servers = pypi plone [pypi] username:maurits password:secret [plone] repository:http://plone.org/products username:maurits password:secret If you have basically the same, then triple check that you have the correct username and password, that you have probably already done that. Maybe change the password again just to be sure. BTW, I am using Python 2.6 or 2.7. I guess it works the same with Python 3 though. -- Maurits van Rees: http://maurits.vanrees.org/ Zest Software: http://zestsoftware.nl From alanwilter at gmail.com Wed Jul 3 21:12:57 2013 From: alanwilter at gmail.com (Alan) Date: Wed, 3 Jul 2013 20:12:57 +0100 Subject: [Distutils] $MACOSX_DEPLOYMENT_TARGET mismatch In-Reply-To: References: Message-ID: Well, I found out that if before compiling my python I set export MACOSX_DEPLOYMENT_TARGET=10.3 and then do all the rest, then I get easy_install to work. Besides, I had the same error trying to install cx_Oracle. So, somehow my python (or setuptools) need to build for "10.3". For me it's a bug in setuptools since the function that do this check and raise the error are from setuptools package. Alan On 3 July 2013 19:50, Chris Barker - NOAA Federal wrote: > This is really a build question, rather than a distributuion question > -- I"d try the pythonmac list: > > http://mail.python.org/mailman/listinfo/pythonmac-sig > > I recall that readline is a bit of a pain on the Mac, but don't recall > the solution (nor am I running 10.8 yet). > > Good luck, > -Chris > > On Wed, Jul 3, 2013 at 6:55 AM, Alan wrote: > > Hi there, > > > > I am trying to install readline and cx_Oracle on my Mac OSX 10.8.4. I've > > just installed python 2.7.3 standard in my $HOME dir and then I did: > > > > export PATH=$HOME/bin:$PATH > > ~/setuptools-0.7.7 > > python setup.py install --prefix=$HOME > > > > which worked. > > > > I tried to installed ipython 0.13.2 which seems to have worked but it > fails > > to start because of readline. > > > > easy_install readline > > Searching for readline > > Reading https://pypi.python.org/simple/readline/ > > Best match: readline 6.2.4.1 > > Downloading > > > https://pypi.python.org/packages/source/r/readline/readline-6.2.4.1.tar.gz#md5=578237939c81fdbc2c8334d168b17907 > > Processing readline-6.2.4.1.tar.gz > > Writing > > > /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/setup.cfg > > Running readline-6.2.4.1/setup.py -q bdist_egg --dist-dir > > > /var/folders/0c/n_0by3qj74bg5m68fxrw2kw40000gp/T/easy_install-3WQA6K/readline-6.2.4.1/egg-dist-tmp-eyGcUE > > error: Setup script exited with error: $MACOSX_DEPLOYMENT_TARGET > mismatch: > > now "10.3" but "10.4" during configure > > > > How can solve this problem please? > > > > Thanks in advance, > > > > Alan > > > > -- > > Alan Wilter SOUSA da SILVA, DSc > > Bioinformatician, UniProt - PANDA, EMBL-EBI > > CB10 1SD, Hinxton, Cambridge, UK > > +44 1223 49 4588 > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > > > > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -- Alan Wilter SOUSA da SILVA, DSc Bioinformatician, UniProt - PANDA, EMBL-EBI CB10 1SD, Hinxton, Cambridge, UK +44 1223 49 4588 -------------- next part -------------- An HTML attachment was scrubbed... URL: From js at hipro.co.in Wed Jul 3 22:03:43 2013 From: js at hipro.co.in (Joe Steeve) Date: Thu, 04 Jul 2013 01:33:43 +0530 Subject: [Distutils] "python" option in buildout 2.1.1 Message-ID: <1372881823.4672.3.camel@localhost> From the buildout 2.1.1 docs on pypi: python The name of a section containing information about the default Python interpreter. Recipes that need a installation typically have options to tell them which Python installation to use. By convention, if a section-specific option isn't used, the option is looked for in the buildout section. The option must point to a section with an executable option giving the path to a Python executable. By default, the buildout section defines the default Python as the Python used to run the buildout. Is the above still relevant? I am trying to build a custom python using zc.recipe.cmmi and use the interpreter in other parts, like in: http://goo.gl/ufWwT But, that simply does not work. Regards, Joe -- Joe Steeve HiPro IT Solutions Private Limited http://hipro.co.in/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From jim at zope.com Wed Jul 3 22:32:10 2013 From: jim at zope.com (Jim Fulton) Date: Wed, 3 Jul 2013 16:32:10 -0400 Subject: [Distutils] "python" option in buildout 2.1.1 In-Reply-To: <1372881823.4672.3.camel@localhost> References: <1372881823.4672.3.camel@localhost> Message-ID: On Wed, Jul 3, 2013 at 4:03 PM, Joe Steeve wrote: > From the buildout 2.1.1 docs on pypi: > > python > The name of a section containing information about the That's a documentation bug. :( I'll fix that. > Is the above still relevant? No. The support for multiple Python interpreters in a single buildout was dropped from buildout 2. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From chris.barker at noaa.gov Wed Jul 3 22:36:38 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Wed, 3 Jul 2013 13:36:38 -0700 Subject: [Distutils] $MACOSX_DEPLOYMENT_TARGET mismatch In-Reply-To: References: Message-ID: On Wed, Jul 3, 2013 at 12:12 PM, Alan wrote: > Well, I found out that if before compiling my python I set > > export MACOSX_DEPLOYMENT_TARGET=10.3 > > and then do all the rest, then I get easy_install to work. cool - well done. > So, somehow my python (or setuptools) need to build for "10.3". > > For me it's a bug in setuptools since the function that do this check and > raise the error are from setuptools package. >>> I've just installed python 2.7.3 standard How did you install it? not sure what "standard" is. The build is supposed to setup distutils to do the right thing, and I'm pretty sure it does with the binaries from python.org. I will note that unless you want to re-package (i.e. with py2app) and distribute to folks with old machines, you probably don't need to support way back to 10.3 -- the "intel build" on python.org is set up for 10.5 and greater, and may avoid this issue. -Chris -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From vinay_sajip at yahoo.co.uk Wed Jul 3 23:41:52 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 3 Jul 2013 21:41:52 +0000 (UTC) Subject: [Distutils] Metadataformat PEP 426 on PyPI? References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: PJ Eby telecommunity.com> writes: > Just an FYI, not sure if this is an issue with your converter or with > the new spec, but the metadata shown for setuptools is missing > something important: 0.7.x pins specific distributions of its > dependencies using dependency_links URLs with #md5 hashes, so that SSL > support can be installed in a reasonably secure manner, as long as > you're starting from a trusted copy of the distribution. The > converted metadata you show lacks this pinning. True, although I do capture the dependency links under the 'dependency-urls' key of the top level dict of the JSON I linked to. While dependency_links is not directly supported by PEP 426, the intent is there via "direct references". When installing using distlib/distil, SSL host verification and hash verification are done, even when direct references are not specified, since the versions of dependencies are pinned. For example, if I install setuptools into a fresh venv: $ pyvenv-3.3 /tmp/venv $ distil -e /tmp/venv install "setuptools [ssl,certs]" Checking requirements for setuptools (0.7.7) ... done. The following new packages will be downloaded and installed: certifi (0.0.8) [for setuptools] setuptools (0.7.7) Proceed? (y/n) y Downloading certifi-0.0.8.tar.gz to /tmp/tmpccek0f [for setuptools] 115KB @ 667 KB/s 100 % Done: 00:00:00 Unpacking ... done. Downloading setuptools-0.7.7.tar.gz to /tmp/tmpchxc1x 736KB @ 393 KB/s 100 % Done: 00:00:01 Unpacking ... done. [installation feedback snipped] Below is an extract from distil.log for the above installation, showing the downloading and verification operations: Downloading certifi-0.0.8.tar.gz to /tmp/tmpccek0f [for setuptools] Digest specified: dc5f5e7f0b5fc08d27654b17daa6ecec Host verified: pypi.python.org Digest verified: dc5f5e7f0b5fc08d27654b17daa6ecec Library location: venv site-packages Downloading setuptools-0.7.7.tar.gz to /tmp/tmpchxc1x Digest specified: 0d7bc0e1a34b70a97e706ef74aa7f37f Host verified: pypi.python.org Digest verified: 0d7bc0e1a34b70a97e706ef74aa7f37f Library location: venv site-packages Distil includes the Mozilla certs and thus is able to do SSL host validation. The hash support is currently limited to MD5 because PyPI has not supported other formats, but I expect that will be rectified in due course. Regards, Vinay Sajip From js at hipro.co.in Wed Jul 3 22:57:51 2013 From: js at hipro.co.in (Joe Steeve) Date: Thu, 04 Jul 2013 02:27:51 +0530 Subject: [Distutils] zc.recipe.egg skips script-generation Message-ID: <1372885071.28665.8.camel@localhost> Hello all, If this is not the right place to talk about 'zc.recipe.egg', please be kind and point me to the right place :) I am trying to create a buildout to work with some eggs using ipython. I have a buildout.cfg like this: [buildout] develop = . parts = app ipython_part [app] recipe = zc.recipe.egg:scripts eggs = stool [ipython_part] recipe = zc.recipe.egg:scripts dependent-scripts = true eggs = ${app:eggs} ipython scripts = ipython I have an ipython (0.13.1) installed in my system python's site-packages. 
When I run buildout with the above, buildout simply says "Installing ipython_part" and does not do anything else. If I remove my system ipython (apt-get purge ipython), then buildout installs a script locally. I tried creating a virtualenv, and bootstrapping with the virtualenv's python. I have the same effect. How do I get an 'ipython' script with my desired eggs in its path? Regards, Joe -- Joe Steeve HiPro IT Solutions Private Limited http://hipro.co.in/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From js at hipro.co.in Wed Jul 3 22:39:27 2013 From: js at hipro.co.in (Joe Steeve) Date: Thu, 04 Jul 2013 02:09:27 +0530 Subject: [Distutils] "python" option in buildout 2.1.1 In-Reply-To: References: <1372881823.4672.3.camel@localhost> Message-ID: <1372883967.28665.0.camel@localhost> On Wed, 2013-07-03 at 16:32 -0400, Jim Fulton wrote: > That's a documentation bug. :( I'll fix that. Thank you. -- Joe Steeve HiPro IT Solutions Private Limited http://hipro.co.in/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From donald at stufft.io Thu Jul 4 01:04:22 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 3 Jul 2013 19:04:22 -0400 Subject: [Distutils] Weekly Download Counters Enabled Message-ID: <7760F9D0-A7A7-4BD1-A964-3DDB41D39B5C@stufft.io> Just a quick follow up to last weeks email the download counters for weekly numbers has now been enabled on PyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Thu Jul 4 01:38:24 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 3 Jul 2013 19:38:24 -0400 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: On Wed, Jul 3, 2013 at 2:34 PM, Donald Stufft wrote: > PEP426 does not support dependency_links. Right - I would've expected direct references in this scenario, assuming the PEP still has them. From donald at stufft.io Thu Jul 4 01:40:52 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 3 Jul 2013 19:40:52 -0400 Subject: [Distutils] Metadataformat PEP 426 on PyPI? In-Reply-To: References: <51D3EC3F.8040405@uni-duesseldorf.de> Message-ID: <059B6EC3-B5CD-48D2-A972-480FDD6D5BA5@stufft.io> On Jul 3, 2013, at 7:38 PM, PJ Eby wrote: > On Wed, Jul 3, 2013 at 2:34 PM, Donald Stufft wrote: >> PEP426 does not support dependency_links. > > Right - I would've expected direct references in this scenario, > assuming the PEP still has them. Yea PEP440 direct references can be used as an approximate dependency_links replacement with the caveat you can't publish them on PyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From js at hipro.co.in Thu Jul 4 00:55:05 2013 From: js at hipro.co.in (Joe Steeve) Date: Thu, 04 Jul 2013 04:25:05 +0530 Subject: [Distutils] zc.recipe.egg skips script-generation In-Reply-To: <1372887902.31748.0.camel@localhost> References: <1372887902.31748.0.camel@localhost> Message-ID: <1372892105.31748.5.camel@localhost> On Thu, 2013-07-04 at 03:15 +0530, Joe Steeve wrote: > I have an ipython (0.13.1) installed in my system python's > site-packages. When I run buildout with the above, buildout simply > says "Installing ipython_part" and does not do anything else. If I > remove my system ipython (apt-get purge ipython), then buildout > installs a script locally. > > I tried creating a virtualenv, and bootstrapping with the virtualenv's > python. I have the same effect. Sorry, false alarm. Retried with a virtualenv after clearing out all the buildout directories. Works fine now. Thanks, Joe -- Joe Steeve HiPro IT Solutions Private Limited http://hipro.co.in/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From donald at stufft.io Thu Jul 4 07:38:19 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 01:38:19 -0400 Subject: [Distutils] PyPI CDN Updates For Greater Availability Message-ID: <3C448D97-1210-4227-AC1B-147EECC7F79B@stufft.io> Several changes were just deployed to PyPI's CDN. The general theme behind the changes is making it so that PyPI appears as functional as possible through a failure of the server hosting it. This should increase the availability of PyPI and enable things such as installation and browsing the site to continue to work through a catastrophic host failure on the PSF infrastructure. The details of what changes are: - Anonymous users will find that /pypi* pages are now cached for a short amount of time (currently 60s). - Objects will be stored in the cache for some time past their expiration date. They will not be used except in two circumstances: - A request is taking longer than 15s to complete, a "stale" object will be returned to prevent a pile up from occurring. - The backend[1] has been deemed unhealthy, in which case stale objects will be served in order to allow some level of functionality until the backend has been restored. - In the advent of an unhealthy backend all requests will be forced to be anonymous, making them eligible for the stale objects that have been cached. - The /mirrors and /security pages will be cached for a week, allowing them to likely be available through a backend failure making it easy to locate mirrors[2] or report a security issue. - Miscellaneous changes to normalize various things so that a single item in the cache will be able to be used for more requests, making it more likely that any particular request will be served from the Cache. [1] Backend in this context means the server hosting PyPI itself, what the CDN itself connects too. [2] Using the mirrors is done so at your own risk. None of the tools currently verify the downloads and they are downloaded over HTTP. This makes it trivial for an attacker to execute arbitrary code on your machine via a MITM. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Thu Jul 4 10:51:13 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 4 Jul 2013 08:51:13 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: Message-ID: Nick Coghlan gmail.com> writes: > * "install": the installation specifier for the dependency > * "extra": as per the current PEP (for conditional dependencies) > * "environment": as per the current PEP (for conditional dependencies) > > 4. The "install" subfield is compulsory, the other two are optional > (as now, using either of the latter creates a "conditional > dependency", while dependency declarations with only the "install" > subfield are unconditional) > > 5. An installation specifier is what PEP 426 currently calls a > dependency specifier: the "name [extras] (constraints)" format. They > will get their own top level section (similar to the existing Extras > and Environment markers sections) Is there a particular benefit of the install subfield being a single installation specifier, as opposed to a list of such specifiers? It's perhaps neither here nor there for machine-processed metadata, but I expect this metadata would have human readers too. Not using a list would lead to more verbose metadata. Regards, Vinay Sajip From ncoghlan at gmail.com Thu Jul 4 13:00:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 4 Jul 2013 21:00:19 +1000 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: Message-ID: On 4 Jul 2013 18:52, "Vinay Sajip" wrote: > > Nick Coghlan gmail.com> writes: > > > * "install": the installation specifier for the dependency > > * "extra": as per the current PEP (for conditional dependencies) > > * "environment": as per the current PEP (for conditional dependencies) > > > > 4. The "install" subfield is compulsory, the other two are optional > > (as now, using either of the latter creates a "conditional > > dependency", while dependency declarations with only the "install" > > subfield are unconditional) > > > > 5. An installation specifier is what PEP 426 currently calls a > > dependency specifier: the "name [extras] (constraints)" format. They > > will get their own top level section (similar to the existing Extras > > and Environment markers sections) > > Is there a particular benefit of the install subfield being a single > installation specifier, as opposed to a list of such specifiers? It's > perhaps neither here nor there for machine-processed metadata, but I expect > this metadata would have human readers too. Not using a list would lead to > more verbose metadata. Hmm, I guess as long as it's consistent, the only difference when processing is list.append vs list.extend. There's a little extra work when serialising to group like entries together, but I'm OK with that (and that would be a SHOULD rather than a MUST anyway). If I don't hear a good argument against it, I'll make that field a list. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Thu Jul 4 13:35:17 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 07:35:17 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: Message-ID: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> On Jul 4, 2013, at 7:00 AM, Nick Coghlan wrote: > > On 4 Jul 2013 18:52, "Vinay Sajip" wrote: > > > > Nick Coghlan gmail.com> writes: > > > > > * "install": the installation specifier for the dependency > > > * "extra": as per the current PEP (for conditional dependencies) > > > * "environment": as per the current PEP (for conditional dependencies) > > > > > > 4. The "install" subfield is compulsory, the other two are optional > > > (as now, using either of the latter creates a "conditional > > > dependency", while dependency declarations with only the "install" > > > subfield are unconditional) > > > > > > 5. An installation specifier is what PEP 426 currently calls a > > > dependency specifier: the "name [extras] (constraints)" format. They > > > will get their own top level section (similar to the existing Extras > > > and Environment markers sections) > > > > Is there a particular benefit of the install subfield being a single > > installation specifier, as opposed to a list of such specifiers? It's > > perhaps neither here nor there for machine-processed metadata, but I expect > > this metadata would have human readers too. Not using a list would lead to > > more verbose metadata. > > Hmm, I guess as long as it's consistent, the only difference when processing is list.append vs list.extend. > > There's a little extra work when serialising to group like entries together, but I'm OK with that (and that would be a SHOULD rather than a MUST anyway). > > If I don't hear a good argument against it, I'll make that field a list. > > > > > Regards, > > > > Vinay Sajip > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I would prefer a single entry. It makes the serialization format map to the modeling simpler, and I think it's simpler for humans too. I don't see much benefit to making it a list except arbitrarily adding another level of nesting. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Jul 4 14:26:51 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 4 Jul 2013 22:26:51 +1000 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On 4 Jul 2013 21:35, "Donald Stufft" wrote: > > > On Jul 4, 2013, at 7:00 AM, Nick Coghlan wrote: > >> >> On 4 Jul 2013 18:52, "Vinay Sajip" wrote: >> > >> > Nick Coghlan gmail.com> writes: >> > >> > > * "install": the installation specifier for the dependency >> > > * "extra": as per the current PEP (for conditional dependencies) >> > > * "environment": as per the current PEP (for conditional dependencies) >> > > >> > > 4. The "install" subfield is compulsory, the other two are optional >> > > (as now, using either of the latter creates a "conditional >> > > dependency", while dependency declarations with only the "install" >> > > subfield are unconditional) >> > > >> > > 5. An installation specifier is what PEP 426 currently calls a >> > > dependency specifier: the "name [extras] (constraints)" format. They >> > > will get their own top level section (similar to the existing Extras >> > > and Environment markers sections) >> > >> > Is there a particular benefit of the install subfield being a single >> > installation specifier, as opposed to a list of such specifiers? It's >> > perhaps neither here nor there for machine-processed metadata, but I expect >> > this metadata would have human readers too. Not using a list would lead to >> > more verbose metadata. >> >> Hmm, I guess as long as it's consistent, the only difference when processing is list.append vs list.extend. >> >> There's a little extra work when serialising to group like entries together, but I'm OK with that (and that would be a SHOULD rather than a MUST anyway). >> >> If I don't hear a good argument against it, I'll make that field a list. >> >> > >> > Regards, >> > >> > Vinay Sajip >> > >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > http://mail.python.org/mailman/listinfo/distutils-sig >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > I would prefer a single entry. It makes the serialization format map to the modeling simpler, and I think it's simpler for humans too. I don't see much benefit to making it a list except arbitrarily adding another level of nesting. The main benefit is that all the dependencies for an extra will typically be in one place. However, I briefly forgot the "machine readable" part again, and for that TOOWTDI is to have one entry per dependency. Merging common criteria would then be a UI thing with multiple ways to do it (e.g. whether to group by extra or environment first for conditional dependencies). If you allow a list instead, then you have the problem of offering two ways to say the same thing (all in one entry or split across multiple entries). So the install subfield will remain a single string in the data interchange format, even if tools choose to structure it differently in their UI. Note repeating the key names as well some subfield values doesn't bother me - that's what streaming compression is for. 
This is what happens when I don't write my rationale down, though - I forget why I did things a certain way :) Cheers, Nick. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Jul 4 14:31:45 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 4 Jul 2013 12:31:45 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I would prefer a single entry. It makes the serialization format map to the > modeling simpler, and I think it's simpler for humans too. I don't see much > benefit to making it a list except arbitrarily adding another level of > nesting. It's a question of { "install": ["a", "b", "c"] } versus { "install": "a" }, { "install": "b" }, { "install": "c" } and I can't see why you think the latter is in any way better. IMO implementation details (such as "it's easier for the Django ORM to map it") should not take precedence over other considerations of readability/simplicity. In any case, I can't see why there would be any particular modelling problem with the scheme I've suggested. Is the modelling work you're doing public? I had a quick look at your warehouse repo (github.com/dstufft/warehouse) and I don't see any models beyond User and Email. Is that the correct location? I'd be happy to take a closer look to get a better understanding of what modelling problem you're seeing/foreseeing. FYI the metadata that I'm maintaining on red-dove.com is stored in a SQL database. While my SQL schema is not yet fully aligned with the PEP (as it's WIP), I don't see any modelling problem between an RDBMS backend and any of the JSON formats which have been published in the various revisions of the PEP. Some more detail would help :-) Regards, Vinay Sajip From donald at stufft.io Thu Jul 4 14:37:04 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 08:37:04 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> On Jul 4, 2013, at 8:31 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I would prefer a single entry. It makes the serialization format map to the >> modeling simpler, and I think it's simpler for humans too. I don't see much >> benefit to making it a list except arbitrarily adding another level of >> nesting. > > It's a question of > > { > "install": ["a", "b", "c"] > } > > versus > > { > "install": "a" > }, > { > "install": "b" > }, > { > "install": "c" > } > > and I can't see why you think the latter is in any way better. IMO > implementation details (such as "it's easier for the Django ORM to map it") > should not take precedence over other considerations of > readability/simplicity. In any case, I can't see why there would be any > particular modelling problem with the scheme I've suggested. It's not that it's easier for the Django ORM to map it, it's just a simpler structure all together. It goes from a single relation to what's essentially a M2M with extra data on the intermediate table. I think it's better because there's less moving parts and this is designed for machines. Human readability is a nice to have but not hardly a requirement. 
Simpler, not impossible vs impossible ;) > > Is the modelling work you're doing public? I had a quick look at your > warehouse repo (github.com/dstufft/warehouse) and I don't see any models > beyond User and Email. Is that the correct location? I'd be happy to take a > closer look to get a better understanding of what modelling problem you're > seeing/foreseeing. And yea it's in that repo. Still in a branch though as I haven't finished it. > > FYI the metadata that I'm maintaining on red-dove.com is stored in a SQL > database. While my SQL schema is not yet fully aligned with the PEP (as it's > WIP), I don't see any modelling problem between an RDBMS backend and any of > the JSON formats which have been published in the various revisions of the > PEP. Some more detail would help :-) > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Thu Jul 4 15:23:44 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 09:23:44 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> Message-ID: I also prefer the list install : [] Have you played with Postgresql's JSON support :-) On Thu, Jul 4, 2013 at 8:37 AM, Donald Stufft wrote: > > On Jul 4, 2013, at 8:31 AM, Vinay Sajip wrote: > >> Donald Stufft stufft.io> writes: >> >>> I would prefer a single entry. It makes the serialization format map to the >>> modeling simpler, and I think it's simpler for humans too. I don't see much >>> benefit to making it a list except arbitrarily adding another level of >>> nesting. >> >> It's a question of >> >> { >> "install": ["a", "b", "c"] >> } >> >> versus >> >> { >> "install": "a" >> }, >> { >> "install": "b" >> }, >> { >> "install": "c" >> } >> >> and I can't see why you think the latter is in any way better. IMO >> implementation details (such as "it's easier for the Django ORM to map it") >> should not take precedence over other considerations of >> readability/simplicity. In any case, I can't see why there would be any >> particular modelling problem with the scheme I've suggested. > > It's not that it's easier for the Django ORM to map it, it's just a simpler structure > all together. It goes from a single relation to what's essentially a M2M with > extra data on the intermediate table. I think it's better because there's less > moving parts and this is designed for machines. Human readability is a > nice to have but not hardly a requirement. > > Simpler, not impossible vs impossible ;) > >> >> Is the modelling work you're doing public? I had a quick look at your >> warehouse repo (github.com/dstufft/warehouse) and I don't see any models >> beyond User and Email. Is that the correct location? I'd be happy to take a >> closer look to get a better understanding of what modelling problem you're >> seeing/foreseeing. > > And yea it's in that repo. Still in a branch though as I haven't finished it. 
> >> >> FYI the metadata that I'm maintaining on red-dove.com is stored in a SQL >> database. While my SQL schema is not yet fully aligned with the PEP (as it's >> WIP), I don't see any modelling problem between an RDBMS backend and any of >> the JSON formats which have been published in the various revisions of the >> PEP. Some more detail would help :-) >> >> Regards, >> >> Vinay Sajip >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From donald at stufft.io Thu Jul 4 15:29:05 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 09:29:05 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> Message-ID: <6C675963-D954-4392-ABBB-E31662E07B01@stufft.io> Yea. It's slow and requires invoking plv8 to do much of anything useful ;) On Jul 4, 2013, at 9:23 AM, Daniel Holth wrote: > Have you played with Postgresql's JSON support :-) From dholth at gmail.com Thu Jul 4 15:34:07 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 09:34:07 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: <6C675963-D954-4392-ABBB-E31662E07B01@stufft.io> References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> <6C675963-D954-4392-ABBB-E31662E07B01@stufft.io> Message-ID: If you don't waste your time enforcing the uniqueness of (condition, extra) in the list of requirements then you can pretend install: is a single item if you want to... Wheel converts the flat Metadata 1.3 format to 2.0 draft easily with a defaultdict: https://bitbucket.org/dholth/wheel/src/fb7a900808f31f440049b89a656089b5f55027ce/wheel/metadata.py?at=default#cl-49 On Thu, Jul 4, 2013 at 9:29 AM, Donald Stufft wrote: > Yea. It's slow and requires invoking plv8 to do much of anything useful ;) > > On Jul 4, 2013, at 9:23 AM, Daniel Holth wrote: > >> Have you played with Postgresql's JSON support :-) From donald at stufft.io Thu Jul 4 15:36:59 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 09:36:59 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> <6C675963-D954-4392-ABBB-E31662E07B01@stufft.io> Message-ID: Yea I just spent significant effort cleaning up the database from a lack if enforced constraints. I will pass on not using them. On Jul 4, 2013, at 9:34 AM, Daniel Holth wrote: > If you don't waste your time enforcing the uniqueness of (condition, > extra) in the list of requirements then you can pretend install: is a > single item if you want to... 
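To make the single-specifier versus list question concrete, here is a minimal sketch (not code from any of the tools discussed in this thread) of how a consumer might flatten draft-PEP-426-style dependency entries into one flat requirement list. The entry layout follows the "install" / "extra" / "environment" subfields discussed above; the evaluate_marker stub and the sample data are assumptions added purely for illustration. As Nick notes earlier in the thread, the only processing difference between the two forms is list.append versus list.extend, and entries sharing the same (extra, environment) pair have to be merged either way.

def evaluate_marker(marker):
    # Stand-in for a real environment-marker evaluator (assumed for this sketch).
    return True

def flatten_requirements(entries, wanted_extras=frozenset()):
    # Collapse dependency entries into a flat list of installation specifiers,
    # whether "install" holds a single specifier or a list of them.
    collected = []
    for entry in entries:
        if "extra" in entry and entry["extra"] not in wanted_extras:
            continue  # extra was not requested
        if "environment" in entry and not evaluate_marker(entry["environment"]):
            continue  # environment marker does not match this platform
        install = entry["install"]
        if isinstance(install, str):
            collected.append(install)   # single-specifier form
        else:
            collected.extend(install)   # list form
    return collected

# Hypothetical input, loosely modelled on the extras mentioned above;
# the specifiers other than certifi are placeholders.
run_requires = [
    {"install": "certifi (== 0.0.8)", "extra": "certs"},
    {"install": ["example-ssl-helper"], "extra": "ssl",
     "environment": "sys_platform == 'win32'"},
    {"install": "example-runtime-dep"},
]
print(flatten_requirements(run_requires, wanted_extras={"certs", "ssl"}))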
From dholth at gmail.com Thu Jul 4 15:45:35 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 09:45:35 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <2A394821-338D-47D6-B4EB-24C83B0F103A@stufft.io> <6C675963-D954-4392-ABBB-E31662E07B01@stufft.io> Message-ID: On Thu, Jul 4, 2013 at 9:36 AM, Donald Stufft wrote: > Yea I just spent significant effort cleaning up the database from a lack if enforced constraints. I will pass on not using them. > > On Jul 4, 2013, at 9:34 AM, Daniel Holth wrote: > >> If you don't waste your time enforcing the uniqueness of (condition, >> extra) in the list of requirements then you can pretend install: is a >> single item if you want to... [ { 'extra':'foo', install:['thing1']}, {'extra:'foo', install:['thing2']} ] From vinay_sajip at yahoo.co.uk Thu Jul 4 21:38:25 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 4 Jul 2013 19:38:25 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > The main benefit is that all the dependencies for an extra will typically > be in one place. > However, I briefly forgot the "machine readable" part again, and for that > TOOWTDI is to have one entry per dependency. One record per dependency is indeed the case at the RDBMS level, but there's no reason why that scheme needs to be slavishly copied over to the JSON. Let me try to illustrate this. I couldn't find any modelling code in Donald's public repo - I checked both branches and couldn't find it (Donald, please point me to it if it's on GitHub rather than just your local clone). So I knocked up a simple model (using SQLAlchemy, but the model is so simple that just about any ORM should do). The entities are Project, Release and Dependency. I've created a simple script, depmodel.py, along with two JSON files which have the relevant subset of the PEP 426 metadata for setuptools 0.7.7 and Pyramid 1.4.2. These are available at https://gist.github.com/vsajip/5929707 This code/data uses the older schema (run_requires / run_may_require, etc. and using 'dependencies' rather than 'install' as a key). This is the JSON which is supposed to be problematic, so I wanted to see what the problems might be. I couldn't find any, so I'm linking to the code here so that Donald/Nick can point out any misunderstanding on my part. The script allows importing the dependencies from JSON to RDBMS (34 lines for the import function) and also exporting from RDBMS to JSON (43 lines for the export function). I've used SQLite for the database. python depmodel.py -i setuptools-0.7.7.json will read the dependencies into SQLite, and python depmodel.py -e setuptools/0.7.7 will print the SQLite records as JSON. I understand that people might have particular preferences, but I can't see any technical reason why we couldn't have lists in the JSON. The import and export code looks pretty simple to me. What have I missed? Regards, Vinay Sajip From donald at stufft.io Fri Jul 5 01:52:06 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 19:52:06 -0400 Subject: [Distutils] g.pypi.python.org out of date Message-ID: <6A6F245F-2E27-4540-997E-0DB20EE1A050@stufft.io> It appears the "g" mirror is ~10 days out of date. Do we know who owns it? It needs kicked to start back up or we need to remove it from the pool if we can't get it going again. 
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Fri Jul 5 02:09:23 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 20:09:23 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On the plus side if we're arguing about something as banal as this, maybe we are almost done! On Thu, Jul 4, 2013 at 3:38 PM, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> The main benefit is that all the dependencies for an extra will typically >> be in one place. >> However, I briefly forgot the "machine readable" part again, and for that >> TOOWTDI is to have one entry per dependency. > > One record per dependency is indeed the case at the RDBMS level, but there's > no reason why that scheme needs to be slavishly copied over to the JSON. Let > me try to illustrate this. > > I couldn't find any modelling code in Donald's public repo - I checked both > branches and couldn't find it (Donald, please point me to it if it's on > GitHub rather than just your local clone). So I knocked up a simple model > (using SQLAlchemy, but the model is so simple that just about any ORM should > do). The entities are Project, Release and Dependency. I've created a simple > script, depmodel.py, along with two JSON files which have the relevant > subset of the PEP 426 metadata for setuptools 0.7.7 and Pyramid 1.4.2. These > are available at > > https://gist.github.com/vsajip/5929707 > > This code/data uses the older schema (run_requires / run_may_require, etc. > and using 'dependencies' rather than 'install' as a key). This is the JSON > which is supposed to be problematic, so I wanted to see what the problems > might be. I couldn't find any, so I'm linking to the code here so that > Donald/Nick can point out any misunderstanding on my part. > > The script allows importing the dependencies from JSON to RDBMS (34 lines > for the import function) and also exporting from RDBMS to JSON (43 lines for > the export function). I've used SQLite for the database. > > python depmodel.py -i setuptools-0.7.7.json > > will read the dependencies into SQLite, and > > python depmodel.py -e setuptools/0.7.7 > > will print the SQLite records as JSON. > > I understand that people might have particular preferences, but I can't see > any technical reason why we couldn't have lists in the JSON. The import and > export code looks pretty simple to me. What have I missed? > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From vinay_sajip at yahoo.co.uk Fri Jul 5 02:24:37 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 5 Jul 2013 00:24:37 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: Daniel Holth gmail.com> writes: > On the plus side if we're arguing about something as banal as this, > maybe we are almost done! I don't exactly see it as an argument - it's just a discussion (although of course we "argue" for our point of view). 
I don't think we're done by a long chalk, but as I see it, we might as well polish the various pieces of the puzzle as best we can as we go along. Code is more malleable than the spec, so we should try to get that to be as good as we reasonably can. Regards, Vinay Sajip From dholth at gmail.com Fri Jul 5 02:29:47 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 20:29:47 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On Thu, Jul 4, 2013 at 8:24 PM, Vinay Sajip wrote: > Daniel Holth gmail.com> writes: > >> On the plus side if we're arguing about something as banal as this, >> maybe we are almost done! > > I don't exactly see it as an argument - it's just a discussion (although of > course we "argue" for our point of view). > > I don't think we're done by a long chalk, but as I see it, we might as well > polish the various pieces of the puzzle as best we can as we go along. Code > is more malleable than the spec, so we should try to get that to be as good > as we reasonably can. ... certainly not with the whole project, just with this particular piece of metadata ... From ncoghlan at gmail.com Fri Jul 5 03:45:10 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 5 Jul 2013 11:45:10 +1000 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On 4 Jul 2013 22:32, "Vinay Sajip" wrote: > > Donald Stufft stufft.io> writes: > > > I would prefer a single entry. It makes the serialization format map to the > > modeling simpler, and I think it's simpler for humans too. I don't see much > > benefit to making it a list except arbitrarily adding another level of > > nesting. > > It's a question of > > { > "install": ["a", "b", "c"] > } > > versus > > { > "install": "a" > }, > { > "install": "b" > }, > { > "install": "c" > } > > and I can't see why you think the latter is in any way better. The basic problem with the list form is that allowing two representations for the same metadata makes for extra complexity we don't really want. It means we have to decide if the decomposed version (3 separate entries with one item in each install list) is still legal. What I will do is draft PEP text for the list version that explicitly declares the decomposed form non-compliant with the spec. If I think the extra complexity looks tolerable, I'll switch it over. Cheers, Nick. >IMO > implementation details (such as "it's easier for the Django ORM to map it") > should not take precedence over other considerations of > readability/simplicity. In any case, I can't see why there would be any > particular modelling problem with the scheme I've suggested. > > Is the modelling work you're doing public? I had a quick look at your > warehouse repo (github.com/dstufft/warehouse) and I don't see any models > beyond User and Email. Is that the correct location? I'd be happy to take a > closer look to get a better understanding of what modelling problem you're > seeing/foreseeing. > > FYI the metadata that I'm maintaining on red-dove.com is stored in a SQL > database. While my SQL schema is not yet fully aligned with the PEP (as it's > WIP), I don't see any modelling problem between an RDBMS backend and any of > the JSON formats which have been published in the various revisions of the > PEP. 
Some more detail would help :-) > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 5 03:47:09 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 4 Jul 2013 21:47:09 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On Jul 4, 2013, at 9:45 PM, Nick Coghlan wrote: > > What I will do is draft PEP text for the list version that explicitly declares the decomposed form non-compliant with the spec. If I think the extra complexity looks tolerable, I'll switch it over. > > What is the "decomposed form" ? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Fri Jul 5 03:50:46 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 4 Jul 2013 21:50:46 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: I don't think you can get around the complexity. Consider: {extra:'foo', condition:'platform == win32', install=[]} {extra:'foo', condition:'platform == linux', install=[]} They have to be flattened into a single list of all the 'foo' extras that are installable in the current environment anyway. It's exactly the same work you might try to avoid by worrying about whether install is a list. On Thu, Jul 4, 2013 at 9:45 PM, Nick Coghlan wrote: > > On 4 Jul 2013 22:32, "Vinay Sajip" wrote: >> >> Donald Stufft stufft.io> writes: >> >> > I would prefer a single entry. It makes the serialization format map to >> > the >> > modeling simpler, and I think it's simpler for humans too. I don't see >> > much >> > benefit to making it a list except arbitrarily adding another level of >> > nesting. >> >> It's a question of >> >> { >> "install": ["a", "b", "c"] >> } >> >> versus >> >> { >> "install": "a" >> }, >> { >> "install": "b" >> }, >> { >> "install": "c" >> } >> >> and I can't see why you think the latter is in any way better. > > The basic problem with the list form is that allowing two representations > for the same metadata makes for extra complexity we don't really want. It > means we have to decide if the decomposed version (3 separate entries with > one item in each install list) is still legal. > > What I will do is draft PEP text for the list version that explicitly > declares the decomposed form non-compliant with the spec. If I think the > extra complexity looks tolerable, I'll switch it over. > > Cheers, > Nick. > >>IMO >> implementation details (such as "it's easier for the Django ORM to map >> it") >> should not take precedence over other considerations of >> readability/simplicity. In any case, I can't see why there would be any >> particular modelling problem with the scheme I've suggested. >> >> Is the modelling work you're doing public? I had a quick look at your >> warehouse repo (github.com/dstufft/warehouse) and I don't see any models >> beyond User and Email. Is that the correct location? 
I'd be happy to take >> a >> closer look to get a better understanding of what modelling problem you're >> seeing/foreseeing. >> >> FYI the metadata that I'm maintaining on red-dove.com is stored in a SQL >> database. While my SQL schema is not yet fully aligned with the PEP (as >> it's >> WIP), I don't see any modelling problem between an RDBMS backend and any >> of >> the JSON formats which have been published in the various revisions of the >> PEP. Some more detail would help :-) >> >> Regards, >> >> Vinay Sajip >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From vinay_sajip at yahoo.co.uk Fri Jul 5 10:25:16 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 5 Jul 2013 08:25:16 +0000 (UTC) Subject: [Distutils] Upcoming changes to PEP 426/440 References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > The basic problem with the list form is that allowing two representations > for the same metadata makes for extra complexity we don't really want. It > means we have to decide if the decomposed version (3 separate entries > with one item in each install list) is still legal. I'm not sure how prescriptive we need to be. For example, posit metadata like: { "install": ["a", "b", "c"], "extra": "foo" }, { "install": ["d", "e", "f"], "extra": "foo" }, { "install": ["g"], "extra": "foo" } Even though there's no particular rationale for structuring it like this, the intention is clear: "a" .. "g" are dependencies when extra "foo" is specified. As long as the method by which these entries are processed is clear in the PEP, then it's not clear what's to be gained by being overly constraining. There are numerous ways in which dependency information can be represented which are not worth the effort to canonicalise. For example, the order in which extras or version constraints are declared in a dependency specifier: dist-name [foo,bar] (>= 1.0, < 2.0) and dist-name [bar,foo] (< 2.0, >= 1.0) are equivalent, but in any simplistic handling this would slip past e.g. database uniqueness constraints. More sophisticated handling (by modelling below the Dependency level) is possible, but whether it's worth it is debatable. Regards, Vinay Sajip From jaraco at jaraco.com Fri Jul 5 20:05:19 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Fri, 5 Jul 2013 18:05:19 +0000 Subject: [Distutils] Setuptools 0.8 and Distribute 0.7.3 (legacy wrapper) now released Message-ID: <216687c5a1bb4f288c29324a7466008d@BLUPR06MB003.namprd06.prod.outlook.com> The PyPA is excited to announce the public release of Setuptools 0.8. This release of setuptools provides no additional functionality over Setuptools 0.7.x except that it no longer requires 2to3 to build/install on Python 3. What this means for packaging is that tools like pip and virtualenv can now invoke setuptools directly on all supported Python versions (currently 2.4+). This build enables more natural upgrades and helps address many of the bugs that the 2to3 conversion process triggered. Additionally, Distribute 0.7.3 has also been released to PyPI. Distribute 0.7 was designed to ease the upgrade process from Distribute 0.6.x to Setuptools 0.7. 
This new version, 0.7.3, is a re-release of the legacy wrapper 0.7, but additionally bundles the Setuptools 0.8 code for the purposes of bootstrapping the upgrade. This version specifically eases upgrades on systems running older systems. Now, one can readily upgrade any environment with Distribute 0.6 by simply upgrading (using pip or easy_install) to Distribute 0.7.3, which will replace the 'distribute' package with an empty shell leaving setuptools >= 0.7 (probably 0.8) installed. Enjoy, and please report any issues with either of these packages at the Setuptools project page (https://bitbucket.org/pypa/setuptools). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From jim at zope.com Fri Jul 5 20:30:49 2013 From: jim at zope.com (Jim Fulton) Date: Fri, 5 Jul 2013 14:30:49 -0400 Subject: [Distutils] Buildout 2.2.0 released Message-ID: zc.buildout 2.2.0 has been released to PyPI. The main feature of this release is support for setuptools 0.7 and later. See: https://pypi.python.org/pypi/zc.buildout/2.2.0#id3 for a full list of changes. To get this release, you need to download and run a new bootstrap.py file from: http://downloads.buildout.org/2/bootstrap.py Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From sdouche at gmail.com Sat Jul 6 03:08:00 2013 From: sdouche at gmail.com (Sebastien Douche) Date: Sat, 6 Jul 2013 03:08:00 +0200 Subject: [Distutils] Buildout 2.2.0 released In-Reply-To: References: Message-ID: On Fri, Jul 5, 2013 at 8:30 PM, Jim Fulton wrote: > zc.buildout 2.2.0 has been released to PyPI. Yeah \o/. Thanks Jim and all contributors. And big up Tres :). -- Sebastien Douche Twitter: @sdouche / G+: +sdouche From ncoghlan at gmail.com Sat Jul 6 06:07:41 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 6 Jul 2013 14:07:41 +1000 Subject: [Distutils] Setuptools 0.8 and Distribute 0.7.3 (legacy wrapper) now released In-Reply-To: <216687c5a1bb4f288c29324a7466008d@BLUPR06MB003.namprd06.prod.outlook.com> References: <216687c5a1bb4f288c29324a7466008d@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 6 Jul 2013 04:08, "Jason R. Coombs" wrote: > > The PyPA is excited to announce the public release of Setuptools 0.8. This release of setuptools provides no additional functionality over Setuptools 0.7.x except that it no longer requires 2to3 to build/install on Python 3. What this means for packaging is that tools like pip and virtualenv can now invoke setuptools directly on all supported Python versions (currently 2.4+). This build enables more natural upgrades and helps address many of the bugs that the 2to3 conversion process triggered. Great news! Thanks for working to get these out quickly to improve the upgrade path from older versions :) Cheers, Nick. > > > > Additionally, Distribute 0.7.3 has also been released to PyPI. Distribute 0.7 was designed to ease the upgrade process from Distribute 0.6.x to Setuptools 0.7. This new version, 0.7.3, is a re-release of the legacy wrapper 0.7, but additionally bundles the Setuptools 0.8 code for the purposes of bootstrapping the upgrade. This version specifically eases upgrades on systems running older systems. Now, one can readily upgrade any environment with Distribute 0.6 by simply upgrading (using pip or easy_install) to Distribute 0.7.3, which will replace the ?distribute? 
package with an empty shell leaving setuptools >= 0.7 (probably 0.8) installed. > > > > Enjoy, and please report any issues with either of these packages at the Setuptools project page (https://bitbucket.org/pypa/setuptools). > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From setuptools at bugs.python.org Sat Jul 6 08:52:05 2013 From: setuptools at bugs.python.org (mbogosian) Date: Sat, 06 Jul 2013 06:52:05 +0000 Subject: [Distutils] [issue152] setuptools breaks with from __future__ import unicode_literals in setup.py Message-ID: <1373093525.02.0.347910542596.issue152@psf.upfronthosting.co.za> New submission from mbogosian: unicode_literals break a bunch of stuff in setuptools. Considering they may become the default at some point, this should be fixed...? I do not know if this is related to issue 78. To reproduce, run the attached setup.py (output below). Comment out the unicode_literals line in setup.py and try it again (everything should work). % DISTUTILS_DEBUG=t python -c 'import setuptools ; print setuptools.__version__' 0.8 % unzip -d foo_test.zip ; cd foo_test ... % DISTUTILS_DEBUG=t python setup.py build options (after parsing config files): options (after parsing command line): option dict for 'aliases' command: {} option dict for 'build' command: {} option dict for 'nosetests' command: {'all_modules': ('setup.cfg', '1'), 'cover_package': ('setup.cfg', 'foo'), 'detailed_errors': ('setup.cfg', '1'), 'verbosity': ('setup.cfg', '2'), 'with_coverage': ('setup.cfg', '1'), 'with_doctest': ('setup.cfg', '1')} running build Distribution.get_command_obj(): creating 'build' command object running build_py Distribution.get_command_obj(): creating 'build_py' command object Traceback (most recent call last): File "setup.py", line 58, in setuptools.setup(**_SETUP_ARGS) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build.py", line 127, in run self.run_command(cmd_name) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools/command/build_py.py", line 89, in run self.build_packages() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_py.py", line 372, in build_packages self.build_module(module, module_file, package) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools/command/build_py.py", line 106, in build_module outfile, copied = _build_py.build_module(self, module, module_file, package) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/command/build_py.py", line 333, in build_module "'package' must be a 
string (dot-separated), list, or tuple") TypeError: 'package' must be a string (dot-separated), list, or tuple % DISTUTILS_DEBUG=t python setup.py nosetests options (after parsing config files): options (after parsing command line): option dict for 'aliases' command: {} option dict for 'nosetests' command: {'all_modules': ('setup.cfg', '1'), 'cover_package': ('setup.cfg', 'foo'), 'detailed_errors': ('setup.cfg', '1'), 'verbosity': ('setup.cfg', '2'), 'with_coverage': ('setup.cfg', '1'), 'with_doctest': ('setup.cfg', '1')} running nosetests Distribution.get_command_obj(): creating 'nosetests' command object setting options for 'nosetests' command: with_coverage = 1 (from setup.cfg) verbosity = 2 (from setup.cfg) cover_package = foo (from setup.cfg) all_modules = 1 (from setup.cfg) with_doctest = 1 (from setup.cfg) detailed_errors = 1 (from setup.cfg) running egg_info Distribution.get_command_obj(): creating 'egg_info' command object Traceback (most recent call last): File "setup.py", line 58, in setuptools.setup(**_SETUP_ARGS) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/core.py", line 152, in setup dist.run_commands() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nose/commands.py", line 132, in run self.run_command('egg_info') File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 326, in run_command self.distribution.run_command(command) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/dist.py", line 971, in run_command cmd_obj.ensure_finalized() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 109, in ensure_finalized self.finalize_options() File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 103, in finalize_options self.ensure_dirname('egg_base') File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 269, in ensure_dirname "'%s' does not exist or is not a directory") File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 255, in _ensure_tested_string val = self._ensure_stringlike(option, what, default) File ".../Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/distutils/cmd.py", line 216, in _ensure_stringlike "'%s' must be a %s (got `%s`)" % (option, what, val) distutils.errors.DistutilsOptionError: 'egg_base' must be a directory name (got `src`) % ... # Comment out unicode_literals line in setup.py % python setup.py nosetests running nosetests running egg_info writing src/foo.egg-info/PKG-INFO writing namespace_packages to src/foo.egg-info/namespace_packages.txt writing top-level names to src/foo.egg-info/top_level.txt writing dependency_links to src/foo.egg-info/dependency_links.txt reading manifest file 'src/foo.egg-info/SOURCES.txt' writing manifest file 'src/foo.egg-info/SOURCES.txt' running build_ext Doctest: foo.base.pity.Foo ... ok testFoo (test.base.pity.TestFoo) ... 
ok Name Stmts Miss Cover Missing --------------------------------------------- foo 3 0 100% foo.base 0 0 100% foo.base.pity 9 0 100% --------------------------------------------- TOTAL 12 0 100% ---------------------------------------------------------------------- Ran 2 tests in 0.034s OK ---------- files: foo_test.zip messages: 724 nosy: mbogosian priority: bug status: unread title: setuptools breaks with from __future__ import unicode_literals in setup.py Added file: http://bugs.python.org/setuptools/file92/foo_test.zip _______________________________________________ Setuptools tracker _______________________________________________ -------------- next part -------------- A non-text attachment was scrubbed... Name: foo_test.zip Type: application/zip Size: 4252 bytes Desc: not available URL: From a.badger at gmail.com Sat Jul 6 10:11:50 2013 From: a.badger at gmail.com (Toshio Kuratomi) Date: Sat, 6 Jul 2013 01:11:50 -0700 Subject: [Distutils] [issue152] setuptools breaks with from __future__ import unicode_literals in setup.py In-Reply-To: <1373093525.02.0.347910542596.issue152@psf.upfronthosting.co.za> References: <1373093525.02.0.347910542596.issue152@psf.upfronthosting.co.za> Message-ID: <20130706081150.GA30804@unaka.lan> On Sat, Jul 06, 2013 at 06:52:05AM +0000, mbogosian wrote: > > New submission from mbogosian: > > unicode_literals break a bunch of stuff in setuptools. Considering they may become the default at some point, this should be fixed...? I do not know if this is related to issue 78. > > To reproduce, run the attached setup.py (output below). Comment out the unicode_literals line in setup.py and try it again (everything should work). > > % DISTUTILS_DEBUG=t python -c 'import setuptools ; print setuptools.__version__' > 0.8 > % unzip -d foo_test.zip ; cd foo_test > ... > % DISTUTILS_DEBUG=t python setup.py build [snip output] > % DISTUTILS_DEBUG=t python setup.py nosetests [snip output] Not sure what the unicode model is in setuptools but one way to look at this is that in python2, the setuptools API takes byte str and in python3, the API takes unicode str. So this is a case of the setup.py being invalid. If you have: from __future__ import unicode_literals That doesn't change what the api takes as input; it only changes how you express it. So a package author who does from __future__ import unicode_literals would also need to do this to make things work: 'package_dir' : { '': b'src' }, 'packages' : setuptools.find_packages(b'src', exclude = ( b'foo', b'test', b'test.*' )), Someone else will have to speak to whether that's the intended model, though. -Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From vinay_sajip at yahoo.co.uk Sat Jul 6 12:19:07 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 6 Jul 2013 10:19:07 +0000 (UTC) Subject: [Distutils] =?utf-8?q?=5Bissue152=5D_setuptools_breaks_with_from_?= =?utf-8?q?=5F=5Ffuture=5F=5F=09import=09unicode=5Fliterals_in_setu?= =?utf-8?q?p=2Epy?= References: <1373093525.02.0.347910542596.issue152@psf.upfronthosting.co.za> Message-ID: mbogosian bugs.python.org> writes: > unicode_literals break a bunch of stuff in setuptools. Considering they may become the default at some > point, this should be fixed...? I do not know if this is related to issue 78. > These appear to be because distutils (not setuptools) is unduly restrictive, refusing to accept unicode when it could do so. 
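Until distutils itself is relaxed, a setup.py can keep unicode_literals and still hand distutils the native byte strings it expects on Python 2, along the lines Toshio sketches above. This is a workaround rather than a fix; the 'src' layout mirrors the attached reproducer:

    # -*- coding: utf-8 -*-
    from __future__ import unicode_literals

    import sys
    import setuptools

    def native(s):
        # distutils option values must be byte strings (str) on Python 2;
        # on Python 3 a unicode literal already is the native str type.
        return s.encode('ascii') if sys.version_info[0] == 2 else s

    setuptools.setup(
        name=native('foo'),
        version=native('0.1'),
        package_dir={native(''): native('src')},
        packages=setuptools.find_packages(native('src')),
    )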
> ".../lib/python2.7/distutils/command/build_py.py", > line 333, in build_module > "'package' must be a string (dot-separated), list, or tuple") > TypeError: 'package' must be a string (dot-separated), list, or tuple distutils code is testing for str (as opposed to list/tuple), but it could as well test for basestring here. > File ".../lib/python2.7/distutils/cmd.py", > line 216, in _ensure_stringlike > "'%s' must be a %s (got `%s`)" % (option, what, val) > distutils.errors.DistutilsOptionError: 'egg_base' must be a directory name (got `src`) Again, this is in distutils. It seems as if testing for basestring rather than str would be needed. There could well be other places in distutils where it's testing for str where basestring would do, but I'm not sure whether these would be regarded as bugs for 2.7. Regards, Vinay Sajip From qwcode at gmail.com Sat Jul 6 21:14:51 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 6 Jul 2013 12:14:51 -0700 Subject: [Distutils] pip and virtualenv release candidates In-Reply-To: References: Message-ID: pip-1.4rc3 and virtualenv-1.10rc4 are now available the changes from the previous RCs: - a bug fix for a new use of urlparse ( https://github.com/pypa/pip/pull/1032) - "pip list" doesn't ignore showing setuptools and pip anymore (although "pip freeze" still does) - the wheel setuptools requirement was changed to ">=0.8" - updated installation instructions related to the release of setuptools-0.8 (http://www.pip-installer.org/en/latest/installing.html#requirements) here's the gist of the RC install instructions again: $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc4.tar.gz $ tar zxf 1.10rc4.tar.gz $ python virtualenv-1.10rc4/virtualenv.py myVE $ myVE/bin/pip --version pip 1.4rc3 On Tue, Jul 2, 2013 at 10:28 PM, Marcus Smith wrote: > pip-1.4rc2 and virtualenv-1.10rc3 are available for testing from github. > > A few highlights: > - pip added support for installing and building wheel archives. > ( > http://www.pip-installer.org/en/latest/cookbook.html#building-and-installing-wheels > ) > - virtualenv is now using the new merged setuptools, and no longer > supports distribute. > - pip now only installs stable versions by default, and offers a new > --pre option to also find pre-releases. > - Dropped support for Python 2.5. > > Changelogs: > pip: http://www.pip-installer.org/en/release-1.4/news.html > virtualenv: http://www.virtualenv.org/en/release-1.10/news.html > > Download Links: > pip: > gz: https://github.com/pypa/pip/archive/1.4rc2.tar.gz > md5=0426430fc8a261c83bcd083fb03fb7d6 > zip: https://github.com/pypa/pip/archive/1.4rc2.zip > md5=c86dc0d94ed787eadba6dceb06f1676f > virtualenv: > gz: https://github.com/pypa/virtualenv/archive/1.10rc3.tar.gz > md5=b24cdf59b561acf26ae3f639098d5a34 > zip: https://github.com/pypa/virtualenv/archive/1.10rc3.zip > md5=a6ee1a1570a751aa50f95833d9898649 > > Installation: > The easiest way to try them both and *not* affect your current system, is > like so: > e.g. 
on Linux: > $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc3.tar.gz > $ echo "b24cdf59b561acf26ae3f639098d5a34 1.10rc3.tar.gz" | md5sum -c > 1.10rc3.tar.gz: OK > $ tar zxf 1.10rc3.tar.gz > $ python virtualenv-1.10rc3/virtualenv.py myVE > $ myVE/bin/pip --version > pip 1.4rc2 > > *Note*: If instead, you choose to upgrade an existing pip (and > setuptools), know this: > 1) pip's wheel support requires setuptools>=0.8b2 (this will become > final before pip is released final) > 2) setuptools-0.8bx is not on pypi and can be found here: > https://bitbucket.org/pypa/setuptools/downloads > 3) Older pip's can not currently upgrade distribute to setuptools > (until distribute-0.7.3 is released on ~July-7th) > (for more upgrade details: > http://www.pip-installer.org/en/latest/installing.html#requirements) > > Offering Feedback: > You can respond to this email, or log issues in our tracker: > https://github.com/pypa/pip/issues?state=open > -------------- next part -------------- An HTML attachment was scrubbed... URL: From setuptools at bugs.python.org Sun Jul 7 04:29:32 2013 From: setuptools at bugs.python.org (Tom Brennan) Date: Sun, 07 Jul 2013 02:29:32 +0000 Subject: [Distutils] [issue153] Jython - easy_install -- ssl_support.py: module has no attribute 'CERT_REQUIRED' Message-ID: <1373164172.5.0.684661114137.issue153@psf.upfronthosting.co.za> New submission from Tom Brennan: Can't install anything with easy_install or pip. Happens in at least v.0.8 and 0.7.8. (stack trace attached is from 0.7.8) ---------- files: easy_install_stack_trace.txt messages: 729 nosy: tjb1982 priority: bug status: unread title: Jython - easy_install -- ssl_support.py: module has no attribute 'CERT_REQUIRED' Added file: http://bugs.python.org/setuptools/file93/easy_install_stack_trace.txt _______________________________________________ Setuptools tracker _______________________________________________ -------------- next part -------------- Searching for pip Reading https://pypi.python.org/simple/pip/ Traceback (most recent call last): File "/home/tjb1982/jython2.7/bin/easy_install", line 8, in sys.exit( File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/command/easy_install.py", line 1978, in main File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/command/easy_install.py", line 1965, in with_ei_usage File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/command/easy_install.py", line 1978, in File "/home/tjb1982/jython2.7/Lib/distutils/core.py", line 152, in setup dist.run_commands() File "/home/tjb1982/jython2.7/Lib/distutils/core.py", line 152, in setup dist.run_commands() File "/home/tjb1982/jython2.7/Lib/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/home/tjb1982/jython2.7/Lib/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/command/easy_install.py", line 373, in run File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/command/easy_install.py", line 602, in easy_install File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 523, in fetch_distribution File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 359, in find_packages File 
"/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 698, in scan_url File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 236, in process_url File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 639, in open_url File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 639, in open_url File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 873, in _socket_timeout File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/package_index$py.class", line 920, in open_with_auth File "/home/tjb1982/jython2.7/Lib/urllib2.py", line 400, in open response = self._open(req, data) File "/home/tjb1982/jython2.7/Lib/urllib2.py", line 417, in _open result = self._call_chain(self.handle_open, protocol, protocol + File "/home/tjb1982/jython2.7/Lib/urllib2.py", line 378, in _call_chain result = func(*args) File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/ssl_support$py.class", line 175, in https_open File "/home/tjb1982/jython2.7/Lib/urllib2.py", line 1174, in do_open h.request(req.get_method(), req.get_selector(), req.data, headers) File "/home/tjb1982/jython2.7/Lib/urllib2.py", line 1174, in do_open h.request(req.get_method(), req.get_selector(), req.data, headers) File "/home/tjb1982/jython2.7/Lib/httplib.py", line 958, in request self._send_request(method, url, body, headers) File "/home/tjb1982/jython2.7/Lib/httplib.py", line 992, in _send_request self.endheaders(body) File "/home/tjb1982/jython2.7/Lib/httplib.py", line 954, in endheaders self._send_output(message_body) File "/home/tjb1982/jython2.7/Lib/httplib.py", line 814, in _send_output self.send(msg) File "/home/tjb1982/jython2.7/Lib/httplib.py", line 776, in send self.connect() File "/home/tjb1982/jython2.7/Lib/site-packages/setuptools-0.7.8-py2.7.egg/setuptools/ssl_support$py.class", line 189, in connect AttributeError: 'module' object has no attribute 'CERT_REQUIRED' From chris at simplistix.co.uk Sun Jul 7 09:08:37 2013 From: chris at simplistix.co.uk (Chris Withers) Date: Sun, 07 Jul 2013 08:08:37 +0100 Subject: [Distutils] buildout/setuptools/distribute unhelpful error message (0.7.x issue?) Message-ID: <51D913F5.3080804@simplistix.co.uk> Hi All, What is this exception trying to tell me? Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-0.7.2.tar.gz Extracting in /tmp/tmpJNVsOY Now working in /tmp/tmpJNVsOY/setuptools-0.7.2 Building a Setuptools egg in /tmp/tmpBLZGeg /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg Traceback (most recent call last): File "bootstrap.py", line 91, in pkg_resources.working_set.add_entry(path) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 451, in add_entry self.add(dist, entry, False) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 542, in add self._added_new(dist) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 705, in _added_new callback(dist) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2727, in add_activation_listener(lambda dist: dist.activate()) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2227, in activate self.insert_on(path) File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2328, in insert_on "with distribute. 
Found one at %s" % str(self.location)) ValueError: A 0.7-series setuptools cannot be installed with distribute. Found one at /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg I don't see any distribute in there, and I don't know where it might be... $ python2.6 Python 2.6.8 (unknown, Jan 29 2013, 10:05:44) [GCC 4.7.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import setuptools Traceback (most recent call last): File "", line 1, in ImportError: No module named setuptools cheers, Chris -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From tseaver at palladion.com Sun Jul 7 15:32:08 2013 From: tseaver at palladion.com (Tres Seaver) Date: Sun, 07 Jul 2013 09:32:08 -0400 Subject: [Distutils] buildout/setuptools/distribute unhelpful error message (0.7.x issue?) In-Reply-To: <51D913F5.3080804@simplistix.co.uk> References: <51D913F5.3080804@simplistix.co.uk> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/07/2013 03:08 AM, Chris Withers wrote: > > I don't see any distribute in there, and I don't know where it might > be... 'pkg_resources' (/usr/lib/python2.6/dist-packages/pkg_resources.py) comes from either setuptools or distribute -- in your case, distribute (probably via a 'python-distribute' Debian package, given that path). Either you need to get that system pacakge updated, or else removed, or else insulate yourself from it (e.g., with a 'virtualenv --no-setuptools' or a self-built Python). Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlHZbdIACgkQ+gerLs4ltQ7y1ACfSuEQBXY/THSNDsoLysgORTQv HeQAoI5TH6WOqNG3TR9Fuu4VKURpeil2 =kkLS -----END PGP SIGNATURE----- From pje at telecommunity.com Sun Jul 7 21:32:42 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 7 Jul 2013 15:32:42 -0400 Subject: [Distutils] [issue153] Jython - easy_install -- ssl_support.py: module has no attribute 'CERT_REQUIRED' In-Reply-To: <1373164172.5.0.684661114137.issue153@psf.upfronthosting.co.za> References: <1373164172.5.0.684661114137.issue153@psf.upfronthosting.co.za> Message-ID: On Sat, Jul 6, 2013 at 10:29 PM, Tom Brennan wrote: > > New submission from Tom Brennan: > > Can't install anything with easy_install or pip. Happens in at least v.0.8 and 0.7.8. (stack trace attached is from 0.7.8) > > ---------- > files: easy_install_stack_trace.txt > messages: 729 > nosy: tjb1982 > priority: bug > status: unread > title: Jython - easy_install -- ssl_support.py: module has no attribute 'CERT_REQUIRED' > Added file: http://bugs.python.org/setuptools/file93/easy_install_stack_trace.txt It appears that Jython's ssl module is not 100% compatible with Python's. I'm not sure what to do about that. ( By the way, bugs for setuptools 0.7 and higher should be reported to the issue tracker at https://bitbucket.org/pypa/setuptools ) From ct at gocept.com Mon Jul 8 09:55:32 2013 From: ct at gocept.com (Christian Theune) Date: Mon, 8 Jul 2013 09:55:32 +0200 Subject: [Distutils] bandersnatch 1.0.2: CDN compatibility for mirrors Message-ID: Hi, over the last weeks I worked with Donald to get mirrors working consistently again. 
I updated the bandersnatch mirror client and the 1.0.2 release has been running on my mirror (f.pypi.python.org) for a few days correctly. If you're running a mirror, please get it and report any further errors you should encounter to the bitbucket issue page: https://bitbucket.org/ctheune/bandersnatch/issues Cheers, Christian Small caveat, nothing big: the statistics that are generated by pypi-mirrors.org show a reduced number of packages available on my mirror, however, that is due to the fact that the master reports a number that includes all packages even without releases. The master serves 404s for their simple pages. Bandersnatch, thus, does not keep their directories. To support the CDN I have to generate the big simple index page myself. The only reliable way to do this is to list all simple pages I actually have. As packages without releases don't have simple pages -> back to square one. From reinout at vanrees.org Mon Jul 8 12:29:40 2013 From: reinout at vanrees.org (Reinout van Rees) Date: Mon, 08 Jul 2013 12:29:40 +0200 Subject: [Distutils] Setuptools 0.8 and Distribute 0.7.3 (legacy wrapper) now released In-Reply-To: <216687c5a1bb4f288c29324a7466008d@BLUPR06MB003.namprd06.prod.outlook.com> References: <216687c5a1bb4f288c29324a7466008d@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 05-07-13 20:05, Jason R. Coombs wrote: > > Additionally, Distribute 0.7.3 has also been released to PyPI. > Distribute 0.7 was designed to ease the upgrade process from Distribute > 0.6.x to Setuptools 0.7. This new version, 0.7.3, is a re-release of the > legacy wrapper 0.7, but additionally bundles the Setuptools 0.8 code for > the purposes of bootstrapping the upgrade. This version specifically > eases upgrades on systems running older systems. Now, one can readily > upgrade any environment with Distribute 0.6 by simply upgrading (using > pip or easy_install) to Distribute 0.7.3, which will replace the > ?distribute? package with an empty shell leaving setuptools >= 0.7 > (probably 0.8) installed. 
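A quick sanity check of what such an upgrade leaves behind (a sketch; it assumes pkg_resources is importable in the environment that was upgraded):

    import pkg_resources

    # setuptools should now be the real package...
    print(pkg_resources.get_distribution('setuptools').version)   # expect >= 0.7
    # ...and distribute, if still present, only the empty 0.7.3 wrapper.
    try:
        print(pkg_resources.get_distribution('distribute').version)
    except pkg_resources.DistributionNotFound:
        print('distribute not installed')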
I tried this on OSX and ran into a problem: $ sudo pip install -U distribute Password: Downloading/unpacking distribute from https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a Downloading distribute-0.7.3.zip (145Kb): 145Kb downloaded Running setup.py egg_info for package distribute Downloading/unpacking setuptools>=0.7 (from distribute) Downloading setuptools-0.8.tar.gz (756Kb): 756Kb downloaded Running setup.py egg_info for package setuptools Installing collected packages: distribute, setuptools Found existing installation: distribute 0.6.28 Uninstalling distribute: Successfully uninstalled distribute Running setup.py install for distribute Found existing installation: distribute 0.6.28 Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-1.1-py2.7.egg/pip/basecommand.py", line 104, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-1.1-py2.7.egg/pip/commands/install.py", line 250, in run requirement_set.install(install_options, global_options) File "/Library/Python/2.7/site-packages/pip-1.1-py2.7.egg/pip/req.py", line 1129, in install requirement.uninstall(auto_confirm=True) File "/Library/Python/2.7/site-packages/pip-1.1-py2.7.egg/pip/req.py", line 477, in uninstall config.readfp(FakeFile(dist.get_metadata_lines('entry_points.txt'))) File "build/bdist.macosx-10.8-intel/egg/pkg_resources.py", line 1213, in get_metadata_lines File "build/bdist.macosx-10.8-intel/egg/pkg_resources.py", line 1205, in get_metadata File "build/bdist.macosx-10.8-intel/egg/pkg_resources.py", line 1270, in _get IOError: zipimport: can not open file /Library/Python/2.7/site-packages/distribute-0.6.28-py2.7.egg So it downloaded 0.7.3 just fine, but barfed on a non-zipfile distribute 0.6.28. The directory didn't exist afterwards, so I assume the upgrade process somehow removed it (or it got confused because it was or wasn't a zipfile). Afterwards nothing pip-related worked as it missed setuptools. The `/usr/bin/easy_install setuptools` also didn't work. So I had to set it up anew, which worked: $ wget https://bitbucket.org/pypa/setuptools/raw/0.8/ez_setup.py $ sudo /usr/bin/python ez_setup.py Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "If you're not sure what to do, make something. -- Paul Graham" From lgautier at gmail.com Mon Jul 8 13:01:25 2013 From: lgautier at gmail.com (Laurent Gautier) Date: Mon, 08 Jul 2013 13:01:25 +0200 Subject: [Distutils] Python packaging + conditional use of C library shipped with the package Message-ID: <51DA9C05.5050802@gmail.com> Hi, I would like to make a package that is able to use a system's given C-library if found, or compile its own version shipped with the package. This is bringing up two questions: - Is there a distutils/distribute facility to help test for the presence (and version) of a C library, or do I have to roll my own system ? - When having the source for a C library shipping with a package, is there a way to get distutils/distribute compile it, and get it seen by Python at runtime (so I can just use ctypes, or cffi, and even C extensions in other Python package see the headers and compiled libraries) ? Best, Laurent PS: I have noticed that pyzmq seems to be doing a lot of that, but they also appear to have a relatively big set of custom code do so (setup.py is already a little longer than most install scripts, to which a complete package 'buildutils' should be added)... 
this makes me anticipate that the answer to my two questions will be "no" ( :/ ) but I'd like a confirmation from the experts. From g2p.code at gmail.com Mon Jul 8 13:32:31 2013 From: g2p.code at gmail.com (Gabriel de Perthuis) Date: Mon, 8 Jul 2013 11:32:31 +0000 (UTC) Subject: [Distutils] Python packaging + conditional use of C library shipped with the package References: <51DA9C05.5050802@gmail.com> Message-ID: On Mon, 08 Jul 2013 13:01:25 +0200, Laurent Gautier wrote: > Hi, > > I would like to make a package that is able to use a system's given > C-library if found, or compile its own version shipped with the package. > > This is bringing up two questions: > > - Is there a distutils/distribute facility to help test for the presence > (and version) of a C library, or do I have to roll my own system ? No. Here is a rather clean script that encapsulates pkg-config: http://git.enlightenment.org/bindings/python/python-efl.git/tree/setup.py > - When having the source for a C library shipping with a package, is > there a way to get distutils/distribute compile it, and get it seen by > Python at runtime (so I can just use ctypes, or cffi, and even C > extensions in other Python package see the headers and compiled libraries) ? I don't think you should do any kind of bundling. That will easily decuple the complexity of your setup scripts, for a subpar result. Point to the C project and let people install it as they will. From ct at gocept.com Mon Jul 8 15:09:21 2013 From: ct at gocept.com (Christian Theune) Date: Mon, 8 Jul 2013 15:09:21 +0200 Subject: [Distutils] bandersnatch 1.0.2: CDN compatibility for mirrors References: Message-ID: On 2013-07-08 07:55:32 +0000, Christian Theune said: > Hi, > > over the last weeks I worked with Donald to get mirrors working > consistently again. > > I updated the bandersnatch mirror client and the 1.0.2 release has been > running on my mirror (f.pypi.python.org) for a few days correctly. > > If you're running a mirror, please get it and report any further errors > you should encounter to the bitbucket issue page: > https://bitbucket.org/ctheune/bandersnatch/issues Someone pointed out that I managed to create a brownbag release. The 1.0.2 code is fine, but I managed to screw up releasing an updated requirements.txt and updating the "stable" tag. 1.0.3 fixes this. Christian From tk47 at students.poly.edu Mon Jul 8 14:47:50 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Mon, 8 Jul 2013 20:47:50 +0800 Subject: [Distutils] bandersnatch 1.0.2: CDN compatibility for mirrors In-Reply-To: References: Message-ID: <51DAB4F6.5090502@students.poly.edu> Christian, I don't think I see 1.0.2 yet, it seems to be still 1.0.1 for me. I use this command to upgrade --- is it obsolete? pip install --upgrade -r https://bitbucket.org/ctheune/bandersnatch/raw/stable/requirements.txt On 07/08/2013 03:55 PM, Christian Theune wrote: > Hi, > > over the last weeks I worked with Donald to get mirrors working > consistently again. > > I updated the bandersnatch mirror client and the 1.0.2 release has > been running on my mirror (f.pypi.python.org) for a few days correctly. 
> > If you're running a mirror, please get it and report any further > errors you should encounter to the bitbucket issue page: > https://bitbucket.org/ctheune/bandersnatch/issues > > Cheers, > Christian > > Small caveat, nothing big: the statistics that are generated by > pypi-mirrors.org show a reduced number of packages available on my > mirror, however, that is due to the fact that the master reports a > number that includes all packages even without releases. The master > serves 404s for their simple pages. Bandersnatch, thus, does not keep > their directories. To support the CDN I have to generate the big > simple index page myself. The only reliable way to do this is to list > all simple pages I actually have. As packages without releases don't > have simple pages -> back to square one. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From zooko at zooko.com Mon Jul 8 17:33:05 2013 From: zooko at zooko.com (zooko) Date: Mon, 8 Jul 2013 19:33:05 +0400 Subject: [Distutils] Python packaging + conditional use of C library shipped with the package In-Reply-To: <51DA9C05.5050802@gmail.com> References: <51DA9C05.5050802@gmail.com> Message-ID: <20130708153304.GA18441@zooko.com> On Mon, Jul 08, 2013 at 01:01:25PM +0200, Laurent Gautier wrote: > Hi, > > I would like to make a package that is able to use a system's given > C-library if found, or compile its own version shipped with the > package. We do something similar in pycryptopp, but instead of automatically testing for the locally-available C library, we just ask the human to manually pass "--disable-embedded-cryptopp" if they want it to attempt to link to a library external to its own bundled one: https://tahoe-lafs.org/trac/pycryptopp/browser/git/setup.py?annotate=blame&rev=f789ed951b49b33e7cc49d16fdc8b398f7ec7223 > - Is there a distutils/distribute facility to help test for the > presence (and version) of a C library, or do I have to roll my own > system ? If you succeed at this, I'd like to know how you did it! Maybe we could do something similar for pycryptopp. > - When having the source for a C library shipping with a package, is > there a way to get distutils/distribute compile it, and get it seen > by Python at runtime (so I can just use ctypes, or cffi, and even C > extensions in other Python package see the headers and compiled > libraries) ? I don't understand the question. This sounds like the normal thing that distutils has always done for modules made up of compiled C code. By the way, if I were starting pycryptopp today I would use cffi. (And I would name it "crpyto".) Regards, Zooko From lgautier at gmail.com Mon Jul 8 18:26:45 2013 From: lgautier at gmail.com (Laurent Gautier) Date: Mon, 08 Jul 2013 18:26:45 +0200 Subject: [Distutils] Python packaging + conditional use of C library shipped with the package In-Reply-To: <20130708153304.GA18441@zooko.com> References: <51DA9C05.5050802@gmail.com> <20130708153304.GA18441@zooko.com> Message-ID: <51DAE845.9090501@gmail.com> On 07/08/2013 05:33 PM, zooko wrote: > On Mon, Jul 08, 2013 at 01:01:25PM +0200, Laurent Gautier wrote: >> Hi, >> >> I would like to make a package that is able to use a system's given >> C-library if found, or compile its own version shipped with the >> package. 
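For the detection half of the question, a small pkg-config probe is usually enough. The sketch below is in the spirit of the python-efl setup.py linked above; 'zlib' and the file names are placeholders only:

    import subprocess
    from distutils.core import Extension

    def pkg_config(package):
        # Returns (include_dirs, library_dirs, libraries) reported by pkg-config,
        # or None if the package (or pkg-config itself) is not available.
        try:
            cflags = subprocess.check_output(['pkg-config', '--cflags', package])
            libs = subprocess.check_output(['pkg-config', '--libs', package])
        except (OSError, subprocess.CalledProcessError):
            return None
        flags = (cflags + b' ' + libs).decode('utf-8').split()
        return ([f[2:] for f in flags if f.startswith('-I')],
                [f[2:] for f in flags if f.startswith('-L')],
                [f[2:] for f in flags if f.startswith('-l')])

    probe = pkg_config('zlib')
    if probe is not None:
        # Build against the system library.
        include_dirs, library_dirs, libraries = probe
        sources = ['src/_wrapper.c']
    else:
        # Fall back to compiling the bundled copy into the extension itself.
        include_dirs, library_dirs, libraries = ['clib'], [], []
        sources = ['src/_wrapper.c', 'clib/myClibrary.c']

    ext = Extension('mypackage._wrapper', sources=sources,
                    include_dirs=include_dirs, library_dirs=library_dirs,
                    libraries=libraries)

Compiling the bundled sources straight into the extension, rather than building a separate shared library, also sidesteps the question of getting a private .so onto the loader path at runtime.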
> We do something similar in pycryptopp, but instead of automatically testing for > the locally-available C library, we just ask the human to manually pass > "--disable-embedded-cryptopp" if they want it to attempt to link to a library > external to its own bundled one: > > https://tahoe-lafs.org/trac/pycryptopp/browser/git/setup.py?annotate=blame&rev=f789ed951b49b33e7cc49d16fdc8b398f7ec7223 Thanks for this, I'll look at it. >> - Is there a distutils/distribute facility to help test for the >> presence (and version) of a C library, or do I have to roll my own >> system ? > If you succeed at this, I'd like to know how you did it! Maybe we could do > something similar for pycryptopp. pyzmq seems to be doing this sort of thing (from a quick look, it seems like they are implementing something similar to what autoconf can do), but that's a lot of work if each project must reimplement its own. > >> - When having the source for a C library shipping with a package, is >> there a way to get distutils/distribute compile it, and get it seen >> by Python at runtime (so I can just use ctypes, or cffi, and even C >> extensions in other Python package see the headers and compiled >> libraries) ? > I don't understand the question. This sounds like the normal thing that > distutils has always done for modules made up of compiled C code. May be I was not clear enough. What I mean is the C library is just a C library, not a C-extension to Python. For example: mypackage/ clib/ myClibrary.c myClibrary.h src/ mypackage.py (calling myClibrary.so) myotherpackage/ clib/ myotherClibrary.c (requiring myClibrary.h, and obviously link to myClibrary.so) I'll look at the code example you are giving and see it this is automagically taken care of by the package installation system. L. > > By the way, if I were starting pycryptopp today I would use cffi. (And I would > name it "crpyto".) > > Regards, > > Zooko From r1chardj0n3s at gmail.com Wed Jul 10 05:16:47 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 10 Jul 2013 13:16:47 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated Message-ID: [firstly, my apologies for posting the announcement yesterday of the pip bootstrap implementation and PEP updates to the pypa-dev list instead of distutils-sig... I blame PyCon AU exhaustion :-)] Firstly, I've just made some additional changes to PEP 439 to include: - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) - mention the possibility of inclusion in a future Python 2.7 release - clarify the SSL certificate situation The bootstrap code has also been updated to: - not run the full pip command if it's "pip3 install setuptools" or either of the other two packages it has just installed (thus preventing a possibly confusing message to the user) - also install virtualenv The intention is that the pip, setuptools and actually all Python projects will promote a single bootstrap process: "pip3 install setuptools" or "pip3 install Django" And then there's instructions for getting "pip" if it's not installed. Exact wording etc. 
to be determined :-) The original message I sent to pypa-dev yesterday is below: The bootstrap that I wrote at the PyCon AU sprints to implement PEP 439 has been added to pypa on bitbucket: https://bitbucket.org/pypa/bootstrap I've also updated the PEP with the following changes: - mention current plans for HTTPS cert verification in Python 3.4+ (sans PEP reference for now) - remove setuptools note; setuptools will now be installed - mention bootstrapping target (user vs. system) and command-line options - mention python 2.6+ bootstrap possibility - remove consideration of support for unnecessary installation options (beyond -i/--index-url) - mention having pip default to --user when itself installed in ~/.local What the last item alludes to is the idea that it'd be nice if pip installed in ~/.local would default to installing packages also in ~/.local as though the --user switch had been provided. Otherwise the user needs to remember it every time they install a package. Note that the bootstrapping uses two different flags to control where the pip implementation is installed: --bootstrap and --bootstrap-system (these were chosen to encourage user installs). It would be ideal if pip could support those flags, as the pip3 command currently must remove them before invoking pip main. Once we're happy with the shape of pip3 we can fork it to Python 2 and use it as the canonical bootstrap script for installing pip and setuptools. I think we should also consider installing virtualenv in Python 2... Happy to clarify where needed and code review is welcome. It's been a looong four days here :-) Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 10 05:20:38 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 9 Jul 2013 23:20:38 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: > [firstly, my apologies for posting the announcement yesterday of the pip bootstrap implementation and PEP updates to the pypa-dev list instead of distutils-sig... I blame PyCon AU exhaustion :-)] > > Firstly, I've just made some additional changes to PEP 439 to include: > > - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ > - mention the possibility of inclusion in a future Python 2.7 release > - clarify the SSL certificate situation > > The bootstrap code has also been updated to: > > - not run the full pip command if it's "pip3 install setuptools" or either of the other two packages it has just installed (thus preventing a possibly confusing message to the user) > - also install virtualenv > > The intention is that the pip, setuptools and actually all Python projects will promote a single bootstrap process: > > "pip3 install setuptools" or "pip3 install Django" > > And then there's instructions for getting "pip" if it's not installed. Exact wording etc. 
to be determined :-) > > The original message I sent to pypa-dev yesterday is below: > > The bootstrap that I wrote at the PyCon AU sprints to implement PEP 439 has been added to pypa on bitbucket: > > https://bitbucket.org/pypa/bootstrap > > I've also updated the PEP with the following changes: > > - mention current plans for HTTPS cert verification in Python 3.4+ (sans PEP reference for now) > - remove setuptools note; setuptools will now be installed > - mention bootstrapping target (user vs. system) and command-line options > - mention python 2.6+ bootstrap possibility > - remove consideration of support for unnecessary installation options (beyond -i/--index-url) > - mention having pip default to --user when itself installed in ~/.local > > What the last item alludes to is the idea that it'd be nice if pip installed in ~/.local would default to installing packages also in ~/.local as though the --user switch had been provided. Otherwise the user needs to remember it every time they install a package. > > Note that the bootstrapping uses two different flags to control where the pip implementation is installed: --bootstrap and --bootstrap-system (these were chosen to encourage user installs). It would be ideal if pip could support those flags, as the pip3 command currently must remove them before invoking pip main. > > Once we're happy with the shape of pip3 we can fork it to Python 2 and use it as the canonical bootstrap script for installing pip and setuptools. I think we should also consider installing virtualenv in Python 2... > > Happy to clarify where needed and code review is welcome. It's been a looong four days here :-) > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From r1chardj0n3s at gmail.com Wed Jul 10 05:47:44 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 10 Jul 2013 13:47:44 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: On 10 July 2013 13:20, Donald Stufft wrote: > On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: > Firstly, I've just made some additional changes to PEP 439 to include: > > - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) > > > doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ It's my understanding that people still install virtualenv in py3k. 
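The two kinds of environment are at least easy to tell apart at runtime, which matters for a bootstrap that has to decide where it is installing to. A small sketch (PEP 405 venvs make sys.base_prefix differ from sys.prefix; classic virtualenv injects sys.real_prefix):

    import sys

    def environment_kind():
        if getattr(sys, 'base_prefix', sys.prefix) != sys.prefix:
            return 'pyvenv'          # PEP 405 style venv
        if hasattr(sys, 'real_prefix'):
            return 'virtualenv'      # classic virtualenv
        return 'system'

    print(environment_kind())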
Richard From donald at stufft.io Wed Jul 10 06:18:53 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Jul 2013 00:18:53 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> On Jul 9, 2013, at 11:47 PM, Richard Jones wrote: > On 10 July 2013 13:20, Donald Stufft wrote: >> On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: >> Firstly, I've just made some additional changes to PEP 439 to include: >> >> - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) >> >> >> doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ > > It's my understanding that people still install virtualenv in py3k. > > > Richard I just talked to Carl. He basically said that for 3.3+ pyenv itself should probably used and that "hopefully virtualenv will die in favor of of pyenv". Another reason I think that the bootstrap script shouldn't install virtualenv is that of scope. The point of bootstrapping was to make it so pip could be "included" with Python without actually including it. As far as i'm personally concerned it should concern itself with installing pip and setuptools (assuming we can't make setuptools optional in pip or bundled?). We don't need virtualenv to enable ``pip3 install foo`` so it shouldn't be installing it. Otoh it would be nicer if PyEnv was taken to integrate with pip (although this is possibly a different pip) in that when creating a new environment if pip has already been installed in the "parent" environment it would be copied over into the pyenv created environment. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From r1chardj0n3s at gmail.com Wed Jul 10 06:37:47 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 10 Jul 2013 14:37:47 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On 10 July 2013 14:18, Donald Stufft wrote: > On Jul 9, 2013, at 11:47 PM, Richard Jones wrote: >> On 10 July 2013 13:20, Donald Stufft wrote: >>> On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: >>> Firstly, I've just made some additional changes to PEP 439 to include: >>> >>> - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) >>> >>> >>> doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ >> >> It's my understanding that people still install virtualenv in py3k. > > > I just talked to Carl. He basically said that for 3.3+ pyenv itself should probably used and that "hopefully virtualenv will die in favor of of pyenv". OK, thanks. I wonder whether virtualenv.org could mention pyvenv for Py3k users? > Another reason I think that the bootstrap script shouldn't install virtualenv is that of scope. The point of bootstrapping was to make it so pip could be "included" with Python without actually including it. 
As far as i'm personally concerned it should concern itself with installing pip and setuptools (assuming we can't make setuptools optional in pip or bundled?). We don't need virtualenv to enable ``pip3 install foo`` so it shouldn't be installing it. pip without virtualenv in python 2 contexts is pretty rare (or at least *should* be ) so I think I'll retain it in that bootstrap code. > Otoh it would be nicer if PyEnv was taken to integrate with pip (although this is possibly a different pip) in that when creating a new environment if pip has already been installed in the "parent" environment it would be copied over into the pyenv created environment. There's also the idea I mentioned yesterday: if pip is installed to the user local site-packages then it would be really good if pip's installs could also default to that rather than the system site-packages. In fact I consider it a bug that it does not, and I hope the pip devs will come to think that too :-) Richard From donald at stufft.io Wed Jul 10 06:40:51 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Jul 2013 00:40:51 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On Jul 10, 2013, at 12:37 AM, Richard Jones wrote: > On 10 July 2013 14:18, Donald Stufft wrote: >> On Jul 9, 2013, at 11:47 PM, Richard Jones wrote: >>> On 10 July 2013 13:20, Donald Stufft wrote: >>>> On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: >>>> Firstly, I've just made some additional changes to PEP 439 to include: >>>> >>>> - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) >>>> >>>> >>>> doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ >>> >>> It's my understanding that people still install virtualenv in py3k. >> >> >> I just talked to Carl. He basically said that for 3.3+ pyenv itself should probably used and that "hopefully virtualenv will die in favor of of pyenv". > > OK, thanks. I wonder whether virtualenv.org could mention pyvenv for Py3k users? Probably! > > >> Another reason I think that the bootstrap script shouldn't install virtualenv is that of scope. The point of bootstrapping was to make it so pip could be "included" with Python without actually including it. As far as i'm personally concerned it should concern itself with installing pip and setuptools (assuming we can't make setuptools optional in pip or bundled?). We don't need virtualenv to enable ``pip3 install foo`` so it shouldn't be installing it. > > pip without virtualenv in python 2 contexts is pretty rare (or at > least *should* be ) so I think I'll retain it in that bootstrap > code. Ok, I don't really care enough about that minor scope creep to object too heavily :) > > >> Otoh it would be nicer if PyEnv was taken to integrate with pip (although this is possibly a different pip) in that when creating a new environment if pip has already been installed in the "parent" environment it would be copied over into the pyenv created environment. > > There's also the idea I mentioned yesterday: if pip is installed to > the user local site-packages then it would be really good if pip's > installs could also default to that rather than the system > site-packages. 
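A rough sketch of the detection that idea implies (check whether pip itself was loaded from the per-user site-packages); purely illustrative, not how pip currently behaves:

    import os
    import site
    import pip  # assumes pip is importable

    def pip_installed_in_user_site():
        user_site = site.getusersitepackages()
        pip_dir = os.path.dirname(os.path.abspath(pip.__file__))
        return bool(user_site) and pip_dir.startswith(user_site)

    # A front end could then default to a user install:
    args = ['install', 'SomeProject']
    if pip_installed_in_user_site():
        args.insert(1, '--user')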
In fact I consider it a bug that it does not, and I > hope the pip devs will come to think that too :-) I don't have an opinion on this as I can't think of a single time I (personally) want to use the user local site-packages so that'd be something to convince the other pip devs of :D > > > Richard ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From carl at oddbird.net Wed Jul 10 06:19:01 2013 From: carl at oddbird.net (Carl Meyer) Date: Tue, 09 Jul 2013 22:19:01 -0600 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: <51DCE0B5.6030506@oddbird.net> Hi Richard, On 07/09/2013 09:47 PM, Richard Jones wrote: > On 10 July 2013 13:20, Donald Stufft wrote: >> On Jul 9, 2013, at 11:16 PM, Richard Jones wrote: >> Firstly, I've just made some additional changes to PEP 439 to include: >> >> - installing virtualenv as well (so now pip, setuptools and virtualenv are installed) >> >> >> doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? What's the purpose of including virtualenv in the bootstrap? http://www.python.org/dev/peps/pep-0405/ > > It's my understanding that people still install virtualenv in py3k. They certainly do today, but that's primarily because pyvenv isn't very useful yet, since the stdlib has no installer and thus a newly-created pyvenv has no way to install anything in it. The bootstrap should fix this very problem (i.e. make an installer available in every newly-created pyvenv) and thus encourage use of pyvenv (which is simpler, more reliable, and built-in) in place of virtualenv. I don't think it makes sense for the stdlib bootstrapper to install an inferior third-party tool instead of using a tool that is now built-in to the standard library on 3.3+. Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make sense for it to install virtualenv there (or, probably even better, for pyvenv to be backported along with the bootstrap). Carl From richard at python.org Wed Jul 10 06:54:18 2013 From: richard at python.org (Richard Jones) Date: Wed, 10 Jul 2013 14:54:18 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <51DCE0B5.6030506@oddbird.net> References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 10 July 2013 14:19, Carl Meyer wrote: > They certainly do today, but that's primarily because pyvenv isn't very > useful yet, since the stdlib has no installer and thus a newly-created > pyvenv has no way to install anything in it. Ah, thanks for clarifying that. > Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make > sense for it to install virtualenv there (or, probably even better, for > pyvenv to be backported along with the bootstrap). I intend to create two forks; one for consideration in a 2.7.5 release as "pip" and the other for users of 2.6+ called "get-pip.py". Richard From p.f.moore at gmail.com Wed Jul 10 08:28:21 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Jul 2013 07:28:21 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <51DCE0B5.6030506@oddbird.net> References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 10 July 2013 05:19, Carl Meyer wrote: > > It's my understanding that people still install virtualenv in py3k. 
> > They certainly do today, but that's primarily because pyvenv isn't very > useful yet, since the stdlib has no installer and thus a newly-created > pyvenv has no way to install anything in it. > One other problem I have, personally, with pyvenv, is that the activate code for powershell is significantly less user-friendly than that in virtualenv. Add to that the fact that the Python release cycle is significantly slower than that of virtualenv, and using dev versions of Python is far less practical for day to day use, and that's why I stick to virtualenv at the moment (that and the pip point mentioned already). I really ought to post a patch for Python to upgrade the activate script to use the one from virtualenv. Are there any licensing/ownership issues that might make this problematic? For example, the script is signed and I don't know if that signature is attributable to someone specific... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Wed Jul 10 10:28:22 2013 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 10 Jul 2013 04:28:22 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/09/2013 11:20 PM, Donald Stufft wrote: > doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? > What's the purpose of including virtualenv in the bootstrap? > http://www.python.org/dev/peps/pep-0405/ Environments generated by pyvenv lack setuptools, which makes them un-useful compared to those generated by virtualenv. Virtualenv is also useful across the important set of Python versions (2.6, 2.7, 3.2, 3.3), which pyvenv (or any shipped-in-core varieant) can never be. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlHdGyYACgkQ+gerLs4ltQ5gMQCfZuHj7XyIWv+Wru0rA5VTk//1 JxkAoILDxz0Yn8zOLWP0jOGCc/gDikY8 =15US -----END PGP SIGNATURE----- From vinay_sajip at yahoo.co.uk Wed Jul 10 11:08:37 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 10 Jul 2013 09:08:37 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: Richard Jones gmail.com> writes: > pip without virtualenv in python 2 contexts is pretty rare (or at > least *should* be ) so I think I'll retain it in that bootstrap > code. Perhaps I misunderstand, but what's the relevance of Python 2 contexts here? Aren't we talking about Python 3.4 and later? I agree with Donald's suggestion that virtualenv *not* be included, or are you saying that you want to include it for those users who have 3.4 *and* 2.x installed? If you include virtualenv, it makes it possible for people to mistakenly use it even though the recommended approach is to use the built-in venv support in Python. Exactly what benefit does including virtualenv provide in a 3.4+ installation? 
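On Carl's point that the bootstrap is what will make a bare pyvenv useful: until that integration lands, the two can be glued together by hand. A sketch against the 3.3 venv API; 'getpip.py' stands in for a local copy of the bootstrap script, not a real file name:

    import os.path
    import subprocess
    import venv  # Python 3.3+

    class BootstrapEnvBuilder(venv.EnvBuilder):
        # Create a pyvenv, then run a pip bootstrap script inside it so the
        # fresh environment has an installer available.
        def post_setup(self, context):
            script = os.path.abspath('getpip.py')
            subprocess.check_call([context.env_exe, script])

    if __name__ == '__main__':
        BootstrapEnvBuilder(clear=True).create('myenv')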
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Jul 10 11:16:47 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 10 Jul 2013 09:16:47 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: Carl Meyer oddbird.net> writes: > They certainly do today, but that's primarily because pyvenv isn't very > useful yet, since the stdlib has no installer and thus a newly-created > pyvenv has no way to install anything in it. True, though I've provided a script to do that very thing: https://gist.github.com/vsajip/4673395 Of course, that'll now need to be changed to install setuptools rather than distribute :-) Regards, Vinay Sajip From richard at python.org Wed Jul 10 11:38:31 2013 From: richard at python.org (Richard Jones) Date: Wed, 10 Jul 2013 19:38:31 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: Message-ID: On 10 July 2013 18:28, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 07/09/2013 11:20 PM, Donald Stufft wrote: > >> doesn't "PyEnv" which is bundled with Python 3.3+ replace virtualenv? >> What's the purpose of including virtualenv in the bootstrap? >> http://www.python.org/dev/peps/pep-0405/ > > Environments generated by pyvenv lack setuptools, which makes them > un-useful compared to those generated by virtualenv. Yes, but Python 3.4 will have the pip bootstrap which automatically installs setuptools. Unless you mean that pyvenv itself (sans pip) would be more useful with setuptools? > Virtualenv is also > useful across the important set of Python versions (2.6, 2.7, 3.2, 3.3), > which pyvenv (or any shipped-in-core varieant) can never be. Yes, that's why I suggested the Python 2 version will install virtualenv :-) There's currently no plan to release a Python 3.3 version of the bootstrap, and certainly not one for a Py3k version lower than that. Hm. We can think about it though. Richard From richard at python.org Wed Jul 10 11:40:26 2013 From: richard at python.org (Richard Jones) Date: Wed, 10 Jul 2013 19:40:26 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On 10 July 2013 19:08, Vinay Sajip wrote: > Richard Jones gmail.com> writes: > >> pip without virtualenv in python 2 contexts is pretty rare (or at >> least *should* be ) so I think I'll retain it in that bootstrap >> code. > > Perhaps I misunderstand, but what's the relevance of Python 2 contexts here? > Aren't we talking about Python 3.4 and later? It makes sense to me (and Nick) to simplify the packaging overhead for users of Python 2. Currently the story is a bit of a mess (multiple sites with different approaches). > Exactly what benefit does including virtualenv provide in a 3.4+ > installation? That was kinda my question Richard From vinay_sajip at yahoo.co.uk Wed Jul 10 11:55:41 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 10 Jul 2013 09:55:41 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: Richard Jones python.org> writes: > It makes sense to me (and Nick) to simplify the packaging overhead for > users of Python 2. Currently the story is a bit of a mess (multiple > sites with different approaches). No argument there, but I still don't see the relevance of virtualenv in a 3.4+ context. 
The PEP states "Hereafter the installation of the 'pip implementation' will imply installation of setuptools and virtualenv." and, a few lines further down, "The bootstrap process will proceed as follows: 1. The user system has Python (3.4+) installed." I don't see any mention of backporting this bootstrap to 2.x. > > Exactly what benefit does including virtualenv provide in a 3.4+ > > installation? > > That was kinda my question Sorry, it didn't come across like a question, more like a fait accompli :-) Regards, Vinay Sajip From richard at python.org Wed Jul 10 12:33:16 2013 From: richard at python.org (Richard Jones) Date: Wed, 10 Jul 2013 20:33:16 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On 10 July 2013 19:55, Vinay Sajip wrote: > Richard Jones python.org> writes: > >> It makes sense to me (and Nick) to simplify the packaging overhead for >> users of Python 2. Currently the story is a bit of a mess (multiple >> sites with different approaches). > > No argument there, but I still don't see the relevance of virtualenv in a > 3.4+ context. The PEP states > > "Hereafter the installation of the 'pip implementation' will imply > installation of setuptools and virtualenv." Sorry I've not made this clearer. Per the discussion here I've removed that from the PEP. That version hasn't been built on the web server yet. >> > Exactly what benefit does including virtualenv provide in a 3.4+ >> > installation? >> >> That was kinda my question > > Sorry, it didn't come across like a question, more like a fait accompli :-) Poorly phrased, my apologies. Richard From brett at python.org Wed Jul 10 14:46:12 2013 From: brett at python.org (Brett Cannon) Date: Wed, 10 Jul 2013 08:46:12 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Wed, Jul 10, 2013 at 12:54 AM, Richard Jones wrote: > On 10 July 2013 14:19, Carl Meyer wrote: > > They certainly do today, but that's primarily because pyvenv isn't very > > useful yet, since the stdlib has no installer and thus a newly-created > > pyvenv has no way to install anything in it. > > Ah, thanks for clarifying that. > > > > Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make > > sense for it to install virtualenv there (or, probably even better, for > > pyvenv to be backported along with the bootstrap). > > I intend to create two forks; one for consideration in a 2.7.5 release > as "pip" and the other for users of 2.6+ called "get-pip.py". > Why the specific shift between 2.7 and 2.6 in terms of naming? I realize you are differentiating between the bootstrap being pre-installed with Python vs. not, but is there really anything wrong with the script being called pip (or pip3 for Python 3.3/3.2) if it knows how to do the right thing to get pip up and going? IOW why not make the bootstrap what everyone uses to install pip and it just so happens to come pre-installed with Python 3.4 (and maybe Python 2.7)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jul 10 15:43:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Jul 2013 14:43:59 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 10 July 2013 13:46, Brett Cannon wrote: > pip (or pip3 for Python 3.3/3.2) Sorry to butt in here, but can I just catch this point. 
There seems to be an ongoing series of assumptions over whether the bootstrap is called pip or pip3. The pep actually says the bootstrap will be called pip3, but I'm not happy with that - specifically because the *existing* pip is not called pip3. So, at present, if I (as a 100% Python 3 user) want to install a package, I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or if I want the currently active virtualenv, "python". I would find it distinctly irritating if in Python 3.4 I have to type "pip3 bootstrap" to bootstrap pip - and even worse if *after* the bootstrap the command I use is still "pip". (And no, there is currently no "pip3" command installed by pip, and even if there were, I would not want to use it, I'm happy with the unsuffixed version). I appreciate that Unix users have different compatibility priorities here, but can I propose that on Windows at least, the bootstrap command is "pip" and that matches the "core" pip that will be downloaded? Oh - and one other thing, on Windows python is often not on the system PATH - that's what the py.exe launcher is for. So where will the pip bootstrap command be installed, and where will it install the real pip? And also, will the venv code be modified to install the pip bootstrap in the venv's Scripts directory? Does virtualenv need to change to do the same? What if pip has already been bootstrapped in the system Python? Maybe I need to properly review the PEP rather than just throwing out random thoughts :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 10 16:02:15 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Jul 2013 10:02:15 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: When bundled the script is supposed to mask the fact you don't have pip installed. Basically if you type pip3 install requests it will first install setuptools and pip and then pass the command into the real pip. If it was called get pip then the workflow would be "attempt to install", "run get-pip", "rerun the original install command" On Jul 10, 2013, at 8:46 AM, Brett Cannon wrote: > > > > On Wed, Jul 10, 2013 at 12:54 AM, Richard Jones wrote: >> On 10 July 2013 14:19, Carl Meyer wrote: >> > They certainly do today, but that's primarily because pyvenv isn't very >> > useful yet, since the stdlib has no installer and thus a newly-created >> > pyvenv has no way to install anything in it. >> >> Ah, thanks for clarifying that. >> >> >> > Certainly if the bootstrap is ever ported to 2.7 or 3.2, it would make >> > sense for it to install virtualenv there (or, probably even better, for >> > pyvenv to be backported along with the bootstrap). >> >> I intend to create two forks; one for consideration in a 2.7.5 release >> as "pip" and the other for users of 2.6+ called "get-pip.py". > > Why the specific shift between 2.7 and 2.6 in terms of naming? I realize you are differentiating between the bootstrap being pre-installed with Python vs. not, but is there really anything wrong with the script being called pip (or pip3 for Python 3.3/3.2) if it knows how to do the right thing to get pip up and going? IOW why not make the bootstrap what everyone uses to install pip and it just so happens to come pre-installed with Python 3.4 (and maybe Python 2.7)? 
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Jul 10 16:28:12 2013 From: brett at python.org (Brett Cannon) Date: Wed, 10 Jul 2013 10:28:12 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Wed, Jul 10, 2013 at 9:43 AM, Paul Moore wrote: > On 10 July 2013 13:46, Brett Cannon wrote: > >> pip (or pip3 for Python 3.3/3.2) > > > Sorry to butt in here, but can I just catch this point. There seems to be > an ongoing series of assumptions over whether the bootstrap is called pip > or pip3. The pep actually says the bootstrap will be called pip3, but I'm > not happy with that - specifically because the *existing* pip is not called > pip3. > > So, at present, if I (as a 100% Python 3 user) want to install a package, > I type "pip install XXX". No version suffix. In the same way, to invoke > Python, I type "py" (I'm on Windows here) or if I want the currently active > virtualenv, "python". > But you should be typing python3 here, not python (and PEP 394 is trying to get people to start using python2 as the name to invoke). > > I would find it distinctly irritating if in Python 3.4 I have to type > "pip3 bootstrap" to bootstrap pip - and even worse if *after* the bootstrap > the command I use is still "pip". (And no, there is currently no "pip3" > command installed by pip, and even if there were, I would not want to use > it, I'm happy with the unsuffixed version). > > As Donald pointed out, you would always use pip3. The bootstrapping aspect is a behind-the-scenes thing; just consider the script as "launch pip if installed, else, bootstrap it in and then launch it". > I appreciate that Unix users have different compatibility priorities here, > but can I propose that on Windows at least, the bootstrap command is "pip" > and that matches the "core" pip that will be downloaded? > > There won't be a difference in command-line usage. > Oh - and one other thing, on Windows python is often not on the system > PATH - that's what the py.exe launcher is for. So where will the pip > bootstrap command be installed, and where will it install the real pip? > Covered in the PEP: it will go into the user installation location as if --user had been specified. > And also, will the venv code be modified to install the pip bootstrap in > the venv's Scripts directory? > In the PEP: goes into the venv. > Does virtualenv need to change to do the same? What if pip has already > been bootstrapped in the system Python? > Then nothing special happens; the script just executes pip instead of triggering a bootstrap first. > Maybe I need to properly review the PEP rather than just throwing out > random thoughts :-) > I feel like I just fed a bad habit. =) -------------- next part -------------- An HTML attachment was scrubbed... 
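Sketched out, that "launch pip if installed, else bootstrap it in and then launch it" behaviour is roughly the following; bootstrap_pip() is a placeholder for the actual download/install step, and pip.main() is assumed to be pip's programmatic entry point:

    import importlib
    import sys

    def bootstrap_pip():
        """Placeholder: fetch and install setuptools and pip."""

    def main(args=None):
        if args is None:
            args = sys.argv[1:]
        try:
            import pip
        except ImportError:
            bootstrap_pip()
            importlib.invalidate_caches()  # make the new install importable
            import pip
        return pip.main(args)              # hand the original command line to pip

    if __name__ == "__main__":
        sys.exit(main())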
URL: From ronaldoussoren at mac.com Wed Jul 10 16:37:52 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 10 Jul 2013 16:37:52 +0200 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On 10 Jul, 2013, at 11:40, Richard Jones wrote: > On 10 July 2013 19:08, Vinay Sajip wrote: >> Richard Jones gmail.com> writes: >> >>> pip without virtualenv in python 2 contexts is pretty rare (or at >>> least *should* be ) so I think I'll retain it in that bootstrap >>> code. >> >> Perhaps I misunderstand, but what's the relevance of Python 2 contexts here? >> Aren't we talking about Python 3.4 and later? > > It makes sense to me (and Nick) to simplify the packaging overhead for > users of Python 2. Currently the story is a bit of a mess (multiple > sites with different approaches). New features in a bugfix release? You better hope the RM doesn't hear :-) That said, 2.7 will be around for a while and adding a consistent installation experience to both 3.4 and 2.7 does sound attractive and adding a new script shouldn't have that many side effects. What about backporting pyvenv as well? I guess that's way too invasive for a bugfix release. Ronald From erik.m.bray at gmail.com Wed Jul 10 17:35:10 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 10 Jul 2013 11:35:10 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <5C868F28-F2E9-4960-9E9C-2C766F7F15D2@stufft.io> Message-ID: On Wed, Jul 10, 2013 at 12:37 AM, Richard Jones wrote: > pip without virtualenv in python 2 contexts is pretty rare (or at > least *should* be ) so I think I'll retain it in that bootstrap > code. I agree it *should* be rare in most cases but it most assuredly is not. I can tell you from experience that a lot of people in the scientific community, for example, do not use virtualenv (sometimes with good reasons, but more often not). Erik From barry at python.org Wed Jul 10 17:21:41 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Jul 2013 11:21:41 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: <20130710112141.603dd396@anarchist> On Jul 10, 2013, at 02:43 PM, Paul Moore wrote: >I would find it distinctly irritating if in Python 3.4 I have to type "pip3 >bootstrap" to bootstrap pip - and even worse if *after* the bootstrap the >command I use is still "pip". (And no, there is currently no "pip3" command >installed by pip, and even if there were, I would not want to use it, I'm >happy with the unsuffixed version). I have a lot of sympathy for this, and the general issue has come up in a number of different contexts, e.g. nostests/nosetests3 and so on. On a distro like Debian, this just adds more gunk to /usr/bin, especially since some scripts are also minor-version dependent. One approach is to use `$python -m nose` or in this case `$python -m pip` which cuts down on the number of scripts, is unambiguous, but is far from convenient and may not work in all cases, e.g. for older Python's that don't support -m or don't support it for packages. I think there was a thread on python-ideas about this, but in the back of my mind, I have this horrible idea for a version-aware relauncher you could use in your shebang line. Something like: #! 
/usr/bin/pylaunch So that you could do something like: $ nosetests -3 $ nosetests -2 $ nosetests -3.3 $ nosetests -2.7 and it would relaunch itself using the correct Python version, consuming the version argument so the actual script wouldn't see it. I'm not sure if the convenience is worth it, and I'm sorry for making you throw up a little in your mouth there. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From p.f.moore at gmail.com Wed Jul 10 18:11:11 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Jul 2013 17:11:11 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 10 July 2013 15:28, Brett Cannon wrote: > So, at present, if I (as a 100% Python 3 user) want to install a package, >> I type "pip install XXX". No version suffix. In the same way, to invoke >> Python, I type "py" (I'm on Windows here) or if I want the currently active >> virtualenv, "python". >> > > But you should be typing python3 here, not python (and PEP 394 is trying > to get people to start using python2 as the name to invoke). > So - that's a major behaviour change on Windows. At the moment, Python 3.3 for Windows installs python.exe and pythonw.exe. There are no versioned executables at all. Are you saying that in 3.4 this will change? That will break so many things I have to believe you're wrong or I've misunderstood you. OTOH, adding python3.exe and python3w.exe (or is that pythonw3.exe?) which I can then ignore is fine by me (but in that case, the change doesn't affect my point about the pip command). As I say, I understand Unix is different. This is a purely Windows point - and in the context of the PEP, that's what I'm saying, please can we be careful to be clear whether the plan is for the new pip bootstrap to favour existing platform conventions or uniformity (with the further complication of needing to consider the full pip distribution's behaviour - and there, I will be lobbying hard against any change to require a pip3 command to be used, at least on Windows). As things stand, I can assume the PEP specifies Unix behaviour and is vague or silent on Windows variations, or I can ask for clarification, and for the results to be documented in the PEP. Up to now I was doing the former, but I'm moving towards the latter - hence my question(s). Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Jul 10 20:55:00 2013 From: brett at python.org (Brett Cannon) Date: Wed, 10 Jul 2013 14:55:00 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Wed, Jul 10, 2013 at 12:11 PM, Paul Moore wrote: > On 10 July 2013 15:28, Brett Cannon wrote: > >> So, at present, if I (as a 100% Python 3 user) want to install a package, >>> I type "pip install XXX". No version suffix. In the same way, to invoke >>> Python, I type "py" (I'm on Windows here) or if I want the currently active >>> virtualenv, "python". >>> >> >> But you should be typing python3 here, not python (and PEP 394 is trying >> to get people to start using python2 as the name to invoke). >> > > So - that's a major behaviour change on Windows. At the moment, Python 3.3 > for Windows installs python.exe and pythonw.exe. There are no versioned > executables at all. Are you saying that in 3.4 this will change? 
That will > break so many things I have to believe you're wrong or I've misunderstood > you. OTOH, adding python3.exe and python3w.exe (or is that pythonw3.exe?) > which I can then ignore is fine by me (but in that case, the change doesn't > affect my point about the pip command). > Didn't know Windows was never updated to use a versioned binary. That's rather unfortunate. -Brett > > As I say, I understand Unix is different. This is a purely Windows point - > and in the context of the PEP, that's what I'm saying, please can we be > careful to be clear whether the plan is for the new pip bootstrap to favour > existing platform conventions or uniformity (with the further complication > of needing to consider the full pip distribution's behaviour - and there, I > will be lobbying hard against any change to require a pip3 command to be > used, at least on Windows). > > As things stand, I can assume the PEP specifies Unix behaviour and is > vague or silent on Windows variations, or I can ask for clarification, and > for the results to be documented in the PEP. Up to now I was doing the > former, but I'm moving towards the latter - hence my question(s). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Jul 10 22:30:11 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 11 Jul 2013 06:30:11 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 11 Jul 2013 04:56, "Brett Cannon" wrote: > > > > > On Wed, Jul 10, 2013 at 12:11 PM, Paul Moore wrote: >> >> On 10 July 2013 15:28, Brett Cannon wrote: >>>> >>>> So, at present, if I (as a 100% Python 3 user) want to install a package, I type "pip install XXX". No version suffix. In the same way, to invoke Python, I type "py" (I'm on Windows here) or if I want the currently active virtualenv, "python". >>> >>> >>> But you should be typing python3 here, not python (and PEP 394 is trying to get people to start using python2 as the name to invoke). >> >> >> So - that's a major behaviour change on Windows. At the moment, Python 3.3 for Windows installs python.exe and pythonw.exe. There are no versioned executables at all. Are you saying that in 3.4 this will change? That will break so many things I have to believe you're wrong or I've misunderstood you. OTOH, adding python3.exe and python3w.exe (or is that pythonw3.exe?) which I can then ignore is fine by me (but in that case, the change doesn't affect my point about the pip command). > > > Didn't know Windows was never updated to use a versioned binary. That's rather unfortunate. Hence the PyLauncher project. Paul's right, though - the PEP is currently very *nix-centric. For Windows, we likely need to consider something based on "py -m pip", which then raises the question of whether or not that's what we should be supporting on *nix as well (with pip and pip3 as convenient shorthand). There's also the fact that the Python launcher is *already* available as a separate Windows installer for earlier releases. Perhaps we should just be bundling the bootstrap script with that for earlier Windows releases. Cheers, Nick. > > -Brett > >> >> >> As I say, I understand Unix is different. 
This is a purely Windows point - and in the context of the PEP, that's what I'm saying, please can we be careful to be clear whether the plan is for the new pip bootstrap to favour existing platform conventions or uniformity (with the further complication of needing to consider the full pip distribution's behaviour - and there, I will be lobbying hard against any change to require a pip3 command to be used, at least on Windows). >> >> As things stand, I can assume the PEP specifies Unix behaviour and is vague or silent on Windows variations, or I can ask for clarification, and for the results to be documented in the PEP. Up to now I was doing the former, but I'm moving towards the latter - hence my question(s). > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jul 10 22:50:25 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Jul 2013 21:50:25 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 10 July 2013 21:30, Nick Coghlan wrote: > > On 11 Jul 2013 04:56, "Brett Cannon" wrote: > > > > Didn't know Windows was never updated to use a versioned binary. That's > rather unfortunate. > > Hence the PyLauncher project. > > Paul's right, though - the PEP is currently very *nix-centric. For > Windows, we likely need to consider something based on "py -m pip", which > then raises the question of whether or not that's what we should be > supporting on *nix as well (with pip and pip3 as convenient shorthand). > > There's also the fact that the Python launcher is *already* available as a > separate Windows installer for earlier releases. Perhaps we should just be > bundling the bootstrap script with that for earlier Windows releases. > Thanks Nick. I was part way through a much more laboured email basically saying the same thing :-) For reference, PEP 394 is the versioned binary PEP. It is explicitly Unix only and defers Windows to PEP 397 (pylauncher) as being "too complex" to cover alongside the Unix proposal :-) I think "python -m pip" should be the canonical form (used in documentation, examples, etc). The unittest module has taken this route, as has timeit. Traditionally, python-dev have been lukewarm about the -m interface, but its key advantage is that it bypasses all the issues around versioned executables, cross-platform issues, the general dreadfulness of script wrappers on Windows, etc, in one fell swoop. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Jul 10 23:39:42 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 10 Jul 2013 17:39:42 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: <20130710173942.3cecfa2c@anarchist> On Jul 10, 2013, at 09:50 PM, Paul Moore wrote: >I think "python -m pip" should be the canonical form (used in >documentation, examples, etc). The unittest module has taken this route, as >has timeit. Traditionally, python-dev have been lukewarm about the -m >interface, but its key advantage is that it bypasses all the issues around >versioned executables, cross-platform issues, the general dreadfulness of >script wrappers on Windows, etc, in one fell swoop. 
+1 -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From donald at stufft.io Thu Jul 11 00:00:59 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 10 Jul 2013 18:00:59 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <20130710173942.3cecfa2c@anarchist> References: <51DCE0B5.6030506@oddbird.net> <20130710173942.3cecfa2c@anarchist> Message-ID: <8CF28CAB-0D28-4FFC-BCC7-B63932BF37FA@stufft.io> On Jul 10, 2013, at 5:39 PM, Barry Warsaw wrote: > On Jul 10, 2013, at 09:50 PM, Paul Moore wrote: > >> I think "python -m pip" should be the canonical form (used in >> documentation, examples, etc). The unittest module has taken this route, as >> has timeit. Traditionally, python-dev have been lukewarm about the -m >> interface, but its key advantage is that it bypasses all the issues around >> versioned executables, cross-platform issues, the general dreadfulness of >> script wrappers on Windows, etc, in one fell swoop. > > +1 > > -Barry > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig As long as the non -m way exists so I don't have to use it D: ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Jul 11 00:20:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 10 Jul 2013 23:20:56 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <8CF28CAB-0D28-4FFC-BCC7-B63932BF37FA@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <20130710173942.3cecfa2c@anarchist> <8CF28CAB-0D28-4FFC-BCC7-B63932BF37FA@stufft.io> Message-ID: On 10 July 2013 23:00, Donald Stufft wrote: > As long as the non -m way exists so I don't have to use it D: Fair enough :-) Having a standard method (-m) and a platform-specific Unix method seems fine to me (and the Unix people can debate the merits of pip3 vs pip etc as much or as little as they want). It'll be nice seeing Unix be the non-standard one for a change :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at python.org Thu Jul 11 03:09:03 2013 From: richard at python.org (Richard Jones) Date: Thu, 11 Jul 2013 11:09:03 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 11 July 2013 06:50, Paul Moore wrote: > I think "python -m pip" should be the canonical form (used in documentation, > examples, etc). The unittest module has taken this route, as has timeit. > Traditionally, python-dev have been lukewarm about the -m interface, but its > key advantage is that it bypasses all the issues around versioned > executables, cross-platform issues, the general dreadfulness of script > wrappers on Windows, etc, in one fell swoop. "python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy. 
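To make the "overridden while it is actually running" wrinkle concrete: the obvious version of such a stub does not behave as hoped, because the stub itself is what the import system already knows as "pip" (illustrative sketch only, not anything from the PEP):

    # Naive sketch of a stdlib pip/__main__.py -- this is the "not easy" part.
    import sys

    def bootstrap():
        """Placeholder: install the real pip into site-packages."""

    bootstrap()
    import pip              # sys.modules["pip"] still holds the stdlib stub
    pip.main(sys.argv[1:])  # so this fails; the freshly installed pip is ignored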
Thanks everyone for your brilliant feedback and discussion - I look forward to being able to say something sensible about Windows in the PEP :-) Richard From brett at python.org Thu Jul 11 14:49:33 2013 From: brett at python.org (Brett Cannon) Date: Thu, 11 Jul 2013 08:49:33 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Wed, Jul 10, 2013 at 9:09 PM, Richard Jones wrote: > On 11 July 2013 06:50, Paul Moore wrote: > > I think "python -m pip" should be the canonical form (used in > documentation, > > examples, etc). The unittest module has taken this route, as has timeit. > > Traditionally, python-dev have been lukewarm about the -m interface, but > its > > key advantage is that it bypasses all the issues around versioned > > executables, cross-platform issues, the general dreadfulness of script > > wrappers on Windows, etc, in one fell swoop. > > "python -m pip" does make the bootstrapping a more complex proposition > - the stdlib would have to have something called "pip" that could be > overridden (while it is actually *running*) by something installed in > site-packages. Not easy. > It's also fraught with historical baggage; remember xmlplus? That was extremely painful and something I believe everyone was glad to see go away. Having said that, there are two solutions to this. The compatible solution with older Python versions is to have the bootstrap download pip and have it installed as piplib or some other alternative name that is not masked by a pip stub in the stdlib. The dead-simple, extremely elegant solution (starting in Python 3.4) is to make pip a namespace package in the stdlib with nothing more than a __main__.py file that installs pip; no checking if it's installed and then running it, etc, just blindly install pip. Then, if you install pip as a regular package, it takes precedence and what's in the stdlib is completely ignored (this helps with any possible staleness with the stdlib's bootstrap script vs. what's in pip, etc.). You don't even need to change the __main__.py in pip as it stands today since namespace packages only work if no regular package is found. In case that didn't make sense, here is the file structure:

    python3.4/
        pip/
            __main__.py        # Install pip, nothing more

    ~/.local/
        bin/
            pip                # Literally a shebang and two lines of Python; see below
        lib/python3.4/site-packages/
            pip/               # As it stands today
                __init__.py
                __main__.py
                ...

This also means pip3 literally becomes ``import runpy; runpy.run_module('pip')``, so that is even easier to maintain (assuming pip's bin/ stub isn't already doing that because of backwards-compatibility concerns or something with __main__.py or runpy not existing far enough back, otherwise it should =). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Jul 11 15:33:16 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Jul 2013 14:33:16 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 11 July 2013 13:49, Brett Cannon wrote: > The dead-simple, extremely elegant solution (starting in Python 3.4) is to > make pip a namespace package in the stdlib with nothing more than a > __main__.py file that installs pip; no checking if it's installed and then > running it, etc, just blindly install pip.
Then, if you install pip as a > regular package, it takes precedence and what's in the stdlib is completely > ignored (this helps with any possible staleness with the stdlib's bootstrap > script vs. what's in pip, etc.). You don't even need to change the > __main__.py in pip as it stands today since namespace packages only work if > no regular package is found. > Wow - that is exceptionally cool. I had never realised namespace packages would work like this. > This also means pip3 literally becomes ``import runpy; > runpy.run_module('pip')``, so that is even easier to maintain (assuming > pip's bin/ stub isn't already doing that because of backwards-compatibility > concerns or something with __main__.py or runpy not existing far enough > back, otherwise it should =). > The pip executable script/wrapper currently uses setuptools entry points and wrapper scripts. I'm not a fan of those, so I'd be happy to see the change you suggest, but OTOH they have been like that since long before I was involved with pip, and I have no idea if there are reasons they need to stay that way. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 11 15:39:21 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Jul 2013 09:39:21 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Jul 11, 2013, at 9:33 AM, Paul Moore wrote: > The pip executable script/wrapper currently uses setuptools entry points and wrapper scripts. I'm not a fan of those, so I'd be happy to see the change you suggest, but OTOH they have been like that since long before I was involved with pip, and I have no idea if there are reasons they need to stay that way. Typically the reasoning is because of the .exe wrapper. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Thu Jul 11 16:20:26 2013 From: brett at python.org (Brett Cannon) Date: Thu, 11 Jul 2013 10:20:26 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Thu, Jul 11, 2013 at 9:39 AM, Donald Stufft wrote: > > On Jul 11, 2013, at 9:33 AM, Paul Moore wrote: > > The pip executable script/wrapper currently uses setuptools entry points > and wrapper scripts. I'm not a fan of those, so I'd be happy to see the > change you suggest, but OTOH they have been like that since long before I > was involved with pip, and I have no idea if there are reasons they need to > stay that way. > > > Typically the reasoning is because of the .exe wrapper. > And if people want to promote the -m option then the executable scripts just become a secondary convenience. Plus you can't exactly require setuptools to create those scripts at install-time with Python if that's when they are going to be installed. -------------- next part -------------- An HTML attachment was scrubbed... 
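For reference, the "literally a shebang and two lines of Python" bin/pip stub mentioned above would be roughly the following (the shebang and the extra runpy keyword arguments are illustrative, not settled):

    #!/usr/bin/env python3
    import runpy
    runpy.run_module("pip", run_name="__main__", alter_sys=True)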
URL: From dholth at gmail.com Thu Jul 11 16:29:06 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 11 Jul 2013 10:29:06 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Thu, Jul 11, 2013 at 9:33 AM, Paul Moore wrote: > On 11 July 2013 13:49, Brett Cannon wrote: >> >> The dead-simple, extremely elegant solution (starting in Python 3.4) is to >> make pip a namespace package in the stdlib with nothing more than a >> __main__.py file that installs pip; no checking if it's installed and then >> running it, etc, just blindly install pip. Then, if you install pip as a >> regular package, it takes precedence and what's in the stdlib is completely >> ignored (this helps with any possible staleness with the stdlib's bootstrap >> script vs. what's in pip, etc.). You don't even need to change the >> __main__.py in pip as it stands today since namespace packages only work if >> no regular package is found. > > > Wow - that is exceptionally cool. I had never realised namespace packages > would work like this. Not exceptionally cool ... and that's why the namespace_package form is popular, since the first package in a set of namespace packages that gets it wrong breaks everything. From brett at python.org Thu Jul 11 16:47:56 2013 From: brett at python.org (Brett Cannon) Date: Thu, 11 Jul 2013 10:47:56 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Thu, Jul 11, 2013 at 10:29 AM, Daniel Holth wrote: > On Thu, Jul 11, 2013 at 9:33 AM, Paul Moore wrote: > > On 11 July 2013 13:49, Brett Cannon wrote: > >> > >> The dead-simple, extremely elegant solution (starting in Python 3.4) is > to > >> make pip a namespace package in the stdlib with nothing more than a > >> __main__.py file that installs pip; no checking if it's installed and > then > >> running it, etc, just blindly install pip. Then, if you install pip as a > >> regular package, it takes precedence and what's in the stdlib is > completely > >> ignored (this helps with any possible staleness with the stdlib's > bootstrap > >> script vs. what's in pip, etc.). You don't even need to change the > >> __main__.py in pip as it stands today since namespace packages only > work if > >> no regular package is found. > > > > > > Wow - that is exceptionally cool. I had never realised namespace packages > > would work like this. > > Not exceptionally cool ... and that's why the namespace_package form > is popular, since the first package in a set of namespace packages > that gets it wrong breaks everything. > I'm really not following that sentence. You are saying the idea is bad, but is that in general or for this specific case? And you say it's popular because people get it wrong which breaks everything? And how can namespace packages be popular if they are new to Python 3.3 (the ability to execute them with -m is new in Python 3.4)? Are you talking about pkgutil's extend_path hack because I'm talking about NamespaceLoader in importlib? I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Thu Jul 11 16:52:06 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Jul 2013 10:52:06 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: <80918BCA-78AA-482B-B551-FFDB364A799A@stufft.io> On Jul 11, 2013, at 10:47 AM, Brett Cannon wrote: > I'm just not seeing the downside. We control the stdlib and pip, so we know the expected interaction and we are purposefully using the override mechanics so it's not going to get messed up by us if we consciously use it (and obviously have tests for it). I don't think it's especially a problem for pip. I think Daniel was just speaking how the behavior you suggested we could exploit to make this happen has been a major issue for namespace packages in the past using the other methods. However I'm not sure how it's going to work? python -m pip is going to import the pip namespace package yes? And then when pip is installed it'll shadow that, but in the original process where we ran python -m pip won't the namespace package have been cached in sys.modules already? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Thu Jul 11 17:50:54 2013 From: brett at python.org (Brett Cannon) Date: Thu, 11 Jul 2013 11:50:54 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <80918BCA-78AA-482B-B551-FFDB364A799A@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <80918BCA-78AA-482B-B551-FFDB364A799A@stufft.io> Message-ID: On Thu, Jul 11, 2013 at 10:52 AM, Donald Stufft wrote: > > On Jul 11, 2013, at 10:47 AM, Brett Cannon wrote: > > I'm just not seeing the downside. We control the stdlib and pip, so we > know the expected interaction and we are purposefully using the override > mechanics so it's not going to get messed up by us if we consciously use it > (and obviously have tests for it). > > > I don't think it's especially a problem for pip. I think Daniel was just > speaking how the behavior you suggested we could exploit to make this > happen has been a major issue for namespace packages in the past using the > other methods. > > However I'm not sure how it's going to work? python -m pip is going to > import the pip namespace package yes? And then when pip is installed it'll > shadow that, but in the original process where we ran python -m pip won't > the namespace package have been cached in sys.modules already? > Yes, but you can clear it out of sys.modules before executing runpy to get the desired effect of falling through to the regular package (runpy wouldn't import pip.__main__ so you literally just need ``del sys.modules['pip']``). You could also pull the old pkgutil.extend_path() trick and use the append method on the _NamespacePath object to directly add the new directory that pip was installed to and then import pip.runner.main(), but that feels like more of a hack to me (but then again I'm rather comfortable mucking with the import system =). -------------- next part -------------- An HTML attachment was scrubbed... 
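Putting those pieces together, the stdlib stub's __main__.py could be as small as the sketch below; bootstrap() is a placeholder for the real install step, and the sys.modules/invalidate_caches dance is the fall-through trick just described:

    # Hypothetical stdlib pip/__main__.py (the namespace-package stub).
    import importlib
    import runpy
    import sys

    def bootstrap():
        """Placeholder: install the real pip (and setuptools) into
        site-packages or the active virtual environment."""

    bootstrap()                    # blindly install; no "is it there?" check
    del sys.modules["pip"]         # forget the stub that is currently running
    importlib.invalidate_caches()  # let the finders see the new directory
    # This now resolves to the regular pip package just installed, which
    # shadows the stdlib namespace portion, and runs its __main__ module.
    runpy.run_module("pip", run_name="__main__", alter_sys=True)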
URL: From brett at python.org Thu Jul 11 17:52:30 2013 From: brett at python.org (Brett Cannon) Date: Thu, 11 Jul 2013 11:52:30 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <80918BCA-78AA-482B-B551-FFDB364A799A@stufft.io> Message-ID: On Thu, Jul 11, 2013 at 11:50 AM, Brett Cannon wrote: > > > > On Thu, Jul 11, 2013 at 10:52 AM, Donald Stufft wrote: > >> >> On Jul 11, 2013, at 10:47 AM, Brett Cannon wrote: >> >> I'm just not seeing the downside. We control the stdlib and pip, so we >> know the expected interaction and we are purposefully using the override >> mechanics so it's not going to get messed up by us if we consciously use it >> (and obviously have tests for it). >> >> >> I don't think it's especially a problem for pip. I think Daniel was just >> speaking how the behavior you suggested we could exploit to make this >> happen has been a major issue for namespace packages in the past using the >> other methods. >> >> However I'm not sure how it's going to work? python -m pip is going to >> import the pip namespace package yes? And then when pip is installed it'll >> shadow that, but in the original process where we ran python -m pip won't >> the namespace package have been cached in sys.modules already? >> > > Yes, but you can clear it out of sys.modules before executing runpy to get > the desired effect of falling through to the regular package (runpy > wouldn't import pip.__main__ so you literally just need ``del > sys.modules['pip']``). You could also pull the old pkgutil.extend_path() > trick and use the append method on the _NamespacePath object to directly > add the new directory that pip was installed to and then import > pip.runner.main(), but that feels like more of a hack to me (but then again > I'm rather comfortable mucking with the import system =). > And if you're still worried you can always invalidate the cache of the finder representing the parent directory pip got installed to (or all finder caches if you really want to get jumpy). -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 11 18:21:13 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Jul 2013 12:21:13 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <80918BCA-78AA-482B-B551-FFDB364A799A@stufft.io> Message-ID: <064659DF-D9F6-4C7C-ABDC-00935AE35E89@stufft.io> On Jul 11, 2013, at 11:50 AM, Brett Cannon wrote: > Yes, but you can clear it out of sys.modules before executing runpy to get the desired effect of falling through to the regular package (runpy wouldn't import pip.__main__ so you literally just need ``del sys.modules['pip']``). You could also pull the old pkgutil.extend_path() trick and use the append method on the _NamespacePath object to directly add the new directory that pip was installed to and then import pip.runner.main(), but that feels like more of a hack to me (but then again I'm rather comfortable mucking with the import system =). Ok, Just making sure :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Thu Jul 11 19:05:33 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 11 Jul 2013 13:05:33 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Thu, Jul 11, 2013 at 10:20 AM, Brett Cannon wrote: > And if people want to promote the -m option then the executable scripts just > become a secondary convenience. Plus you can't exactly require setuptools to > create those scripts at install-time with Python if that's when they are > going to be installed. You don't need setuptools in order to include .exe wrappers, though: there's nothing setuptools-specific about the .exe files, they just run a matching, adjacent 'foo-script.py', which can contain whatever you want. Just take the appropriate wrapper .exe, and rename it to whatever 'foo' you want. IOW, if you want to ship a pip.exe on windows that just does "from pip import __main__; __main__()" (or whatever), you can do precisely that, no setuptools needed. From p.f.moore at gmail.com Thu Jul 11 19:41:13 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Jul 2013 18:41:13 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 11 July 2013 18:05, PJ Eby wrote: > On Thu, Jul 11, 2013 at 10:20 AM, Brett Cannon wrote: > > And if people want to promote the -m option then the executable scripts > just > > become a secondary convenience. Plus you can't exactly require > setuptools to > > create those scripts at install-time with Python if that's when they are > > going to be installed. > > You don't need setuptools in order to include .exe wrappers, though: > there's nothing setuptools-specific about the .exe files, they just > run a matching, adjacent 'foo-script.py', which can contain whatever > you want. Just take the appropriate wrapper .exe, and rename it to > whatever 'foo' you want. > > IOW, if you want to ship a pip.exe on windows that just does "from pip > import __main__; __main__()" (or whatever), you can do precisely that, > no setuptools needed. With the launcher, a .py file with the relevant #! line set pretty much covers things. It's not an exe, although there are very few things I know of that need specifically an exe file, and if you want to omit the ".py" suffix when invoking it you need to add .py to PATHEXT. But actual exe launchers are much less critical nowadays, I believe. What *is* important, though, is some level of consistency. Before setuptools promoted the idea of declaraive entry points, distributions shipped with a ridiculous variety of attempts to make cross-platform launchers (many of which didn't work very well). I care a lot more about promoting a consistent cross-platform approach than about arguing for any particular solution... Paul Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
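Back on the .exe wrapper point: the pairing described there is just the generic launcher renamed and dropped next to a plain script, e.g. a Scripts/pip.exe alongside a Scripts/pip-script.py whose entire body is something like this (illustrative only, and assuming pip.main() reads sys.argv itself when called with no arguments):

    #!python
    # pip-script.py -- run by the adjacent, renamed pip.exe launcher
    import sys
    import pip
    sys.exit(pip.main())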
URL: From ncoghlan at gmail.com Thu Jul 11 23:48:07 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Jul 2013 07:48:07 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: (Oops, started this yesterday, got distracted and never hit send) On 11 July 2013 11:09, Richard Jones wrote: > > On 11 July 2013 06:50, Paul Moore wrote: > > I think "python -m pip" should be the canonical form (used in documentation, > > examples, etc). The unittest module has taken this route, as has timeit. > > Traditionally, python-dev have been lukewarm about the -m interface, but its > > key advantage is that it bypasses all the issues around versioned > > executables, cross-platform issues, the general dreadfulness of script > > wrappers on Windows, etc, in one fell swoop. > > "python -m pip" does make the bootstrapping a more complex proposition > - the stdlib would have to have something called "pip" that could be > overridden (while it is actually *running*) by something installed in > site-packages. Not easy. I was thinking about that, and I'm wondering if the most sensible option may be to claim the "getpip" name on PyPI for ourselves and then do the following: 1. Provide "getpip" in the standard library for 3.4+ (and perhaps in a 2.7.x release) 2. Install it to site-packages in the "Python launcher for Windows" installer for earlier versions getpip would expose at least one function: def bootstrap(index_url=None, system_install=False): ... And executing it as a main module would either: 1. Do nothing, if "import pip" already works 2. Call bootstrap with the appropriate arguments That way, installation instructions can simply say to unconditionally do: python -m getpip And that will either: 1. Report that pip is already installed; 2. Bootstrap pip into the user environment; or 3. Emit a distro-specific message if the distro packagers want to push users to use the system pip instead (since they get to patch the system Python and can tweak the system getpip however they want) The 2.7 change would then be to create a new download that bundles the Windows launcher into the Windows installer. Users aren't stupid - the problem with the status quo is really that the bootstrapping instructions are annoyingly complicated and genuinely confusing, not that an explicit bootstrapping step is needed in the first place. Cheers, Nick. > > Thanks everyone for your brilliant feedback and discussion - I look > forward to being able to say something sensible about Windows in the > PEP :-) > > > > Richard -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia On 11 July 2013 06:50, Paul Moore wrote: > I think "python -m pip" should be the canonical form (used in documentation, > examples, etc). The unittest module has taken this route, as has timeit. > Traditionally, python-dev have been lukewarm about the -m interface, but its > key advantage is that it bypasses all the issues around versioned > executables, cross-platform issues, the general dreadfulness of script > wrappers on Windows, etc, in one fell swoop. "python -m pip" does make the bootstrapping a more complex proposition - the stdlib would have to have something called "pip" that could be overridden (while it is actually *running*) by something installed in site-packages. Not easy. 
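For concreteness, a getpip module along those lines might look like the sketch below; only the bootstrap() signature and the "do nothing if import pip already works" behaviour come from the proposal, the rest is placeholder:

    # Hypothetical getpip.py
    import sys

    def bootstrap(index_url=None, system_install=False):
        """Install pip for this interpreter.

        The actual fetch-and-install logic is elided; index_url and
        system_install mirror the arguments named above.
        """
        raise NotImplementedError

    def main():
        try:
            import pip  # only checking that it is importable
        except ImportError:
            bootstrap()
        else:
            print("pip is already installed")

    if __name__ == "__main__":
        sys.exit(main())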
From dholth at gmail.com Thu Jul 11 23:57:16 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 11 Jul 2013 17:57:16 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: +1. No magic side effects will make everyone happier. On Jul 11, 2013 5:48 PM, "Nick Coghlan" wrote: > (Oops, started this yesterday, got distracted and never hit send) > [SNIP] From carl at oddbird.net Fri Jul 12 00:00:01 2013 From: carl at oddbird.net (Carl Meyer) Date: Thu, 11 Jul 2013 16:00:01 -0600 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: <51DF2AE1.2010803@oddbird.net> On 07/11/2013 03:48 PM, Nick Coghlan wrote: > I was thinking about that, and I'm wondering if the most sensible option > may be to claim the "getpip" name on PyPI for ourselves and then do the > following: > [SNIP] > Users aren't stupid - the problem with the status quo is really that the > bootstrapping instructions are annoyingly complicated and genuinely > confusing, not that an explicit bootstrapping step is needed in the > first place. +1. This sounds far better to me than the implicit bootstrapping. Carl From donald at stufft.io Fri Jul 12 00:06:01 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Jul 2013 18:06:01 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <51DF2AE1.2010803@oddbird.net> References: <51DCE0B5.6030506@oddbird.net> <51DF2AE1.2010803@oddbird.net> Message-ID: <55CB6E5F-18E6-4A77-BDFE-531FFE842466@stufft.io> On Jul 11, 2013, at 6:00 PM, Carl Meyer wrote: > On 07/11/2013 03:48 PM, Nick Coghlan wrote: >> I was thinking about that, and I'm wondering if the most sensible option >> may be to claim the "getpip" name on PyPI for ourselves and then do the >> following: >> [SNIP] > +1. This sounds far better to me than the implicit bootstrapping. > > Carl Generally +1; the one negative point I see is that it's kind of a degradation in functionality to need to type ``python -m getpip`` in every PyEnv (coming from virtualenv). Maybe PyEnv can be smart enough to automatically install the pip that's installed in the interpreter it's installed from? Maybe that's too much magic and the answer will be that tools like virtualenvwrapper will continue to exist and wrap that for you. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Fri Jul 12 00:12:49 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 11 Jul 2013 23:12:49 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: +1 Explicit is better than implicit. Amending venv to automatically install pip (as suggested by Donald) may be worth doing. I'm +0 on that (with the proviso that there's a --no-pip option in that case). OTOH, the venv module is very extensible and writing your own wrapper to import getpip and call bootstrap is pretty much trivial. On 11 July 2013 22:48, Nick Coghlan wrote: > (Oops, started this yesterday, got distracted and never hit send) > [SNIP] From dholth at gmail.com Fri Jul 12 00:37:02 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 11 Jul 2013 18:37:02 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: I hope we will also arrive at a pip that doesn't need to be individually installed per venv... On Jul 11, 2013 6:13 PM, "Paul Moore" wrote: > +1 Explicit is better than implicit. > [SNIP] From vinay_sajip at yahoo.co.uk Fri Jul 12 03:53:29 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 01:53:29 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: Daniel Holth gmail.com> writes: > I hope we will also arrive at a pip that doesn't need to be individually > installed per venv... You mean, like distil? :-) Regards, Vinay Sajip From richard at python.org Fri Jul 12 04:12:12 2013 From: richard at python.org (Richard Jones) Date: Fri, 12 Jul 2013 12:12:12 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: The point of PEP 439 is that the current situation of "but first do this" for any given 3rd-party package installation was a bad thing and we desire to move away from it. The PEP therefore proposes to allow "just do this" to eventually become the narrative. The direction this conversation is heading is removing that very significant primary benefit, and I'm not convinced there's any point to the PEP in that case. Richard From donald at stufft.io Fri Jul 12 04:19:45 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 11 Jul 2013 22:19:45 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Jul 11, 2013, at 10:12 PM, Richard Jones wrote: > The point of PEP 439 is that the current situation of "but first do > this" for any given 3rd-party package installation was a bad thing and > we desire to move away from it. > [SNIP] Now that I think about it some more I agree (and this was one of my sticking points with PyEnvs). There's already an API given to people who want to run a command to install pip: ``curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python`` Now that's platform dependent obviously but even then I don't see anyone documenting people should do that before installing things and I do think that blessing a script like that in the stdlib seems kind of pointless. The UX of the PEP as written is that whenever you want to install something you run ``pip3 install foo``. The fact that pip _isn't_ bundled with Python and is instead fetched from PyPI is an implementation detail. It provides the major benefit of bundling it with Python without tying packaging to the release cycle of the stdlib (which has proven disastrous with distutils). We should remember that in general people have considered the PyEnv that ships with Python 3.3 an inferior alternative to virtualenv largely in part because they have to fetch setuptools and pip prior to using it whereas in virtualenv they do not.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Fri Jul 12 07:11:14 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Jul 2013 15:11:14 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 12 July 2013 12:12, Richard Jones wrote: > The point of PEP 439 is that the current situation of "but first do > this" for any given 3rd-party package installation was a bad thing and > we desire to move away from it. The PEP therefore proposes to allow > "just do this" to eventually become the narrative. The direction this > conversation is heading is removing that very significant primary > benefit, and I'm not convinced there's any point to the PEP in that > case. > That was never the primary benefit to my mind. The status quo sucks because there is *no* simple answer to "first, do this", not because some kind of bootstrapping is needed. The problem in my view is that the "first, do this" step is currently a mess of various arcane platform dependendent incantations that may or may not work (and may even contradict each other) and can't be readily incorporated into an automatic script because they're not idempotent. Accordingly, I consider simplifying that "first, do this" step to "python -m getpip" to be a major upgrade from the status quo: * unlike curl, wget and "python -c" incantations, it's immediately obvious to a reader what it is supposed to do: "Get pip" * unlike curl, wget and "python -c" incantations, it can be easily made platform independent * unlike curl, wget and "python -c" incantations, it can be easily made idempotent (so it does nothing if it has already been run) * through "getpip.bootstrap" it will provide the infrastructure to easily add automatic bootstrapping to other tools In particular, it establishes the infrastructure to have pyvenv automatically bootstrap the installer into each venv, even when it isn't installed system wide (which is the key missing feature of pyvenv relative to virtualenv). Having the retrieval of pip happen automagically as part of an install command initially sounded nice, but I'm now a firm -1 on that because making it work cleanly in a cross-platform way that doesn't conflict with a manual pip install has proven to require several awkward compromises that make it an ugly solution: * we have to introduce a new install command (pip3 vs pip) to avoid packaging problems on Linux distros * this is then inconsistent with Windows (which doesn't have separate versioning for the Python 3 installation) * we have to introduce new bootstrap arguments to pip * we have to special case installation of pip and its dependencies to avoid odd looking warning messages * the implementation is tricky to explain * it doesn't work nicely with the "py" launcher on Windows (or equivalents that may be added to other platforms) If your reaction is "well, in that case, I don't want to write it anymore", I will be disappointed, but that won't stop me from rejecting this approach and waiting for someone else to volunteer to write the more explicit version based on your existing bootstrap code. 
I'd prefer not to do that though - I'd prefer it if I can persuade you that "python -m getpip" *is* a major upgrade over the status quo that is worth implementing, and one that adheres to the Zen of Python, in particular: * Explicit is better than implicit * Simple is better than complex * Readability counts * Errors should never pass silently, unless explicitly silenced * In the face of ambiguity, refuse the temptation to guess * If the implementation is hard to explain, it's a bad idea. * If the implementation is easy to explain, it may be a good idea. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jul 12 07:19:29 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Jul 2013 15:19:29 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 12 July 2013 15:11, Nick Coghlan wrote: > In particular, it establishes the infrastructure to have pyvenv > automatically bootstrap the installer into each venv, even when it isn't > installed system wide (which is the key missing feature of pyvenv relative > to virtualenv). > The other thing I will note is that *if* we decide to add an implicit bootstrap later (which I doubt will happen, but you never know), then having "getpip" available as an importable module will also make *that* easier to write (since the heart of the current bootstrap code could be replaced by "from getpip import bootstrap") Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Jul 12 10:35:37 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 08:35:37 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: Donald Stufft stufft.io> writes: > We should remember that in general people have considered PyEnv that ships > with Python 3.3 an inferior alternative to virtualenv largely in part > because they have to fetch setuptools and pip prior to using it whereas in > virtualenv they do not. Let's remember, that's a consequence of packaging being pulled from 3.3 - the original plan was to have the ability to install stuff in venvs without third- party software being necessary. There is no real barrier to using setuptools/pip with Python 3.3+ venvs: For example, I published the pyvenvex.py script which creates venvs and installs setuptools and pip in a single step: https://gist.github.com/vsajip/4673395 Admittedly it's "only a Gist" and not especially publicised to the wider community, but that could be addressed. The current situation, as I see it, is a transitional one. When distlib-like functionality becomes available in the stdlib, other approaches will be possible, which improve upon what's possible with setuptools and pip. I've demonstrated some of this using distil. When targeting Python 3.4, shouldn't we be looking further than just advancing the status quo a little bit? It's been said numerous times that "executable setup.py" must go. ISTM that, notwithstanding "practicality beats purity", a pip bootstrap in Python would bless executable setup.py and help to extend its lifespan. 
Regards, Vinay Sajip From ncoghlan at gmail.com Fri Jul 12 15:22:41 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 12 Jul 2013 23:22:41 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On 12 Jul 2013 18:36, "Vinay Sajip" wrote: > > Donald Stufft stufft.io> writes: > > > We should remember that in general people have considered PyEnv that ships > > with Python 3.3 an inferior alternative to virtualenv largely in part > > because they have to fetch setuptools and pip prior to using it whereas in > > virtualenv they do not. > > Let's remember, that's a consequence of packaging being pulled from 3.3 - the > original plan was to have the ability to install stuff in venvs without third- > party software being necessary. > > There is no real barrier to using setuptools/pip with Python 3.3+ venvs: For > example, I published the pyvenvex.py script which creates venvs and installs > setuptools and pip in a single step: > > https://gist.github.com/vsajip/4673395 > > Admittedly it's "only a Gist" and not especially publicised to the wider > community, but that could be addressed. > > The current situation, as I see it, is a transitional one. When distlib-like > functionality becomes available in the stdlib, other approaches will be > possible, which improve upon what's possible with setuptools and pip. I've > demonstrated some of this using distil. When targeting Python 3.4, shouldn't > we be looking further than just advancing the status quo a little bit? > > It's been said numerous times that "executable setup.py" must go. ISTM that, > notwithstanding "practicality beats purity", a pip bootstrap in Python > would bless executable setup.py and help to extend its lifespan. Some day pip will get a "wheel only" mode, and that's the step that will kill off the need to run setup.py on production machines even when using the Python specific tools. Blessing both setuptools and pip as the "obvious way to do it" is designed to give us the wedge we need to start a gradual transition to that world without facing the initial barriers to adoption that were part of what scuttled the distutils2 effort. Cheers, Nick. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Jul 12 15:24:09 2013 From: brett at python.org (Brett Cannon) Date: Fri, 12 Jul 2013 09:24:09 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: On Fri, Jul 12, 2013 at 4:35 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > > > We should remember that in general people have considered PyEnv that > ships > > with Python 3.3 an inferior alternative to virtualenv largely in part > > because they have to fetch setuptools and pip prior to using it whereas > in > > virtualenv they do not. > > Let's remember, that's a consequence of packaging being pulled from 3.3 - > the > original plan was to have the ability to install stuff in venvs without > third- > party software being necessary. I think it's also a consequence of having to remember how to install pip. I don't have the get-pip.py URL memorized in order to pass it to curl to download for executing. 
At least with Nick's suggestion there is nothing more to remember than to run getpip right after you create your venv. It's also a consequence of habit and laziness, both of which programmers are notorious for holding on to with both hands as tightly as possible. =) > There is no real barrier to using setuptools/pip with Python 3.3+ venvs: > For > example, I published the pyvenvex.py script which creates venvs and > installs > setuptools and pip in a single step: > > https://gist.github.com/vsajip/4673395 > > Admittedly it's "only a Gist" and not especially publicised to the wider > community, but that could be addressed. > > The example in the venv docs actually does something similar but with distribute and pip: http://docs.python.org/3.4/library/venv.html#an-example-of-extending-envbuilder. I have filed a bug to update it to setuptools: http://bugs.python.org/issue18434 . > The current situation, as I see it, is a transitional one. When > distlib-like > functionality becomes available in the stdlib, other approaches will be > possible, which improve upon what's possible with setuptools and pip. I've > demonstrated some of this using distil. When targeting Python 3.4, > shouldn't > we be looking further than just advancing the status quo a little bit? > > It's been said numerous times that "executable setup.py" must go. ISTM > that, > notwithstanding "practicality beats purity", a pip bootstrap in Python > would bless executable setup.py and help to extend its lifespan. > I don't think that analogy is quite fair. It's not like setup.py either runs something if it's installed OR installs it and then continues execution. Having installation code execute arbitrary code is not a good thing, but executing code as part of executing an app makes sense. =) But I do see the point you're trying to make. I'm personally +0 on the explicit install and +1 on the implicit bootstrap. I'm fine with adding a --no-bootstrap option that prevents the implicit install if people want to block it, or even prompting by default if people want to install, and having a --bootstrap option for those who want it to happen automatically for script usage. If this were a library we are talking about then I would feel differently, but since this is an app I don't feel bad about it. Then again, as long as the getpip script simply exits quietly if pip is already installed then that's not a big thing either. From donald at stufft.io Fri Jul 12 17:17:12 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 11:17:12 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> On Jul 12, 2013, at 4:35 AM, Vinay Sajip wrote: > The current situation, as I see it, is a transitional one. When distlib-like > functionality becomes available in the stdlib, other approaches will be > possible, which improve upon what's possible with setuptools and pip. I've > demonstrated some of this using distil. When targeting Python 3.4, shouldn't > we be looking further than just advancing the status quo a little bit? > > It's been said numerous times that "executable setup.py" must go. ISTM that, > notwithstanding "practicality beats purity", a pip bootstrap in Python > would bless executable setup.py and help to extend its lifespan. There's very little reason why a pip bootstrap script couldn't unpack a wheel instead of using setup.py.
Infact I've advocated for this and plan on contributing a bare bones wheel installation routine that would work well enough to get pip and setuptools installed. I'm also against adding distlib-like functionality to the stdlib. At least at this point in time. We've seen the far reaching effects that adding a packaging lib directly to the stdlib can have. I don't want to see us repeat the mistakes of the past and add distlib into the stdlib. Maybe in time once the packaging world isn't evolving so rapidly and distlib has had a lot of real world use that can be an option. The benefit for me in the way the pip/setuptools bootstrap is handled is that it's not merely imported into the stdlib and called done. It'll fetch the latest pip during each bootstrap, making it not a point of stagnation like distutils was. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Fri Jul 12 17:25:47 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 11:25:47 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> Message-ID: <038A6971-FB53-497C-B1C0-24C734B4B64F@stufft.io> On Jul 12, 2013, at 1:19 AM, Nick Coghlan wrote: > On 12 July 2013 15:11, Nick Coghlan wrote: > In particular, it establishes the infrastructure to have pyvenv automatically bootstrap the installer into each venv, even when it isn't installed system wide (which is the key missing feature of pyvenv relative to virtualenv). > > The other thing I will note is that *if* we decide to add an implicit bootstrap later (which I doubt will happen, but you never know), then having "getpip" available as an importable module will also make *that* easier to write (since the heart of the current bootstrap code could be replaced by "from getpip import bootstrap") > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I prefer the implicit bootstrap approach, but if the explicit bootstrap approach is chosen then something special needs to be done for pyvenv. If an explicit bootstrap is required for every pyvenv then I'm going to guess that people are going to just continue using virtualenv. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Fri Jul 12 17:52:46 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 15:52:46 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> Message-ID: Nick Coghlan gmail.com> writes: > Some day pip will get a "wheel only" mode, and that's the step that will kill > off the need to run setup.py on production machines even when using the > Python specific tools. 
> Blessing both setuptools and pip as the "obvious way to do it" is designed to > give us the wedge we need to start a gradual transition to that world without > facing the initial barriers to adoption that were part of what scuttled the > distutils2 effort. I think wheel is a good part of that wedge. Considering the barriers to adoption of distutils2: 1. Distutils2 expected people to migrate their setup.py to setup.cfg while providing only minimal help in doing so. I have gotten quite far in addressing the migration issue, in that I already have fully declarative metadata, *automatically* generated from setup.py / setup.cfg, and distil can do dependency resolution and installation using that metadata for a large number of distributions currently existing on PyPI. The automatic process might not be perfected yet, but it already does much of what one might expect given that it doesn't do e.g. exhaustive analysis of a setup.py to determine all possible code paths, etc. so it can't capture all environment- dependent info. 2. Distutils2 did not do any dependency resolution (not having any index metadata it could rely on for dependency information), but that's not the case with distlib. While it's not a full-blown solver, distlib's dependency resolution appears at least as good as setuptools'. 3. Windows seemed to be an afterthought for distutils2 - that's not the case with distlib. Although it may not be necessary because of the existence of the Python launcher for Windows, distlib has provision for e.g. native executables on Windows, just as setuptools does. 4. Distutils2 did not provide some functionality that setuptools users have come to rely on - e.g. entry points and package resources functionality. Distlib makes good many of these omissions, to the point where an implementation of pip using distlib to replace pkg_resources functionality has been developed and passes the pip test suite. 5. Distutils2 did not support the version scheme that setuptools does, but only the PEP 386 version scheme, which was another migration roadblock. Distlib supports PEP 440 versioning *and* setuptools versioning, so that barrier to adoption is no longer present. 6. Distutils2 did not provide "editable" installations for developers, but distil does (using ordinary .pth files, not setuptools-style "executable" ones). 7. Because wheel was not available in the distutils2 world, it would be hard for distutils to provide a build infrastructure as mature as distutils / setuptools extensions as provided by NumPy, SciPy etc. However, now that the wheel specification exists, and wheels can be built using setup.py and installed using distlib, there's much less of a reason to require setuptools and pip at installation time, and more of a reason to give developers reasons to provide their distributions in wheel format. While I'm not claiming that distlib is feature-complete, and while it doesn't have the benefit of being battle-tested through widespread usage, I'm asserting that it removes the barriers to adoption which distutils2 had, at least those identified above. I'm hoping that those who might disagree will speak up by identifying other barriers to adoption which I've failed to identify, or any requirements that I've failed to address satisfactorily in distlib. 
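As an aside on point 6 in the list above: the "ordinary .pth files" approach can be illustrated with nothing but the stdlib. This is not distil's actual code, and the project name and path below are invented; it only shows the mechanism an editable install of this kind relies on.

    """Illustration of an "editable" install using an ordinary .pth file."""
    import os
    import site


    def install_editable(project_dir, name):
        # A .pth file dropped into a site-packages directory is read at
        # interpreter startup; each line that names a directory is appended
        # to sys.path, so the checkout becomes importable in place.
        target = site.getusersitepackages()
        os.makedirs(target, exist_ok=True)
        pth_path = os.path.join(target, name + ".pth")
        with open(pth_path, "w") as f:
            f.write(os.path.abspath(project_dir) + "\n")
        return pth_path


    if __name__ == "__main__":
        # Hypothetical checkout location.
        print("wrote", install_editable("/home/dev/myproject/src", "myproject"))

Uninstalling such an "editable" project is just a matter of deleting the .pth file again.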
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Fri Jul 12 18:16:49 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 16:16:49 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I'm also against adding distlib-like functionality to the stdlib. At least at > this point in time. We've seen the far reaching effects that adding a > packaging lib directly to the stdlib can have. I don't want to see us repeat > the mistakes of the past and add distlib into the stdlib. Maybe in time once > the packaging world isn't evolving so rapidly and distlib has had a lot of > real world use that can be an option. The benefit for me in the way the pip/ On the question of whether distlib should or shouldn't be added to the stdlib, obviously that's for others to decide. My belief is that infrastructure areas like this need *some* stdlib underpinning. Also, distlib is pretty low-level, at the level of mechanism rather than policy, so there's no reason to be too paranoid about it in general terms. There's also some element of chicken and egg - inertia being what it is, I wouldn't expect *any* new packaging software outside the stdlib to gain significant adoption at any reasonable rate while the status quo is good enough for many people. But the status quo doesn't seem to allow any room for innovation. Distil is completely self-contained and does not require distlib to be in the stdlib, but it already does what could reasonably have been expected of packaging (if it had got into 3.3) and then some. What's more, it doesn't require installing into every venv - one copy covers all venvs (2.6+), user site-packages and system site-packages. > setuptools bootstrap is handled is that it's not merely imported into the > stdlib and called done. It'll fetch the latest pip during each bootstrap, > making it not a point of stagnation like distutils was. My pyvenvex script does this now. For venvs, that's the bootstrap right there. Of course, in cases where you want repeatability, getting the latest version each time might not be what you want :-) Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Fri Jul 12 18:23:52 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 16:23:52 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <038A6971-FB53-497C-B1C0-24C734B4B64F@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I prefer the implicit bootstrap approach, but if the explicit bootstrap > approach is chosen then something special needs to be done for pyvenv. The original pyvenv script did install Distribute and pip, but that functionality was removed before beta because Distribute and pip are third-party packages. If that restriction is lifted, we can easily replace the pyvenv script in Python with pyvenvex, and then (as I understand it) that is equivalent to an implicit bootstrap. 
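For readers who have not looked at the gist, the shape of the pyvenvex idea is easy to sketch with the stdlib venv machinery: subclass EnvBuilder and do the installation in post_setup(). The sketch below leans on the hypothetical getpip module discussed earlier in the thread; the real pyvenvex script installs setuptools and pip itself rather than relying on such a module.

    """Sketch of a pyvenv wrapper that bootstraps pip into each new venv."""
    import subprocess
    import sys
    import venv


    class PipEnvBuilder(venv.EnvBuilder):
        def post_setup(self, context):
            # context.env_exe is the interpreter inside the freshly created
            # environment, so the install lands in the venv, not the parent.
            subprocess.check_call([context.env_exe, "-m", "getpip"])


    if __name__ == "__main__":
        # Usage: python makeenv.py /path/to/new/env
        PipEnvBuilder().create(sys.argv[1])

Whether this runs implicitly inside pyvenv or stays an explicit wrapper is exactly the policy question being debated in this thread.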
Regards, Vinay Sajip From donald at stufft.io Fri Jul 12 18:24:10 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 12:24:10 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: <34C42654-A1EF-4444-94C8-19297864A84F@stufft.io> On Jul 12, 2013, at 12:16 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I'm also against adding distlib-like functionality to the stdlib. At least at >> this point in time. We've seen the far reaching effects that adding a >> packaging lib directly to the stdlib can have. I don't want to see us repeat >> the mistakes of the past and add distlib into the stdlib. Maybe in time once >> the packaging world isn't evolving so rapidly and distlib has had a lot of >> real world use that can be an option. The benefit for me in the way the pip/ > > On the question of whether distlib should or shouldn't be added to the stdlib, > obviously that's for others to decide. My belief is that infrastructure areas > like this need *some* stdlib underpinning. Also, distlib is pretty low-level, > at the level of mechanism rather than policy, so there's no reason to be too > paranoid about it in general terms. There's also some element of chicken > and egg - inertia being what it is, I wouldn't expect *any* new packaging > software outside the stdlib to gain significant adoption at any reasonable rate > while the status quo is good enough for many people. But the status quo doesn't > seem to allow any room for innovation. Eh, installing a pure Python Wheel is pretty simple. Especially if you restrict the options it can have. I don't see any reason why the bootstrap script can't include that as an internal implementation detail. I think it's kind of funny when folks say that new packaging software *needs* to be in the standard library when setuptools has pretty emphatically shown us that no it doesn't. People have problems with packaging, solve them without throwing away the world and they'll migrate. > > Distil is completely self-contained and does not require distlib to be in the > stdlib, but it already does what could reasonably have been expected of > packaging (if it had got into 3.3) and then some. What's more, it doesn't > require installing into every venv - one copy covers all venvs (2.6+), user > site-packages and system site-packages. pip used to have this and it was removed as a misfeature as it caused more problems than it solved. > >> setuptools bootstrap is handled is that it's not merely imported into the >> stdlib and called done. It'll fetch the latest pip during each bootstrap, >> making it not a point of stagnation like distutils was. > > My pyvenvex script does this now. For venvs, that's the bootstrap right there. > > Of course, in cases where you want repeatability, getting the latest version > each time might not be what you want :-) I haven't read your script in depth. But if that's all that's needed let's make sure it's done automatically for folks. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Fri Jul 12 18:56:55 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 12 Jul 2013 17:56:55 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: On 12 July 2013 16:17, Donald Stufft wrote: > There's very little reason why a pip bootstrap script couldn't unpack a > wheel instead of using setup.py. Infact I've advocated for this and plan on > contributing a bare bones wheel installation routine that would work well > enough to get pip and setuptools installed. I've written more than one bare-bones wheel installation script myself. They are easy to write (credit to Daniel for developing a format that's very simple to process!). I'm happy to donate any of the code that's useful. Here's one I've used in the past: https://gist.github.com/pfmoore/5985969 Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Jul 12 19:10:54 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 17:10:54 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <34C42654-A1EF-4444-94C8-19297864A84F@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > Eh, installing a pure Python Wheel is pretty simple. Especially if you > restrict the options it can have. I don't see any reason why the bootstrap > script can't include that as an internal implementation detail. Sorry, I don't understand what you mean here, in terms of which of my points you are responding to. > I think it's kind of funny when folks say that new packaging software *needs* > to be in the standard library when setuptools has pretty emphatically shown > us that no it doesn't. People have problems with packaging, solve them > without throwing away the world and they'll migrate. Inertia definitely is a thing - otherwise why complain that an explicit bootstrap is much worse than an implicit one? The difference in work to use one rather than another isn't that great. I'm not saying that distlib (or any equivalent software) *has* or *needs* to be in the stdlib, merely that adoption will be faster if it is, and also that it is the right kind of software (infrastructure) which could reasonably be expected to be in the stdlib of a language which is acclaimed for (amongst other things) "batteries included". Setuptools, while not itself in the stdlib, built on packaging software that was, so the cases are not quite equivalent. Users did not have to do a major shift away from "executable setup.py", but if we're asking them to do that, it's slightly more work to migrate, even if you don't "throw away the world". And of course I agree that easing migration is important, which is why I've worked on migrating setup.py logic to declarative PEP 426, as far as is practicable. > pip used to have this and it was removed as a misfeature as it caused more > problems then it solved. Was it exactly the same? I don't remember this. I'd be interested in the specifics - can you point me to any more detailed information about this? 
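Returning to the bare-bones wheel installation routines that Donald and Paul mention a few messages up: for a pure-Python wheel the core of the job really is just unpacking a zip file onto the import path. The sketch below is illustration only and skips everything a real installer (such as the linked gist) has to care about - console scripts, the *.data directory, and rewriting RECORD - and the wheel filename is made up.

    """Bare-bones sketch of installing a pure-Python wheel (illustration only)."""
    import sysconfig
    import zipfile


    def install_wheel(wheel_path, target=None):
        # A wheel is a zip archive whose top level contains the package code
        # plus a *.dist-info directory, so for the pure-Python case it can be
        # extracted straight into site-packages.
        if target is None:
            target = sysconfig.get_paths()["purelib"]
        with zipfile.ZipFile(wheel_path) as wheel:
            wheel.extractall(target)
        return target


    if __name__ == "__main__":
        # Hypothetical wheel file name.
        print("installed into", install_wheel("pip-1.4-py27-none-any.whl"))

The simplicity of this path is why a bootstrap built on wheels would not need to bless "setup.py install" at all.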
> I haven't read your script in depth There's not much to it, it shouldn't take too long to review :-) Regards, Vinay Sajip From dholth at gmail.com Fri Jul 12 19:18:13 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 12 Jul 2013 13:18:13 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: The goal is that it will be equally easy to install packages built with any build system. We are on our way. Getting rid of an executable build script is no longer a goal. Builds inherently need that often. But we don't want people extending distutils against their will. On Jul 12, 2013 11:59 AM, "Paul Moore" wrote: > > On 12 July 2013 16:17, Donald Stufft wrote: > >> There's very little reason why a pip bootstrap script couldn't unpack a >> wheel instead of using setup.py. Infact I've advocated for this and plan on >> contributing a bare bones wheel installation routine that would work well >> enough to get pip and setuptools installed. > > > I've written more than one bare-bones wheel installation script myself. > They are easy to write (credit to Daniel for developing a format that's > very simple to process!). I'm happy to donate any of the code that's > useful. Here's one I've used in the past: > https://gist.github.com/pfmoore/5985969 > > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 12 19:27:19 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 13:27:19 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <34C42654-A1EF-4444-94C8-19297864A84F@stufft.io> Message-ID: <640F3E77-5D43-49F6-9580-27544B3495D2@stufft.io> On Jul 12, 2013, at 1:10 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> Eh, installing a pure Python Wheel is pretty simple. Especially if you >> restrict the options it can have. I don't see any reason why the bootstrap >> script can't include that as an internal implementation detail. > > Sorry, I don't understand what you mean here, in terms of which of my points > you are responding to. Maybe I misunderstood your point :) I thought you were saying that by installing pip using setup.py install we are "blessing" setup.py install again? I was saying we don't need to do that. > >> I think it's kind of funny when folks say that new packaging software *needs* >> to be in the standard library when setuptools has pretty emphatically shown >> us that no it doesn't. People have problems with packaging, solve them >> without throwing away the world and they'll migrate. > > Inertia definitely is a thing - otherwise why complain that an explicit > bootstrap > is much worse than an implicit one? The difference in work to use one rather > than > another isn't that great. I'm not saying that distlib (or any equivalent > software) *has* or *needs* to be in the stdlib, merely that adoption will be > faster if it is, and also that it is the right kind of software > (infrastructure) which could reasonably be expected to be in the stdlib of a > language which is acclaimed for (amongst other things) "batteries included". 
> > Setuptools, while not itself in the stdlib, built on packaging software that > was, so the cases are not quite equivalent. Users did not have to do a major > shift away from "executable setup.py", but if we're asking them to do that, > it's slightly more work to migrate, even if you don't "throw away the world". > And of course I agree that easing migration is important, which is why I've > worked on migrating setup.py logic to declarative PEP 426, as far as is > practicable. I'm not overly fond of bootstrapping setuptools itself, but I think unless pip comes along and bundles setuptools like it has done with distlib it's a necessary evil right now. Ideally, in the future we can move things to where setuptools is just a build tool and isn't something needed at install time unless you're doing a build. I generally agree that a packaging library is the type of item that belongs in a stdlib; I don't think it belongs in there *yet*. We can work around it not being there, and that means we can be more agile about it and evolve the tooling till we are happy with it instead of trying to get it in as quickly as possible to make things easier in the short term and possibly harder in the long term. > >> pip used to have this and it was removed as a misfeature as it caused more >> problems than it solved. > > Was it exactly the same? I don't remember this. I'd be interested in the > specifics - can you point me to any more detailed information about this? > >> I haven't read your script in depth > > There's not much to it, it shouldn't take too long to review :-) > > Regards, > > Vinay Sajip > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From vinay_sajip at yahoo.co.uk Fri Jul 12 19:28:10 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 17:28:10 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: Daniel Holth gmail.com> writes: > Getting rid of an executable build script is no longer a goal. Builds > inherently need that often.
But we don't want people extending distutils >> against their will. > > Perhaps I should have been clearer - I meant "executable setup.py install", > and as I understand it, it is a goal to get rid of that. Yes it's a goal to get rid of setup.py install, but I doubt it will ever fully be gone. At least not for a long time. There's almost 150k source dist packages on PyPI and I'm going to assume the vast bulk of them have a setup.py. > > Regarding "executable setup.py build", that's less of an issue than for > installing, but IIUC, it is still not ideal. Many of the hacks that people > have made around distutils/setuptools relate to building, not just > installing, or am I wrong? It's not ideal, but it's also largely only an issue on the machine of the developer who is packaging the software. If they are fine with the hacks then there's not a major reason to move them away from that. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Fri Jul 12 19:42:49 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 17:42:49 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <19726835-82CC-4133-A161-8E40B590C9BB@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > Yes it's a goal to get rid of setup.py install, but I doubt it will ever > fully be gone. At least not for a long time. There's almost 150k source > dist packages on PyPI and I'm going to assume the vast bulk of them have > a setup.py. True, but distil seems to be able to install a fair few (certainly the ones which don't do significant special processing in their setup.py, such as moving files around and creating files) without ever executing setup.py. > It's not ideal, but it's also largely only an issue on the machine of > the developer who is packaging the software. If they are fine with the > hacks then there's not a major reason to move them away from that. It's a smaller community than the users of those projects, and I don't know what the numbers of affected developers are. Obviously it's up to each project how they do their stuff, but from my understanding the NumPy/SciPy communities aren't especially happy with the extensions they've had to do (else, why Bento?) Regards, Vinay Sajip From brett at python.org Fri Jul 12 20:00:06 2013 From: brett at python.org (Brett Cannon) Date: Fri, 12 Jul 2013 14:00:06 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: On Fri, Jul 12, 2013 at 12:16 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > > > I'm also against adding distlib-like functionality to the stdlib. At > least at > > this point in time. We've seen the far reaching effects that adding a > > packaging lib directly to the stdlib can have. I don't want to see us > repeat > > the mistakes of the past and add distlib into the stdlib. 
Maybe in time > once > > the packaging world isn't evolving so rapidly and distlib has had a lot > of > > real world use that can be an option. The benefit for me in the way the > pip/ > > On the question of whether distlib should or shouldn't be added to the > stdlib, > obviously that's for others to decide. [SNIP] Speaking with my python-dev hat on which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that code will exist outside the stdlib. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Jul 12 20:11:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 18:11:43 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <34C42654-A1EF-4444-94C8-19297864A84F@stufft.io> <640F3E77-5D43-49F6-9580-27544B3495D2@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > Maybe I misunderstood your point :) I thought you were saying that by > installing pip using setup.py install we are "blessing" setup.py install > again? I was saying we don't need to do that. Okay, I see. I'm used to comments referring to points directly above them, and my comment about blessings was at the end of my post. I meant that pip itself, and not just the bootstrap, uses "setup.py install". I would have thought that pip don't need no steenking blessing from anyone :-), but that's what the PEP is about, after all. > I'm not overly found of bootstrapping setuptools itself, but I think > unless pip comes along and bundles setuptools like it has done distlib > it's a nesceary evil right now. Ideally In the future we can move things But aren't you in favour of getting the latest version of setuptools and pip each time? > to where setuptools is just a build tool and isn't something needed at > install time unless you're doing a build. That "unless" - that stops the clean separation between build and install which wheel enables, and which would be a Good Thing to encourage. > I generally agree that a packaging library is the type of item that > belongs in a stdlib, I don't think it belongs in there *yet*. We can work > around it not being there, and that means we can be more agile about it > and evolve the tooling till we are happy with them instead of trying to > get it in as quickly as possible to make things easier in the short term > and possibly harder in the long term. Oh, I agree there's no sense in rushing things. But how do we know when we're happy enough (or not) with something? When we try it out, that's when we can form an opinion - not before. It's been a good while since I first announced distil, both as a test-bed for distlib, but also as a POC for better user experiences with packaging. Apart from Paul Moore (thanks, Paul!), I've had precious little specific feedback from anyone here (and believe me, I'd welcome adverse feedback if it's warranted). It could all be a steaming pile of the proverbial, or the best thing since sliced proverbials, but there's no way to know. Of course there are good reasons for this - we are all busy people. 
Inertia, thy ways are many :-) Regards, Vinay Sajip From donald at stufft.io Fri Jul 12 20:16:33 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 14:16:33 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> Message-ID: <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> On Jul 12, 2013, at 2:00 PM, Brett Cannon wrote: > Speaking with my python-dev hat on which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that code will exist outside the stdlib. This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first). Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If what Nick has said and PEPs are not appropriate for things that don't have a module in the standard lib well that's fine I guess. I just won't worry about trying to write PEPs :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Fri Jul 12 21:25:04 2013 From: brett at python.org (Brett Cannon) Date: Fri, 12 Jul 2013 15:25:04 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> Message-ID: On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft wrote: > > On Jul 12, 2013, at 2:00 PM, Brett Cannon wrote: > > Speaking with my python-dev hat on which has a badge from when I led the > stdlib cleanup for Python 3, I would say anything that has a PEP should > probably have a module in the stdlib for it. That way standard management > of whatever is specified in the PEP will be uniform and expected to be > maintained and work. Beyond that code will exist outside the stdlib. > > > This is basically the exact opposite of what Nick has said the intent has > been (Ecosystem first). > > Not at all as no module will go in immediately until after a PEP has landed and been vetted as needed. > Adding packaging tools beyond bootstrapping pip at this point in the game > is IMO a huge mistake. If what Nick has said and PEPs are not appropriate > for things that don't have a module in the standard lib well that's fine I > guess. > You misunderstand what I mean. I'm just saying that *if* anything were to go into the stdlib it would only be to have code which implements a PEP in the stdlib to prevent everyone from re-implementing a standard. > I just won't worry about trying to write PEPs :) > No, the PEPs are important to prevent version skew and make sure everyone is on the same page. And that's also what a module in the stdlib would do; make sure everyone is on the same page in terms of semantics by using a single code base. 
I mean I wouldn't expect anything more than maybe code parsing the JSON metadata that does some validation and parsing version numbers that can support comparisons and verifying platform requirements; that's it. Stuff that every installation tool will need to do in order to follow the PEPs properly. And it wouldn't go in until everyone was very happy with the PEPs and ready to commit to code enshrining it in the stdlib. Otherwise I hope distlib moves into PyPA and everyone who develops installation tools, etc. uses that library. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Jul 12 23:21:06 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 17:21:06 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> Message-ID: <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> On Jul 12, 2013, at 3:25 PM, Brett Cannon wrote: > > > > On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft wrote: > > On Jul 12, 2013, at 2:00 PM, Brett Cannon wrote: > >> Speaking with my python-dev hat on which has a badge from when I led the stdlib cleanup for Python 3, I would say anything that has a PEP should probably have a module in the stdlib for it. That way standard management of whatever is specified in the PEP will be uniform and expected to be maintained and work. Beyond that code will exist outside the stdlib. > > This is basically the exact opposite of what Nick has said the intent has been (Ecosystem first). > > > Not at all as no module will go in immediately until after a PEP has landed and been vetted as needed. > > Adding packaging tools beyond bootstrapping pip at this point in the game is IMO a huge mistake. If what Nick has said and PEPs are not appropriate for things that don't have a module in the standard lib well that's fine I guess. > > You misunderstand what I mean. I'm just saying that *if* anything were to go into the stdlib it would only be to have code which implements a PEP in the stdlib to prevent everyone from re-implementing a standard. > > I just won't worry about trying to write PEPs :) > > No, the PEPs are important to prevent version skew and make sure everyone is on the same page. And that's also what a module in the stdlib would do; make sure everyone is on the same page in terms of semantics by using a single code base. > > I mean I wouldn't expect anything more than maybe code parsing the JSON metadata that does some validation and parsing version numbers that can support comparisons and verifying platform requirements; that's it. Stuff that every installation tool will need to do in order to follow the PEPs properly. And it wouldn't go in until everyone was very happy with the PEPs and ready to commit to code enshrining it in the stdlib. Otherwise I hope distlib moves into PyPA and everyone who develops installation tools, etc. uses that library. I could probably be convinced about something that makes handling versions easier going into the standard lib, but that's about it. There's a few reasons that I don't want these things added to the stdlib themselves. One of the major ones is that of "agility". We've seen with distutils how impossible it can be to make improvements to the system. Now some of this is made better with the way the new system is being designed with versioned metadata but it doesn't completely go away. 
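A minimal sketch of the consumer-side check that versioned metadata makes possible: gate on the declared metadata version before trusting the rest of the document. The metadata_version field name follows the draft PEP 426 JSON metadata; the accept/warn/reject policy below is purely an illustrative assumption, not anything the PEP specifies.

    # Sketch only: how an installer might gate on the draft PEP 426
    # "metadata_version" field.  The accept/warn/reject policy here is an
    # assumption for illustration, not something the PEP prescribes.
    import json

    SUPPORTED_MAJOR = 2   # assumed: the metadata major version this tool implements
    SUPPORTED_MINOR = 0   # assumed: the highest minor revision it fully understands

    def load_metadata(raw):
        meta = json.loads(raw)
        major, minor = (int(p) for p in meta["metadata_version"].split(".")[:2])
        if major != SUPPORTED_MAJOR:
            raise ValueError("cannot handle metadata %s" % meta["metadata_version"])
        if minor > SUPPORTED_MINOR:
            # Newer minor revision: unknown fields have to be safely ignorable.
            print("note: metadata %s is newer than %d.%d; ignoring unknown fields"
                  % (meta["metadata_version"], SUPPORTED_MAJOR, SUPPORTED_MINOR))
        return meta

    print(load_metadata('{"metadata_version": "2.0", "name": "example", '
                        '"version": "1.0"}')["name"])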
We can look at Python's past to see just how long any individual version sticks around and we can assume that if something gets added now that particular version will be around for a long time. Another is because of how long it can take a new version of Python to become "standard", especially in the 3.x series since the entire 3.x series itself isn't standard, any changes made to the standard lib won't be usable for years and years. This can be mitigated by releasing a backport on PyPI, but if every version of Python but the latest one is going to require installing these libs from PyPI in order to usefully interact with the "world", then you might as well just require all versions of Python to install bits from PyPI. Yet another is that by blessing a particular implementation, that implementation's behaviors become the standard (indeed the way the PEP system generally works for this is once it's been added to the standard lib the PEP is a historical document and the documentation becomes the standard). However packaging is not like Enums or urllibs, or smtp. We are essentially defining a protocol, one that non-Python tools will be expected to use (for Debian and RPMs for example). We are using these PEPs more like an RFC than a proposal to include something in the stdlib. There's also the case of usefulness. You mention some code that can parse the JSON metadata and validate it. Well, assuming we'll have metadata 2.0 set down by the time 3.4 comes around, sure, 3.4 could have that, but then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the metadata. Maybe we release a metadata 3.0 and now it can't parse any metadata. But even if it can parse the metadata what does it do with it? The major places you'd be validating the metadata (other than merely consuming it) are either in the tools that create packages or in PyPI performing checks on an uploaded file. In the build tool case they are going to either need to write their own code for actually creating the package or, more likely, they'll reuse something like distlib. If those tools are already going to be using a distlib-like library then we might as well just keep the validation code in there. Now the version parsing stuff which I said I could be convinced about is slightly different. It is really sort of its own thing. It's not dependent on the other pieces of packaging to be useful, and it's not versioned. It's also the only bit that's really useful on its own. People consuming the (future) PyPI API could use it to fully depict the actual metadata so it's kind of like JSON itself in that regard. On the installer side of things, the purist side of me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for and still believe that if a prerequisite for doing that involves "locking" in pip or any of its dependencies by adding them to the standard library then I am vehemently against doing it. Wow that was a lot of words... ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Sat Jul 13 01:14:05 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 12 Jul 2013 23:14:05 +0000 (UTC) Subject: [Distutils] PEP 439 and pip bootstrap updated References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I could probably be convinced about something that makes handling versions > easier going into the standard lib, but that's about it. That seems completely arbitrary to me. Why just versions? Why not, for example, support for the wheel format? Why not agreed metadata formats? > There's a few reasons that I don't want these things added to the stdlib > themselves. > > One of the major ones is that of "agility". We've seen with distutils how > impossible it can be to make improvements to the system. Now some of this You say that, but setuptools, the poster child of packaging, improved quite a lot on distutils. I'm not convinced that it would have been as successful if there were no distutils in the stdlib, but of course you may disagree. I'm well aware of the "the stdlib is where software goes to die" school of thought, and I have considerable sympathy for where it's coming from, but let's not throw the baby out with the bathwater. The agility argument could be made for lots of areas of functionality, to the point where you just basically never add anything new to the stdlib because you're worried about an inability to cope with change. Also, it doesn't seem right to point to particular parts of the stdlib which were hard to adapt to changing requirements and draw the conclusion that all software added to the stdlib would be equally hard to adapt. Of course one could look at a specific piece of software and assess its adaptability, but otherwise, isn't it verging on just arm-waving? > is made better with the way the new system is being designed with versioned > metadata but it doesn't completely go away. We can look at Python's past to > see just how long any individual version sticks around and we can assume that > if something gets added now that particular version will be around for a long > time. That doesn't mean that overall improvements can't take place in the stdlib. For example, getopt -> optparse -> argparse. > Another is because of how long it can take a new version of Python to become > "standard", especially in the 3.x series since the entire 3.x series itself > isn't standard, any changes made to the standard lib won't be usable for > years and years. This can be mitigated by releasing a backport on PyPI, but > if every version of Python but the latest one is going to require installing > these libs from PyPI in order to usefully interact with the "world", then you > might as well just require all versions of Python to install bits from PyPI. Well, other approaches have been looked at - for example, accepting things into the stdlib but warning users about the provisional nature of some APIs. I think that where interoperability between different packaging tools is needed, that's where the argument for something in the stdlib is strongest, as Brett said. 
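To make the "handling versions" piece concrete, ordering of dev/pre/post releases is exactly where ad-hoc implementations drift apart, which is the interoperability argument in miniature. A small sketch follows; it assumes setuptools is installed so that pkg_resources is importable, and the exact ordering rules were still being settled in PEP 440 at the time.

    # Sketch: two "version handling" implementations that already ship today
    # can order the same strings differently, which is why a single agreed
    # comparison (PEP 386/440) matters for interoperability.
    from distutils.version import LooseVersion      # stdlib, ad-hoc ordering
    from pkg_resources import parse_version         # setuptools, PEP 386-ish

    candidates = ["1.0.dev1", "1.0c1", "1.0", "1.0.post1"]

    # LooseVersion compares components left to right with no notion of
    # pre-releases, so "1.0" tends to sort before its own dev/rc releases.
    print(sorted(candidates, key=LooseVersion))

    # parse_version orders dev < release candidate < final < post.
    print(sorted(candidates, key=parse_version))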
> Yet another is by blessing a particular implementation, that implementations > behaviors become the standard (indeed the way the PEP system generally works > for this is once it's been added to the standard lib the PEP is a historical > document and the documentation becomes the standard). However packaging is That's because the PEP is needed to advocate the inclusion in the stdlib and as a record of the discussion and rationale for accepting/rejecting whatever was advocated, but there's no real benefit in keeping the PEP updated as the stdlib component gets refined from its real-world exposure through being in the stdlib. > not like Enums or urllibs, or smtp. We are essentially defining a protocol, > one that non Python tools will be expected to use (for Debian and RPMs for > example). We are using these PEPs more like a RFC than a proposal to include > something in the stdlib. But we can assume that there will either be N different implementations of everything in the RFCs from the ground up, by N different tools, or ideally one canonical implementation in the stdlib that the tool makers can use (but are not forced to use if they don't want to). You might say that if there were some kick-ass implementation of these RFCs on PyPI people would just gravitate to it and the winner would be obvious, but I don't see things working like that. In the web space, look at HTTP Request/Response objects as an example: Pyramid, Werkzeug, Django all have their own, don't really interoperate in practice (though it was a goal of WSGI), and there's very little to choose between them technically. Just a fair amount of duplicated effort on something so low-level, which would have been better spent on truly differentiating features. > There's also the case of usefulness. You mention some code that can parse the > JSON metadata and validate it. Weel assumingly we'll have the metadata for > 2.0 set down by the time 3.4 comes around. So sure 3.4 could have that, but > then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the > metadata. Maybe we release a metadata 3.0 and now it can't parse any > metadata. But even if it can parse the metadata what does it do with it? The > major places you'd be validating the metadata (other than merely consuming > it) is either on the tools that create packages or in PyPI performing checks > on a valid file upload. In the build tool case they are going to either need > to write their own code for actually creating the package or, more likely, > they'll reuse something like distlib. If those tools are already going to be > using a distlib-like library then we might as just keep the validation code > in there. Is that some blessed-by-being-in-the-stdlib kind of library that everyone uses, or one of several balkanised versions a la HTTP Request / Response? If it's not somehow blessed, why should a particular packaging project use it, even if it's technically up to the job? > Now the version parsing stuff which I said I could be convinced is slightly > different. It is really sort of it's own thing. It's not dependent on the > other pieces of packaging to be useful, and it's not versioned. It's also the > only bit that's really useful on it's own. People consuming the (future) PyPI > API could use it to fully depict the actual metadata so it's kind of like > JSON itself in that regard. That's only because some effort has gone into looking at version comparisons, ordering, pre-/post-/dev-releases, etc. and considering the requirements in some detail. 
It looks OK now, but so did PEP 386 to many people who hadn't considered the ordering of dev versions of pre-/post-releases. Who's to say that some other issue won't come up that we haven't considered? It's not a reason for doing nothing. > The installer side of things the purist side of me doesn't like adding it to > the standard library for all the same reasons but the pragmatic side of me > wants it there because it enables fetching the other bits that are needed for > "pip install X" to be a reasonable official response to these kinds of > questions. But I pushed for and still believe that if a prerequisite for > doing that involves "locking" in pip or any of its dependencies by adding > them to the standard library then I am vehemently against doing it. Nobody seems to be suggesting doing that, though. Regards, Vinay Sajip From nathaniel at google.com Sat Jul 13 00:58:25 2013 From: nathaniel at google.com (Nathaniel Manista) Date: Fri, 12 Jul 2013 15:58:25 -0700 Subject: [Distutils] Issues with Buildout 2.2.0 and a few other questions Message-ID: Hey Buildout Folks- I'm the lead of the Melange project and we've hit two issues with the recent Buildout release. The first is the setuptools version problem described here, and the second is the node problem described here. I see that a package upgrade is the recommended solution to the first problem, but that's not necessarily easy in our case as we've got a mix of corporate machines and academic machines and otherwise managed platforms - is there anything like a list of supported platforms for Buildout 2.2.0? Is Buildout fully supported on Ubuntu 12.04 (the most recent long-term support release of Ubuntu)? Are there known and unsupported incompatibilities on Ubuntu 12.04? Does Buildout maintain a list of supported platforms and configurations? Is the Buildout team aware of the second (node) issue? In looking for my own answers to these questions, I came across some issues on the buildout.org site. This link on the front page appears broken, and the latest documentation appears to describe Buildout 1.2.1 (with out-of-date references such as "wget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py"). What's going on there? Thanks much, -Nathaniel From donald at stufft.io Sat Jul 13 02:02:00 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 12 Jul 2013 20:02:00 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> Message-ID: <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> On Jul 12, 2013, at 7:14 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I could probably be convinced about something that makes handling versions >> easier going into the standard lib, but that's about it. > > That seems completely arbitrary to me. Why just versions? Why not, for > example, support for the wheel format? Why not agreed metadata formats? As I said in my email, because it's more or less standalone and it has the greatest utility outside of installers/builders/archivers/indexes. > >> There's a few reasons that I don't want these things added to the stdlib >> themselves. >> >> One of the major ones is that of "agility". We've seen with distutils how >> impossible it can be to make improvements to the system. Now some of this > > You say that, but setuptools, the poster child of packaging, improved quite > a lot on distutils. I'm not convinced that it would have been as successful > if there were no distutils in the stdlib, but of course you may disagree. I've looked at many other languages where they had widely successful packaging tools that weren't added to the standard lib until they were ubiquitous and stable. Something the new tools for Python are not. So I don't think adding it to the standard library is required. And setuptools improved it *outside* of the standard library while distutils itself stagnated. I would venture to guess that if distutils *hadn't* been in the standard library then setuptools could have simply been patches to distutils instead of needing to essentially "replace" distutils while happening to reuse some of its functionality. So pointing towards setuptools just exposes the fact that improving it in the standard library was hard enough that it was done externally. > > I'm well aware of the "the stdlib is where software goes to die" school of > thought, and I have considerable sympathy for where it's coming from, but > let's not throw the baby out with the bathwater. The agility argument could > be made for lots of areas of functionality, to the point where you just > basically never add anything new to the stdlib because you're worried about > an inability to cope with change. Also, it doesn't seem right to point to > particular parts of the stdlib which were hard to adapt to changing > requirements and draw the conclusion that all software added to the stdlib > would be equally hard to adapt. Of course one could look at a specific piece > of software and assess its adaptability, but otherwise, isn't it verging on > just arm-waving? Well I am of the mind that the standard library is where software goes to die, and I'm also of the mind that a smaller standard library and a strong packaging story and ecosystem is far superior. But that's not what I'm advocating here. A key point for almost every other part of the standard library is that if it stagnates or falls behind or is unable to adapt then you simply don't use it. This is not a hard thing to do for something like httplib, urllib2, urllib, etc. because it's what people have *done* in projects like requests. One person's choice to use urllib in his software has little to no bearing on someone else who might choose to use requests. However a packaging system needs interoperability. My choice to use a particular piece of packaging software, if there is no interoperability, DRASTICALLY affects you if you want to use my software at all. A huge thing I've been trying to push for is decoupling packaging from a specific implementation so that we have a "protocol" (a la HTTP) and not a "tool" (a la distutils). However the allure of working to the implementation and not the standard is fairly high when there is a singular blessed implementation. > >> is made better with the way the new system is being designed with versioned >> metadata but it doesn't completely go away. We can look at Python's past to >> see just how long any individual version sticks around and we can assume that >> if something gets added now that particular version will be around for a long >> time. > > That doesn't mean that overall improvements can't take place in the stdlib. > For example, getopt -> optparse -> argparse. It's funny you picked an example where improvements *couldn't* take place and the entire system had to be thrown out and a new one written. getopt had to become a new module named optparse, which had to become a new module named argparse in order to make changes to it. I don't think we need to have distutils, distlib, futurelib, even-further-futurelib and I think that makes packaging even more confusing than it needs to be. This also ties in with the above where one person's use of getopt instead of argparse doesn't drastically affect another person using a different one. > >> Another is because of how long it can take a new version of Python to become >> "standard", especially in the 3.x series since the entire 3.x series itself >> isn't standard, any changes made to the standard lib won't be usable for >> years and years. This can be mitigated by releasing a backport on PyPI, but >> if every version of Python but the latest one is going to require installing >> these libs from PyPI in order to usefully interact with the "world", then you >> might as well just require all versions of Python to install bits from PyPI. > > Well, other approaches have been looked at - for example, accepting things > into the stdlib but warning users about the provisional nature of some APIs. Provisional APIs still exist in that version of Python and the only way someone would get a new one is by installing a package. I think that this makes the problem even *worse* because now you're adding APIs to the standard library that have a good chance of needing to change and needing to require people to install a package (with no good way to communicate to someone that they need to update it since it's a standard library package and not a versioned installed package). > > I think that where interoperability between different packaging tools is > needed, that's where the argument for something in the stdlib is strongest, > as Brett said. You can gain interoperability in a few ways. One way is to just pick an implementation and make that the standard. Another is to define *actual* standards. The second one is harder and requires more thought and work, but it means that completely different software can work together. It means that something written in Ruby can easily work with a Python package without shelling out to Python or without trying to copy all the implementation details and having to guess which ones are significant or not. > >> Yet another is by blessing a particular implementation, that implementations >> behaviors become the standard (indeed the way the PEP system generally works >> for this is once it's been added to the standard lib the PEP is a historical >> document and the documentation becomes the standard). However packaging is > > That's because the PEP is needed to advocate the inclusion in the stdlib and > as a record of the discussion and rationale for accepting/rejecting whatever > was advocated, but there's no real benefit in keeping the PEP updated as the > stdlib component gets refined from its real-world exposure through being in > the stdlib. And that's fine for a certain class of problems. It's not that useful for something where you want interoperability outside of that tool. How terrible would it be if HTTP was "well, whatever Apache does, that's what HTTP is"? > >> not like Enums or urllibs, or smtp. We are essentially defining a protocol, >> one that non Python tools will be expected to use (for Debian and RPMs for >> example). We are using these PEPs more like a RFC than a proposal to include >> something in the stdlib.
> > But we can assume that there will either be N different implementations of > everything in the RFCs from the ground up, by N different tools, or ideally > one canonical implementation in the stdlib that the tool makers can use (but > are not forced to use if they don't want to). You might say that if there > were some kick-ass implementation of these RFCs on PyPI people would just > gravitate to it and the winner would be obvious, but I don't see things > working like that. In the web space, look at HTTP Request/Response objects > as an example: Pyramid, Werkzeug, Django all have their own, don't really > interoperate in practice (though it was a goal of WSGI), and there's very > little to choose between them technically. Just a fair amount of duplicated > effort on something so low-level, which would have been better spent on > truly differentiating features. A singular blessed tool in the standard library incentivizes the standard becoming and implementation detail. I *want* there to be multiple implementations written by different people working on different "slices" of the problem. That incentivizes doing the extra work on PEPs and other documents so that we maintain a highly documented standard. It's true that adding something to the standard library doesn't rule that out but it provides an incentive against properly doing standards because it's easier and simpler to just change it in the implementation. > >> There's also the case of usefulness. You mention some code that can parse the >> JSON metadata and validate it. Weel assumingly we'll have the metadata for >> 2.0 set down by the time 3.4 comes around. So sure 3.4 could have that, but >> then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the >> metadata. Maybe we release a metadata 3.0 and now it can't parse any >> metadata. But even if it can parse the metadata what does it do with it? The >> major places you'd be validating the metadata (other than merely consuming >> it) is either on the tools that create packages or in PyPI performing checks >> on a valid file upload. In the build tool case they are going to either need >> to write their own code for actually creating the package or, more likely, >> they'll reuse something like distlib. If those tools are already going to be >> using a distlib-like library then we might as just keep the validation code >> in there. > > Is that some blessed-by-being-in-the-stdlib kind of library that everyone > uses, or one of several balkanised versions a la HTTP Request / Response? If > it's not somehow blessed, why should a particular packaging project use it, > even if it's technically up to the job? It's not blessed and a particular packaging project should use it if it fits their needs and they want to use it. Or they shouldn't use it if they don't want. Standards exist for a reason. So you can have multiple implementations that all work together. > >> Now the version parsing stuff which I said I could be convinced is slightly >> different. It is really sort of it's own thing. It's not dependent on the >> other pieces of packaging to be useful, and it's not versioned. It's also the >> only bit that's really useful on it's own. People consuming the (future) PyPI >> API could use it to fully depict the actual metadata so it's kind of like >> JSON itself in that regard. > > That's only because some effort has gone into looking at version > comparisons, ordering, pre-/post-/dev-releases, etc. and considering the > requirements in some detail. 
It looks OK now, but so did PEP 386 to many > people who hadn't considered the ordering of dev versions of > pre-/post-releases. Who's to say that some other issue won't come up that we > haven't considered? It's not a reason for doing nothing. I didn't make any claims as to it's stability or the amount of testing that went into it. My ability to be convinced of that stems primarily from the fact that it's sort of a side piece of the whole packaging infrastructure and toolchain and it's also a piece that is most likely to be useful on it's own. > >> The installer side of things the purist side of me doesn't like adding it to >> the standard library for all the same reasons but the pragmatic side of me >> wants it there because it enables fetching the other bits that are needed for >> "pip install X" to be a reasonable official response to these kind of >> questions. But I pushed for and still believe that if a prerequisite for >> doing that involves "locking" in pip or any of it's dependencies by adding >> them to the standard library then I am vehemently against doing it. > > Nobody seems to be suggesting doing that, though. I was (trying?) to explain that my belief doesn't extend to only distlib here and instead to the entire toolchain. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sat Jul 13 07:31:14 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Jul 2013 15:31:14 +1000 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) Message-ID: In addition to the long thread based on Richard's latest set of updates, I've also received a few off-list comments on the current state of the proposal. So, I figured I'd start a new thread summarising my current point of view and see where we want to go from there. 1. However we end up solving the bootstrapping problem, I'm *definitely* a fan of us updating pyvenv in 3.4 to ensure that pip is available by default in new virtual environments created with that tool. I also have an idea for a related import system feature that I'll be sending to import-sig this afternoon (it's a variant on *.pth and *.egg-link files that should be able to address a variety of existing problems, including the one of *selectively* making system and user packages available in a virtual environment in a cross-platform way without needing to copy them) 2. While I was originally a fan of the "implicit bootstrapping on demand" design, I no longer like that notion. While Richard's bootstrap script is a very nice piece of work, the edge cases and "neat tricks" have built up to the point where they trip my "if the implementation is hard to explain, it's a bad idea" filter. Accordingly, I no longer think the implicit bootstrapping is a viable option. 3. 
That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned): * an explicit bootstrapping script * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way). I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. For Linux, if you're using the system Python on a Debian or Fedora derivative, then "sudo apt-get python-pip" and "sudo yum install python-pip" are both straightforward, and if you're using something else, then it's unlikely getting pip bootstrapped using the bootstrap script is a task that will bother you :) The "python -m getpip" command is still something we will want to provide, as it is useful to people that build their own copy of Python from source. The bundling idea will obviously need to be discussed with the installer builders, and on python-dev in general, but that was always going to be the case for this PEP anyway (since it *does* touch CPython directly, rather than just being related to the packaging ecosystem). It achieves the aim of allowing people to assume some version of pip will be present on Python 3.4+ installations (or readily available in the case of Linux), while avoiding the problem of coupling pip updates to major Python version updates. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Sat Jul 13 07:57:51 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Fri, 12 Jul 2013 22:57:51 -0700 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: <7F3EBECB-1DEC-4187-A33D-31284AB2A944@coderanger.net> On Jul 12, 2013, at 10:31 PM, Nick Coghlan wrote: > In addition to the long thread based on Richard's latest set of updates, I've also received a few off-list comments on the current state of the proposal. So, I figured I'd start a new thread summarising my current point of view and see where we want to go from there. > > 1. However we end up solving the bootstrapping problem, I'm *definitely* a fan of us updating pyvenv in 3.4 to ensure that pip is available by default in new virtual environments created with that tool. I also have an idea for a related import system feature that I'll be sending to import-sig this afternoon (it's a variant on *.pth and *.egg-link files that should be able to address a variety of existing problems, including the one of *selectively* making system and user packages available in a virtual environment in a cross-platform way without needing to copy them) > > 2. 
While I was originally a fan of the "implicit bootstrapping on demand" design, I no longer like that notion. While Richard's bootstrap script is a very nice piece of work, the edge cases and "neat tricks" have built up to the point where they trip my "if the implementation is hard to explain, it's a bad idea" filter. > > Accordingly, I no longer think the implicit bootstrapping is a viable option. > > 3. That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned): > > * an explicit bootstrapping script > * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way). > > I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. > > For Linux, if you're using the system Python on a Debian or Fedora derivative, then "sudo apt-get python-pip" and "sudo yum install python-pip" are both straightforward, and if you're using something else, then it's unlikely getting pip bootstrapped using the bootstrap script is a task that will bother you :) > > The "python -m getpip" command is still something we will want to provide, as it is useful to people that build their own copy of Python from source. > > The bundling idea will obviously need to be discussed with the installer builders, and on python-dev in general, but that was always going to be the case for this PEP anyway (since it *does* touch CPython directly, rather than just being related to the packaging ecosystem). It achieves the aim of allowing people to assume some version of pip will be present on Python 3.4+ installations (or readily available in the case of Linux), while avoiding the problem of coupling pip updates to major Python version updates. As someone that has otherwise remained silent on this thread but was talking with people off-list I probably owe them a public +1 for bundling pip as a semi-new category of non-stdlib-but-included project. This would bring us in line with other tools like gem and npm which work out of the box and gives the user experience people want. Care would have to be paid to make sure the final pip binary ends up in the right filename, much in the same way as we do python -> python2 -> python 2.7 and such, but this is a solvable problem. How linux distros adapt to this is certainly another question, but I would absolutely advocate to packagers that installing the main python package results in a working pip install, regardless of how that is accomplished. 
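As a rough illustration of what "a working pip install" means from the consumer side, a management script can check both the importable package and the console scripts. This is only a sketch: shutil.which needs Python 3.3+, and the pip/pip3/pip3.X script names checked here are an assumption mirroring the python/python2/python2.7 naming mentioned above.

    # Sketch of a post-install sanity check: is pip importable by this
    # interpreter, and which console scripts (if any) are on PATH?
    # shutil.which needs Python 3.3+; the script names are assumptions.
    import shutil
    import sys

    def report_pip():
        try:
            import pip
        except ImportError:
            print("pip is not importable from %s" % sys.executable)
            return
        print("pip %s importable from %s" % (pip.__version__, sys.executable))
        for name in ("pip", "pip3", "pip%d.%d" % sys.version_info[:2]):
            print("%-8s -> %s" % (name, shutil.which(name) or "not found on PATH"))

    report_pip()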
As someone that has to write system management scripts to install and configure Python, being able to count on both pip and pyvenv as standard tools in standard places is near-mind-blowingly awesome (give or take that it would be many years until I could reasonably assume 3.4 as the default python, but a man can dream). While the getpip module is interesting in a few use cases, it is vastly more valuable to me that we focus on the user experience of the majority of Python developers and deployments, and this is somewhere that Ruby and Node are getting it right in having the package tool simply be there by default. Bundling also addresses the myriad ways in which downloading on the fly, either during install or afterwards, is error-prone and may result in environment fragmentation (servers that don't have internet access, PyPI outages, etc.). --Noah From donald at stufft.io Sat Jul 13 08:05:33 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 02:05:33 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> On Jul 13, 2013, at 1:31 AM, Nick Coghlan wrote: > I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. We could simply check it into the site-packages inside the CPython source tree, could we not? *Not* providing a bootstrap script and merely checking it into the default site-packages means it's available for everyone, no matter how Python is installed. If Linux packagers really don't want it installed by default they could simply remove it and either install it along with Python, or continue to keep it how it is today as a separate package. There are a number of things that have to be taken into account when downloading pip from the internet that are completely sidestepped when we, well, don't download it from the internet :) And bundling it as a pre-installed Python module and not in the standard library solves basically all of the problems I have with putting it inside of Python. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From donald at stufft.io Sat Jul 13 08:07:13 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 02:07:13 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: <7F3EBECB-1DEC-4187-A33D-31284AB2A944@coderanger.net> References: <7F3EBECB-1DEC-4187-A33D-31284AB2A944@coderanger.net> Message-ID: <3A9FAB73-6AE4-4E26-959B-5346F24ED306@stufft.io> On Jul 13, 2013, at 1:57 AM, Noah Kantrowitz wrote: > I would absolutely advocate to packagers that installing the main python package results in a working pip install, regardless of how that is accomplished. +1 to this as well.
Ideally, if we go down this route, installing python just comes with pip preinstalled. However that takes place :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sat Jul 13 08:28:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Jul 2013 16:28:42 +1000 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> Message-ID: On 13 July 2013 05:25, Brett Cannon wrote: > > On Fri, Jul 12, 2013 at 2:16 PM, Donald Stufft wrote: > >> >> On Jul 12, 2013, at 2:00 PM, Brett Cannon wrote: >> >> Speaking with my python-dev hat on which has a badge from when I led the >> stdlib cleanup for Python 3, I would say anything that has a PEP should >> probably have a module in the stdlib for it. That way standard management >> of whatever is specified in the PEP will be uniform and expected to be >> maintained and work. Beyond that code will exist outside the stdlib. >> >> >> This is basically the exact opposite of what Nick has said the intent has >> been (Ecosystem first). >> >> > Not at all as no module will go in immediately until after a PEP has > landed and been vetted as needed. > > >> Adding packaging tools beyond bootstrapping pip at this point in the game >> is IMO a huge mistake. If what Nick has said and PEPs are not appropriate >> for things that don't have a module in the standard lib well that's fine I >> guess. >> > > You misunderstand what I mean. I'm just saying that *if* anything were to > go into the stdlib it would only be to have code which implements a PEP in > the stdlib to prevent everyone from re-implementing a standard. > What Brett is saying is closer to what I was thinking when we were discussing this at the language summit. However, I'm no longer sure distlib will be quite baked enough to suggest bundling it in 3.4, in which case it will only be a "pip install distlib" away (that's the entire point of PEP 439). > I just won't worry about trying to write PEPs :) >> > > No, the PEPs are important to prevent version skew and make sure everyone > is on the same page. And that's also what a module in the stdlib would do; > make sure everyone is on the same page in terms of semantics by using a > single code base. > > I mean I wouldn't expect anything more than maybe code parsing the JSON > metadata that does some validation and parsing version numbers that can > support comparisons and verifying platform requirements; that's it. Stuff > that every installation tool will need to do in order to follow the PEPs > properly. And it wouldn't go in until everyone was very happy with the PEPs > and ready to commit to code enshrining it in the stdlib. Otherwise I hope > distlib moves into PyPA and everyone who develops installation tools, etc. > uses that library. > Vinay already moved both distlib and pylauncher over to the PyPA account on BitBucket: https://bitbucket.org/pypa/ PEP 439 is the critical piece, since that's the one that takes the pressure off getting the other components (including distlib and pkg_resources) into the base installers. Cheers, Nick. 
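For what "only a 'pip install distlib' away" might look like in practice from a tool's perspective, here is a rough sketch: import distlib if it is present, otherwise ask an already-available pip to fetch it. It assumes Python 3.3+ (for importlib.invalidate_caches) and a pip recent enough to be invoked as "python -m pip"; none of this is part of PEP 439 itself.

    # Sketch only: a tool that wants distlib but does not bundle it, relying
    # on an already-available pip to fetch it on first use.  Assumes pip
    # supports being run as "python -m pip"; not part of PEP 439 itself.
    import importlib
    import subprocess
    import sys

    def ensure_distlib():
        try:
            return importlib.import_module("distlib")
        except ImportError:
            subprocess.check_call(
                [sys.executable, "-m", "pip", "install", "distlib"])
            importlib.invalidate_caches()   # pick up the newly installed package
            return importlib.import_module("distlib")

    print("using distlib", ensure_distlib().__version__)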
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Sat Jul 13 08:29:20 2013 From: nad at acm.org (Ned Deily) Date: Fri, 12 Jul 2013 23:29:20 -0700 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) References: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> Message-ID: In article <55B209B3-9576-4CF0-B58C-2A1E692AFFF1 at stufft.io>, Donald Stufft wrote: > On Jul 13, 2013, at 1:31 AM, Nick Coghlan wrote: > > I'm currently leaning towards offering both, as we're going to need a tool > > for bootstrapping source builds, but the simplest way to bootstrap pip for > > Windows and Mac OS X users is to just *bundle a copy with the binary > > installers*. So long as the bundled copy looks *exactly* the way it would > > if installed later (so it can update itself), then we avoid the problem of > > coupling the pip update cycles to the standard library feature release > > cycle. The bundled version can be updated to the latest available versions > > when we do a Python maintenance release. Off the top of my head, including a copy of pip as a pre-installed global site-package seems like a very reasonable suggestion. For the python.org OS X installer, it should be no problem to implement. It would be equally easy to implement for future 2.7 and 3.3 maintenance releases. > We could simply check it into the site-packages inside the CPython source > tree could we not? *Not* providing a bootstrap script and merely checking it > into the default site-packages means it's available for everyone. No matter > how python installed. If Linux packagers really don't want it installed by > default they could simply just remove it and either install it along with > Python, or continue to keep it how it is today as a separate package? This sounds an unnecessary complication. I suspect that there is a small minority of users who actually build Python from source. And they should know what they are doing. I believe most users either use a distribution-provided Python (via their OS) or a third-party package provider (including python.org binary installers and their derivatives). The OS distributors are going to do what they currently do; the only change needed is to persuade them to include their pip package as a mandatory dependency. Trying to hack the Python source build process to include a copy of pip is just not worth the effort. -- Ned Deily, nad at acm.org From ncoghlan at gmail.com Sat Jul 13 08:31:16 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Jul 2013 16:31:16 +1000 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> References: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> Message-ID: On 13 July 2013 16:05, Donald Stufft wrote: > > On Jul 13, 2013, at 1:31 AM, Nick Coghlan wrote: > > I'm currently leaning towards offering both, as we're going to need a tool > for bootstrapping source builds, but the simplest way to bootstrap pip for > Windows and Mac OS X users is to just *bundle a copy with the binary > installers*. So long as the bundled copy looks *exactly* the way it would > if installed later (so it can update itself), then we avoid the problem of > coupling the pip update cycles to the standard library feature release > cycle. The bundled version can be updated to the latest available versions > when we do a Python maintenance release. 
> > > We could simply check it into the site-packages inside the CPython source > tree could we not? *Not* providing a bootstrap script and merely checking > it into the default site-packages means it's available for everyone. No > matter how python installed. > Source code that isn't maintained through bugs.python.org isn't getting checked into the CPython repo - see PEP 360. Getting the latest released version of something from PyPI is a different story, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 08:34:49 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 02:34:49 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> Message-ID: <4376358C-69DC-4F8B-A23C-0C6E63BD14B0@stufft.io> On Jul 13, 2013, at 2:29 AM, Ned Deily wrote: >> We could simply check it into the site-packages inside the CPython source >> tree could we not? *Not* providing a bootstrap script and merely checking it >> into the default site-packages means it's available for everyone. No matter >> how python installed. If Linux packagers really don't want it installed by >> default they could simply just remove it and either install it along with >> Python, or continue to keep it how it is today as a separate package? > > This sounds an unnecessary complication. I suspect that there is a > small minority of users who actually build Python from source. And they > should know what they are doing. I believe most users either use a > distribution-provided Python (via their OS) or a third-party package > provider (including python.org binary installers and their derivatives). > The OS distributors are going to do what they currently do; the only > change needed is to persuade them to include their pip package as a > mandatory dependency. Trying to hack the Python source build process to > include a copy of pip is just not worth the effort. Okies, thought it might be simpler :) Doesn't matter to me where in the process it happens at :) I don't install from source. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From regebro at gmail.com Sat Jul 13 10:02:17 2013 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 13 Jul 2013 10:02:17 +0200 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: <3A9FAB73-6AE4-4E26-959B-5346F24ED306@stufft.io> References: <7F3EBECB-1DEC-4187-A33D-31284AB2A944@coderanger.net> <3A9FAB73-6AE4-4E26-959B-5346F24ED306@stufft.io> Message-ID: On Sat, Jul 13, 2013 at 8:07 AM, Donald Stufft wrote: > +1 to this as well. Ideally, if we go down this route, installing python > just comes with pip preinstalled. However that takes place :) Well, I don't want a "however that takes place" that causes more packaging problems in the future, so I'm -1 on that. :-) I think including it in binary installers in such a way that pip can upgrade itself is definitely a good idea. Including it in the source code distributions seems less of a benefit. 
In fact, I don't even think I want it as things stand right now. Story time: This week a lot of people had problems with zc.buildout, because of the new releases of setuptools and distribute. Essentially they tried to upgrade themselves and failed for various reasons, such as distribute or setuptools being installed in the system python and not being upgradeable, or not being upgradeable for other reasons, like permissions. Mostly, I had no problems, because the Python installs I use have only one package installed: Virtualenv. That means that for most buildouts I used, buildout had itself installed setuptools or distribute in its own isolated way. I only got errors where an overly complex build tool insisted on making virtualenvs, installing distribute in them and then running buildout with those virtualenvs. In those cases I had to run bin/pip -U distribute, so it was also an easy fix. So by default I prefer to have no packages except virtualenv for my source-built Pythons. This isn't a big issue, and I'm not gonna throw a hissyfit or anything, but I think I'll -1 the idea of including pip in a source install. Having pyvenv install pip by default in the new environment is +lots from me though. Also, having pyvenv be able to install itself in a non-empty directory would be great. ;-) //Lennart From p.f.moore at gmail.com Sat Jul 13 10:54:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 09:54:38 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> Message-ID: On 13 July 2013 01:02, Donald Stufft wrote: > >> The installer side of things the purist side of me doesn't like adding > it to > >> the standard library for all the same reasons but the pragmatic side of > me > >> wants it there because it enables fetching the other bits that are > needed for > >> "pip install X" to be a reasonable official response to these kind of > >> questions. But I pushed for and still believe that if a prerequisite for > >> doing that involves "locking" in pip or any of it's dependencies by > adding > >> them to the standard library then I am vehemently against doing it. > > > > Nobody seems to be suggesting doing that, though. > > I was (trying?) to explain that my belief doesn't extend to only distlib here and > instead to the entire toolchain. (The above quote may not be the best point to comment on, but I wanted to avoid quoting the whole text just to make a general point on this subject) In my view packaging (specifically install) tools are a bit different from other things, because generally packaging tools with dependencies suck. Look at pip's reliance on setuptools, for example. Managing setuptools with pip is a pain, and bootstrapping pip involves getting setuptools installed without already having pip available. I'm +1 on having basic infrastructure in the stdlib, because that way people can concentrate on innovating in more important areas of packaging rather than endlessly reinventing basic stuff. The trick is knowing what counts as basic infrastructure. Things I have *regularly* come up against, over a long period of writing little tools: 1. Version parsing and ordering (I usually use distutils.LooseVersion, just because it's available and "close enough" :-() 2.
Reading metadata from distributions (not just parsing it, but getting it out of dist-info files or sdists and the like as well) 3. Installing wheels 4. Locating distributions on PyPI or local indexes At the moment, my choices are to write my own code (usually meaning far more code than the actual functionality I want to write!) require distlib (not good in a tool for building zero-dependency venvs from the ground up, for example) or vendoring in distlib (impractical in a one-file script). Having something in the stdlib (even if it's only able to bootstrap distlib or an alternative) solves all of these issues. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 11:05:24 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 10:05:24 +0100 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On 13 July 2013 06:31, Nick Coghlan wrote: > * bundling a *full* copy of pip with the Python installers for Windows and > Mac OS X, but installing it to site-packages rather than to the standard > library directory. That way pip can be used to upgrade itself as normal, > rather than making it part of the standard library per se. This is then > closer to the "bundled application" model adopted for IDLE in PEP 434 (we > could, in fact, move to distributing idle the same way). How robust is the process of upgrading pip using itself? Specifically on Windows, where these things typically seem less reliable. Personally, I have never upgraded pip using itself, because I only ever install pip in virtualenvs, which don't have a lifespan as long as a pip release cycle :-) It would be easy to imagine a new pip release resulting in a *lot* of bugs raised against Python (rather than pip) saying that the upgrade fails. And of course if an upgrade fails, we can't just release a new version of pip that fixes the issue, because it's the *old* version that is installed and has to do the upgrade. So there's manual fiddling to do. Not a good experience for Python users. My current workflow is to have absolutely nothing installed in the system Python and use virtualenvs for everything. This is a bit extreme, but the issues I've hit in the past when package management has gone wrong have made me very cautious. If the pip upgrade process is rock-solid, this isn't an issue, but I'm not sure that it is, myself. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Sat Jul 13 14:12:01 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 13 Jul 2013 13:12:01 +0100 (BST) Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> Message-ID: <1373717521.98532.YahooMailNeo@web171402.mail.ir2.yahoo.com> > From: Donald Stufft >As I said in my email, because it's more or less standalone and it has the >greatest utility outside of installers/builders/archivers/indexes. Even if that were true, it doesn't mean that it's the *only* thing that's worth considering. >I've looked at many other languages where they had widely successful >packaging tools that weren't added to the standard lib until they were >ubiquitous and stable. 
Something the new tools for Python are not. So I >don't think adding it to the standard library is required. As I said earlier, I'm not arguing for *premature* inclusion of distlib or anything else in the stdlib. I'm only saying that there's less likelihood that any one approach outside the stdlib will get univerally adopted, leading to balkanisation. >to reuse some it's functionality. So pointing towards setuptools just exposes >the fact that improving it in the standard library was hard enough that it was >done externally. It seems like it wasn't for technical reasons that this approach was taken, just as Distribute wasn't forked from setuptools for technical reasons. >Well I am of the mind that the standard library is where software goes to die, and No kidding? :-) >want to use my software at all. A huge thing i've been trying to push for is decoupling >packaging from a specific implementation so that we have a "protocol" (ala HTTP) >and not a "tool" (ala distutils). However the allure of working to the implementation >and not the standard is fairly high when there is a singular blessed implementation. I'm not aware of this - have you published any protocols around the work you're doing on warehouse, which Nick said was going to be the next-generation PyPI? >It's funny you picked and example where improvements *couldn't* take place and >the entire system had to be thrown out and a new one written. getopt had to become a >new module named opt parse, which had to become a new module named argparse I picked that example specifically to show that even if things go wrong, it's not the end of the world. >You can gain interoperability in a few ways. One way is to just pick an implementation If that were done, it wouldn't make any difference whether the thing picked were in the stdlib or not. But people have a tendency to roll their own stuff, whether there's a good technical reason or not. >and make that the standard. Another is to define *actual* standards. The second >one is harder, requires more thought and work. But it means that completely >different software can work together. It means that something written in Ruby >can easily work with a python package without shelling out to Python or without That's exactly why there are all these packaging PEPs around, isn't it? >And that's fine for a certain class of problems. It's not that useful for something >where you want interoperability outside of that tool. How terrible would it be if >HTTP was "well whatever Apache does, that's what HTTP is". That wouldn't have been so terrible if you replace "Apache" with "W3C", since you would have a reference implementation by the creators of the standard. >A singular blessed tool in the standard library incentivizes the standard becoming >and implementation detail. I *want* there to be multiple implementations written by >different people working on different "slices" of the problem. That incentivizes doing >the extra work on PEPs and other documents so that we maintain a highly documented >standard. It's true that adding something to the standard library doesn't rule that out >but it provides an incentive against properly doing standards because it's easier and >simpler to just change it in the implementation. Are you planning to produce any standards relating to PyPI-like functionality? This is important for the dependency resolution "slice", amongst others. The flip side of this coin is, talking in the abstract without any working code is sub-optimal. 
It's reasonable for standards and implementations of them to grow together, because each informs the other, at least in the early stages. Most standards PEPs are accepted with a reference implementation in place. >It's not blessed and a particular packaging project should use it if it fits their >needs and they want to use it. Or they shouldn't use it if they don't want. >Standards exist for a reason. So you can have multiple implementations that >all work together. That's true independent of whether one particular implementation of the standard is blessed in some way. >I didn't make any claims as to it's stability or the amount of testing that went into >it. My ability to be convinced of that stems primarily from the fact that it's sort of >a side piece of the whole packaging infrastructure and toolchain and it's also >a piece that is most likely to be useful on it's own. But the arguments about agility and stability apply to any software - version-handling doesn't get a special pass. Proper version handling is central to dependency resolution and is hardly a side issue, though it's not especially complicated. I'll just finish by re-iterating that I think there should be some stdlib underpinning for packaging in general, and that there should be some focus on exactly what that underpinning should be, and that I'm by no means saying that distlib is it. I consider distlib as still in its early days but showing some promise (and deserving of more peer review than it has received to date). Regards, Vinay Sajip From ncoghlan at gmail.com Sat Jul 13 14:25:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 13 Jul 2013 22:25:30 +1000 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On 13 Jul 2013 19:05, "Paul Moore" wrote: > > On 13 July 2013 06:31, Nick Coghlan wrote: >> >> * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way). > > > How robust is the process of upgrading pip using itself? Specifically on Windows, where these things typically seem less reliable. > > Personally, I have never upgraded pip using itself, because I only ever install pip in virtualenvs, which don't have a lifespan as long as a pip release cycle :-) It would be easy to imagine a new pip release resulting in a *lot* of bugs raised against Python (rather than pip) saying that the upgrade fails. And of course if an upgrade fails, we can't just release a new version of pip that fixes the issue, because it's the *old* version that is installed and has to do the upgrade. So there's manual fiddling to do. Not a good experience for Python users. > > My current workflow is to have absolutely nothing installed in the system Python and use virtualenvs for everything. This is a bit extreme, but the issues I've hit in the past when package management has gone wrong have made me very cautious. > > If the pip upgrade process is rock-solid, this isn't an issue, but I'm not sure that it is, myself. I think we need to flip the dependencies so that pip as the installer has all the essential code for installation from PyPI and then setuptools and distlib depend on that pip infrastructure. 
No need to add anything to the standard library prematurely when we can add it to pip instead. Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 15:23:02 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 14:23:02 +0100 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <1373717521.98532.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> <1373717521.98532.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 13 July 2013 13:12, Vinay Sajip wrote: > I'll just finish by re-iterating that I think there should be some stdlib > underpinning for packaging in general, and that there should be some focus > on exactly what that underpinning should be, and that I'm by no means > saying that distlib is it. I consider distlib as still in its early days > but showing some promise (and deserving of more peer review than it has > received to date). +1 to all of this Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jul 13 15:31:01 2013 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jul 2013 09:31:01 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On Sat, Jul 13, 2013 at 8:25 AM, Nick Coghlan wrote: > > On 13 Jul 2013 19:05, "Paul Moore" wrote: > > > > On 13 July 2013 06:31, Nick Coghlan wrote: > >> > >> * bundling a *full* copy of pip with the Python installers for Windows > and Mac OS X, but installing it to site-packages rather than to the > standard library directory. That way pip can be used to upgrade itself as > normal, rather than making it part of the standard library per se. This is > then closer to the "bundled application" model adopted for IDLE in PEP 434 > (we could, in fact, move to distributing idle the same way). > > > > > > How robust is the process of upgrading pip using itself? Specifically on > Windows, where these things typically seem less reliable. > > > > Personally, I have never upgraded pip using itself, because I only ever > install pip in virtualenvs, which don't have a lifespan as long as a pip > release cycle :-) It would be easy to imagine a new pip release resulting > in a *lot* of bugs raised against Python (rather than pip) saying that the > upgrade fails. And of course if an upgrade fails, we can't just release a > new version of pip that fixes the issue, because it's the *old* version > that is installed and has to do the upgrade. So there's manual fiddling to > do. Not a good experience for Python users. > > > > My current workflow is to have absolutely nothing installed in the > system Python and use virtualenvs for everything. This is a bit extreme, > but the issues I've hit in the past when package management has gone wrong > have made me very cautious. > > > > If the pip upgrade process is rock-solid, this isn't an issue, but I'm > not sure that it is, myself. > > I think we need to flip the dependencies so that pip as the installer has > all the essential code for installation from PyPI and then setuptools and > distlib depend on that pip infrastructure. No need to add anything to the > standard library prematurely when we can add it to pip instead. > +1 on the inversion. 
I don't know what that will do to pip, but it makes sense to have the installer self-contained and the packaging/building libraries be something that you grab using the installer. Having to grab the packaging infrastructure to get an installer is the more painful route. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 15:32:57 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 14:32:57 +0100 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On 13 July 2013 13:25, Nick Coghlan wrote: > I think we need to flip the dependencies so that pip as the installer has > all the essential code for installation from PyPI and then setuptools and > distlib depend on that pip infrastructure. No need to add anything to the > standard library prematurely when we can add it to pip instead. Agreed, up to a point. What I've worked on in the past is things like automated wheel/sdist installers for systems with no internet access at all (distributions copied onto the PC via portable disk) or behind broken proxies (Internet Explorer works to download files from the net, but nothing else does). For those environments, the key to me is that I *only* use stuff that is available in a stock Python build. Something like python -m getpip *will not work*, so I have to roll my own means of bootstrapping. Of course I could copy a pip sdist to the machine, unpack it and run python setup.py install. More likely, I write bootstrap code in my script to do that automatically. It's a very specialised use case, certainly, and there are plenty of ways around the issues, but I have seen "give up and just use VBScript/perl" used as the fallback, as well, and that makes me sad... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jul 13 15:35:08 2013 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jul 2013 09:35:08 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> Message-ID: On Sat, Jul 13, 2013 at 2:29 AM, Ned Deily wrote: > In article <55B209B3-9576-4CF0-B58C-2A1E692AFFF1 at stufft.io>, > Donald Stufft wrote: > > On Jul 13, 2013, at 1:31 AM, Nick Coghlan wrote: > > > I'm currently leaning towards offering both, as we're going to need a > tool > > > for bootstrapping source builds, but the simplest way to bootstrap pip > for > > > Windows and Mac OS X users is to just *bundle a copy with the binary > > > installers*. So long as the bundled copy looks *exactly* the way it > would > > > if installed later (so it can update itself), then we avoid the > problem of > > > coupling the pip update cycles to the standard library feature release > > > cycle. The bundled version can be updated to the latest available > versions > > > when we do a Python maintenance release. > > Off the top of my head, including a copy of pip as a pre-installed > global site-package seems like a very reasonable suggestion. For the > python.org OS X installer, it should be no problem to implement. It > would be equally easy to implement for future 2.7 and 3.3 maintenance > releases. > Does Apple just install the python.org OS X installer for distribution, or do they build their own thing?
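(Coming back to Paul's offline scenario a little further up: the "bootstrap code in my script" he mentions is usually no more than the following sketch - the sdist filename is purely illustrative and a real script would add error handling - which is why having only the stock library available matters so much there.)

    import os
    import subprocess
    import sys
    import tarfile
    import tempfile

    def install_sdist_offline(sdist_path):
        # Unpack a locally copied sdist and run its setup.py; no network
        # access is needed at any point.
        tmp = tempfile.mkdtemp()
        with tarfile.open(sdist_path) as archive:
            archive.extractall(tmp)
        # An sdist unpacks to a single <name>-<version> directory.
        unpacked = os.path.join(tmp, os.listdir(tmp)[0])
        subprocess.check_call(
            [sys.executable, "setup.py", "install"], cwd=unpacked)

    # install_sdist_offline("pip-1.4.tar.gz")  # file copied across by hand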
My only worry is that Apple will not get the message about including pip and we will end up with an odd skew on OS X (I'm not worried about Linux distros as they all seem to follow Python development closely). And we obviously need to know if Martin is okay with doing something similar on Windows. > > > We could simply check it into the site-packages inside the CPython source > > tree could we not? *Not* providing a bootstrap script and merely > checking it > > into the default site-packages means it's available for everyone. No > matter > > how python installed. If Linux packagers really don't want it installed > by > > default they could simply just remove it and either install it along with > > Python, or continue to keep it how it is today as a separate package? > > This sounds an unnecessary complication. I suspect that there is a > small minority of users who actually build Python from source. And they > should know what they are doing. I believe most users either use a > distribution-provided Python (via their OS) or a third-party package > provider (including python.org binary installers and their derivatives). > The OS distributors are going to do what they currently do; the only > change needed is to persuade them to include their pip package as a > mandatory dependency. Trying to hack the Python source build process to > include a copy of pip is just not worth the effort. > And I doubt that will take much convincing, e.g. ActiveState includes their own installer -- PyPM -- in their distro. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 15:38:29 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 14:38:29 +0100 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On 13 July 2013 14:31, Brett Cannon wrote: > +1 on the inversion. I don't know what that will do to pip, it makes sense > to have the installer self-contained and the packaging/building libraries > be something that you grab using the installer. Having to grab the > packaging infrastructure to get an installer is the more painful route. TBH, I don't understand what "the inversion" implies. If it means pip taking all of the distlib/setuptools code that it currently uses, and making it part of pip and maintained within pip (essentially as a fork while the "inversion" is going on) then I'm not keen on that. Personally, I don't want to have to maintain that code myself - I guess if Vinay and Jason were pip maintainers and looked after that code, then that's an option. If it means pip vendoring distlib and setuptools, then OK (we do that for distlib already) but I don't see the benefit - no-ione should be doing "from pip.vendor.distlib.version import Version". I'd need to know better what it means for pip, I guess... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jul 13 15:46:57 2013 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jul 2013 09:46:57 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On Sat, Jul 13, 2013 at 1:31 AM, Nick Coghlan wrote: > In addition to the long thread based on Richard's latest set of updates, > I've also received a few off-list comments on the current state of the > proposal. So, I figured I'd start a new thread summarising my current point > of view and see where we want to go from there. > > 1. 
However we end up solving the bootstrapping problem, I'm *definitely* a > fan of us updating pyvenv in 3.4 to ensure that pip is available by default > in new virtual environments created with that tool. I also have an idea for > a related import system feature that I'll be sending to import-sig this > afternoon (it's a variant on *.pth and *.egg-link files that should be able > to address a variety of existing problems, including the one of > *selectively* making system and user packages available in a virtual > environment in a cross-platform way without needing to copy them) > > 2. While I was originally a fan of the "implicit bootstrapping on demand" > design, I no longer like that notion. While Richard's bootstrap script is a > very nice piece of work, the edge cases and "neat tricks" have built up to > the point where they trip my "if the implementation is hard to explain, > it's a bad idea" filter. > > Accordingly, I no longer think the implicit bootstrapping is a viable > option. > > 3. That means there are two main options available to us that I still > consider viable alternatives (the installer bundling idea was suggested in > one of the off list comments I mentioned): > > * an explicit bootstrapping script > * bundling a *full* copy of pip with the Python installers for Windows and > Mac OS X, but installing it to site-packages rather than to the standard > library directory. That way pip can be used to upgrade itself as normal, > rather than making it part of the standard library per se. This is then > closer to the "bundled application" model adopted for IDLE in PEP 434 (we > could, in fact, move to distributing idle the same way). > > I'm currently leaning towards offering both, as we're going to need a tool > for bootstrapping source builds, but the simplest way to bootstrap pip for > Windows and Mac OS X users is to just *bundle a copy with the binary > installers*. So long as the bundled copy looks *exactly* the way it would > if installed later (so it can update itself), then we avoid the problem of > coupling the pip update cycles to the standard library feature release > cycle. The bundled version can be updated to the latest available versions > when we do a Python maintenance release. > > For Linux, if you're using the system Python on a Debian or Fedora > derivative, then "sudo apt-get python-pip" and "sudo yum install > python-pip" are both straightforward, and if you're using something else, > then it's unlikely getting pip bootstrapped using the bootstrap script is a > task that will bother you :) > > The "python -m getpip" command is still something we will want to provide, > as it is useful to people that build their own copy of Python from source. > But is it going to make a difference? If we shift to using included copies of pip in binary installers over a bootstrap I say leave out the bootstrap as anyone building from source should know how to get pip installed on their machine or venv. The only reason I see it worth considering is if pyvenv starts bootstrapping pip and we want to support the case of pip not being installed. But if we are including it in the binary installer and are going to assume it's available through OS distros, then there isn't a need to as pip can then install pip for us into the venv and skip any initial pip bootstrap. 
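As a rough picture of what "pyvenv installs pip by default" could look like if it were layered on today's pieces (the get-pip.py URL is the one mentioned below; whether the eventual implementation downloads anything, or copies a bundled wheel instead, is exactly what is being decided):

    import os
    import subprocess
    import urllib.request
    import venv

    GET_PIP_URL = "https://pypi.python.org/get-pip.py"

    def venv_with_pip(env_dir):
        venv.create(env_dir)
        # The venv layout differs between POSIX and Windows.
        if os.name == "nt":
            python = os.path.join(env_dir, "Scripts", "python.exe")
        else:
            python = os.path.join(env_dir, "bin", "python")
        script = os.path.join(env_dir, "get-pip.py")
        urllib.request.urlretrieve(GET_PIP_URL, script)
        # Running the installer with the venv's own interpreter puts pip
        # into the venv rather than into the parent installation.
        subprocess.check_call([python, script])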
If pip isn't found we can simply either point to the docs in the failure message or print out the one-liner it takes to install pip (and obviously there can be a --no-pip flag to skip this for people who want to install it manually like me who build from source). IOW I think taking the worldview in Python 3.4 that pip will come installed with Python unless you build from source negates the need for the bootstrap script beyond just saying ``curl https://pypi.python.org/get-pip.py | python`` if pip isn't found. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Jul 13 15:52:01 2013 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jul 2013 09:52:01 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On Sat, Jul 13, 2013 at 9:38 AM, Paul Moore wrote: > On 13 July 2013 14:31, Brett Cannon wrote: > >> +1 on the inversion. I don't know what that will do to pip, it makes >> sense to have the installer self-contained and the packaging/building >> libraries be something that you grab using the installer. Having to grab >> the packaging infrastructure to get an installer is the more painful route. > > > TBH, I don't understand what "the inversion" implies. If it means pip > taking all of the distlib/setuptools code that it currently uses, and > making it part of pip and maintained within pip (essentially as a fork > while the "inversion" is going on) then I'm not keen on that. Personally, I > don't want to have to maintain that code myself - I guess if Vinay and > Jason were pip maintainers and looked after that code, then that's an > option. If it means pip vendoring distlib and setuptools, then OK (we do > that for distlib already) > The point is you shouldn't have to grab a packaging tool just to install stuff if you never need the packaging tool. Since pip is supposed to be *the* first thing you install for Python you don't want that to have its own dependencies, muddying up the installation process. > but I don't see the benefit - no-ione should be doing "from > pip.vendor.distlib.version import Version". > > That's just asking for trouble if someone did that (plus if you did that it would be pip._vendor to get the privacy point across). > I'd need to know better what it means for pip, I guess... > I suspect we all do. =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Sat Jul 13 16:39:45 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 13 Jul 2013 14:39:45 +0000 (UTC) Subject: [Distutils] Current status of PEP 439 (pip boostrapping) References: Message-ID: Nick Coghlan gmail.com> writes: > 1. However we end up solving the bootstrapping problem, I'm *definitely* > a fan of us updating pyvenv in 3.4 to ensure that pip is available by > default in new virtual environments created with that tool. Will that need green-lighting on python-dev? As events have shown, that script has needed updating between Python releases. OTOH, I'm not sure anyone could have predicted at 3.3 release time that setuptools and Distribute would kiss and make up :-) We should probably ensure that the pip and setuptools URLs used in that script (pyvenvex.py, to be renamed to pyvenv.py) are formally agreed with the relevant maintainers. 
Or, since those URLs just fetch the latest releases, perhaps some different URLs should be used which refer to more stable releases (for some definition of "more stable") - perhaps those should be python.org URLs, rather than BitBucket and GitHub as they are at present. That way, changing the resources which those URLs reference would have to be an active decision by someone, rather than just following the latest developments on setuptools and pip. There are pluses and minuses either way, of course. > 2. While I was originally a fan of the "implicit bootstrapping on demand" > design, I no longer like that notion. While Richard's bootstrap script is > a very nice piece of work, the edge cases and "neat tricks" have built up > to the point where they trip my "if the implementation is hard to > explain, it's a bad idea" filter. > Accordingly, I no longer think the implicit bootstrapping is a viable > option. But if your reservation stems from one specific implementation of the idea, then might not an alternative implementation fit the bill? Consider: the pyvenvex.script merely runs bootstrapping scripts from setuptools and pip in the venv - there's no magic. I couldn't see Richard's script referenced in the PEP, which just referred to the PIP issue tracker which had no obvious link to any commit or script. So I don't know what the edge cases and neat tricks are that you're referring to. I adapted the pyvenvex.py script into a getpip.py script, available here: https://gist.github.com/vsajip/5990837 82 lines all told - what cases does it not cover? It installs setuptools and pip into the system site-packages for the invoking Python, if not already present. It can, of course, be refined to e.g. install even if the packages are already present, which is tantamount to upgrading. I smoke-tested the script on vanilla Python 3.3 installations on Windows and OS X. On OS X, the pip script was written to the appropriate Frameworks folder, but not to /usr/local/bin - I assume it would be simple enough to arrange that? On Windows, the pip script (including Windows native launcher) were written to c:\Python33\Scripts. > The bundling idea will obviously need to be discussed with the installer > builders, and on python-dev in general If python-dev agrees to the updated pyvenv.py script, then this type of addition should be uncontentious, as it basically does the same thing. It seems a whole lot less work than bundling, to me. Regards, Vinay Sajip From p.f.moore at gmail.com Sat Jul 13 16:54:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 15:54:53 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 Message-ID: This issue has been skirted round for some time now, and I think it needs explicit discussion, as I am not at all sure everyone has the same expectations. We're talking about Python 3.4 installations having pip as the default package manager - whether by bundling, having a bootstrap process or whatever. Regardless of the means, pip will be *the* installer for Python 3.4+. And yet, I don't think pip 1.4 currently does what people want "the Python 3.4 pip" to do in some ways - and we need to make sure that any work on the pip side is understood, agreed to, and planned to match the Python 3.4 timescales. So, here's my initial list of things that I think people might be expecting to happen. This is just my impressions, and I don't necessarily have a view on the individual items. 
And if anyone else can think of other things to add to the list, please do so! 1. Install to user-packages by default. 2. Not depend on setuptools (??? - Nick's "inversion" idea) 3. Possibly change the wrapper command name from pip to pip3 on Unix. 4. Ensure that pip upgrading itself in-place is sufficiently robust and reliable that users don't get "stuck" on the Python-supplied version. I'm sure I've seen people say other things that have made me think "are you expecting the pip maintainers to make that change?" in the various threads, so I doubt this list is definitive. Comments anyone? Is this discussion premature? The pip maintainers team is not huge, so we'll need time (or assistance!) to plan in and make changes like this, if they are needed... At a minimum, can we get the key items logged on the pip issue tracker with a milestone of Python 3.4? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 17:00:27 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 16:00:27 +0100 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On 13 July 2013 13:25, Nick Coghlan wrote: > I think we need to flip the dependencies so that pip as the installer has > all the essential code for installation from PyPI and then setuptools and > distlib depend on that pip infrastructure. No need to add anything to the > standard library prematurely when we can add it to pip instead. If we do this, I think people will start to expect to be able to code scripts to the pip API. (We've had people ask this on the pip tracker already). If we don't want pip to end up like distutils (with people depending on all sorts of random bits of the internals, because there's no documented API) as a backward-compatibility nightmare, we need to consider how to handle this. Of course, saying explicitly "only the python -m pip command line interface is stable and supported" may well be enough. But didn't we just say that setuptools and distlib depend on the pip API? So either they have special privileges (presumably because they are under the umbrella of the PyPA) or we can't avoid documenting/supporting some API... I don't believe that pip is currently in a state to offer a solid documented internal API. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 17:01:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 16:01:22 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: Also (see my reply to Nick's "inversion" proposal) we can add: 5. Provide a stable documented programming interface. Paul On 13 July 2013 15:54, Paul Moore wrote: > This issue has been skirted round for some time now, and I think it needs > explicit discussion, as I am not at all sure everyone has the same > expectations. > > We're talking about Python 3.4 installations having pip as the default > package manager - whether by bundling, having a bootstrap process or > whatever. Regardless of the means, pip will be *the* installer for Python > 3.4+. And yet, I don't think pip 1.4 currently does what people want "the > Python 3.4 pip" to do in some ways - and we need to make sure that any work > on the pip side is understood, agreed to, and planned to match the Python > 3.4 timescales. 
> > So, here's my initial list of things that I think people might be > expecting to happen. This is just my impressions, and I don't necessarily > have a view on the individual items. And if anyone else can think of other > things to add to the list, please do so! > > 1. Install to user-packages by default. > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > 3. Possibly change the wrapper command name from pip to pip3 on Unix. > 4. Ensure that pip upgrading itself in-place is sufficiently robust and > reliable that users don't get "stuck" on the Python-supplied version. > > I'm sure I've seen people say other things that have made me think "are > you expecting the pip maintainers to make that change?" in the various > threads, so I doubt this list is definitive. > > Comments anyone? Is this discussion premature? The pip maintainers team is > not huge, so we'll need time (or assistance!) to plan in and make changes > like this, if they are needed... > > At a minimum, can we get the key items logged on the pip issue tracker > with a milestone of Python 3.4? > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 17:03:53 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 11:03:53 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: On Jul 13, 2013, at 10:54 AM, Paul Moore wrote: > This issue has been skirted round for some time now, and I think it needs explicit discussion, as I am not at all sure everyone has the same expectations. > > We're talking about Python 3.4 installations having pip as the default package manager - whether by bundling, having a bootstrap process or whatever. Regardless of the means, pip will be *the* installer for Python 3.4+. And yet, I don't think pip 1.4 currently does what people want "the Python 3.4 pip" to do in some ways - and we need to make sure that any work on the pip side is understood, agreed to, and planned to match the Python 3.4 timescales. > > So, here's my initial list of things that I think people might be expecting to happen. This is just my impressions, and I don't necessarily have a view on the individual items. And if anyone else can think of other things to add to the list, please do so! > > 1. Install to user-packages by default. Do people really want this? I hadn't seen it (other than if pip was installed to user by default). I think it's a bad idea to switch this on people. I doubt the user-packages is going to be in people's default PATH so they'll easily get into cases where things are installed but they don't know where it was installed too. > 2. Not depend on setuptools (??? - Nick's "inversion" idea) I wanted to do this anyways. It will still "depend" on it, but it will just bundle setuptools itself like its other dependencies. For pip dependencies are an implementation detail not an actual thing it can/should have. > 3. Possibly change the wrapper command name from pip to pip3 on Unix. Not sure on this. Ideally i'd want the commands to be pipX.Y, pipX, and pip all available and not install the less specific ones if they already exist but that might be too hard? > 4. Ensure that pip upgrading itself in-place is sufficiently robust and reliable that users don't get "stuck" on the Python-supplied version. I've always used pip to upgrade pip. The only time i've had problems is when setuptools messes up (which would be prevented if bundled). 
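One detail worth noting for item 4, particularly on Windows: the classic failure mode is the pip.exe wrapper trying to overwrite itself while it is still running. Driving the upgrade through the interpreter sidesteps that, since ``python -m pip`` already works:

    python -m pip install --upgrade pip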
> > I'm sure I've seen people say other things that have made me think "are you expecting the pip maintainers to make that change?" in the various threads, so I doubt this list is definitive. > > Comments anyone? Is this discussion premature? The pip maintainers team is not huge, so we'll need time (or assistance!) to plan in and make changes like this, if they are needed... > > At a minimum, can we get the key items logged on the pip issue tracker with a milestone of Python 3.4? > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tseaver at palladion.com Sat Jul 13 17:05:29 2013 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 13 Jul 2013 11:05:29 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/13/2013 08:25 AM, Nick Coghlan wrote: > I think we need to flip the dependencies so that pip as the installer > has all the essential code for installation from PyPI and then > setuptools and distlib depend on that pip infrastructure. No need to > add anything to the standard library prematurely when we can add it to > pip instead. - -1. That would effectively mean inlining the bulk of setuptools' code into pip (which is just a UI / policy shim over it). You might as well just have your bootstrapper install both pip and setuptools and be done Unless distlib (or something like it) lands in the stdlib with enough features to support a setuptools-less pip, of course. At-which-point-the-State-will-wither-away'ly, Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlHhbLkACgkQ+gerLs4ltQ51fQCfZrmZN5mJKrtoGFTk0YqQrBHd F/IAnRp6XjoU4SpXZ4v3Uz6iOBrCZZZn =gSA5 -----END PGP SIGNATURE----- From donald at stufft.io Sat Jul 13 17:06:23 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 11:06:23 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: On Jul 13, 2013, at 11:00 AM, Paul Moore wrote: > Of course, saying explicitly "only the python -m pip command line interface is stable and supported" may well be enough. But didn't we just say that setuptools and distlib depend on the pip API? So either they have special privileges (presumably because they are under the umbrella of the PyPA) or we can't avoid documenting/supporting some API... > I don't think we need this any more than if pip was not pre-installed. So still nice to have for things like chef but not a requirement. setuptools and distlib won't depend on pip, pip will just bundle them (like it already does for distlib). The idea should be that pip itself has no dependencies because a package manager with dependencies is kind of strange and can easily lead to issues where the package manager breaks and is unifiable (e.g. 
setuptools breaks and pip can't be used to fix it). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Jul 13 17:09:00 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 11:09:00 -0400 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: <5AB14BB8-3C8E-473C-A52F-2F7EF2D965ED@stufft.io> On Jul 13, 2013, at 11:05 AM, Tres Seaver wrote: > Signed PGP part > On 07/13/2013 08:25 AM, Nick Coghlan wrote: > > I think we need to flip the dependencies so that pip as the installer > > has all the essential code for installation from PyPI and then > > setuptools and distlib depend on that pip infrastructure. No need to > > add anything to the standard library prematurely when we can add it to > > pip instead. > > - -1. That would effectively mean inlining the bulk of setuptools' code > into pip (which is just a UI / policy shim over it). You might as well > just have your bootstrapper install both pip and setuptools and be done > > Unless distlib (or something like it) lands in the stdlib with enough > features to support a setuptools-less pip, of course. > > At-which-point-the-State-will-wither-away'ly, > > > Tres. > - -- > =================================================================== > Tres Seaver +1 540-429-0999 tseaver at palladion.com > Palladion Software "Excellence by Design" http://palladion.com > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I was planning on doing this to pip anyways. pip should not have any dependencies setuptools or otherwise. So regardless of what happens with this PEP I want pip to be inlining setuptools and providing the code to make that transparent during install. This would mean that for people who are _just_ installing packages they don't need setuptools installed. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Jul 13 17:12:35 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 11:12:35 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: <14A2CCC3-305B-4CE3-82E7-D85903F9F258@stufft.io> On Jul 13, 2013, at 11:01 AM, Paul Moore wrote: > 5. Provide a stable documented programming interface. As I said in the other thread I don't think this is required any more than it does normally. I do think we need to have testing infrastructure in pip that tests against the development branch of CPython though. If pip is going to be included in the releases we need to make sure it works prior to it being released. 
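To make the inlining idea concrete: the awkward part is that setup.py runs in a child process and does ``import setuptools`` on its own, so a vendored copy has to be made visible to that child. One way to do that - purely an illustration, assuming a vendored tree along the lines of the pip._vendor naming suggested earlier, not a description of what pip actually does - is to prepend the vendor directory to PYTHONPATH for the subprocess:

    import os
    import subprocess
    import sys

    def run_setup_with_vendored_setuptools(source_dir, vendor_dir):
        # Make the vendored packages importable in the child process
        # without installing anything into site-packages.
        env = os.environ.copy()
        env["PYTHONPATH"] = os.pathsep.join(
            filter(None, [vendor_dir, env.get("PYTHONPATH")]))
        subprocess.check_call(
            [sys.executable, "setup.py", "install"],
            cwd=source_dir, env=env)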
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 17:15:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 16:15:22 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: On 13 July 2013 16:03, Donald Stufft wrote: > > > 1. Install to user-packages by default. > > Do people really want this? I hadn't seen it (other than if pip was > installed to user by default). I think it's a bad idea to switch this on > people. I doubt the user-packages is going to be in people's default PATH > so they'll easily get into cases where things are installed but they don't > know where it was installed too. > I believe Nick wants to make user-packages the default. I know at least some of the pip maintainers (yourself included) have reservations. Personally, I've never used user-packages, so I don't know what issues might arise. But I hope to try it out sometime when I get the chance, just to get some specific information. > > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > > I wanted to do this anyways. It will still "depend" on it, but it will > just bundle setuptools itself like its other dependencies. For pip > dependencies are an implementation detail not an actual thing it can/should > have. > Bundling is not the same as Nick's suggestion. I personally have no problem with bundling, but pip install with a bundled setuptools might not work because the setup subprocess won't see the bundled setuptools when it imports it in setup.py. But either way, it's doable, I just want to know if it's on the critical path... > > 3. Possibly change the wrapper command name from pip to pip3 on Unix. > > Not sure on this. Ideally i'd want the commands to be pipX.Y, pipX, and > pip all available and not install the less specific ones if they already > exist but that might be too hard? > > > 4. Ensure that pip upgrading itself in-place is sufficiently robust and > reliable that users don't get "stuck" on the Python-supplied version. > > I've always used pip to upgrade pip. The only time i've had problems is > when setuptools messes up (which would be prevented if bundled). > I've never tried myself, but I'm on Windows and I expect in-place stuff like this to fail. Maybe I'm paranoid :-) Again I need to check. Thanks for the comments. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 17:23:35 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 11:23:35 -0400 Subject: [Distutils] PEP 439 and pip bootstrap updated In-Reply-To: <1373717521.98532.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <51DCE0B5.6030506@oddbird.net> <49DEE7D4-9223-4B8C-B235-E057F3B2DCE3@stufft.io> <1770C961-BC2F-4062-85BB-A82131013FD6@stufft.io> <096D11E0-7859-4466-BA95-FE0BDC43D0B9@stufft.io> <95EAEB1E-07CD-416A-8ADA-2C872FF4609B@stufft.io> <1373717521.98532.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <5F1F3DAC-1BD5-40D4-8671-465A0781E90D@stufft.io> On Jul 13, 2013, at 8:12 AM, Vinay Sajip wrote: > I'm not aware of this - have you published any protocols around the work you're doing on warehouse, which Nick said was going to be the next-generation PyPI? I think we're talking past each other at this point but I wanted to respond to this point. 
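(On item 1, for anyone who, like Paul, hasn't tried the user scheme yet: the directories involved are easy to inspect up front, which takes some of the mystery out of "where did it get installed to". ``python -m site --user-site`` prints the first of these from the command line, and ``pip install --user SomePackage`` is the corresponding install command; the package name is arbitrary.)

    import site

    # Where "pip install --user" puts importable code for this interpreter:
    print(site.getusersitepackages())
    # Scripts land under the user base (bin/ on POSIX, Scripts\ on Windows),
    # which is the directory that would need to be on PATH:
    print(site.getuserbase())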
Warehouse will evolve by publishing standards yes. Currently its not making API changes and is primarily working on taking the existing APIs and porting them to a modern framework, adding tests, etc. I do have some changes I want to make to the API and I've started a PEP to propose it that once it's done will be published for discussion here at distutils-sig. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jaraco at jaraco.com Sat Jul 13 18:28:51 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Sat, 13 Jul 2013 16:28:51 +0000 Subject: [Distutils] beginner ticket for code base modernization of setuptools Message-ID: I've created the following ticket for Setuptools: https://bitbucket.org/pypa/setuptools/issue/35/code-base-modernization I'm inviting junior developers or students or other less experienced enthusiasts who are interested in learning more about the inner workings of Setuptools to assist in a code modernization effort to bring Setuptools more in line with the expectations of modern Python developers. If this effort might interest you, please follow the link for more details. Regards, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From jaraco at jaraco.com Sat Jul 13 18:37:56 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Sat, 13 Jul 2013 16:37:56 +0000 Subject: [Distutils] buildout/setuptools/distribute unhelpful error message (0.7.x issue?) In-Reply-To: <51D913F5.3080804@simplistix.co.uk> References: <51D913F5.3080804@simplistix.co.uk> Message-ID: Hi Chris, It looks like something is trying to install Setuptools 0.7.2, possibly with a temporary version of distribute or one that's not visible by default in your Python environment. When you get that error message, I suggest you upgrade away from distribute. The easiest way to do this if you have distribute installed is to 'easy_install -U distribute', which will grab distribute 0.7.3 and install setuptools>=0.7. If this doesn't work (as it may not if you in fact don't have Distribute), you may be able to pro-actively avoid the problem by installing the latest Setuptools (0.9 at the time of this writing) using the published installation instructions: https://pypi.python.org/pypi/setuptools/0.9 I hope that helps. Please report back if that doesn't get you going. Regards, Jason > -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig- > bounces+jaraco=jaraco.com at python.org] On Behalf Of Chris Withers > Sent: Sunday, 07 July, 2013 03:09 > To: distutils sig > Subject: [Distutils] buildout/setuptools/distribute unhelpful error message > (0.7.x issue?) > > Hi All, > > What is this exception trying to tell me? 
> > Downloading > https://pypi.python.org/packages/source/s/setuptools/setuptools- > 0.7.2.tar.gz > Extracting in /tmp/tmpJNVsOY > Now working in /tmp/tmpJNVsOY/setuptools-0.7.2 Building a Setuptools egg > in /tmp/tmpBLZGeg /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg > Traceback (most recent call last): > File "bootstrap.py", line 91, in > pkg_resources.working_set.add_entry(path) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 451, in > add_entry > self.add(dist, entry, False) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 542, in > add > self._added_new(dist) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 705, in > _added_new > callback(dist) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2727, in > > add_activation_listener(lambda dist: dist.activate()) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2227, in > activate > self.insert_on(path) > File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2328, in > insert_on > "with distribute. Found one at %s" % str(self.location)) > ValueError: A 0.7-series setuptools cannot be installed with distribute. > Found one at /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg > > I don't see any distribute in there, and I don't know where it might be... > > $ python2.6 > Python 2.6.8 (unknown, Jan 29 2013, 10:05:44) [GCC 4.7.2] on linux2 Type > "help", "copyright", "credits" or "license" for more information. > >>> import setuptools > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named setuptools > > cheers, > > Chris > > -- > Simplistix - Content Management, Batch Processing & Python Consulting > - http://www.simplistix.co.uk > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From brett at python.org Sat Jul 13 18:59:18 2013 From: brett at python.org (Brett Cannon) Date: Sat, 13 Jul 2013 12:59:18 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: On Sat, Jul 13, 2013 at 11:15 AM, Paul Moore wrote: > On 13 July 2013 16:03, Donald Stufft wrote: > >> >> > 1. Install to user-packages by default. >> >> Do people really want this? I hadn't seen it (other than if pip was >> installed to user by default). I think it's a bad idea to switch this on >> people. I doubt the user-packages is going to be in people's default PATH >> so they'll easily get into cases where things are installed but they don't >> know where it was installed too. >> > > I believe Nick wants to make user-packages the default. I know at least > some of the pip maintainers (yourself included) have reservations. > Personally, I've never used user-packages, so I don't know what issues > might arise. But I hope to try it out sometime when I get the chance, just > to get some specific information. > I would assume the executable script was installed next to the python binary but the library parts went into user-packages. That way -m would work for all binaries of the same version. > > >> > 2. Not depend on setuptools (??? - Nick's "inversion" idea) >> >> I wanted to do this anyways. It will still "depend" on it, but it will >> just bundle setuptools itself like its other dependencies. 
For pip >> dependencies are an implementation detail not an actual thing it can/should >> have. >> > > Bundling is not the same as Nick's suggestion. I personally have no > problem with bundling, but pip install with a bundled setuptools might not > work because the setup subprocess won't see the bundled setuptools when it > imports it in setup.py. But either way, it's doable, I just want to know if > it's on the critical path... > > >> > 3. Possibly change the wrapper command name from pip to pip3 on Unix. >> >> Not sure on this. Ideally i'd want the commands to be pipX.Y, pipX, and >> pip all available and not install the less specific ones if they already >> exist but that might be too hard? >> > Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed and each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sat Jul 13 19:21:58 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 10:21:58 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: > 1. Install to user-packages by default. > there was a thread a few weeks back on this. everyone seemed to agree at the end that just better error messages were enough. changing the default install location is a huge leap. http://mail.python.org/pipermail/distutils-sig/2013-May/020673.html > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > with the bootstrap installing setuptools, it's not necessary, but I plan on considering/helping/working on one or multiple of these for pip v1.5 anyway: 1) "bundling" setuptools (Donald's idea). it might not work, but interesting to try. lotta pros to doing this 2) replacing pkg_resources with distlib (vinay posted a PR for this) 3) if not #1, pip installing setuptools on-demand when building is needed (this was the old plan I think for PEP439 until the recent changes, and get's us closer to the "MEB"s model) Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sat Jul 13 19:26:02 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 10:26:02 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: 2) replacing pkg_resources with distlib (vinay posted a PR for this) > 3) if not #1, pip installing setuptools on-demand when building is needed > (this was the old plan I think for PEP439 until the recent changes, and > get's us closer to the "MEB"s model) > > to be clearer for everyone, #3 depends on #2, so that pip could install setuptools from wheel (without needing setuptools) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Sat Jul 13 19:30:29 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 13:30:29 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> On Jul 13, 2013, at 12:59 PM, Brett Cannon wrote: > Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed and each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue. I find the -m interface ugly as a primary cli api. It's ok for bonus functionality (ala json.tool) and debugging utilities (ala SimpleServer) but as a developer of user facing tools I don't think I'd ever want to tell them that they should use ``python -m`` to execute my tool. It's also a massive change in functionality from the existing pip interface. ``pip install`` is what everyone uses. The point is more or less moot though unless you're advocating not including an executable script at all. Because pip is already able to be executed with ``python -m pip`` however I don't believe i've seen anyone use that in practice. It also provides the "pip" and "pip-X.Y" commands which should probably be normalized to "pip", "pipX", and "pipX.Y". ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From noah at coderanger.net Sat Jul 13 19:31:32 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sat, 13 Jul 2013 10:31:32 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: <24297A81-EE5C-4150-860C-FFB484E6B468@coderanger.net> On Jul 13, 2013, at 9:59 AM, Brett Cannon wrote: > > > > On Sat, Jul 13, 2013 at 11:15 AM, Paul Moore wrote: > On 13 July 2013 16:03, Donald Stufft wrote: > > > 1. Install to user-packages by default. > > Do people really want this? I hadn't seen it (other than if pip was installed to user by default). I think it's a bad idea to switch this on people. I doubt the user-packages is going to be in people's default PATH so they'll easily get into cases where things are installed but they don't know where it was installed too. > > I believe Nick wants to make user-packages the default. I know at least some of the pip maintainers (yourself included) have reservations. Personally, I've never used user-packages, so I don't know what issues might arise. But I hope to try it out sometime when I get the chance, just to get some specific information. > > I would assume the executable script was installed next to the python binary but the library parts went into user-packages. That way -m would work for all binaries of the same version. > > > > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > > I wanted to do this anyways. 
It will still "depend" on it, but it will just bundle setuptools itself like its other dependencies. For pip dependencies are an implementation detail not an actual thing it can/should have. > > Bundling is not the same as Nick's suggestion. I personally have no problem with bundling, but pip install with a bundled setuptools might not work because the setup subprocess won't see the bundled setuptools when it imports it in setup.py. But either way, it's doable, I just want to know if it's on the critical path... > > > 3. Possibly change the wrapper command name from pip to pip3 on Unix. > > Not sure on this. Ideally i'd want the commands to be pipX.Y, pipX, and pip all available and not install the less specific ones if they already exist but that might be too hard? > > Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed and each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue. No, this is not how any user ever will expect unix programs to work. I know that python -m is very cute, and I use it myself for some debug and helper functionality at times, but it can never replace normal scripts. This is a user experience expectation, and we will have to meet it. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From noah at coderanger.net Sat Jul 13 19:36:30 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sat, 13 Jul 2013 10:36:30 -0700 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: <59573653-32BF-44DE-BCB8-6A38EE8613B6@coderanger.net> On Jul 13, 2013, at 6:46 AM, Brett Cannon wrote: > > > > On Sat, Jul 13, 2013 at 1:31 AM, Nick Coghlan wrote: > In addition to the long thread based on Richard's latest set of updates, I've also received a few off-list comments on the current state of the proposal. So, I figured I'd start a new thread summarising my current point of view and see where we want to go from there. > > 1. However we end up solving the bootstrapping problem, I'm *definitely* a fan of us updating pyvenv in 3.4 to ensure that pip is available by default in new virtual environments created with that tool. I also have an idea for a related import system feature that I'll be sending to import-sig this afternoon (it's a variant on *.pth and *.egg-link files that should be able to address a variety of existing problems, including the one of *selectively* making system and user packages available in a virtual environment in a cross-platform way without needing to copy them) > > 2. While I was originally a fan of the "implicit bootstrapping on demand" design, I no longer like that notion. While Richard's bootstrap script is a very nice piece of work, the edge cases and "neat tricks" have built up to the point where they trip my "if the implementation is hard to explain, it's a bad idea" filter. > > Accordingly, I no longer think the implicit bootstrapping is a viable option. > > 3. 
That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned): > > * an explicit bootstrapping script > * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way). > > I'm currently leaning towards offering both, as we're going to need a tool for bootstrapping source builds, but the simplest way to bootstrap pip for Windows and Mac OS X users is to just *bundle a copy with the binary installers*. So long as the bundled copy looks *exactly* the way it would if installed later (so it can update itself), then we avoid the problem of coupling the pip update cycles to the standard library feature release cycle. The bundled version can be updated to the latest available versions when we do a Python maintenance release. > > For Linux, if you're using the system Python on a Debian or Fedora derivative, then "sudo apt-get python-pip" and "sudo yum install python-pip" are both straightforward, and if you're using something else, then it's unlikely getting pip bootstrapped using the bootstrap script is a task that will bother you :) > > The "python -m getpip" command is still something we will want to provide, as it is useful to people that build their own copy of Python from source. > > But is it going to make a difference? If we shift to using included copies of pip in binary installers over a bootstrap I say leave out the bootstrap as anyone building from source should know how to get pip installed on their machine or venv. > > The only reason I see it worth considering is if pyvenv starts bootstrapping pip and we want to support the case of pip not being installed. But if we are including it in the binary installer and are going to assume it's available through OS distros, then there isn't a need to as pip can then install pip for us into the venv and skip any initial pip bootstrap. If pip isn't found we can simply either point to the docs in the failure message or print out the one-liner it takes to install pip (and obviously there can be a --no-pip flag to skip this for people who want to install it manually like me who build from source). > > IOW I think taking the worldview in Python 3.4 that pip will come installed with Python unless you build from source negates the need for the bootstrap script beyond just saying ``curl https://pypi.python.org/get-pip.py | python`` if pip isn't found. This is highly unhelpful for dealing with systems automation. For the foreseeable future, the bulk of Python 3.4 installations will either be source installs, or homegrown packages based on source installs. The bundled pip doesn't need to be included with, say, an hg clone that you then build and install, but it does have to come with an install from an official release source tarball. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 203 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From donald at stufft.io  Sat Jul 13 19:51:48 2013
From: donald at stufft.io (Donald Stufft)
Date: Sat, 13 Jul 2013 13:51:48 -0400
Subject: [Distutils] Expectations on how pip needs to change for Python 3.4
In-Reply-To: 
References: 
Message-ID: 

On Jul 13, 2013, at 12:59 PM, Brett Cannon wrote:

> Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed and each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue.

Also looking at what already ships with Python:

idle, idle2, idle2.7
smtpd.py, smtpd2.py, smtpd2.7.py
pydoc, pydoc2, pydoc2.7
2to3, 2to3-2, 2to3-2.7

This is also the convention anywhere someone does versioned scripts in a Python package in the ecosystem. PEP 439 is there to streamline the process so that Python dependencies are much easier to install and there's a smaller barrier to "entry", so that projects like Django can give simple instructions for dependencies instead of needing to opt not to have dependencies or having to give instructions on how to install the installer.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From p.f.moore at gmail.com  Sat Jul 13 19:55:24 2013
From: p.f.moore at gmail.com (Paul Moore)
Date: Sat, 13 Jul 2013 18:55:24 +0100
Subject: [Distutils] Expectations on how pip needs to change for Python 3.4
In-Reply-To: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io>
References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io>
Message-ID: 

On 13 July 2013 18:30, Donald Stufft wrote:
> On Jul 13, 2013, at 12:59 PM, Brett Cannon wrote:
> Could we just start to move away from an executable script and start promoting rather aggressively -m instead? It truly solves this problem and since the results are tied to the Python executable used (i.e. where something gets installed) it disambiguates what Python binary pip is going to work with (something I have trouble with thanks to Python 2 and 3 both being installed and each with their own pip installation). I realize older Python versions can't do this (I believe 2.6 and older can't for packages) but at least in the situation we are discussing here of bundling pip it's not an issue.
>
> I find the -m interface ugly as a primary cli api. It's ok for bonus functionality (ala json.tool) and debugging utilities (ala SimpleServer) but as a developer of user facing tools I don't think I'd ever want to tell them that they should use ``python -m`` to execute my tool.
>
> It's also a massive change in functionality from the existing pip interface. ``pip install`` is what everyone uses.
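To make the ``pip`` vs ``python -m pip`` debate concrete, both spellings can dispatch to exactly the same function. The sketch below is only an illustration of that pattern, not pip's actual layout; the module name ``mytool`` and its ``main()`` are hypothetical.

# mytool.py - a hypothetical single-module tool
from __future__ import print_function
import sys

def main(argv=None):
    # Both entry paths (console script and ``python -m mytool``) end up here.
    argv = sys.argv[1:] if argv is None else argv
    print("running with arguments:", argv)
    return 0

# ``python -m mytool`` executes this module as __main__, so this guard is
# what makes the -m invocation work.
if __name__ == "__main__":
    sys.exit(main())

A setuptools ``console_scripts`` entry point such as ``mytool = mytool:main`` would generate the ``mytool`` (and, if declared, ``mytoolX.Y``) wrappers pointing at the same ``main()``, so the choice between the two spellings is purely a user-interface question.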
> > The point is more or less moot though unless you're advocating not > including an executable script at all. Because pip is already able to be > executed with ``python -m pip`` however I don't believe i've seen anyone > use that in practice. It also provides the "pip" and "pip-X.Y" commands > which should probably be normalized to "pip", "pipX", and "pipX.Y". > That's the point of "aggressively promote". We'd advocate "python -m pip" as the primary means of running pip. I agree it's less convenient for users than having a simple "pip" command, but there are a number of downsides to "pip" being the primary interface (note that you can always alias pip to "python -m pip" in your shell - it's no harder than managing PATH, which is what many people need to do at the moment). 1. It's not *actually* the case that the command is always "pip". Maybe it's "pip3" if your system makes the default Python be python 2, but you want to use python 3. Maybe you're creating a virtualenv and you haven't activated it yet. In that case a plain "pip" will quietly do the wrong thing (at the moment, I don't install pip in my system python precisely to avoid this issue). 2. On Windows, ...\Python34\Scripts is not on PATH by default. Even if python is (as python.exe is in a different directory to the one distutils installs executables in). Again, you can change your own PATH. 3. There's a lot of clutter. On Windows, you have 3 executables (pip.exe, pip3.exe and pip3.4.exe) and 3 scripts alongside them. For one command. Apart from the first of these, the issues are all Windows ones, and it's reasonable to say "well, fix the Windows setup, then, it's silly". I have some sympathy with that view, but backward compatibility and many, many years of history will make that extremely difficult. Also, some of it may simply not be fixable because people won't agree on the solution (that's been the case in the past). I assume, perhaps naively, that improving the experience on Windows is just as key as improving it on Unix. In my view, the key initial userbase for the new packaging tools will be Windows users wanting access to binary wheels. Paul. PS I actually *do* prefer just having a pip command. It's just that I doubt I'll get that on Windows, no matter *what* approach people take - I'll have to "roll my own" solution. Obviously, aliasing "pip" to "python -m pip" is the easiest, and probably the one I'd choose, but then again, I know what I'm doing :-) A "normal" Windows user will just say "these instructions are c**p, there's no pip command" and either have to resort to Google (probably with a question like "why doesn't Python's packaging work?" which isn't good PR for us), or give up. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 19:58:43 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 18:58:43 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: On 13 July 2013 18:51, Donald Stufft wrote: > > On Jul 13, 2013, at 12:59 PM, Brett Cannon wrote: > > Could we just start to move away from an executable script and start > promoting rather aggressively -m instead? It truly solves this problem and > since the results are tied to the Python executable used (i.e. 
where > something gets installed) it disambiguates what Python binary pip is going > to work with (something I have trouble with thanks to Python 2 and 3 both > being installed and each with their own pip installation). I realize older > Python versions can't do this (I believe 2.6 and older can't for packages) > but at least in the situation we are discussing here of bundling pip it's > not an issue. > > > Also looking at what already ships with Python. > > idle, idle2, idle2.7 > smtpd.py, smtpd2.py, smptd2.7.py > pydoc, pydoc2, pydoc2.7 > 2to3, 2to3-2, 2to3-2.7 > > This is also the convention anywhere someone does versioned scripts in a > Python package in the ecosystem. PEP439 is there to streamline the process > so that python dependencies are much easier to install and there's a > smaller barrier to "entry" so that projects like Django can give simple > instructions for dependencies instead of needing to opt not to have > dependencies or have to give instructions on how to install the installer. > None of these commands work at the command line in a base Python install on Windows. They are all obscure enough that nobody cares (or they are GUI apps that have a start menu entry provided, i.e. Idle) - that won't be the case for pip. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 20:24:24 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 14:24:24 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On Jul 13, 2013, at 1:55 PM, Paul Moore wrote: > 1. It's not *actually* the case that the command is always "pip". Maybe it's "pip3" if your system makes the default Python be python 2, but you want to use python 3. Maybe you're creating a virtualenv and you haven't activated it yet. In that case a plain "pip" will quietly do the wrong thing (at the moment, I don't install pip in my system python precisely to avoid this issue). > 2. On Windows, ...\Python34\Scripts is not on PATH by default. Even if python is (as python.exe is in a different directory to the one distutils installs executables in). Again, you can change your own PATH. > 3. There's a lot of clutter. On Windows, you have 3 executables (pip.exe, pip3.exe and pip3.4.exe) and 3 scripts alongside them. For one command. > > Apart from the first of these, the issues are all Windows ones, and it's reasonable to say "well, fix the Windows setup, then, it's silly". I have some sympathy with that view, but backward compatibility and many, many years of history will make that extremely difficult. Also, some of it may simply not be fixable because people won't agree on the solution (that's been the case in the past). > 1. There's no good way to make it so you don't have to modify your command depending on what python you want to install into. In the case of both ``pip`` and ``python -m pip`` the changes a person would need to make is equivalent. They need to add a version number. The virtualenv case I don't see how that's relevant at all because if pip is preinstalled then pip will be available in both the virtualenv and the system environment. So both ``python -m pip`` and ``pip`` will be operating on the system python if you don't have it activated. 2. This sounds like something that needs fixed on Windows. Even if you say ``-m`` for pip then things are still broken by default for any other package on PyPI that installs a script. 
So this feels like something wrong with Python on windows not wrong with the script approach. 3. I don't really get the clutter argument. Does it *hurt* to have extra files there? I don't think i've ever looked at the directories on my $PATH and gone "wow I wish there was _less_ things in here". Is this an actual problem people have? Even if it was I think user experience trumps this case. I'm not sure what Brett is exactly advocating for. If he just wants to document it as ``python -m pip`` well whatever. I have absolutely zero faith that method of invocation will ever become popular. Every single piece of documentation that i've ever seen out there for installing things with pip tells people to use ``pip install``. Every developer that I've ever seen out there is using ``pip install``. An explicit command is shorter, easier to type, and already has basically all of the mindshare behind it. People gravitate towards what's easiest and in my opinion ``-m`` is easier only for the folks implementing this, not for the end users. Even if ``python -m pip`` is documented we still need to handle the CLI case, and I think that following the convention used by most other programs on *Nix and by windows itself of making the commands, "pip", "pipX", and "pipX.Y" makes the most sense. If Brett is advocating we _remove_ the command line options and expose only the ``python -m pip`` command that I am vehemently against that and in my opinion that makes for a far worst experience than users have now. The very first thing I would do, if it did happen this way, is create a package "pip-sanity" on PyPI that did nothing but restored the commands and then we end up with a similar situation we have now. That people need to run some bullshit before they can start using pip in the way they want to. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Sat Jul 13 20:30:40 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 11:30:40 -0700 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) Message-ID: > I think we need to flip the dependencies so that pip as the installer has > all the essential code for installation from PyPI and then setuptools and > distlib depend on that pip infrastructure. No need to add anything to the > standard library prematurely when we can add it to pip instead. > not sure about the flip, but let me break some things down a bit for those who don't know: what pip has internally already (i.e. literally in it's package namespace): - pypi crawling/downloading - wheel installing (does not require the pypi wheel project; only building wheels requires that) what pip has "bundled' already: - distlib (in 'pip.vendor'; currently only used for some --pre version logic) what pip still needs to be self-sufficient to do wheel installs: - something bundled or internal that does what pkg_resources does theoretical options: 1) bundle setuptools/pkg_resources 2) use the bundled distlib to replace our use of pkg_resources 3) internalize pkg_resources as pip.pkg_resources (i.e. fork off pkg_resources) Marcus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Sat Jul 13 21:14:48 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 15:14:48 -0400 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) In-Reply-To: References: Message-ID: On Jul 13, 2013, at 2:30 PM, Marcus Smith wrote: > > I think we need to flip the dependencies so that pip as the installer has all the essential code for installation from PyPI and then setuptools and distlib depend on that pip infrastructure. No need to add anything to the standard library prematurely when we can add it to pip instead. > > not sure about the flip, but let me break some things down a bit for those who don't know: > > what pip has internally already (i.e. literally in it's package namespace): > - pypi crawling/downloading > - wheel installing (does not require the pypi wheel project; only building wheels requires that) > > what pip has "bundled' already: > - distlib (in 'pip.vendor'; currently only used for some --pre version logic) > > what pip still needs to be self-sufficient to do wheel installs: > - something bundled or internal that does what pkg_resources does > > theoretical options: > 1) bundle setuptools/pkg_resources > 2) use the bundled distlib to replace our use of pkg_resources > 3) internalize pkg_resources as pip.pkg_resources (i.e. fork off pkg_resources) > > Marcus > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig As you're aware I think it makes the most sense to just bundle setuptools wholesale. This makes it impossible to "break" pip by something going wrong in setuptools causing it to be uninstalled and means that for users who are only doing installs, they don't need setuptools installed just pip. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Sat Jul 13 21:35:05 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 12:35:05 -0700 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) In-Reply-To: References: Message-ID: > As you're aware I think it makes the most sense to just bundle setuptools > wholesale. This makes it impossible to "break" pip by something going wrong > in setuptools causing it to be uninstalled and means that for users who are > only doing installs, they don't need setuptools installed just pip. > I'm a fan of bundling too (if it works), but the "dynamic install of setuptools" idea also offers what you mention, although admittedly with more fragility. If a user uninstalled setuptools, it would be installed again when needed, and users only need pip to get started, and don't have to think about the setuptools dependency themselves. The drawbacks of bundling setuptools: 1) maybe some weird bug/side-effect shows up after we do it (ok, maybe that's FUD) 2) users can't upgrade themselves (for use in pip) 3) more tedium in our release process. 4) feels odd to bundle it knowing we'd likely drop it later, if we do the MEBs thing. Marcus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Sat Jul 13 21:50:01 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 15:50:01 -0400 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) In-Reply-To: References: Message-ID: <24660B6A-78A8-41E8-AA30-8F60DF6E5C72@stufft.io> On Jul 13, 2013, at 3:35 PM, Marcus Smith wrote: > > As you're aware I think it makes the most sense to just bundle setuptools wholesale. This makes it impossible to "break" pip by something going wrong in setuptools causing it to be uninstalled and means that for users who are only doing installs, they don't need setuptools installed just pip. > > I'm a fan of bundling too (if it works), but the "dynamic install of setuptools" idea also offers what you mention, although admittedly with more fragility. If a user uninstalled setuptools, it would be installed again when needed, and users only need pip to get started, and don't have to think about the setuptools dependency themselves. > > The drawbacks of bundling setuptools: > 1) maybe some weird bug/side-effect shows up after we do it (ok, maybe that's FUD) > 2) users can't upgrade themselves (for use in pip) > 3) more tedium in our release process. > 4) feels odd to bundle it knowing we'd likely drop it later, if we do the MEBs thing. > > Marcus 1) That's kinda FUD-y yea ;) But I'd say it's equally as likely to have weird bugs/side effects due to people using different combinations of pip/setuptools with pip than we've tested. 2) This much is true, the question then becomes how important is that? If there's a major regression in setuptools that needs fixed I'd think we'd release an updated pip. If there's new functionality I would guess we'd need to expose that in pip anyways. 3) I think this isn't as big of a deal as it sounds. Especially given we can write tooling to make it simpler :) 4) Even if MEBs were here *right now* we'd still have nearly 150k source dists that required setuptools. So either in the MEB system we'd be grabbing setuptools *a lot* or we could just bundle it to provide a better UX for people using the large corpus of existing software. I think it will be a long time once the MEBs exist before they gain enough traction that even the bulk of installs are using that system. MEBs depend on sdist 2.0 which hasn't even been started yet ;) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sat Jul 13 21:59:44 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 20:59:44 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On 13 July 2013 19:24, Donald Stufft wrote: > 2. This sounds like something that needs fixed on Windows. Even if you say > ``-m`` for pip then things are still broken by default for any other > package on PyPI that installs a script. So this feels like something wrong > with Python on windows not wrong with the script approach. It is, and it should be fixed. But in many years, nobody has managed to come up with an acceptable solution. 
The debates seem to be largely around what happens if you install multiple versions of Python and then remove some of them, and how badly your system PATH gets messed up by this. I don't know how many people actually do things like that, but nevertheless it's never been sorted out. (Not all of the arguments are trivial, either, there are some genuinely difficult issues to resolve, IIRC). Ultimately, I guess there are a few options: * Accept that Windows is a problem in this regard, but don't worry about it - install executable wrappers/scripts and let the user deal with path issues. * Promote "python -m pip" as a least common denominator approach, and mildly irritate people who don't use Windows (they can still use the commands, but the docs look odd to them). * Only provide "python -m pip" and seriously annoy people who don't use Windows. * Document the difference, which implies either a certain level of repetitious "pip install (or py -m pip install on Windows)" type of thing, or a high level "For Windows, the pip command is not available directly, you should use ``python -m pip`` in its place (or wrap this in the shell if you prefer)" which people may miss. It would be nice to get feedback from "normal users" on this. I suspect that the scientific community would make a good cross-section (AIUI there's quite a lot of Windows use, and for many people in the community Python is very much a tool, rather than a way of life :-)). Does anyone have links into the scipy groups? I lurk on the IPython lists, so I could ask there, at a pinch... Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 13 22:08:13 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 21:08:13 +0100 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) In-Reply-To: References: Message-ID: On 13 July 2013 20:35, Marcus Smith wrote: > The drawbacks of bundling setuptools: > 1) maybe some weird bug/side-effect shows up after we do it (ok, maybe > that's FUD) > One possible issue is with the "install from sdist" code, which runs Python in a subprocess with the "import setuptools, etc etc" incantation to force always using setuptools. That may break (or at least need changing) for a bundled setuptools, which won't be visible from the top level by default. Worth checking, anyway. Of course, this code path becomes less important as we move towards installing from wheels, but it's going to be a while before that's the norm. Paul PS Apologies if I already said this. I'm losing track of what I've replied to and what I've just thought about on this thread :-( -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 22:14:22 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 16:14:22 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: <2BDA2CD1-CEC9-4EC0-A879-EE6A4D92D33C@stufft.io> On Jul 13, 2013, at 3:59 PM, Paul Moore wrote: > On 13 July 2013 19:24, Donald Stufft wrote: > 2. This sounds like something that needs fixed on Windows. Even if you say ``-m`` for pip then things are still broken by default for any other package on PyPI that installs a script. So this feels like something wrong with Python on windows not wrong with the script approach. > > It is, and it should be fixed. 
> But in many years, nobody has managed to come up with an acceptable solution. The debates seem to be largely around what happens if you install multiple versions of Python and then remove some of them, and how badly your system PATH gets messed up by this. I don't know how many people actually do things like that, but nevertheless it's never been sorted out. (Not all of the arguments are trivial, either, there are some genuinely difficult issues to resolve, IIRC).
>
> Ultimately, I guess there are a few options:
> * Accept that Windows is a problem in this regard, but don't worry about it - install executable wrappers/scripts and let the user deal with path issues.

Ultimately I think this is what the community is going to do regardless of what happens here, unless we remove the command line tools altogether.

> * Promote "python -m pip" as a least common denominator approach, and mildly irritate people who don't use Windows (they can still use the commands, but the docs look odd to them).

This also has the problem that the existing documentation (project READMEs, etc.) is pointing to ``pip``. So it fractures the documentation about what the command "should" be.

> * Only provide "python -m pip" and seriously annoy people who don't use Windows.

Also invalidate all the existing documentation :)

> * Document the difference, which implies either a certain level of repetitious "pip install (or py -m pip install on Windows)" type of thing, or a high level "For Windows, the pip command is not available directly, you should use ``python -m pip`` in its place (or wrap this in the shell if you prefer)" which people may miss.

This is probably the most realistic approach, at least in my eyes. If the Scripts directory isn't available on Windows, people are going to need to know to either add it or execute pip with python -m pip, and that's going to include documentation outside our control. So given that there's a lot of existing documentation around ``pip``, and people are likely to continue that practice, Windows users will need to know that when random projects say to do ``pip install foo`` they need to translate that to ``python -m pip install foo``.

> It would be nice to get feedback from "normal users" on this. I suspect that the scientific community would make a good cross-section (AIUI there's quite a lot of Windows use, and for many people in the community Python is very much a tool, rather than a way of life :-)). Does anyone have links into the scipy groups? I lurk on the IPython lists, so I could ask there, at a pinch?

I don't know any Windows users off hand except for you ;) (And you already said you use ``pip`` and not ``python -m pip``, which already works with pip :)

> Paul.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From qwcode at gmail.com  Sat Jul 13 22:15:33 2013
From: qwcode at gmail.com (Marcus Smith)
Date: Sat, 13 Jul 2013 13:15:33 -0700
Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439)
In-Reply-To: <24660B6A-78A8-41E8-AA30-8F60DF6E5C72@stufft.io>
References: <24660B6A-78A8-41E8-AA30-8F60DF6E5C72@stufft.io>
Message-ID: 

yea, all those comebacks make sense to me. we should try the bundle and see if it works.
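The "bundle setuptools inside pip" idea discussed above hinges on making a vendored copy importable by the ``setup.py`` subprocess. The fragment below is a rough, hypothetical sketch of that kind of override, not pip's actual implementation; the ``_vendor`` directory and the ``run_setup_py`` helper are invented for illustration.

# Hypothetical sketch: expose a vendored setuptools to a setup.py subprocess.
import os
import subprocess
import sys

# Assumed location of the bundled packages (invented for this example).
VENDOR_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "_vendor")

def run_setup_py(setup_py, *args):
    # Prepend the vendored packages to PYTHONPATH so the child process can
    # ``import setuptools`` even when no system copy is installed.
    env = dict(os.environ)
    env["PYTHONPATH"] = os.pathsep.join(
        filter(None, [VENDOR_DIR, env.get("PYTHONPATH")]))
    # Force plain distutils setup.py files through setuptools, mirroring the
    # "import setuptools ..." incantation mentioned above.
    command = (
        "import setuptools, sys; sys.argv[0] = {path!r}; "
        "exec(compile(open({path!r}).read(), {path!r}, 'exec'))"
    ).format(path=setup_py)
    return subprocess.check_call(
        [sys.executable, "-c", command] + list(args), env=env)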
we already do some fancy footwork when working with setup.py https://github.com/pypa/pip/blob/develop/pip/req.py#L602 https://github.com/pypa/pip/blob/develop/pip/req.py#L687 https://github.com/pypa/pip/blob/develop/pip/req.py#L269 https://github.com/pypa/pip/blob/develop/pip/wheel.py#L291 I guess we'd be doing some additional override work in sys.modules. Marcus On Sat, Jul 13, 2013 at 12:50 PM, Donald Stufft wrote: > > On Jul 13, 2013, at 3:35 PM, Marcus Smith wrote: > > > As you're aware I think it makes the most sense to just bundle setuptools >> wholesale. This makes it impossible to "break" pip by something going wrong >> in setuptools causing it to be uninstalled and means that for users who are >> only doing installs, they don't need setuptools installed just pip. >> > > I'm a fan of bundling too (if it works), but the "dynamic install of > setuptools" idea also offers what you mention, although admittedly with > more fragility. If a user uninstalled setuptools, it would be installed > again when needed, and users only need pip to get started, and don't have > to think about the setuptools dependency themselves. > > The drawbacks of bundling setuptools: > 1) maybe some weird bug/side-effect shows up after we do it (ok, maybe > that's FUD) > 2) users can't upgrade themselves (for use in pip) > 3) more tedium in our release process. > 4) feels odd to bundle it knowing we'd likely drop it later, if we do the > MEBs thing. > > Marcus > > > > 1) That's kinda FUD-y yea ;) But I'd say it's equally as likely to have > weird bugs/side effects due to people using different combinations of > pip/setuptools with pip than we've tested. > > 2) This much is true, the question then becomes how important is that? If > there's a major regression in setuptools that needs fixed I'd think we'd > release an updated pip. If there's new functionality I would guess we'd > need to expose that in pip anyways. > > 3) I think this isn't as big of a deal as it sounds. Especially given we > can write tooling to make it simpler :) > > 4) Even if MEBs were here *right now* we'd still have nearly 150k source > dists that required setuptools. So either in the MEB system we'd be > grabbing setuptools *a lot* or we could just bundle it to provide a better > UX for people using the large corpus of existing software. I think it will > be a long time once the MEBs exist before they gain enough traction that > even the bulk of installs are using that system. MEBs depend on sdist 2.0 > which hasn't even been started yet ;) > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sat Jul 13 22:16:40 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 16:16:40 -0400 Subject: [Distutils] flip the pip dependencies (was Current status of PEP 439) In-Reply-To: References: Message-ID: <1FEBEA1D-E166-4F30-8FA6-B9913BA304A4@stufft.io> On Jul 13, 2013, at 4:08 PM, Paul Moore wrote: > On 13 July 2013 20:35, Marcus Smith wrote: > The drawbacks of bundling setuptools: > 1) maybe some weird bug/side-effect shows up after we do it (ok, maybe that's FUD) > > One possible issue is with the "install from sdist" code, which runs Python in a subprocess with the "import setuptools, etc etc" incantation to force always using setuptools. That may break (or at least need changing) for a bundled setuptools, which won't be visible from the top level by default. 
Worth checking, anyway. > > Of course, this code path becomes less important as we move towards installing from wheels, but it's going to be a while before that's the norm. > > Paul > > PS Apologies if I already said this. I'm losing track of what I've replied to and what I've just thought about on this thread :-( Pip already wraps that code to force things to use setuptools even if they use distutils. So in that case pip would just need to modify it's own code to add setuptools to sys.modules (or extend sys.path in that sub process). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sat Jul 13 23:35:55 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 13 Jul 2013 22:35:55 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <2BDA2CD1-CEC9-4EC0-A879-EE6A4D92D33C@stufft.io> References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <2BDA2CD1-CEC9-4EC0-A879-EE6A4D92D33C@stufft.io> Message-ID: On 13 July 2013 21:14, Donald Stufft wrote: > I don't know any windows users off hand except for you ;) (And you already > said you use ``pip`` and not ``python -m pip`` which already works with pip > :) You caught me :-) My problem is that I'm pretty sure I'm seriously atypical in never installing anything into my system Python. (And so I only use the pip command in activated virtualenvs, which *do* add the scripts directory to PATH). Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 00:46:10 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 18:46:10 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <2BDA2CD1-CEC9-4EC0-A879-EE6A4D92D33C@stufft.io>, Message-ID: <3AFF5308-D205-4887-8CD3-35EFBA8C3C50@stufft.io> On Jul 13, 2013, at 6:44 PM, Steve Dower wrote: > Because of the issues around compilation on Windows, we believe that most users avoid pip in favor of precompiled installers. The model of "download an executable that matches my Python version and run it" is more familiar than a command line tool, and unlikely to go away anytime soon. Luckily for them the upcoming pip 1.4 includes support for compiled packages called Wheels ;) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Steve.Dower at microsoft.com Sun Jul 14 00:44:50 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 13 Jul 2013 22:44:50 +0000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <2BDA2CD1-CEC9-4EC0-A879-EE6A4D92D33C@stufft.io>, Message-ID: Because of the issues around compilation on Windows, we believe that most users avoid pip in favor of precompiled installers. 
The model of "download an executable that matches my Python version and run it" is more familiar than a command line tool, and unlikely to go away anytime soon. Those who use pip are going to be quite capable of managing their PATH variable to ensure the correct one is used (or they'll do what I do and use a full path). There are also GUI apps, but I have no idea how widely used they are. Sent from my Windows Phone ________________________________ From: Paul Moore Sent: ?7/?13/?2013 14:36 To: Donald Stufft Cc: Distutils Subject: Re: [Distutils] Expectations on how pip needs to change for Python 3.4 On 13 July 2013 21:14, Donald Stufft > wrote: I don't know any windows users off hand except for you ;) (And you already said you use ``pip`` and not ``python -m pip`` which already works with pip :) You caught me :-) My problem is that I'm pretty sure I'm seriously atypical in never installing anything into my system Python. (And so I only use the pip command in activated virtualenvs, which *do* add the scripts directory to PATH). Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Sun Jul 14 01:45:06 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 13 Jul 2013 23:45:06 +0000 (UTC) Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 References: Message-ID: Paul Moore gmail.com> writes: > 4. Ensure that pip upgrading itself in-place is sufficiently robust and > reliable that users don't get "stuck" on the Python-supplied version. Perhaps one could add to your list, the ability to downgrade to the previous version should there be a problem with a newly-upgraded version. Regards, Vinay Sajip From qwcode at gmail.com Sun Jul 14 02:06:41 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 17:06:41 -0700 Subject: [Distutils] pip and virtualenv release candidates In-Reply-To: References: Message-ID: pip-1.4rc4 and virtualenv-1.10rc6 are now available the changes from the previous RCs: - virtualenv now contains setuptools v0.9 (which enables indexes to use md5, sha1, or one of the sha2 variants in their urls) - the new "pip install --pre" option now applies to all packages installed in the command, not just top-level requirements - pip support for building and installing pybundles is now noted as deprecated (our plan is to remove it in v1.5) here's the RC install instructions again: $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc6.tar.gz $ tar zxf 1.10rc6.tar.gz $ python virtualenv-1.10rc6/virtualenv.py myVE $ myVE/bin/pip --version pip 1.4rc4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 04:20:31 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 12:20:31 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: On 14 July 2013 00:54, Paul Moore wrote: > This issue has been skirted round for some time now, and I think it needs > explicit discussion, as I am not at all sure everyone has the same > expectations. > > We're talking about Python 3.4 installations having pip as the default > package manager - whether by bundling, having a bootstrap process or > whatever. Regardless of the means, pip will be *the* installer for Python > 3.4+. 
> And yet, I don't think pip 1.4 currently does what people want "the Python 3.4 pip" to do in some ways - and we need to make sure that any work on the pip side is understood, agreed to, and planned to match the Python 3.4 timescales.

Good point. We also need to start to articulate the relevant questions for the core development side of the fence - Richard, it would be good if PEP 439 could be the vehicle for this, even though it does mean I'm shifting the goal posts on you fairly substantially.

Then we can turn it into a set of tracker issues for pip and CPython. As a reminder, here are the current deadlines as per PEP 429 (the 3.4 release schedule):

Changes to CPython: November 23, 2013 (3.4 beta 1)
Changes to pip: January 18, 2014 (3.4 rc 1)

(allowing changes to bundled applications up until the first RC is what I think we *should* do, but that will require agreement from Larry Hastings as release manager)

> So, here's my initial list of things that I think people might be expecting to happen. This is just my impressions, and I don't necessarily have a view on the individual items. And if anyone else can think of other things to add to the list, please do so!
>
> 1. Install to user-packages by default.

I made this suggestion at one point, but Marcus and others convinced me it was a bad idea. Issue for better error message filed as https://github.com/pypa/pip/issues/1048 (I don't have the power to set milestones)

> 2. Not depend on setuptools (??? - Nick's "inversion" idea)

I think Donald is right that bundling a vendor'ed copy of setuptools is the most sensible near-term option - there's too much risk of upgrade/downgrade issues when allowing arbitrary combinations of pip with setuptools on target systems for source based installs.

> 3. Possibly change the wrapper command name from pip to pip3 on Unix.

I think there's a bit more to it than that. Really, what we want to try to ensure is that the following commands are available across Windows, Mac OS X and *nix (ignoring, for the moment, the behaviour of vendor provided installations for Mac OS X and Linux):

* python, python3, python3.4
* pip, pip3, pip3.4

That is, the version qualifier on the pip executable would relate to the *default Python version* associated with that executable/script, rather than the version of pip. We would take whatever steps were needed in our Windows and Mac OS X installers to ensure all these wrappers were provided.

The reason why I think we still want to offer "python -m getpip" is because I think that sends a clearer message to repackagers that we *do* consider pip a part of Python now, but we keep the source control and issue management for the two projects separate for pragmatic reasons (notably, the different update lifecycles). It's still going to map to two separate source tarballs (and hence SRPMs) for

While I acknowledge you *can* invoke the Python launcher directly on Windows, I think it is better to leave that in the background as a tool for advanced users, as well as the engine that lets us base shebang line processing on Windows file associations. If/when we start offering a "py" style launcher on POSIX systems as well, then we can revisit that question.

> 4. Ensure that pip upgrading itself in-place is sufficiently robust and reliable that users don't get "stuck" on the Python-supplied version.

As Vinay noted, we also need to ensure downgrades work. However, most of these have been related to depending on an external setuptools, so eliminating that should help a lot.
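Point 4 (upgrades and downgrades of pip must not strand users on the bundled version) lends itself to a simple smoke test. The script below is only a sketch of that idea, not an official test from pip's suite, and the pinned version number is a placeholder.

# Sketch: exercise pip upgrading and downgrading itself via ``-m``.
import subprocess
import sys

def pip(*args):
    # Run pip through the current interpreter so there is no ambiguity about
    # which installation is being exercised.
    return subprocess.check_call([sys.executable, "-m", "pip"] + list(args))

if __name__ == "__main__":
    pip("--version")
    pip("install", "--upgrade", "pip")   # upgrade to the newest release
    pip("--version")
    pip("install", "pip==1.4")           # downgrade to a pinned older release (placeholder)
    pip("--version")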
> I'm sure I've seen people say other things that have made me think "are > you expecting the pip maintainers to make that change?" in the various > threads, so I doubt this list is definitive. > The other big one is the one you noted about pip *not* offering a stable API, *but* exposing an apparently stable API to introspection. Introspection currently tells me that pip exports *at least* 32 public names (and this is without checking for public submodules that aren't implicitly imported by pip/__init__.py): >>> import pip; public = set(k for k, v in pip.__dict__.items() if not k.startswith('_') and (not hasattr(v, "__name__") or hasattr(v, "__module__") or v.__name__.startswith("pip."))); print(len(public)) 32 If pip really has no stable public API, then it should properly indicate this under introspection (if it already uses relative imports correctly, then the easiest ways to achieve that are to just shove everything under a "pip._impl" subpackage or shuffle it sideways into a "_pip" package). > Comments anyone? Is this discussion premature? The pip maintainers team is > not huge, so we'll need time (or assistance!) to plan in and make changes > like this, if they are needed... > Agreed, I think refocusing the discussion on "What do we need to do in pip?" and "What do we need to do in CPython?" is a very necessary step at this point. > At a minimum, can we get the key items logged on the pip issue tracker > with a milestone of Python 3.4? > The existing 1.5 milestone is probably usable - I expect 1.5 is the version that would be bundled with 3.4. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 04:46:22 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 22:46:22 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> On Jul 13, 2013, at 10:20 PM, Nick Coghlan wrote: > On 14 July 2013 00:54, Paul Moore wrote: > This issue has been skirted round for some time now, and I think it needs explicit discussion, as I am not at all sure everyone has the same expectations. > > We're talking about Python 3.4 installations having pip as the default package manager - whether by bundling, having a bootstrap process or whatever. Regardless of the means, pip will be *the* installer for Python 3.4+. And yet, I don't think pip 1.4 currently does what people want "the Python 3.4 pip" to do in some ways - and we need to make sure that any work on the pip side is understood, agreed to, and planned to match the Python 3.4 timescales. > > Good point. We also need to start to articulate the relevant questions for the core development side of the fence - Richard, it would be good if PEP 439 could be the vehicle for this, even though it does mean I'm shifting the goal posts on you fairly substantially. > > Then we can turn it into a set of tracker issues for pip and CPython. As a reminder, here's the current deadlines as per PEP 429 (the 3.4 release schedule): > > Changes to CPython: November 23, 2013 (3.4 beta 1) > Changes to pip: January 18, 2014 (3.4 rc 1) Good dates to have! > > (allowing changes to bundled applications up until the first RC is what I think we *should* do, but that will require agreement from Larry Hastings as release manager) > > So, here's my initial list of things that I think people might be expecting to happen. 
This is just my impressions, and I don't necessarily have a view on the individual items. And if anyone else can think of other things to add to the list, please do so! > > 1. Install to user-packages by default. > > I made this suggestion at one point, but Marcus and others convinced me it was a bad idea. Issue for better error message filed as https://github.com/pypa/pip/issues/1048 (I don't have the power to set milestones) Added this to the 1.5 milestone and mentioned my agreement to the implementation on the ticket. > > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > > I think Donald is right that bundling a vendor'ed copy of setuptools is the most sensible near-term option - there's too much risk of upgrade/downgrade issues when allowing arbitrary combinations of pip with setuptools on target systems for source based installs. https://github.com/pypa/pip/issues/1049 > > 3. Possibly change the wrapper command name from pip to pip3 on Unix. > > I think there's a bit more to it than that. Really, what we want to try to ensure is that the following commands are available across Windows, Mac OS X and *nix (ignoring, for the moment, the behaviour of vendor provided installations for Mac OS X and Linux): > > * python, python3, python3.4 > * pip, pip3, pip3.4 > > That is, the version qualifier on the pip executable would relate to the *default Python version* associated with that executable/script, rather than the version of pip. We would take whatever steps were needed in our Windows and Mac OS X installers to ensure all these wrappers were provided. https://github.com/pypa/pip/issues/1050 > > The reason why I think we still want to offer "python -m getpip" is because I think that sends a clearer message to repackagers that we *do* consider pip a part of Python now, but we keep the source control and issue management for the two projects separate for pragmatic reasons (notably, the different update lifecycles). It's still going to map to two separate source tarballs (and hence SRPMs) for I don't care if getpip is available especially if that's the command that is actually executed to pre-install pip for the CPython releases. (To be clear, I agree with Noah that pip should be pre-installed for every type of official release Python makes. However It does not need to be there from a hg.python.org checkout). > > While I acknowledge you *can* invoke the Python launcher directly on Windows, I think it is better to leave that in the background as a tool for advanced users, as well the engine that lets us base shebang line processing on Windows file associations. If/when we start offering a "py" style launcher on POSIX systems as well, then we can revisit that question. > > 4. Ensure that pip upgrading itself in-place is sufficiently robust and reliable that users don't get "stuck" on the Python-supplied version. > > As Vinay noted, we also need to ensure downgrades work. However, must of these have been related to depending on an external setuptools, so eliminating that should help a lot. In the years of using pip the only time i've ever had any issue upgrading or downgrading pip was related to setuptools screw ups. But possibly we do want to have some explicit testing around this? > > I'm sure I've seen people say other things that have made me think "are you expecting the pip maintainers to make that change?" in the various threads, so I doubt this list is definitive. 
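On the explicit upgrade/downgrade testing mentioned just above, a rough sketch (not actual pip test code) could drive everything through "python -m pip" inside a throwaway environment; this assumes the Python 3.4 venv module can seed pip as proposed, and the pinned version used for the downgrade step is arbitrary:

    # Rough sketch of an explicit up/downgrade check -- not a real pip test.
    import os
    import subprocess
    import tempfile
    import venv


    def check_pip_up_and_downgrade(old_version="1.3.1"):
        with tempfile.TemporaryDirectory() as env_dir:
            # Build a scratch environment that already contains pip.
            venv.EnvBuilder(with_pip=True).create(env_dir)
            bindir = "Scripts" if os.name == "nt" else "bin"
            python = os.path.join(env_dir, bindir, "python")
            # Downgrade, then upgrade back; check_call raises if either fails.
            subprocess.check_call([python, "-m", "pip", "install", "pip==" + old_version])
            subprocess.check_call([python, "-m", "pip", "install", "--upgrade", "pip"])


    if __name__ == "__main__":
        check_pip_up_and_downgrade()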
> > The other big one is the one you noted about pip *not* offering a stable API, *but* exposing an apparently stable API to introspection. Introspection currently tells me that pip exports *at least* 32 public names (and this is without checking for public submodules that aren't implicitly imported by pip/__init__.py): > > >>> import pip; public = set(k for k, v in pip.__dict__.items() if not k.startswith('_') and (not hasattr(v, "__name__") or hasattr(v, "__module__") or v.__name__.startswith("pip."))); print(len(public)) > 32 > > If pip really has no stable public API, then it should properly indicate this under introspection (if it already uses relative imports correctly, then the easiest ways to achieve that are to just shove everything under a "pip._impl" subpackage or shuffle it sideways into a "_pip" package). Pip does not use relative imports. Is simply documenting the fact there is no public API enough? Pushing everything into a _impl or _pip directory makes me nervous because that's a lot of code churn (and I know there are people using those APIs, and while they aren't technically stable it feels like moving things around just for the sake of an _ in the name is unfriendly to those people. > > Comments anyone? Is this discussion premature? The pip maintainers team is not huge, so we'll need time (or assistance!) to plan in and make changes like this, if they are needed... > > Agreed, I think refocusing the discussion on "What do we need to do in pip?" and "What do we need to do in CPython?" is a very necessary step at this point. Agreed. > > At a minimum, can we get the key items logged on the pip issue tracker with a milestone of Python 3.4? > > The existing 1.5 milestone is probably usable - I expect 1.5 is the version that would be bundled with 3.4. Instead of a milestone I added a PEP439 tag so that we can differentiate between 1.5 milestone items for PEP439 and not. Ideally we don't need to drop anything from 1.5 but just in case we do. I think we should probably target pip 1.5 to release in the beginning of December? Would need to see what the other team members think, that's a shorter release cycle then we normally have but I think it'd be good to have 1.5 out for a month or so to get real world use to make sure it doesn't need a patch release before inclusion in CPython (assuming the dates you mentioned are correct). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Sun Jul 14 04:59:02 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 13 Jul 2013 21:59:02 -0500 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: It is easy to forget that pip only needs the "package database" part of setuptools (pkg_resources.py) to install things. 
With the small catch that the rest of setuptools is required to install anything besides wheels. MEBS is just about implementing build requirements properly and giving pip a consistent interface to build traditional sdists *or* any new (sdist 2.0, distil, bento) kinds of packages that may come along. From donald at stufft.io Sun Jul 14 05:00:21 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 23:00:21 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: <42ECB620-5A0F-4C77-8451-159E4D88504C@stufft.io> On Jul 13, 2013, at 10:59 PM, Daniel Holth wrote: > It is easy to forget that pip only needs the "package database" part > of setuptools (pkg_resources.py) to install things. With the small > catch that the rest of setuptools is required to install anything > besides wheels. > > MEBS is just about implementing build requirements properly and giving > pip a consistent interface to build traditional sdists *or* any new > (sdist 2.0, distil, bento) kinds of packages that may come along. Where "besides wheels" represents almost every single pip installable package on PyPI. Wheels are great, but reality is we need something sane for sdists where setuptools is expected to be there. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sun Jul 14 05:08:01 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 13 Jul 2013 23:08:01 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: I've gone through the pip issue tracker and attempted to identify issues related to the things we want to get in for PEP439. Some of them are duplicates but I left them opened. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Sun Jul 14 07:00:50 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 13 Jul 2013 22:00:50 -0700 Subject: [Distutils] pip and virtualenv release candidates In-Reply-To: References: Message-ID: pip-1.4rc4 and virtualenv-1.10rc7 are now available the changes from the previous RCs: - virtualenv now contains setuptools v0.9.1 here's the RC install instructions again: $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc7.tar.gz $ tar zxf 1.10rc7.tar.gz $ python virtualenv-1.10rc7/virtualenv.py myVE $ myVE/bin/pip --version pip 1.4rc4 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 07:58:02 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 15:58:02 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: On 14 July 2013 12:46, Donald Stufft wrote: > I'm sure I've seen people say other things that have made me think "are >> you expecting the pip maintainers to make that change?" 
in the various >> threads, so I doubt this list is definitive. >> > > The other big one is the one you noted about pip *not* offering a stable > API, *but* exposing an apparently stable API to introspection. > Introspection currently tells me that pip exports *at least* 32 public > names (and this is without checking for public submodules that aren't > implicitly imported by pip/__init__.py): > > >>> import pip; public = set(k for k, v in pip.__dict__.items() if not > k.startswith('_') and (not hasattr(v, "__name__") or hasattr(v, > "__module__") or v.__name__.startswith("pip."))); print(len(public)) > 32 > > If pip really has no stable public API, then it should properly indicate > this under introspection (if it already uses relative imports correctly, > then the easiest ways to achieve that are to just shove everything under a > "pip._impl" subpackage or shuffle it sideways into a "_pip" package). > > > Pip does not use relative imports. Is simply documenting the fact there is > no public API enough? Pushing everything into a _impl or _pip directory > makes me nervous because that's a lot of code churn (and I know there are > people using those APIs, and while they aren't technically stable it feels > like moving things around just for the sake of an _ in the name is > unfriendly to those people. > Either the existing APIs are moved to a different name, or they get declared stable and pip switches to "internally forked" APIs any time a backwards incompatible change is needed for refactoring purposes (see runpy._run_module_as_main for an example of needing to do this in the standard library). I've had to directly deal with too many issues arising from getting this wrong in the past for me to endorse bundling of a module that doesn't follow this practice with CPython - if introspection indicates an API is public, then it's public and subject to all standard library backwards compatibility guarantees, or else we take the pain *once* and explicitly mark it private by adding a leading underscore rather than leaving it in limbo (contextlib._GeneratorContextManager is a standard library example of the latter approach - it used to lack the leading underscore, suggesting it was a public API when it's really just an implementation detail of contextlib.contextmanager). Mere documentation of public vs private generally doesn't cut it, as too many people use dir(), help() and inspect() rather than the published docs to explore APIs. The only general exception I'm aware of is "test" packages, including the standard library's test package, and for those you can make the case that having "test" or "tests" as a name segment is just as clear an indicator of something being private as at least one name segment starting with a leading underscore. I realise this is a fairly big ask for the pip maintainers, but I *don't* consider "Oh, don't use our module API, it isn't stable" to be an adequate answer for something that is bundled with the standard installers. Beyond that, I don't mind if the answer is to declare the 1.5 API stable or to sprinkle underscores where appropriate or to move everything to a private package - the documentation and the naming conventions just need to be consistent in their private vs public distinctions (although your points do suggest heavily that the right answer is to accept the burden of backwards compatibility for all APIs currently marked public, and move towards the introduction of appropriate private APIs over time through refactoring).
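For reference, the quoted one-liner expands into something like the following (the same idea, though slightly stricter about where each object was defined, so the count it reports may differ):

    # Readable expansion of the introspection check discussed above.
    import pip


    def looks_public(name, value):
        if name.startswith("_"):
            return False
        origin = getattr(value, "__module__", None) or getattr(value, "__name__", "")
        return origin == "pip" or origin.startswith("pip.")


    apparently_public = sorted(
        name for name, value in vars(pip).items() if looks_public(name, value)
    )
    print(len(apparently_public))
    print(apparently_public)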
Instead of a milestone I added a PEP439 tag so that we can differentiate between 1.5 milestone items for PEP439 and not. Ideally we don't need to drop anything from 1.5 but just in case we do. > I think we should probably target pip 1.5 to release in the beginning of > December? Would need to see what the other team members think, that's a > shorter release cycle then we normally have but I think it'd be good to > have 1.5 out for a month or so to get real world use to make sure it > doesn't need a patch release before inclusion in CPython (assuming the > dates you mentioned are correct). > Just to confuse matters a little bit, Richard has suggested explicitly creating a bundling PEP as a *competitor* to PEP 439, thus making it easier to be explicit about our reasons for rejecting bootstrapping in favour of bundling. I think that's a good way to move this forward, but I won't actually reject 439 until the competing bundling PEP has been posted (otherwise people might get the wrong impression that we're moving away from the idea of making "pip install X" work out of the box, when we're really just changing our tactics for achieving that goal). I also realised what is probably a better idea than "python -m getpip" for dealing with the "How do I get pip after doing a source build?": add a "get-pip.py" utility to Tools/scripts in the cpython repo, rather than adding anything to the standard library. This also puts us on a more solid footing for getting pip bundled with 2.7.x at some point: we're not touching the standard library, just the installers and the utility scripts. The bundling PEP should also suggest to Linux packagers that pip be considered an essential part of a fully functional Python installation. Exactly how that is handled will be up to the distro packagers, but could include noting pip as a recommended dependency for Python (Debian), or rearranging the packaging to make "cpython" a package in its own right, with "python" requiring both "cpython" and "python-pip" (while the latter would just require cpython). I'll post an explicit call for a PEP champion in a separate thread. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 08:13:00 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 16:13:00 +1000 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers Message-ID: Based on the recent discussions, I now plan to reject the pip bootstrapping-on-first-invocation approach described in PEP 439 in favour of a new PEP that proposes: * bundling the latest version of pip with the CPython binary installers for Mac OS X and Windows for all future CPython releases (including maintenance releases) * aligns the proposal with the Python 3.4 release schedule by noting that CPython changes must be completed by the first 3.4 beta, while pip changes must be completed by the first 3.4 release candidate. 
* ensuring that, for Python 3.4, "python3" and "python3.4" are available for command line invocation of Python, even on Windows * ensuring that the bundled pip, for Python 3.4, ensures "pip", "pip3" and "pip3.4" are available for command line invocation of Python, even on Windows * ensuring that the bundled pip is able to upgrade/downgrade itself independent of the CPython release cycle * ensuring that pip is automatically available in virtual environments created with pyvenv * adding a "get-pip.py" script to Tools/scripts in the CPython repo for bootstrapping the latest pip release from PyPI into custom CPython source builds Note that there are still open questions to be resolved, which is why an author/champion is needed: * what guidance, if any, should we provide to Linux distro packagers? * how should maintenance updates handle the presence of an existing pip installation? Automatically upgrade older versions to the bundled version, while leaving newer versions alone? Force installation of the bundled version? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Sun Jul 14 08:19:58 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sat, 13 Jul 2013 23:19:58 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: <28DB7C9B-5978-44FD-B713-C4A17FEC7B16@coderanger.net> On Jul 13, 2013, at 10:58 PM, Nick Coghlan wrote: > On 14 July 2013 12:46, Donald Stufft wrote: >> I'm sure I've seen people say other things that have made me think "are you expecting the pip maintainers to make that change?" in the various threads, so I doubt this list is definitive. >> >> The other big one is the one you noted about pip *not* offering a stable API, *but* exposing an apparently stable API to introspection. Introspection currently tells me that pip exports *at least* 32 public names (and this is without checking for public submodules that aren't implicitly imported by pip/__init__.py): >> >> >>> import pip; public = set(k for k, v in pip.__dict__.items() if not k.startswith('_') and (not hasattr(v, "__name__") or hasattr(v, "__module__") or v.__name__.startswith("pip."))); print(len(public)) >> 32 >> >> If pip really has no stable public API, then it should properly indicate this under introspection (if it already uses relative imports correctly, then the easiest ways to achieve that are to just shove everything under a "pip._impl" subpackage or shuffle it sideways into a "_pip" package). > > Pip does not use relative imports. Is simply documenting the fact there is no public API enough? Pushing everything into a _impl or _pip directory makes me nervous because that's a lot of code churn (and I know there are people using those APIs, and while they aren't technically stable it feels like moving things around just for the sake of an _ in the name is unfriendly to those people. > > Either the existing APIs are moved to a different name, or they get declared stable and pip switches to "internally forked" APIs any time a backwards incompatible change is needed for refactoring purposes (see runpy._run_module_as_main for an example of needing to do this in the standard library). 
I've had to directly deal with too many issues arising from getting this wrong in the past for me to endorse bundling of a module that doesn't follow this practice with CPython - if introspection indicates an API is public, then it's public and subject to all standard library backwards compatibility guarantees, or else we take the pain *once* and explicitly mark it private by adding a leading underscore rather than leaving it in limbo (contextlib._GeneratorContextManager is a standard library example of the latter approach - it used to lack the leading underscore, suggesting it was a public API when it's really just an implementation detail of contextlib.contextmanager). > Respectfully, I disagree. Pip is not going in to the stdlib, and as such should not be subject to the same API stability policies as the stdlib. If the PyPA team wants to break the API every release, that is their call as the subject matter experts. Pip is not being included as a library at all. What should be subject to compat is the defined command line interface, because pip is a CLI tool. Independently of this discussion I've already been talking to the PyPA team about what they want to consider a stable API, but that is a discussion to be had over in pip-land, not here and not now. This new category of "bundled for your convenience but still external" applications will need new standards, and we should be clear about them for sure, but I think this is going too far and puts undue burden on the PyPA team. Remember the end goal is simply to get an installer in the hands of users easier. --Noah From ncoghlan at gmail.com Sun Jul 14 08:34:57 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 16:34:57 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <28DB7C9B-5978-44FD-B713-C4A17FEC7B16@coderanger.net> References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <28DB7C9B-5978-44FD-B713-C4A17FEC7B16@coderanger.net> Message-ID: On 14 July 2013 16:19, Noah Kantrowitz wrote: > On Jul 13, 2013, at 10:58 PM, Nick Coghlan wrote: > > On 14 July 2013 12:46, Donald Stufft wrote: > > Either the existing APIs are moved to a different name, or they get > declared stable and pip switches to "internally forked" APIs any time a > backwards incompatible change is needed for refactoring purposes (see > runpy._run_module_as_main for an example of needing to do this in the > standard library). I've had to directly deal with too many issues arising > from getting this wrong in the past for me to endorse bundling of a module > that doesn't follow this practice with CPython - if introspection indicates > an API is public, then it's public and subject to all standard library > backwards compatibility guarantees, or else we take the pain *once* and > explicitly mark it private by adding a leading underscore rather than > leaving it in limbo (contextlib._GeneratorContextManager is a standard > library example of the latter approach - it used to lack the leading > underscore, suggesting it was a public API when it's really just an > implementation detail of contextlib.contextmanager). > > > > Respectfully, I disagree. Pip is not going in to the stdlib, and as such > should not be subject to the same API stability policies as the stdlib. If > the PyPA team wants to break the API every release, that is their call as > the subject matter experts. Pip is not being included as a library at all. 
> What should be subject to compat is the defined command line interface, > because pip is a CLI tool. Independently of this discussion I've already > been talking to the PyPA team about what they want to consider a stable > API, but that is a discussion to be had over in pip-land, not here and not > now. This new category of "bundled for your convenience but still external" > applications will need new standards, and we should be clear about them for > sure, but I think this is going too far and puts undue burden on the PyPA > team. Remember the end goal is simply to get an installer in the hands of > users easier. > I would also be fine with a solution where "import pip" issues a warning about API instability if sys.argv[0] indicates the main executable is something other than the pip CLI. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 08:35:41 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 16:35:41 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <28DB7C9B-5978-44FD-B713-C4A17FEC7B16@coderanger.net> Message-ID: On 14 July 2013 16:34, Nick Coghlan wrote: > On 14 July 2013 16:19, Noah Kantrowitz wrote: > >> On Jul 13, 2013, at 10:58 PM, Nick Coghlan wrote: >> > On 14 July 2013 12:46, Donald Stufft wrote: >> > Either the existing APIs are moved to a different name, or they get >> declared stable and pip switches to "internally forked" APIs any time a >> backwards incompatible change is needed for refactoring purposes (see >> runpy._run_module_as_main for an example of needing to do this in the >> standard library). I've had to directly deal with too many issues arising >> from getting this wrong in the past for me to endorse bundling of a module >> that doesn't follow this practice with CPython - if introspection indicates >> an API is public, then it's public and subject to all standard library >> backwards compatibility guarantees, or else we take the pain *once* and >> explicitly mark it private by adding a leading underscore rather than >> leaving it in limbo (contextlib._GeneratorContextManager is a standard >> library example of the latter approach - it used to lack the leading >> underscore, suggesting it was a public API when it's really just an >> implementation detail of contextlib.contextmanager). >> > >> >> Respectfully, I disagree. Pip is not going in to the stdlib, and as such >> should not be subject to the same API stability policies as the stdlib. If >> the PyPA team wants to break the API every release, that is their call as >> the subject matter experts. Pip is not being included as a library at all. >> What should be subject to compat is the defined command line interface, >> because pip is a CLI tool. Independently of this discussion I've already >> been talking to the PyPA team about what they want to consider a stable >> API, but that is a discussion to be had over in pip-land, not here and not >> now. This new category of "bundled for your convenience but still external" >> applications will need new standards, and we should be clear about them for >> sure, but I think this is going too far and puts undue burden on the PyPA >> team. Remember the end goal is simply to get an installer in the hands of >> users easier. 
>> > > I would also be fine with a solution where "import pip" issues a warning > about API instability if sys.argv[0] indicates the main executable is > something other than the pip CLI. > Oops, meant to add a point to https://github.com/pypa/pip/issues/1052 for that one. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 08:43:35 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 02:43:35 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: On Jul 14, 2013, at 1:58 AM, Nick Coghlan wrote: > Either the existing APIs are moved to a different name, or they get declared stable and pip switches to "internally forked" APIs any time a backwards incompatible change is needed for refactoring purposes (see runpy._run_module_as_main for an example of needing to do this in the standard library). I've had to directly deal with too many issues arising from getting this wrong in the past for me to endorse bundling of a module that doesn't follow this practice with CPython - if introspection indicates an API is public, then it's public and subject to all standard library backwards compatibility guarantees, or else we take the pain *once* and explicitly mark it private by adding a leading underscore rather than leaving it in limbo (contextlib._GeneratorContextManager is a standard library example of the latter approach - it used to lack the leading underscore, suggesting it was a public API when it's really just an implementation detail of contextlib.contextmanager). > > Mere documentation of public vs private generally doesn't cut it, as too many people use dir(), help() and inspect() rather than the published docs to explore APIs. The only general exception I'm aware of is "test" packages, including the standard library's test package, and for those you can make the case that having "test" or "tests" as a name segment is just as clear an indicator of something being private as at least one name segment starting with a leading underscore. > > I really this is a fairly big ask for the pip maintainers, but I *don't* consider "Oh, don't use our module API, it isn't stable" to be an adequate answer for something that is bundled with the standard installers. Beyond that, I don't mind if the answer is to declare the 1.5 API stable or to sprinkle underscore where appropriate or moving everything to a private package - the documentation and the naming conventions just need to be consistent in their private vs public distinctions (although your points do suggest heavily that the right answer is to accept the burden of backwards compatibility for all APIs currently marked public, and move towards the introduction of appropriate private APIs over time through refactoring). I agree with Noah here. In my eyes pip is either an external project with it's own polices on backwards compatibility and governance or it's part of the standard library and under the domain of Python core. I'm completely against moving it into the standard library for all the reasons I've given in the past. Maybe I'm reading too much into this but one of my primary fears here is that including pip with the Python distribution is going to lead to Python core dictating to pip what pip must do with itself. 
Now I don't mean to say that Python core should have no sway over pip either as they are officially blessing it and including it as part of the official releases. If I am reading too much into it then I apologize. I just want to make sure that the boundaries between the governance of Python and pip are clearly defined and the expectations on both sides are laid out and agreed upon before it happens. And I think this raises a good point about how the two projects are going to interact. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ben at bendarnell.com Sat Jul 13 15:14:59 2013 From: ben at bendarnell.com (Ben Darnell) Date: Sat, 13 Jul 2013 09:14:59 -0400 Subject: [Distutils] Best practices for optional C extensions Message-ID: I'd like to add a C extension to speed up a small bit of code in a package (Tornado), but make it optional both for compatibility with non-cpython implementations and for ease of installation on systems without a C compiler available. Ideally any user who runs "pip install tornado" on a system capable of compiling extensions would get the extensions; if this capability cannot be detected automatically I'd prefer the opt-out case to be the one that requires non-default arguments. Are there any packages that provide a good example to follow for this? PEP 426 uses "c-accelerators" as an example of an "extra", but it's unclear how this would work (based on the equivalent setuptools feature). There doesn't appear to be a way to know what extras are requested at build time. If the extra required a package like cython then you could build the extension whenever that package is present, but what about hand-written extensions? Extras are also opt-in instead of opt-out, so I'd have to recommend that most people use "pip install tornado[fast]" instead of "pip install tornado" (with "tornado[slow]" available as an option for limited environments). Thanks, -Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at bendarnell.com Sat Jul 13 18:29:20 2013 From: ben at bendarnell.com (Ben Darnell) Date: Sat, 13 Jul 2013 12:29:20 -0400 Subject: [Distutils] Best practices for optional C extensions Message-ID: I'd like to add a C extension to speed up a small bit of code in a package (Tornado), but make it optional both for compatibility with non-cpython implementations and for ease of installation on systems without a C compiler available. Ideally any user who runs "pip install tornado" on a system capable of compiling extensions would get the extensions; if this capability cannot be detected automatically I'd prefer the opt-out case to be the one that requires non-default arguments. Are there any packages that provide a good example to follow for this? PEP 426 uses "c-accelerators" as an example of an "extra", but it's unclear how this would work (based on the equivalent setuptools feature). There doesn't appear to be a way to know what extras are requested at build time. If the extra required a package like cython then you could build the extension whenever that package is present, but what about hand-written extensions? 
Extras are also opt-in instead of opt-out, so I'd have to recommend that most people use "pip install tornado[fast]" instead of "pip install tornado" (with "tornado[slow]" available as an option for limited environments). Thanks, -Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 09:01:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 17:01:58 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: On 14 July 2013 16:43, Donald Stufft wrote: > I just want to make sure that the boundaries between the governance of > Python and pip are clearly defined and the expectations on both sides are > laid out and agreed upon before it happens. And I think this raises a good > point about how the two projects are going to interact. > Agreed, I think the boundaries need to be clear. If something installed by default is *only* support code for a bundled application, then it should either adhere to the standard library's backwards compatibility policies (by appropriately marking private APIs as private), or else it should issue a warning when imported by any other application. Either of those options sounds good to me. However, I consider expecting people to "just know" (or to look at documentation to determine) which provided modules are public or private without adhering to standard naming conventions or providing an explicit runtime warning to be unreasonable. (and yes, if "pip" goes down the runtime warning path, we should probably look into providing a runtime warning for at least the "test" namespace and possibly even the "idlelib" namespace, too) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 09:13:50 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 03:13:50 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> Message-ID: <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> On Jul 14, 2013, at 3:01 AM, Nick Coghlan wrote: > On 14 July 2013 16:43, Donald Stufft wrote: > I just want to make sure that the boundaries between the governance of Python and pip are clearly defined and the expectations on both sides are laid out and agreed upon before it happens. And I think this raises a good point about how the two projects are going to interact. > > Agreed, I think the boundaries need to be clear. If something installed by default is *only* support code for a bundled application, then it should either adhere to the standard library's backwards compatibility policies (by appropriately marking private APIs as private), or else it should issue a warning when imported by any other application. Either of those options sounds good to me. > > However, I consider expecting people to "just know" (or to look at documentation to determine) which provided modules are public or private without adhering to standard naming conventions or providing an explicit runtime warning to be unreasonable. > > (and yes, if "pip" goes down the runtime warning path, we should probably look into providing a runtime warning for at least the "test" namespace and possibly even the "idlelib" namespace, too) > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia Yea I forget to talk about the *actual* change that prompted that email when I started feeling dictated to which touched upon one of my fears in this process :) I'm not against either renaming or emitting a warning. I was actually asking if just documenting the fact would be ok because I fear bugs from the code churn that renaming would cause :) I think we'd need to rename things because emitting a warning is an all or nothing ordeal and we've had requests to make certain parts of the API public for Chef and other tools like it. A question that certainly raises in my mind though is "standard library's backwards compatibility policies". What affect does this have on *actual* public API exposed from pip? Does it mean we cannot break compatibility for them until Python 4.x? That sounds very onerous for something that is installed in a way that allows easy upgrade and downgrading separately from Python to match the version requirements of someone using that library. Pip has it's own versions and develops at it's own speed. I think it would be reasonable for the pip maintainers to be asked to declare a public API (even if that's "None") using the naming scheme or an import warning and declare a backwards compatibility policy for pip itself so that people can know what to expect from pip. I do not however, believe it is reasonable to bind pip to the same policy that CPython uses nor the same schedule. (If you weren't suggesting that I apologize). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Jul 14 09:35:34 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 17:35:34 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> Message-ID: On 14 July 2013 17:13, Donald Stufft wrote: > I think it would be reasonable for the pip maintainers to be asked to > declare a public API (even if that's "None") using the naming scheme or an > import warning and declare a backwards compatibility policy for pip itself > so that people can know what to expect from pip. I do not however, believe > it is reasonable to bind pip to the same policy that CPython uses nor the > same schedule. (If you weren't suggesting that I apologize). > The main elements of CPython's backwards compatibility policy that I consider relevant are: * Use leading underscores to denote private APIs with no backwards compatibility guarantees * Be conservative with deprecating public APIs that aren't fundamentally broken * Use DeprecationWarning to give at least one (pip) release notice of an upcoming backwards incompatible change We *are* sometimes quite aggressive with deprecation and removal even in the standard library - we removed contextlib.nested from Python 3.2 as a problematic bug magnet well before I came up with the contextlib.ExitStack API as a less error prone replacement in Python 3.3. 
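As a sketch of the "import warning" option mentioned in the quote above (illustrative only, not actual pip code), such a check could sit at the top of pip/__init__.py and key off sys.argv[0]:

    # Illustrative sketch: warn when pip is imported by anything other
    # than its own command line entry points.
    import os.path
    import sys
    import warnings


    def _imported_by_pip_cli():
        """Best-effort guess: running as "pip[X[.Y]]" or "python -m pip"?"""
        prog = os.path.basename(sys.argv[0] or "")
        if prog.lower().endswith(".exe"):
            prog = prog[:-4]
        # "python -m pip" sets argv[0] to the path of pip's __main__ module.
        return prog.startswith("pip") or prog == "__main__.py"


    if not _imported_by_pip_cli():
        warnings.warn(
            "pip does not provide a stable importable API; "
            "use the command line interface instead",
            stacklevel=2,
        )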
It's only when it comes to core syntax and builtin behaviour that we're likely to hit issues that simply don't have a sensible deprecation strategy, so we decide we have to live with them indefinitely. That said, I think the answer to this discussion also affects the answer to whether or not CPython maintenance releases should update to newer versions of pip: if pip chooses to adopt a faster deprecation cycle than CPython, then our maintenance releases shouldn't bundle updated versions. Instead, they should follow the policy: * if this is a new major release, or the first maintenance release to bundle pip, bundle the latest available version of pip * otherwise, bundle the same version of pip as the previous release This would mean we'd be asking the pip team to help out by providing security releases for the bundled version, so we can get that without breaking the public API that's available by default. On the other hand, if the pip team are willing to use long deprecation cycles then we can just bundle the updated versions and not worry about security releases (I'd prefer that, but it only works if the pip team are willing to put up with keeping old APIs around for a couple of years before killing them off once the affected CPython branches go into security fix only mode). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 09:50:52 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 03:50:52 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> Message-ID: I do think pip is pretty conservative about backwards compat other than the security related changes I've been doing. I think we can find the middle ground that lets things work smoothly here :). I was just making sure that we wernt going to have to keep things around for really long times like python 4 ;) On Jul 14, 2013, at 3:35 AM, Nick Coghlan wrote: > On 14 July 2013 17:13, Donald Stufft wrote: >> I think it would be reasonable for the pip maintainers to be asked to declare a public API (even if that's "None") using the naming scheme or an import warning and declare a backwards compatibility policy for pip itself so that people can know what to expect from pip. I do not however, believe it is reasonable to bind pip to the same policy that CPython uses nor the same schedule. (If you weren't suggesting that I apologize). > > The main elements of CPython's backwards compatibility policy that I consider relevant are: > > * Use leading underscores to denote private APIs with no backwards compatibility guarantees > * Be conservative with deprecating public APIs that aren't fundamentally broken > * Use DeprecationWarning to give at least one (pip) release notice of an upcoming backwards incompatible change > > We *are* sometimes quite aggressive with deprecation and removal even in the standard library - we removed contextlib.nested from Python 3.2 as a problematic bug magnet well before I came up with the contextlib.ExitStack API as a less error prone replacement in Python 3.3. It's only when it comes to core syntax and builtin behaviour that we're likely to hit issues that simply don't have a sensible deprecation strategy, so we decide we have to live with them indefinitely. 
> > That said, I think the answer to this discussion also affects the answer to whether or not CPython maintenance releases should update to newer versions of pip: if pip chooses to adopt a faster deprecation cycle than CPython, then our maintenance releases shouldn't bundle updated versions. Instead, they should follow the policy: > > * if this is a new major release, or the first maintenance release to bundle pip, bundle the latest available version of pip > * otherwise, bundle the same version of pip as the previous release > > This would mean we'd be asking the pip team to help out by providing security releases for the bundled version, so we can get that without breaking the public API that's available by default. > > On the other hand, if the pip team are willing to use long deprecation cycles then we can just bundle the updated versions and not worry about security releases (I'd prefer that, but it only works if the pip team are willing to put up with keeping old APIs around for a couple of years before killing them off once the affected CPython branches go into security fix only mode). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 09:55:12 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 08:55:12 +0100 Subject: [Distutils] Best practices for optional C extensions In-Reply-To: References: Message-ID: On 13 July 2013 14:14, Ben Darnell wrote: > I'd like to add a C extension to speed up a small bit of code in a package > (Tornado), but make it optional both for compatibility with non-cpython > implementations and for ease of installation on systems without a C > compiler available. Ideally any user who runs "pip install tornado" on a > system capable of compiling extensions would get the extensions; if this > capability cannot be detected automatically I'd prefer the opt-out case to > be the one that requires non-default arguments. Are there any packages > that provide a good example to follow for this? I believe that coverage has an optional C extension like this. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 10:12:22 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 18:12:22 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> Message-ID: On 14 July 2013 17:50, Donald Stufft wrote: > I do think pip is pretty conservative about backwards compat other than > the security related changes I've been doing. > > I think we can find the middle ground that lets things work smoothly here > :). I was just making sure that we wernt going to have to keep things > around for really long times like python 4 ;) > Even most of the standard library isn't that conservative - it usually only happens when we can't find a sensible place to hook up DeprecationWarning. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
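Picking up Ben's optional C extension question from earlier in the digest: the pattern Paul points at (used in various forms by projects such as coverage.py and simplejson) is to let the extension build fail softly and fall back to pure Python. A minimal sketch with a hypothetical "example" project, not the actual setup.py of any of those projects:

    # setup.py -- minimal sketch of an optional C accelerator.
    # If the extension cannot be compiled, the install continues without it.
    import sys

    from distutils.command.build_ext import build_ext
    from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError
    from setuptools import Extension, setup


    class optional_build_ext(build_ext):
        def run(self):
            try:
                build_ext.run(self)
            except DistutilsPlatformError:
                self._skip("no suitable C compiler was found")

        def build_extension(self, ext):
            try:
                build_ext.build_extension(self, ext)
            except (CCompilerError, DistutilsExecError, DistutilsPlatformError):
                self._skip("could not compile %s" % ext.name)

        def _skip(self, reason):
            sys.stderr.write("WARNING: %s; using the pure-Python fallback.\n" % reason)


    setup(
        name="example",
        version="0.1",
        packages=["example"],
        ext_modules=[Extension("example._speedups", ["example/_speedups.c"])],
        cmdclass={"build_ext": optional_build_ext},
    )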
URL: From noah at coderanger.net Sun Jul 14 10:23:36 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 01:23:36 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> Message-ID: <6E6C1AA7-365E-4FD9-8213-A1E809473B74@coderanger.net> On Jul 14, 2013, at 12:35 AM, Nick Coghlan wrote: > On 14 July 2013 17:13, Donald Stufft wrote: > I think it would be reasonable for the pip maintainers to be asked to declare a public API (even if that's "None") using the naming scheme or an import warning and declare a backwards compatibility policy for pip itself so that people can know what to expect from pip. I do not however, believe it is reasonable to bind pip to the same policy that CPython uses nor the same schedule. (If you weren't suggesting that I apologize). > > The main elements of CPython's backwards compatibility policy that I consider relevant are: > > * Use leading underscores to denote private APIs with no backwards compatibility guarantees > * Be conservative with deprecating public APIs that aren't fundamentally broken > * Use DeprecationWarning to give at least one (pip) release notice of an upcoming backwards incompatible change > > We *are* sometimes quite aggressive with deprecation and removal even in the standard library - we removed contextlib.nested from Python 3.2 as a problematic bug magnet well before I came up with the contextlib.ExitStack API as a less error prone replacement in Python 3.3. It's only when it comes to core syntax and builtin behaviour that we're likely to hit issues that simply don't have a sensible deprecation strategy, so we decide we have to live with them indefinitely. > > That said, I think the answer to this discussion also affects the answer to whether or not CPython maintenance releases should update to newer versions of pip: if pip chooses to adopt a faster deprecation cycle than CPython, then our maintenance releases shouldn't bundle updated versions. Instead, they should follow the policy: > > * if this is a new major release, or the first maintenance release to bundle pip, bundle the latest available version of pip > * otherwise, bundle the same version of pip as the previous release > > This would mean we'd be asking the pip team to help out by providing security releases for the bundled version, so we can get that without breaking the public API that's available by default. > > On the other hand, if the pip team are willing to use long deprecation cycles then we can just bundle the updated versions and not worry about security releases (I'd prefer that, but it only works if the pip team are willing to put up with keeping old APIs around for a couple of years before killing them off once the affected CPython branches go into security fix only mode). If I can surmise your worry here, it is that people will open an interactive terminal, import pip, reflect out the classes/methods/etc, see that despite being mentioned no-where in the Python or pip documentation the methods and classes don't start with an underscore, and thus conclude that this is a stable API to build against? I agree that conventions are good, but I have to say this sounds like a bit of a stretch and certainly anyone complaining that their undocumented API that they only found via reflection (or reading the pip source) was broken basically gets what they deserve. 
The point I was trying to make is that a major shift in thinking is needed here. pip is not part of CPython, regardless of this bundling neither this mailing list nor the CPython team will have any control (aside from the nuclear option that the CPython team can elect to stop bundling pip). If you think it would be good for the code-health of pip to be clearer about what their public API is, I will support that all the way and in fact have an open ticket against pip to that effect already, but that is something for the pip team to decide. This does very much mean that the CPython team is not just backing the pip codebase, but the PyPA/pip team. I think the past few years have shown them deserving of this trust, and they should be allowed to run things as they see fit. These lines get blurry since several people move back and forth between CPython and PyPA (and distutils and PyPI, etc) hats, so I think this must be stated clearly up front that what the CPython team thinks is "reasonable" for an API policy will be nothing more than a recommendation from very knowledgable colleagues and will be given the appropriate consideration and respect it deserves based on that. Hopefully that makes my point-of-view a little clearer. I started a thread on python-dev proposing strengthened wording in PEP 8 regarding marking of private interfaces, but beyond that, yes, I now agree that this isn't a blocker for bundling pip with CPython. Cheers, Nick. > > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 13:09:51 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 12:09:51 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs Message-ID: I don't think there is any doubt that we need to document to at least some extent how to use pip, in the Python documentation for 3.4.
Instead, they should follow the policy: > > > > * if this is a new major release, or the first maintenance release to bundle pip, bundle the latest available version of pip > > * otherwise, bundle the same version of pip as the previous release > > > > This would mean we'd be asking the pip team to help out by providing security releases for the bundled version, so we can get that without breaking the public API that's available by default. > > > > On the other hand, if the pip team are willing to use long deprecation cycles then we can just bundle the updated versions and not worry about security releases (I'd prefer that, but it only works if the pip team are willing to put up with keeping old APIs around for a couple of years before killing them off once the affected CPython branches go into security fix only mode). > > If I can surmise your worry here, it is that people will open an interactive terminal, import pip, reflect out the classes/methods/etc, see that despite being mentioned no-where in the Python or pip documentation the methods and classes don't start with an underscore, and thus conclude that this is a stable API to build against? I agree that conventions are good, but I have to say this sounds like a bit of a stretch and certainly anyone complaining that their undocumented API that they only found via reflection (or reading the pip source) was broken basically gets what they deserve. The point I was trying to make is that a major shift in thinking is needed here. pip is not part of CPython, regardless of this bundling neither this mailing list nor the CPython team will have any control (aside from the nuclear option that the CPython team can elect to stop bundling pip). If you think it would be good for the code-health of pip to be clearer about what their public API is, I will suppor > t that all the way and in fact have an open ticket against pip to that effect already, but that is something for the pip team to decide. This does very much mean that the CPython team is not just backing the pip codebase, but the PyPA/pip team. I think the past few years have shown them deserving of this trust, and they should be allowed to run things as they see fit. These lines get blurry since several people move back and forth between CPython and PyPA (and distutils and PyPI, etc) hats, so I think this must be stated clearly up front that what the CPython team thinks is "reasonable" for an API policy will be nothing more than a recommendation from very knowledgable colleagues and will be given the appropriate consideration and respect it deserves based on that. Hopefully that makes my point-of-view a little clearer. I started a thread on python-dev proposing strengthened wording in PEP 8 regarding marking of private interfaces, but beyond that, yes, I now agree that this isn't a blocker for bundling pip with CPython. Cheers, Nick. > > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 13:09:51 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 12:09:51 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs Message-ID: I don't think there is any doubt that we need to document to at least some extent how to use pip, in the Python documentation for 3.4. 
The obvious place is "Installing Python Modules" as a chapter to itself. What is less clear to me is how much to document - just basic "pip install XXX", or the whole pip command set, or somewhere in between? Should there be some type of "refer to www.pip-installer.org for the complete documentation" reference? Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 13:42:32 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 21:42:32 +1000 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On 14 July 2013 21:09, Paul Moore wrote: > I don't think there is any doubt that we need to document to at least some > extent how to use pip, in the Python documentation for 3.4. The obvious > place is "Installing Python Modules" as a chapter to itself. > > What is less clear to me is how much to document - just basic "pip install > XXX", or the whole pip command set, or somewhere in between? Should there > be some type of "refer to www.pip-installer.org for the complete > documentation" reference? > This is the hole https://python-packaging-user-guide.readthedocs.org/en/latest/ is supposed to fill - once it's "ready" (i.e. things have stabilised sufficiently , then I'd like to replace the "Installing Python Modules" and "Distributing Python Modules" sections for 2.7 and 3.3 with some *very* abbreviated quick start guides that then reference that site. The 3.3 changes would then carry over into 3.4. I spent some time at PyCon AU talking to Matthew Iverson ( https://bitbucket.org/Ivoz/python-packaging-user-guide) who I believe Marcus was hoping to get back to that after pip 1.4 was out the door, but anyone on the PyPA list on BitBucket actually has full access to accept pull requests, etc. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 15:01:50 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 14:01:50 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On 14 July 2013 12:42, Nick Coghlan wrote: > This is the hole > https://python-packaging-user-guide.readthedocs.org/en/latest/ is > supposed to fill - once it's "ready" (i.e. things have stabilised > sufficiently , then I'd like to replace the "Installing Python Modules" and > "Distributing Python Modules" sections for 2.7 and 3.3 with some *very* > abbreviated quick start guides that then reference that site. The 3.3 > changes would then carry over into 3.4. > Hmm, OK. I've no problem with that (although I do find the packaging guide pretty hard to get into for an end user who only wants to *use* packages, not *create* them, but that's a separate issue for me to address by providing some pull requests). I was more thinking in terms of your quick start guides. I think we should explain *in the core documentation* how to (a) install a new package, (b) uninstall a package, (c) list what is installed and (d) upgrade pip itself. That translates to the pip install, uninstall, and list commands at a minimum. I could offer some text, if that's the way you want to go with this. How about if I provide a new (short) document called something like "Python package management" and we work out how to integrate it into the docs as things settle down? 
Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 15:29:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 14:29:22 +0100 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) Message-ID: On 13 July 2013 10:05, Paul Moore wrote: > How robust is the process of upgrading pip using itself? Specifically on > Windows, where these things typically seem less reliable. OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason for the failure is simple enough to explain - the pip.exe wrapper is held open by the OS while it's in use, so that the upgrade cannot replace it. The result is a failed upgrade and a partially installed new version of pip. In practice, the exe stubs are probably added fairly late in the install (at least when installing from sdist, with a wheel that depends on the order of the files in the wheel), so it's probably only a little bit broken, but "a little bit broken" is still broken :-( On the other hand, "python -m pip install -U pip" works fine because it avoids the exe wrappers. There's a lot of scope for user confusion and frustration in all this. For standalone pip I've tended to recommend "don't do that" - manually uninstall and reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. That sort of advice isn't going to be realistic for a pip bundled with CPython. Does anyone have any suggestions? Paul. PS In better news, apart from this issue, pip upgrades of pip and setuptools seem fine. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Jul 14 15:43:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 14 Jul 2013 23:43:46 +1000 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On 14 July 2013 23:01, Paul Moore wrote: > > On 14 July 2013 12:42, Nick Coghlan wrote: > >> This is the hole >> https://python-packaging-user-guide.readthedocs.org/en/latest/ is >> supposed to fill - once it's "ready" (i.e. things have stabilised >> sufficiently , then I'd like to replace the "Installing Python Modules" and >> "Distributing Python Modules" sections for 2.7 and 3.3 with some *very* >> abbreviated quick start guides that then reference that site. The 3.3 >> changes would then carry over into 3.4. >> > > Hmm, OK. I've no problem with that (although I do find the packaging guide > pretty hard to get into for an end user who only wants to *use* packages, > not *create* them, but that's a separate issue for me to address by > providing some pull requests). I was more thinking in terms of your quick > start guides. I think we should explain *in the core documentation* how to > (a) install a new package, (b) uninstall a package, (c) list what is > installed and (d) upgrade pip itself. That translates to the pip install, > uninstall, and list commands at a minimum. > > I could offer some text, if that's the way you want to go with this. How > about if I provide a new (short) document called something like "Python > package management" and we work out how to integrate it into the docs as > things settle down? 
> That sounds great - so far it's mostly just been myself and Marcus thinking about it (mostly Marcus, to be honest, along with a couple of folks that submitted pull requests and BitBucket issues), and it keeps getting bumped down the todo list by other things. I think we're getting closer to having something stable enough to document clearly, though - with distribute merged back into setuptools and pip not far away, the bootstrapping seems to be the only remaining slightly messy part (since the 3.4 discussions aren't relevant to the user guide as yet). As far as your first point goes, I agree the "Installation Tutorial" part should probably come first and definitely needs more content. I did just accept a pull request earlier that at least makes that page more than just a list of headings (see https://python-packaging-user-guide.readthedocs.org/en/latest/installation_tutorial.html- courtesy of https://bitbucket.org/alexjeffburke) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sun Jul 14 16:25:25 2013 From: brett at python.org (Brett Cannon) Date: Sun, 14 Jul 2013 10:25:25 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <6E6C1AA7-365E-4FD9-8213-A1E809473B74@coderanger.net> References: <29FDBDCE-EC77-40BD-823B-A291C5C4EE49@stufft.io> <09947DB4-FD58-469B-85DA-971B05B18799@stufft.io> <6E6C1AA7-365E-4FD9-8213-A1E809473B74@coderanger.net> Message-ID: On Sun, Jul 14, 2013 at 4:23 AM, Noah Kantrowitz wrote: > > On Jul 14, 2013, at 12:35 AM, Nick Coghlan wrote: > > > On 14 July 2013 17:13, Donald Stufft wrote: > > I think it would be reasonable for the pip maintainers to be asked to > declare a public API (even if that's "None") using the naming scheme or an > import warning and declare a backwards compatibility policy for pip itself > so that people can know what to expect from pip. I do not however, believe > it is reasonable to bind pip to the same policy that CPython uses nor the > same schedule. (If you weren't suggesting that I apologize). > > > > The main elements of CPython's backwards compatibility policy that I > consider relevant are: > > > > * Use leading underscores to denote private APIs with no backwards > compatibility guarantees > > * Be conservative with deprecating public APIs that aren't fundamentally > broken > > * Use DeprecationWarning to give at least one (pip) release notice of an > upcoming backwards incompatible change > > > > We *are* sometimes quite aggressive with deprecation and removal even in > the standard library - we removed contextlib.nested from Python 3.2 as a > problematic bug magnet well before I came up with the contextlib.ExitStack > API as a less error prone replacement in Python 3.3. It's only when it > comes to core syntax and builtin behaviour that we're likely to hit issues > that simply don't have a sensible deprecation strategy, so we decide we > have to live with them indefinitely. > > > > That said, I think the answer to this discussion also affects the answer > to whether or not CPython maintenance releases should update to newer > versions of pip: if pip chooses to adopt a faster deprecation cycle than > CPython, then our maintenance releases shouldn't bundle updated versions. 
> Instead, they should follow the policy: > > > > * if this is a new major release, or the first maintenance release to > bundle pip, bundle the latest available version of pip > > * otherwise, bundle the same version of pip as the previous release > > > > This would mean we'd be asking the pip team to help out by providing > security releases for the bundled version, so we can get that without > breaking the public API that's available by default. > > > > On the other hand, if the pip team are willing to use long deprecation > cycles then we can just bundle the updated versions and not worry about > security releases (I'd prefer that, but it only works if the pip team are > willing to put up with keeping old APIs around for a couple of years before > killing them off once the affected CPython branches go into security fix > only mode). > > If I can surmise your worry here, it is that people will open an > interactive terminal, import pip, reflect out the classes/methods/etc, see > that despite being mentioned no-where in the Python or pip documentation > the methods and classes don't start with an underscore, and thus conclude > that this is a stable API to build against? Yes, and to make that statement even stronger: all of that happening with a freshly installed copy of Python with no external packages installed. Nick's worry stems from experience (which I have also had) where people simply don't check docs as to whether something is public in the stdlib and so it ends up being considered such by users to the point that we feel obliged to support it. > I agree that conventions are good, but I have to say this sounds like a > bit of a stretch and certainly anyone complaining that their undocumented > API that they only found via reflection (or reading the pip source) was > broken basically gets what they deserve. That's what we typically say for older modules, especially when we are cranky. =) But it doesn't stop the wingeing and bug reports. Luckily I think we have gotten better about this. > The point I was trying to make is that a major shift in thinking is needed > here. pip is not part of CPython, regardless of this bundling neither this > mailing list nor the CPython team will have any control (aside from the > nuclear option that the CPython team can elect to stop bundling pip). If > you think it would be good for the code-health of pip to be clearer about > what their public API is, I will suppor > t that all the way and in fact have an open ticket against pip to that > effect already, but that is something for the pip team to decide. This does > very much mean that the CPython team is not just backing the pip codebase, > but the PyPA/pip team. I think the past few years have shown them deserving > of this trust, and they should be allowed to run things as they see fit. > These lines get blurry since several people move back and forth between > CPython and PyPA (and distutils and PyPI, etc) hats, so I think this must > be stated clearly up front that what the CPython team thinks is > "reasonable" for an API policy will be nothing more than a recommendation > from very knowledgable colleagues and will be given the appropriate > consideration and respect it deserves based on that. Hopefully that makes > my point-of-view a little clearer. > I think it's all going to come down to messaging. 
It will have to be yelled from the top of every mountain that pip is being bundled with Python as a convenience to the community, but that it is **NOT** part of the (C)Python project and thus has its own development process, issue tracker, etc. If Python bundles pip then python-dev will make promises about what versions we include in bugfix releases, how it's bundled, etc., but otherwise it's a separate project with its own rules. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From brett at python.org Sun Jul 14 16:28:54 2013 From: brett at python.org (Brett Cannon) Date: Sun, 14 Jul 2013 10:28:54 -0400 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers In-Reply-To: References: Message-ID: On Sun, Jul 14, 2013 at 2:13 AM, Nick Coghlan wrote: > Based on the recent discussions, I now plan to reject the pip > bootstrapping-on-first-invocation approach described in PEP 439 in favour > of a new PEP that proposes: > [SNIP] > * ensuring that, for Python 3.4, "python3" and "python3.4" are available > for command line invocation of Python, even on Windows > Can I ask why this is part of the PEP? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From tseaver at palladion.com Sun Jul 14 16:31:22 2013 From: tseaver at palladion.com (Tres Seaver) Date: Sun, 14 Jul 2013 10:31:22 -0400 Subject: [Distutils] Best practices for optional C extensions In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/13/2013 09:14 AM, Ben Darnell wrote: > I'd like to add a C extension to speed up a small bit of code in a > package (Tornado), but make it optional both for compatibility with > non-cpython implementations and for ease of installation on systems > without a C compiler available. Ideally any user who runs "pip > install tornado" on a system capable of compiling extensions would get > the extensions; if this capability cannot be detected automatically > I'd prefer the opt-out case to be the one that requires non-default > arguments. Are there any packages that provide a good example to > follow for this? > > PEP 426 uses "c-accelerators" as an example of an "extra", but it's > unclear how this would work (based on the equivalent setuptools > feature). There doesn't appear to be a way to know what extras are > requested at build time. If the extra required a package like cython > then you could build the extension whenever that package is present, > but what about hand-written extensions? Extras are also opt-in > instead of opt-out, so I'd have to recommend that most people use "pip > install tornado[fast]" instead of "pip install tornado" (with > "tornado[slow]" available as an option for limited environments). zope.interface subclasses the 'build_ext' command so:
- ---------------------------- %< -----------------------------
from distutils.command.build_ext import build_ext
from distutils.errors import CCompilerError
from distutils.errors import DistutilsExecError
from distutils.errors import DistutilsPlatformError


class optional_build_ext(build_ext):
    """Allow the building of C extensions to fail.
    """
    def run(self):
        try:
            build_ext.run(self)
        except DistutilsPlatformError as e:
            self._unavailable(e)

    def build_extension(self, ext):
        try:
            build_ext.build_extension(self, ext)
        except (CCompilerError, DistutilsExecError) as e:
            self._unavailable(e)

    def _unavailable(self, e):
        print('*' * 80)
        print("""WARNING: An optional code optimization (C extension) could not be compiled.
Optimizations for this package will not be available!""")
        print()
        print(e)
        print('*' * 80)
- ---------------------------- %< -----------------------------
Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlHitjoACgkQ+gerLs4ltQ7SJQCgrhgN58g9ztFPEkFAOM49Wu4p RpQAoLnboKietjTx3eXho1kyRvH3r2uN =qWRK -----END PGP SIGNATURE-----
From ncoghlan at gmail.com Sun Jul 14 16:33:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 15 Jul 2013 00:33:45 +1000 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers In-Reply-To: References: Message-ID: On 15 July 2013 00:28, Brett Cannon wrote: > > > > On Sun, Jul 14, 2013 at 2:13 AM, Nick Coghlan wrote: >> >> Based on the recent discussions, I now plan to reject the pip >> bootstrapping-on-first-invocation approach described in PEP 439 in favour of >> a new PEP that proposes: >> [SNIP] >> * ensuring that, for Python 3.4, "python3" and "python3.4" are available >> for command line invocation of Python, even on Windows > > > Can I ask why this is part of the PEP? Mostly because "pip3" makes no sense without it. If we *don't* bring Windows into conformance with the way other platforms handle parallel Python 2 and Python 3 installs, then the PEP should document an explicit rationale for not doing it (the py launcher doesn't count in that regard, since the real purpose of that is to handle shebang lines through file associations, with the direct command line usage as an added bonus). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From setuptools at bugs.python.org Sun Jul 14 17:10:31 2013 From: setuptools at bugs.python.org (richard) Date: Sun, 14 Jul 2013 15:10:31 +0000 Subject: [Distutils] [issue154] file protection group/world writeable on egg-info files since 0.7x Message-ID: <1373814631.49.0.577934070272.issue154@psf.upfronthosting.co.za> New submission from richard: since 0.7.x, there are some issues in the tarball setuptools.egg-info directory...namely, 1. the file protection of the files is group/world writeable (should be group/world read-only) 2. there are additional .orig files 3. there is an additional directory EGG-INFO containing an obsolete copy of PKG-INFO. This should be trivial to fix in the source tree.
extract below from the 0.8 tarball:
drwxrwxrwx 0/0 0 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/
-rw-rw-rw- 0/0 654 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/dependency_links.txt
drwxrwxrwx 0/0 0 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/EGG-INFO/
-rw-rw-rw- 0/0 153 2013-07-02 17:30 setuptools-0.8/setuptools.egg-info/EGG-INFO/PKG-INFO
-rw-rw-rw- 0/0 2773 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/entry_points.txt
-rw-rw-rw- 0/0 2773 2013-07-03 14:12 setuptools-0.8/setuptools.egg-info/entry_points.txt.orig
-rw-rw-rw- 0/0 45823 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/PKG-INFO
-rw-rw-rw- 0/0 186 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/requires.txt
-rw-rw-rw- 0/0 186 2013-07-03 14:12 setuptools-0.8/setuptools.egg-info/requires.txt.orig
-rw-rw-rw- 0/0 3663 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/SOURCES.txt
-rw-rw-rw- 0/0 49 2013-07-05 19:12 setuptools-0.8/setuptools.egg-info/top_level.txt
-rw-rw-rw- 0/0 2 2013-07-02 17:48 setuptools-0.8/setuptools.egg-info/zip-safe
----------
messages: 737
nosy: richard
priority: bug
status: unread
title: file protection group/world writeable on egg-info files since 0.7x
_______________________________________________ Setuptools tracker _______________________________________________
From pnasrat at gmail.com Sun Jul 14 17:10:28 2013 From: pnasrat at gmail.com (Paul Nasrat) Date: Sun, 14 Jul 2013 11:10:28 -0400 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers In-Reply-To: References: Message-ID: On 14 July 2013 02:13, Nick Coghlan wrote: > Based on the recent discussions, I now plan to reject the pip > bootstrapping-on-first-invocation approach described in PEP 439 in favour > of a new PEP that proposes: > > * bundling the latest version of pip with the CPython binary installers > for Mac OS X and Windows for all future CPython releases (including > maintenance releases) > * aligns the proposal with the Python 3.4 release schedule by noting that > CPython changes must be completed by the first 3.4 beta, while pip changes > must be completed by the first 3.4 release candidate. > * ensuring that, for Python 3.4, "python3" and "python3.4" are available > for command line invocation of Python, even on Windows > * ensuring that the bundled pip, for Python 3.4, ensures "pip", "pip3" and > "pip3.4" are available for command line invocation of Python, even on > Windows > * ensuring that the bundled pip is able to upgrade/downgrade itself > independent of the CPython release cycle > * ensuring that pip is automatically available in virtual environments > created with pyvenv > * adding a "get-pip.py" script to Tools/scripts in the CPython repo for > bootstrapping the latest pip release from PyPI into custom CPython source > builds > > Note that there are still open questions to be resolved, which is why an > author/champion is needed: > I've a bunch of contacts in various distros still - I've not championed a PEP before but I would be happy to take this on. > * what guidance, if any, should we provide to Linux distro packagers? > * how should maintenance updates handle the presence of an existing pip > installation? > There are various distro packaging specific ways of handling this. Hard requirements, recommends, obsoleting the standalone package and providing it virtually as part of Automatically upgrade older versions to the bundled version, while leaving > newer versions alone? Force installation of the bundled version?
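A minimal sketch of the first of those options (upgrade an older pip, leave a newer one alone), assuming the installer can use pkg_resources to inspect whatever pip is already present; the bundled version string here is illustrative only:

import pkg_resources

BUNDLED_PIP_VERSION = '1.4'  # placeholder for whatever version ships with the installer


def should_install_bundled_pip():
    """Return True if the installer should (re)install its bundled pip."""
    try:
        installed = pkg_resources.get_distribution('pip').version
    except pkg_resources.DistributionNotFound:
        return True  # no pip at all: install the bundled copy
    # Leave a newer, user-upgraded pip alone; only replace older ones.
    return (pkg_resources.parse_version(installed)
            < pkg_resources.parse_version(BUNDLED_PIP_VERSION))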
> My personal experience is that forcing the bundled version to OS with strong in-built packaging (Linux, BSD, other *NIX) is likely to meet with some resistance. I can certainly talk with some people, my instinct is it's likely to be only bundle with installers, allow optional install as part of the cPython build which can then be subpackaged/ignored for seperate pip/bundled as distros so desire. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Sun Jul 14 18:45:31 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sun, 14 Jul 2013 16:45:31 +0000 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: Message-ID: From: Paul Moore > On 13 July 2013 10:05, Paul Moore wrote: > How robust is the process of upgrading pip using itself? Specifically on > Windows, where these things typically seem less reliable. > > OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason > for the failure is simple enough to explain - the pip.exe wrapper is held open > by the OS while it's in use, so that the upgrade cannot replace it. > > The result is a failed upgrade and a partially installed new version of pip. In > practice, the exe stubs are probably added fairly late in the install (at least > when installing from sdist, with a wheel that depends on the order of the files > in the wheel), so it's probably only a little bit broken, but "a little bit > broken" is still broken :-( > > On the other hand, "python -m pip install -U pip" works fine because it avoids > the exe wrappers. > > There's a lot of scope for user confusion and frustration in all this. For > standalone pip I've tended to recommend "don't do that" - manually uninstall and > reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. > That sort of advice isn't going to be realistic for a pip bundled with CPython. > > Does anyone have any suggestions? Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. Cheers, Steve From qwcode at gmail.com Sun Jul 14 19:02:20 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 10:02:20 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: > Hmm, OK. 
I've no problem with that (although I do find the packaging guide > pretty hard to get into for an end user who only wants to *use* packages, > not *create* them > there's a section marked "Installation Tutorial". Someone wanting to install packages should be able to get into that, once it actually has content : ) > I think we should explain *in the core documentation* > How about if I provide a new (short) document called something like "Python > package management" > IMO, we should put the energy into one comprehensive document (the new user guide) and link out to it, not duplicate effort into the core docs. The "Installing Python Modules"/"Distributing Python Modules" documents should be integrated into the new User Guide. It will be confusing and anti-zen for a user to come upon those documents and have to reconcile it with the user guide Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 19:03:21 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 18:03:21 +0100 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: Message-ID: On 14 July 2013 17:45, Steve Dower wrote: > So unless the exe wrapper is changing with each version, I think the best > way of handling this is to not force them to be replaced when they have not > changed. Thanks. That sounds annoyingly complex (pretty much as I expected, though). My feeling is that I'd like to remove the exe wrapper altogether, and use a .py file. I need to check what issues there might be with that before recommending it, though. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 19:06:57 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 13:06:57 -0400 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: Message-ID: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> On Jul 14, 2013, at 1:03 PM, Paul Moore wrote: > On 14 July 2013 17:45, Steve Dower wrote: > So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. > > Thanks. That sounds annoyingly complex (pretty much as I expected, though). My feeling is that I'd like to remove the exe wrapper altogether, and use a .py file. I need to check what issues there might be with that before recommending it, though. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Wouldn't a .py file make the command `pip.py`` and not ``pip`` ? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sun Jul 14 19:11:21 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 18:11:21 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On 14 July 2013 18:02, Marcus Smith wrote: > there's a section marked "Installation Tutorial". Someone wanting to > install packages should be able to get into that, once it actually has > content : ) > Yes, I see that. I'm not sure I like the up-front "decide between user or system or virtualenv" presentation, though. I'm working on something I prefer, see what you think when it's done. My perspective is that someone who's used to writing Python code wants to install requests (and get on with his latest script). How does he do that? He doesn't want to be presented with decisions to make, he just wants to know how to do it. "pip install --user requests" is the answer he is looking for, frankly. (Missing out --user gets permission issues to deal with, and virtualenvs are more than he's looking to deal with at the moment). The guide should get this person to that command as quickly as possible. I think we should explain *in the core documentation* >> > How about if I provide a new (short) document called something like >> "Python package management" >> > > IMO, we should put the energy into one comprehensive document (the new > user guide) and link out to it, not duplicate effort into the core docs. > OK... But remember that most users are consumers of packages, not creators of them. The packaging guide should reflect that with "install and maintain packages" on page 1, and creating packages squarely in "advanced" usage. Is that in line with the goals of the packaging guide? (If not, let's just get a user-only starter page in the Python docs and leave the packaging guide as the "more comprehensive" documentation it can refer to). Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Sun Jul 14 19:12:28 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 10:12:28 -0700 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: Message-ID: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: > From: Paul Moore >> On 13 July 2013 10:05, Paul Moore wrote: >> How robust is the process of upgrading pip using itself? Specifically on >> Windows, where these things typically seem less reliable. >> >> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >> for the failure is simple enough to explain - the pip.exe wrapper is held open >> by the OS while it's in use, so that the upgrade cannot replace it. >> >> The result is a failed upgrade and a partially installed new version of pip. In >> practice, the exe stubs are probably added fairly late in the install (at least >> when installing from sdist, with a wheel that depends on the order of the files >> in the wheel), so it's probably only a little bit broken, but "a little bit >> broken" is still broken :-( >> >> On the other hand, "python -m pip install -U pip" works fine because it avoids >> the exe wrappers. >> >> There's a lot of scope for user confusion and frustration in all this. 
For >> standalone pip I've tended to recommend "don't do that" - manually uninstall and >> reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. >> That sort of advice isn't going to be realistic for a pip bundled with CPython. >> >> Does anyone have any suggestions? > > Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? > > Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. > > MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. > > If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. > > So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. --Noah From graffatcolmingov at gmail.com Sun Jul 14 19:31:20 2013 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Sun, 14 Jul 2013 13:31:20 -0400 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> References: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> Message-ID: On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz wrote: > > On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: > >> From: Paul Moore >>> On 13 July 2013 10:05, Paul Moore wrote: >>> How robust is the process of upgrading pip using itself? Specifically on >>> Windows, where these things typically seem less reliable. >>> >>> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >>> for the failure is simple enough to explain - the pip.exe wrapper is held open >>> by the OS while it's in use, so that the upgrade cannot replace it. >>> >>> The result is a failed upgrade and a partially installed new version of pip. In >>> practice, the exe stubs are probably added fairly late in the install (at least >>> when installing from sdist, with a wheel that depends on the order of the files >>> in the wheel), so it's probably only a little bit broken, but "a little bit >>> broken" is still broken :-( >>> >>> On the other hand, "python -m pip install -U pip" works fine because it avoids >>> the exe wrappers. >>> >>> There's a lot of scope for user confusion and frustration in all this. For >>> standalone pip I've tended to recommend "don't do that" - manually uninstall and >>> reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. >>> That sort of advice isn't going to be realistic for a pip bundled with CPython. >>> >>> Does anyone have any suggestions? 
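A minimal sketch of the two workarounds discussed above, combining Steve's suggestion (skip a wrapper that has not changed) with Noah's (rename the locked .exe aside and delete the leftover on a later run). The function and file names are illustrative only, not pip's actual install code:

import filecmp
import os
import shutil


def _clean_leftovers(script_dir):
    # Remove wrappers parked by a previous upgrade, if they are no longer locked.
    for name in os.listdir(script_dir):
        if name.endswith('.deleteme'):
            try:
                os.unlink(os.path.join(script_dir, name))
            except OSError:
                pass  # still held open; try again on a later run


def replace_wrapper(current_exe, new_exe):
    """Install new_exe over current_exe, which may be locked by a running process."""
    _clean_leftovers(os.path.dirname(current_exe) or '.')

    if os.path.exists(current_exe) and filecmp.cmp(current_exe, new_exe, shallow=False):
        return  # identical wrapper: nothing to replace, so don't fail the upgrade

    if os.path.exists(current_exe):
        # Renaming a running .exe normally succeeds on Windows even though
        # overwriting or deleting it does not, so park it under a new name.
        parked = current_exe + '.deleteme'
        i = 0
        while os.path.exists(parked):
            i += 1
            parked = '%s.%d.deleteme' % (current_exe, i)
        os.rename(current_exe, parked)
    shutil.copyfile(new_exe, current_exe)

The cleanup pass runs on the next invocation, which matches the "try to unlink *.deleteme on startup" part of the idea; nothing here is specific to pip's own wrappers.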
>> >> Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? >> >> Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. >> >> MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. >> >> If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. >> >> So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. > > The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. I accidentally only emailed Paul earlier, but why can't we upgrade the pip module with the exe and then replace the process (using something in the os.exec* family) with `python -m pip update-exe` which could then succeed since the OS isn't holding onto the exe file? I could be missing something entirely obvious since I haven't developed (directly) on or for Windows in at least 5 years. From noah at coderanger.net Sun Jul 14 19:39:36 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 10:39:36 -0700 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> Message-ID: <5C4267BE-2D69-4FC6-9ED4-C023213CAE77@coderanger.net> On Jul 14, 2013, at 10:31 AM, Ian Cordasco wrote: > On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz wrote: >> >> On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: >> >>> From: Paul Moore >>>> On 13 July 2013 10:05, Paul Moore wrote: >>>> How robust is the process of upgrading pip using itself? Specifically on >>>> Windows, where these things typically seem less reliable. >>>> >>>> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >>>> for the failure is simple enough to explain - the pip.exe wrapper is held open >>>> by the OS while it's in use, so that the upgrade cannot replace it. >>>> >>>> The result is a failed upgrade and a partially installed new version of pip. In >>>> practice, the exe stubs are probably added fairly late in the install (at least >>>> when installing from sdist, with a wheel that depends on the order of the files >>>> in the wheel), so it's probably only a little bit broken, but "a little bit >>>> broken" is still broken :-( >>>> >>>> On the other hand, "python -m pip install -U pip" works fine because it avoids >>>> the exe wrappers. >>>> >>>> There's a lot of scope for user confusion and frustration in all this. For >>>> standalone pip I've tended to recommend "don't do that" - manually uninstall and >>>> reinstall pip, or recreate your virtualenv. 
It's not nice, but it's effective. >>>> That sort of advice isn't going to be realistic for a pip bundled with CPython. >>>> >>>> Does anyone have any suggestions? >>> >>> Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? >>> >>> Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. >>> >>> MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. >>> >>> If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. >>> >>> So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. >> >> The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. > > I accidentally only emailed Paul earlier, but why can't we upgrade the > pip module with the exe and then replace the process (using something > in the os.exec* family) with `python -m pip update-exe` which could > then succeed since the OS isn't holding onto the exe file? I could be > missing something entirely obvious since I haven't developed > (directly) on or for Windows in at least 5 years. Unfortunately windows doesn't actually offer the equivalent of a POSIX exec(). The various functions in os don't actually replace the current process, they just create a new one and terminate the old one. This means the controlling terminal would see the pip process as ended, so it makes showing output difficult at best. --Noah From noah at coderanger.net Sun Jul 14 19:43:18 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 10:43:18 -0700 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: <5C4267BE-2D69-4FC6-9ED4-C023213CAE77@coderanger.net> References: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> <5C4267BE-2D69-4FC6-9ED4-C023213CAE77@coderanger.net> Message-ID: On Jul 14, 2013, at 10:39 AM, Noah Kantrowitz wrote: > > On Jul 14, 2013, at 10:31 AM, Ian Cordasco wrote: > >> On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz wrote: >>> >>> On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: >>> >>>> From: Paul Moore >>>>> On 13 July 2013 10:05, Paul Moore wrote: >>>>> How robust is the process of upgrading pip using itself? Specifically on >>>>> Windows, where these things typically seem less reliable. >>>>> >>>>> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >>>>> for the failure is simple enough to explain - the pip.exe wrapper is held open >>>>> by the OS while it's in use, so that the upgrade cannot replace it. 
>>>>> >>>>> The result is a failed upgrade and a partially installed new version of pip. In >>>>> practice, the exe stubs are probably added fairly late in the install (at least >>>>> when installing from sdist, with a wheel that depends on the order of the files >>>>> in the wheel), so it's probably only a little bit broken, but "a little bit >>>>> broken" is still broken :-( >>>>> >>>>> On the other hand, "python -m pip install -U pip" works fine because it avoids >>>>> the exe wrappers. >>>>> >>>>> There's a lot of scope for user confusion and frustration in all this. For >>>>> standalone pip I've tended to recommend "don't do that" - manually uninstall and >>>>> reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. >>>>> That sort of advice isn't going to be realistic for a pip bundled with CPython. >>>>> >>>>> Does anyone have any suggestions? >>>> >>>> Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? >>>> >>>> Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. >>>> >>>> MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. >>>> >>>> If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. >>>> >>>> So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. >>> >>> The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. >> >> I accidentally only emailed Paul earlier, but why can't we upgrade the >> pip module with the exe and then replace the process (using something >> in the os.exec* family) with `python -m pip update-exe` which could >> then succeed since the OS isn't holding onto the exe file? I could be >> missing something entirely obvious since I haven't developed >> (directly) on or for Windows in at least 5 years. > > Unfortunately windows doesn't actually offer the equivalent of a POSIX exec(). The various functions in os don't actually replace the current process, they just create a new one and terminate the old one. This means the controlling terminal would see the pip process as ended, so it makes showing output difficult at best. Check that, maybe I'm wrong, does anyone know if the P_OVERLAY flag unlocks the original binary? /me drags out a windows VM ? 
--Noah From qwcode at gmail.com Sun Jul 14 19:49:13 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 10:49:13 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: > On 14 July 2013 18:02, Marcus Smith wrote: > >> there's a section marked "Installation Tutorial". Someone wanting to >> install packages should be able to get into that, once it actually has >> content : ) >> > > Yes, I see that. I'm not sure I like the up-front "decide between user or > system or virtualenv" presentation, though. I'm working on something I > prefer, see what you think when it's done. > yea, don't take the current install tutorial TOC that seriously. The user/global/virtualenv content came from a recent merge Nick referred to above. We just need knowledgeable people to get in there and start working/changing OK... But remember that most users are consumers of packages, not creators > of them. The packaging guide should reflect that > Like Nick said, we can put the "Installation Tutorial" above the "Packaging Tutorial". Neither tutorial should be considered "Advanced" IMO. The tutorials should be crisp and fast. Also, we *could* put a "Quickstart" above both tutorials, that just lists the frequent commands with one-liner descriptions, but we have to be really careful that it doesn't end up duping the tutorials, which are also intended to be quick as well. > If not, let's just get a user-only starter page in the Python docs and > leave the packaging guide as the "more comprehensive" documentation it can > refer to). > anything that results in feeling like the user guide needs a "user-only starter page in Python docs" means we're doing it wrong IMO : ) I admit the title itself concerns me: "Python Packaging User Guide", like it should have the word "Installation" in it. "Python Installation and Packaging User Guide"? (its soooo long though....) Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sun Jul 14 19:55:18 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 10:55:18 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: > > I believe Marcus was hoping to get back to that after pip 1.4 was out the > door, but anyone on the PyPA list on BitBucket actually has full access to > accept pull requests, etc. > yes, the time has come to get this thing going or risk getting a "yet another dead document" rep, which it's starting to get. Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Sun Jul 14 20:10:43 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 11:10:43 -0700 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> <5C4267BE-2D69-4FC6-9ED4-C023213CAE77@coderanger.net> Message-ID: On Jul 14, 2013, at 10:43 AM, Noah Kantrowitz wrote: > > On Jul 14, 2013, at 10:39 AM, Noah Kantrowitz wrote: > >> >> On Jul 14, 2013, at 10:31 AM, Ian Cordasco wrote: >> >>> On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz wrote: >>>> >>>> On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: >>>> >>>>> From: Paul Moore >>>>>> On 13 July 2013 10:05, Paul Moore wrote: >>>>>> How robust is the process of upgrading pip using itself? 
Specifically on >>>>>> Windows, where these things typically seem less reliable. >>>>>> >>>>>> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >>>>>> for the failure is simple enough to explain - the pip.exe wrapper is held open >>>>>> by the OS while it's in use, so that the upgrade cannot replace it. >>>>>> >>>>>> The result is a failed upgrade and a partially installed new version of pip. In >>>>>> practice, the exe stubs are probably added fairly late in the install (at least >>>>>> when installing from sdist, with a wheel that depends on the order of the files >>>>>> in the wheel), so it's probably only a little bit broken, but "a little bit >>>>>> broken" is still broken :-( >>>>>> >>>>>> On the other hand, "python -m pip install -U pip" works fine because it avoids >>>>>> the exe wrappers. >>>>>> >>>>>> There's a lot of scope for user confusion and frustration in all this. For >>>>>> standalone pip I've tended to recommend "don't do that" - manually uninstall and >>>>>> reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. >>>>>> That sort of advice isn't going to be realistic for a pip bundled with CPython. >>>>>> >>>>>> Does anyone have any suggestions? >>>>> >>>>> Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? >>>>> >>>>> Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. >>>>> >>>>> MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. >>>>> >>>>> If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. >>>>> >>>>> So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. >>>> >>>> The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. >>> >>> I accidentally only emailed Paul earlier, but why can't we upgrade the >>> pip module with the exe and then replace the process (using something >>> in the os.exec* family) with `python -m pip update-exe` which could >>> then succeed since the OS isn't holding onto the exe file? I could be >>> missing something entirely obvious since I haven't developed >>> (directly) on or for Windows in at least 5 years. >> >> Unfortunately windows doesn't actually offer the equivalent of a POSIX exec(). The various functions in os don't actually replace the current process, they just create a new one and terminate the old one. This means the controlling terminal would see the pip process as ended, so it makes showing output difficult at best. > > Check that, maybe I'm wrong, does anyone know if the P_OVERLAY flag unlocks the original binary? 
/me drags out a windows VM ? Ignore my ignoring, with os.execl command flow does return back to the controlling terminal process (the new process continues in the background) and with os.spawnl(os.P_OVERLAY, 'python-2') I just get a segfault on 3.3. Yay for not completely misremembering, boo for this being so complicated. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From graffatcolmingov at gmail.com Sun Jul 14 20:12:34 2013 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Sun, 14 Jul 2013 14:12:34 -0400 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: <18A62404-30AE-41E6-A7BC-B7061DB0C4EE@coderanger.net> <5C4267BE-2D69-4FC6-9ED4-C023213CAE77@coderanger.net> Message-ID: On Sun, Jul 14, 2013 at 2:10 PM, Noah Kantrowitz wrote: > > On Jul 14, 2013, at 10:43 AM, Noah Kantrowitz wrote: > >> >> On Jul 14, 2013, at 10:39 AM, Noah Kantrowitz wrote: >> >>> >>> On Jul 14, 2013, at 10:31 AM, Ian Cordasco wrote: >>> >>>> On Sun, Jul 14, 2013 at 1:12 PM, Noah Kantrowitz wrote: >>>>> >>>>> On Jul 14, 2013, at 9:45 AM, Steve Dower wrote: >>>>> >>>>>> From: Paul Moore >>>>>>> On 13 July 2013 10:05, Paul Moore wrote: >>>>>>> How robust is the process of upgrading pip using itself? Specifically on >>>>>>> Windows, where these things typically seem less reliable. >>>>>>> >>>>>>> OK, I just did some tests. On Windows, "pip install -U pip" FAILS. The reason >>>>>>> for the failure is simple enough to explain - the pip.exe wrapper is held open >>>>>>> by the OS while it's in use, so that the upgrade cannot replace it. >>>>>>> >>>>>>> The result is a failed upgrade and a partially installed new version of pip. In >>>>>>> practice, the exe stubs are probably added fairly late in the install (at least >>>>>>> when installing from sdist, with a wheel that depends on the order of the files >>>>>>> in the wheel), so it's probably only a little bit broken, but "a little bit >>>>>>> broken" is still broken :-( >>>>>>> >>>>>>> On the other hand, "python -m pip install -U pip" works fine because it avoids >>>>>>> the exe wrappers. >>>>>>> >>>>>>> There's a lot of scope for user confusion and frustration in all this. For >>>>>>> standalone pip I've tended to recommend "don't do that" - manually uninstall and >>>>>>> reinstall pip, or recreate your virtualenv. It's not nice, but it's effective. >>>>>>> That sort of advice isn't going to be realistic for a pip bundled with CPython. >>>>>>> >>>>>>> Does anyone have any suggestions? >>>>>> >>>>>> Unless I misunderstand how the exe wrappers work (they're all the same code that looks for a .py file by the same name?) it may be easiest to somehow mark them as non-vital, such that failing to update them does not fail the installer. Maybe detect that it can't be overwritten, compare the contents/hash with the new one, and only fail if it's changed (with an instruction to use 'python -m...')? >>>>>> >>>>>> Spawning a separate process to do the install is probably no good, since you'd have to kill the original one which is going to break command line output. >>>>>> >>>>>> MoveFileEx (with its copy-on-reboot flag) is off the table, since it requires elevation and a reboot. But I think that's the only supported API for doing a deferred copy. 
>>>>>> >>>>>> If Windows was opening .exes with FILE_SHARE_DELETE then it would be possible to delete the exe and create a new one by the same name, but I doubt that will work and in any case could not be assumed to never change. >>>>>> >>>>>> So unless the exe wrapper is changing with each version, I think the best way of handling this is to not force them to be replaced when they have not changed. >>>>> >>>>> The usual way to do this is just move the existing executable to pip.exe.deleteme or something, and then write out the new one. Then on every startup (or maybe some level of special case for just pip upgrades?) try to unlink *.deleteme. Not the simplest system ever, but it gets the job done. >>>> >>>> I accidentally only emailed Paul earlier, but why can't we upgrade the >>>> pip module with the exe and then replace the process (using something >>>> in the os.exec* family) with `python -m pip update-exe` which could >>>> then succeed since the OS isn't holding onto the exe file? I could be >>>> missing something entirely obvious since I haven't developed >>>> (directly) on or for Windows in at least 5 years. >>> >>> Unfortunately windows doesn't actually offer the equivalent of a POSIX exec(). The various functions in os don't actually replace the current process, they just create a new one and terminate the old one. This means the controlling terminal would see the pip process as ended, so it makes showing output difficult at best. >> >> Check that, maybe I'm wrong, does anyone know if the P_OVERLAY flag unlocks the original binary? /me drags out a windows VM ? > > Ignore my ignoring, with os.execl command flow does return back to the controlling terminal process (the new process continues in the background) and with os.spawnl(os.P_OVERLAY, 'python-2') I just get a segfault on 3.3. Yay for not completely misremembering, boo for this being so complicated. I expected I was wrong, but I appreciate you looking at it to be certain. From qwcode at gmail.com Sun Jul 14 20:34:06 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 11:34:06 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: > I admit the title itself concerns me: "Python Packaging User Guide", like > it should have the word "Installation" in it. > "Python Installation and Packaging User Guide"? (its soooo long > though....) > other ideas: "Python Installation and Packaging Guide" "Python Installation and Distribution Guide" -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Sun Jul 14 20:59:18 2013 From: chris at simplistix.co.uk (Chris Withers) Date: Sun, 14 Jul 2013 19:59:18 +0100 Subject: [Distutils] buildout/setuptools/distribute unhelpful error message (0.7.x issue?) In-Reply-To: References: <51D913F5.3080804@simplistix.co.uk> Message-ID: <51E2F506.1090504@simplistix.co.uk> On 13/07/2013 17:37, Jason R. Coombs wrote: > It looks like something is trying to install Setuptools 0.7.2, possibly with > a temporary version of distribute or one that's not visible by default in your > Python environment. That would have been buildout's bootstrap.py. Jim, is this fixed in the latest 1 and 2 bootstrap.pys? > When you get that error message, I suggest you upgrade away from distribute. The problem is that this is the stable python-pkg-resources in Debian, so likely Ubuntu to. 
This problem has the potential to affect many many users so I hope a sensible story is being developed to deal with it... > The easiest way to do this if you have distribute installed is to > 'easy_install -U distribute', which will grab distribute 0.7.3 and install > setuptools>=0.7. What's the OS-packages plan for dealing with this? I would consider doing the above to an OS-installed package to be pretty anti-social... > I hope that helps. Please report back if that doesn't get you going. I fixed it before you replied, but it was pretty horrific: aptitude remove python-pkg-resources aptitude remove python-pkg-resources python-bzrlib python-pygments aptitude remove python-pkg-resources bzr python-bzrlib python-pygments aptitude remove python-pkg-resources bzr bzr-svn python-docutils python-bzrlib python-pygments ...which removed a lot of packages. I'm in a position where I can manage that, but I'd imagine a "civilian user" could end up in a lot of trouble. Also, don't forget the weird and crappy setuptools versions that Apple will have baked in to the various Pythons that ship with the various Mac OS's. While this merge is a good thing, it's causing a lot of pain from what I've heard and experienced. Chris > > Regards, > Jason > >> -----Original Message----- >> From: Distutils-SIG [mailto:distutils-sig- >> bounces+jaraco=jaraco.com at python.org] On Behalf Of Chris Withers >> Sent: Sunday, 07 July, 2013 03:09 >> To: distutils sig >> Subject: [Distutils] buildout/setuptools/distribute unhelpful error message >> (0.7.x issue?) >> >> Hi All, >> >> What is this exception trying to tell me? >> >> Downloading >> https://pypi.python.org/packages/source/s/setuptools/setuptools- >> 0.7.2.tar.gz >> Extracting in /tmp/tmpJNVsOY >> Now working in /tmp/tmpJNVsOY/setuptools-0.7.2 Building a Setuptools egg >> in /tmp/tmpBLZGeg /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg >> Traceback (most recent call last): >> File "bootstrap.py", line 91, in >> pkg_resources.working_set.add_entry(path) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 451, in >> add_entry >> self.add(dist, entry, False) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 542, in >> add >> self._added_new(dist) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 705, in >> _added_new >> callback(dist) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2727, in >> >> add_activation_listener(lambda dist: dist.activate()) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2227, in >> activate >> self.insert_on(path) >> File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2328, in >> insert_on >> "with distribute. Found one at %s" % str(self.location)) >> ValueError: A 0.7-series setuptools cannot be installed with distribute. >> Found one at /tmp/tmpBLZGeg/setuptools-0.7.2-py2.6.egg >> >> I don't see any distribute in there, and I don't know where it might be... >> >> $ python2.6 >> Python 2.6.8 (unknown, Jan 29 2013, 10:05:44) [GCC 4.7.2] on linux2 Type >> "help", "copyright", "credits" or "license" for more information. 
>> >>> import setuptools >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: No module named setuptools >> >> cheers, >> >> Chris >> >> -- >> Simplistix - Content Management, Batch Processing & Python Consulting >> - http://www.simplistix.co.uk >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From p.f.moore at gmail.com Sun Jul 14 21:44:08 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 20:44:08 +0100 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> References: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> Message-ID: On 14 July 2013 18:06, Donald Stufft wrote: > Wouldn't a .py file make the command `pip.py`` and not ``pip`` ? Not if .py is a registered extension. What I can't remember is whether it needs to be in PATHEXT (which it isn't by default). The big problem here is that the behaviour isn't very well documented (if at all) so the various command shells act subtly differently. That's why I want to test, and why it won't be a 5-minute job to do so... But the various "replace the exe afterwards" hacks sound awfully complicated to me - particularly as pip doesn't control the exes in the first place, they are part of the setuptools console script entry point infrastructure. My strong preference here is to remove the current use of setuptools entry points, simply because I don't think the problem is solvable while pip doesn't control the exe management at all. That's a non-trivial change, but longer term maybe the best. Question for Nick, Brett and any other core devs around: Would python-dev be willing to include in the stdlib some sort of package for managing exe-wrappers? I don't really want pip to manage exe wrappers any more than I like setuptools doing so. Maybe the existing launcher can somehow double up in that role? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Jul 14 21:51:49 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 20:51:49 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On 14 July 2013 18:49, Marcus Smith wrote: > I admit the title itself concerns me: "Python Packaging User Guide", like > it should have the word "Installation" in it. > "Python Installation and Packaging User Guide"? (its soooo long > though....) > I like "Package Management Guide". Strong hints of managing existing 3rd party packages, which is good because it's what the majority want, but general enough to allow for including managing the build & distribution of your own packages. BTW, I'm sick and tired of agonising every time I use the word "package" over whether I should be "correct" and use "distribution". Can the guide just come right out and bless the occasionally-ambiguous but commonly-used dual nature of the word "package"? If not, can people start actually *using* "distribution" consistently for what pip downloads and installs, so I can find a few more examples for me to copy when I end up with awkward phrases like "distributing your distribution"...? (You'd never believe English was my native language, would you? 
:-)) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sun Jul 14 22:05:05 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 13:05:05 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: > > > BTW, I'm sick and tired of agonising every time I use the word "package" > over whether I should be "correct" and use "distribution". Can the guide > just come right out and bless the occasionally-ambiguous but commonly-used > dual nature of the word "package"? If not, can people start actually > *using* "distribution" consistently for what pip downloads and installs, so > I can find a few more examples for me to copy when I end up with awkward > phrases like "distributing your distribution"...? (You'd never believe > English was my native language, would you? :-)) > I hear you. I feel the same agony. It think we should use the word "Distribution", but it's hard to compete when PyPI uses "Package". Nick, what do we do? : ) Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sun Jul 14 23:22:51 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 14:22:51 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: Donald: thoughts on changing our use of "Package" on pypi to "Distribution"? (except for the title of course) If we're not going to do that, we should explain and bless the double use of "Package" and drop using "Distribution" in any docs. Our fundamental concepts shouldn't be confusing and conflicted. On Sun, Jul 14, 2013 at 1:05 PM, Marcus Smith wrote: > >> BTW, I'm sick and tired of agonising every time I use the word "package" >> over whether I should be "correct" and use "distribution". Can the guide >> just come right out and bless the occasionally-ambiguous but commonly-used >> dual nature of the word "package"? If not, can people start actually >> *using* "distribution" consistently for what pip downloads and installs, so >> I can find a few more examples for me to copy when I end up with awkward >> phrases like "distributing your distribution"...? (You'd never believe >> English was my native language, would you? :-)) >> > > I hear you. I feel the same agony. It think we should use the word > "Distribution", but it's hard to compete when PyPI uses "Package". > Nick, what do we do? : ) > > Marcus > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 23:24:46 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 17:24:46 -0400 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: On Jul 14, 2013, at 5:22 PM, Marcus Smith wrote: > Donald: > thoughts on changing our use of "Package" on pypi to "Distribution"? (except for the title of course) > If we're not going to do that, we should explain and bless the double use of "Package" and drop using "Distribution" in any docs. > Our fundamental concepts shouldn't be confusing and conflicted. > > On Sun, Jul 14, 2013 at 1:05 PM, Marcus Smith wrote: > > BTW, I'm sick and tired of agonising every time I use the word "package" over whether I should be "correct" and use "distribution". 
Can the guide just come right out and bless the occasionally-ambiguous but commonly-used dual nature of the word "package"? If not, can people start actually *using* "distribution" consistently for what pip downloads and installs, so I can find a few more examples for me to copy when I end up with awkward phrases like "distributing your distribution"...? (You'd never believe English was my native language, would you? :-)) > > I hear you. I feel the same agony. It think we should use the word "Distribution", but it's hard to compete when PyPI uses "Package". > Nick, what do we do? : ) > > Marcus > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig PyPI will eventually move to using the definitions as defined in PEP426 http://www.python.org/dev/peps/pep-0426/#supporting-definitions . ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Sun Jul 14 23:29:11 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 14:29:11 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: no reason not to change "Package" to "Distribution" now, right? this (http://docs.python.org/2/install/index.html) already uses the "Distribution" word. which means changing sentences like "There are currently 32660 packages here." to "There are currently 32660 distributions here." and the main table listing would use the word "Distribution" On Sun, Jul 14, 2013 at 2:24 PM, Donald Stufft wrote: > > On Jul 14, 2013, at 5:22 PM, Marcus Smith wrote: > > Donald: > thoughts on changing our use of "Package" on pypi to "Distribution"? > (except for the title of course) > If we're not going to do that, we should explain and bless the double use > of "Package" and drop using "Distribution" in any docs. > Our fundamental concepts shouldn't be confusing and conflicted. > > On Sun, Jul 14, 2013 at 1:05 PM, Marcus Smith wrote: > >> >>> BTW, I'm sick and tired of agonising every time I use the word "package" >>> over whether I should be "correct" and use "distribution". Can the guide >>> just come right out and bless the occasionally-ambiguous but commonly-used >>> dual nature of the word "package"? If not, can people start actually >>> *using* "distribution" consistently for what pip downloads and installs, so >>> I can find a few more examples for me to copy when I end up with awkward >>> phrases like "distributing your distribution"...? (You'd never believe >>> English was my native language, would you? :-)) >>> >> >> I hear you. I feel the same agony. It think we should use the word >> "Distribution", but it's hard to compete when PyPI uses "Package". >> Nick, what do we do? : ) >> >> Marcus >> >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > > PyPI will eventually move to using the definitions as defined in PEP426 > http://www.python.org/dev/peps/pep-0426/#supporting-definitions . 
> > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Jul 14 23:31:43 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 14 Jul 2013 17:31:43 -0400 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: <63522A4F-C104-4FE3-8F4F-5F6DE016E896@stufft.io> On Jul 14, 2013, at 5:29 PM, Marcus Smith wrote: > no reason not to change "Package" to "Distribution" now, right? > this (http://docs.python.org/2/install/index.html) already uses the "Distribution" word. > which means changing sentences like "There are currently 32660 packages here." to "There are currently 32660 distributions here." > and the main table listing would use the word "Distribution" > > That's 32600 projects, There are almost 200k distributions. And the main table listing would be "Releases" or "Project Releases". > On Sun, Jul 14, 2013 at 2:24 PM, Donald Stufft wrote: > > On Jul 14, 2013, at 5:22 PM, Marcus Smith wrote: > >> Donald: >> thoughts on changing our use of "Package" on pypi to "Distribution"? (except for the title of course) >> If we're not going to do that, we should explain and bless the double use of "Package" and drop using "Distribution" in any docs. >> Our fundamental concepts shouldn't be confusing and conflicted. >> >> On Sun, Jul 14, 2013 at 1:05 PM, Marcus Smith wrote: >> >> BTW, I'm sick and tired of agonising every time I use the word "package" over whether I should be "correct" and use "distribution". Can the guide just come right out and bless the occasionally-ambiguous but commonly-used dual nature of the word "package"? If not, can people start actually *using* "distribution" consistently for what pip downloads and installs, so I can find a few more examples for me to copy when I end up with awkward phrases like "distributing your distribution"...? (You'd never believe English was my native language, would you? :-)) >> >> I hear you. I feel the same agony. It think we should use the word "Distribution", but it's hard to compete when PyPI uses "Package". >> Nick, what do we do? : ) >> >> Marcus >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > PyPI will eventually move to using the definitions as defined in PEP426 http://www.python.org/dev/peps/pep-0426/#supporting-definitions . > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Sun Jul 14 23:32:14 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 14:32:14 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: Message-ID: although some of the uses could be argued as needing "Project" e.g "Package documentation is hosted on its own domain" On Sun, Jul 14, 2013 at 2:29 PM, Marcus Smith wrote: > no reason not to change "Package" to "Distribution" now, right? > this (http://docs.python.org/2/install/index.html) already uses the > "Distribution" word. > which means changing sentences like "There are currently 32660 packages > here." to "There are currently 32660 distributions here." > and the main table listing would use the word "Distribution" > > > > On Sun, Jul 14, 2013 at 2:24 PM, Donald Stufft wrote: > >> >> On Jul 14, 2013, at 5:22 PM, Marcus Smith wrote: >> >> Donald: >> thoughts on changing our use of "Package" on pypi to "Distribution"? >> (except for the title of course) >> If we're not going to do that, we should explain and bless the double use >> of "Package" and drop using "Distribution" in any docs. >> Our fundamental concepts shouldn't be confusing and conflicted. >> >> On Sun, Jul 14, 2013 at 1:05 PM, Marcus Smith wrote: >> >>> >>>> BTW, I'm sick and tired of agonising every time I use the word >>>> "package" over whether I should be "correct" and use "distribution". Can >>>> the guide just come right out and bless the occasionally-ambiguous but >>>> commonly-used dual nature of the word "package"? If not, can people start >>>> actually *using* "distribution" consistently for what pip downloads and >>>> installs, so I can find a few more examples for me to copy when I end up >>>> with awkward phrases like "distributing your distribution"...? (You'd never >>>> believe English was my native language, would you? :-)) >>>> >>> >>> I hear you. I feel the same agony. It think we should use the word >>> "Distribution", but it's hard to compete when PyPI uses "Package". >>> Nick, what do we do? : ) >>> >>> Marcus >>> >>> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig >> >> >> PyPI will eventually move to using the definitions as defined in PEP426 >> http://www.python.org/dev/peps/pep-0426/#supporting-definitions . >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >> DCFA >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Sun Jul 14 23:36:26 2013 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 14 Jul 2013 14:36:26 -0700 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: <63522A4F-C104-4FE3-8F4F-5F6DE016E896@stufft.io> References: <63522A4F-C104-4FE3-8F4F-5F6DE016E896@stufft.io> Message-ID: > That's 32600 projects, There are almost 200k distributions. And the main > table listing would be "Releases" or "Project Releases". > ok, got it. anything I do from here on will follow this. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Sun Jul 14 23:38:14 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 22:38:14 +0100 Subject: [Distutils] Another conversation starter - pip documentation in the Python docs In-Reply-To: References: <63522A4F-C104-4FE3-8F4F-5F6DE016E896@stufft.io> Message-ID: On 14 July 2013 22:36, Marcus Smith wrote: > > That's 32600 projects, There are almost 200k distributions. And the main >> table listing would be "Releases" or "Project Releases". >> > > ok, got it. anything I do from here on will follow this. > That's much better. I apologise, I hadn't noticed that PEP 426 included terminology definitions. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jul 15 00:06:15 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 15 Jul 2013 08:06:15 +1000 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> Message-ID: On 15 Jul 2013 05:44, "Paul Moore" wrote: > > On 14 July 2013 18:06, Donald Stufft wrote: >> >> Wouldn't a .py file make the command `pip.py`` and not ``pip`` ? > > > Not if .py is a registered extension. What I can't remember is whether it needs to be in PATHEXT (which it isn't by default). The big problem here is that the behaviour isn't very well documented (if at all) so the various command shells act subtly differently. That's why I want to test, and why it won't be a 5-minute job to do so... > > But the various "replace the exe afterwards" hacks sound awfully complicated to me - particularly as pip doesn't control the exes in the first place, they are part of the setuptools console script entry point infrastructure. > > My strong preference here is to remove the current use of setuptools entry points, simply because I don't think the problem is solvable while pip doesn't control the exe management at all. That's a non-trivial change, but longer term maybe the best. > > Question for Nick, Brett and any other core devs around: Would python-dev be willing to include in the stdlib some sort of package for managing exe-wrappers? I don't really want pip to manage exe wrappers any more than I like setuptools doing so. Maybe the existing launcher can somehow double up in that role? Not sure it fits the launcher, but having something along those lines in the stdlib makes sense (especially in the context of a pip bundling PEP). Another option we may want to consider is an actual msi installer for pip (I'm not sure that would actually help, but it's worth looking into), as well as investigating what other self-updating Windows apps (like Firefox) do to handle this problem. Cheers, Nick. > > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noah at coderanger.net Mon Jul 15 00:09:51 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 14 Jul 2013 15:09:51 -0700 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: References: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> Message-ID: <3E40F838-1970-4792-A96C-BDF282D152BB@coderanger.net> On Jul 14, 2013, at 3:06 PM, Nick Coghlan wrote: > > On 15 Jul 2013 05:44, "Paul Moore" wrote: > > > > On 14 July 2013 18:06, Donald Stufft wrote: > >> > >> Wouldn't a .py file make the command `pip.py`` and not ``pip`` ? > > > > > > Not if .py is a registered extension. What I can't remember is whether it needs to be in PATHEXT (which it isn't by default). The big problem here is that the behaviour isn't very well documented (if at all) so the various command shells act subtly differently. That's why I want to test, and why it won't be a 5-minute job to do so... > > > > But the various "replace the exe afterwards" hacks sound awfully complicated to me - particularly as pip doesn't control the exes in the first place, they are part of the setuptools console script entry point infrastructure. > > > > My strong preference here is to remove the current use of setuptools entry points, simply because I don't think the problem is solvable while pip doesn't control the exe management at all. That's a non-trivial change, but longer term maybe the best. > > > > Question for Nick, Brett and any other core devs around: Would python-dev be willing to include in the stdlib some sort of package for managing exe-wrappers? I don't really want pip to manage exe wrappers any more than I like setuptools doing so. Maybe the existing launcher can somehow double up in that role? > > Not sure it fits the launcher, but having something along those lines in the stdlib makes sense (especially in the context of a pip bundling PEP). > > Another option we may want to consider is an actual msi installer for pip (I'm not sure that would actually help, but it's worth looking into), as well as investigating what other self-updating Windows apps (like Firefox) do to handle this problem. They do the "exec a helper executable that replaces the original" approach, which works fine for non-console apps since there isn't the problem of the shell getting confused :-/ --Noah From p.f.moore at gmail.com Mon Jul 15 00:17:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 14 Jul 2013 23:17:59 +0100 Subject: [Distutils] Executable wrappers and upgrading pip (Was: Current status of PEP 439 (pip boostrapping)) In-Reply-To: <3E40F838-1970-4792-A96C-BDF282D152BB@coderanger.net> References: <6EACCA35-1901-4D60-9BFF-FCBF94480112@stufft.io> <3E40F838-1970-4792-A96C-BDF282D152BB@coderanger.net> Message-ID: On 14 July 2013 23:09, Noah Kantrowitz wrote: > > Another option we may want to consider is an actual msi installer for > pip (I'm not sure that would actually help, but it's worth looking into), > as well as investigating what other self-updating Windows apps (like > Firefox) do to handle this problem. > > They do the "exec a helper executable that replaces the original" > approach, which works fine for non-console apps since there isn't the > problem of the shell getting confused :-/ Generally, I don't think that going down the route of MSIs is a good move. They aren't a good fit for this problem. Apart from anything else, they won't support installing into a virtualenv. 
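As a concrete illustration of the "move the locked exe aside, then write the new one" idea floated earlier in the thread: Windows will usually let a running executable be renamed even though it cannot be overwritten or deleted, so an upgrade can park the old wrapper under a throwaway name and clean it up later. The sketch below is illustrative only -- the file names and helper functions are invented, and none of this is code that pip or setuptools actually ships.

    import os

    def replace_wrapper(target, new_bytes):
        # Sketch of the rename-then-replace idea: park the in-use exe under a
        # throwaway name (renaming a running exe is normally allowed on Windows,
        # overwriting it is not), then write the new wrapper in its place.
        doomed = target + ".deleteme"
        suffix = 0
        while os.path.exists(doomed):
            suffix += 1
            doomed = "{0}.deleteme{1}".format(target, suffix)
        os.rename(target, doomed)
        with open(target, "wb") as f:
            f.write(new_bytes)

    def cleanup_leftovers(script_dir):
        # Run at startup (or on the next pip invocation): remove any .deleteme
        # files that could not be deleted while they were still in use.
        for name in os.listdir(script_dir):
            if ".deleteme" in name:
                try:
                    os.unlink(os.path.join(script_dir, name))
                except OSError:
                    pass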
Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ncoghlan at gmail.com  Mon Jul 15 00:18:23 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 15 Jul 2013 08:18:23 +1000
Subject: [Distutils] Another conversation starter - pip documentation in the Python docs
In-Reply-To: 
References: <63522A4F-C104-4FE3-8F4F-5F6DE016E896@stufft.io>
Message-ID: 

On 15 Jul 2013 07:45, "Paul Moore" wrote:
>
> On 14 July 2013 22:36, Marcus Smith wrote:
>>
>>
>>> That's 32600 projects, There are almost 200k distributions. And the main table listing would be "Releases" or "Project Releases".
>>
>>
>> ok, got it. anything I do from here on will follow this.
>
>
> That's much better. I apologise, I hadn't noticed that PEP 426 included terminology definitions.

They were necessary for me to keep them straight in my own head. The original set were a bit odd, but I think the project/release/distribution/archive split has ended up in a reasonable place after a few iterations (it's still open to revisions if anything seems too awkward, though).

I don't think the ambiguity of "package" will ever go away entirely, but we can justify that to some degree by noting that distributions mainly serve to get Python packages and modules installed and available for import.

Cheers,
Nick.

> Paul
>
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ronaldoussoren at mac.com  Mon Jul 15 11:30:20 2013
From: ronaldoussoren at mac.com (Ronald Oussoren)
Date: Mon, 15 Jul 2013 11:30:20 +0200
Subject: [Distutils] Current status of PEP 439 (pip boostrapping)
In-Reply-To: 
References: 
Message-ID: <78D8BC3C-C638-422A-9D8D-D89FFA917E60@mac.com>

On 13 Jul, 2013, at 7:31, Nick Coghlan wrote:
>
> 3. That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned):
>
> * an explicit bootstrapping script
> * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way).

Or automatically invoke the bootstrap script during installation (for the Python installers), so that the installers don't end up with a stale version of pip. Either option should be easy enough to add to the OSX installers.
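Ronald's install-time variant could be as small as a post-install hook that runs whatever bootstrap script the installer bundles, using the freshly installed interpreter so that pip lands in its site-packages. A sketch only -- the bootstrap script name here is a placeholder, not an existing file:

    import subprocess
    import sys

    def bootstrap_pip(bootstrap_script="pip-bootstrap.py"):
        # Invoked by the installer after copying files: the bootstrap fetches
        # and installs the current pip release, so nothing bundled goes stale.
        subprocess.check_call([sys.executable, bootstrap_script])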
Ronald From ronaldoussoren at mac.com Mon Jul 15 11:35:13 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 15 Jul 2013 11:35:13 +0200 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: <55B209B3-9576-4CF0-B58C-2A1E692AFFF1@stufft.io> Message-ID: <849E042D-F8A4-46D1-A15A-2C6661A58821@mac.com> On 13 Jul, 2013, at 15:35, Brett Cannon wrote: > > > > On Sat, Jul 13, 2013 at 2:29 AM, Ned Deily wrote: > In article <55B209B3-9576-4CF0-B58C-2A1E692AFFF1 at stufft.io>, > Donald Stufft wrote: > > On Jul 13, 2013, at 1:31 AM, Nick Coghlan wrote: > > > I'm currently leaning towards offering both, as we're going to need a tool > > > for bootstrapping source builds, but the simplest way to bootstrap pip for > > > Windows and Mac OS X users is to just *bundle a copy with the binary > > > installers*. So long as the bundled copy looks *exactly* the way it would > > > if installed later (so it can update itself), then we avoid the problem of > > > coupling the pip update cycles to the standard library feature release > > > cycle. The bundled version can be updated to the latest available versions > > > when we do a Python maintenance release. > > Off the top of my head, including a copy of pip as a pre-installed > global site-package seems like a very reasonable suggestion. For the > python.org OS X installer, it should be no problem to implement. It > would be equally easy to implement for future 2.7 and 3.3 maintenance > releases. > > Does Apple just install the python.org OS X installer for distribution, or do they build their own thing? They do their own thing. > My only worry is that Apple will not get the message about including pip and we will end up with an odd skew on OS X (I'm not worried about Linux distros as they all seem to follow Python development closely). That will happen anyway, pip won't get magically installed on current OSX releases and adding it to the upcoming 10.9 release is probably not possible either unless it already happens to be in the current beta (they appear to have a fairly early cutoff point for including new software at the Unix layer). Ronald From ronaldoussoren at mac.com Mon Jul 15 11:38:46 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 15 Jul 2013 11:38:46 +0200 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: References: Message-ID: <5661DE4D-D148-44B5-9C40-C61D1E4A8AE1@mac.com> On 13 Jul, 2013, at 16:39, Vinay Sajip wrote: > > > I smoke-tested the script on vanilla Python 3.3 installations on Windows and > OS X. On OS X, the pip script was written to the appropriate Frameworks > folder, but not to /usr/local/bin - I assume it would be simple enough to > arrange that? Not installing in /usr/local/bin is a feature that makes it easier to maintain several python installs without worrying about contamination (for example Python 3 and Python 2, or even 2.6 and 2.7). The installer by default adds the framework 'bin' directory to the environment for the shell of the user that installed the framework. Ronald From ncoghlan at gmail.com Mon Jul 15 11:54:24 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 15 Jul 2013 19:54:24 +1000 Subject: [Distutils] Current status of PEP 439 (pip boostrapping) In-Reply-To: <78D8BC3C-C638-422A-9D8D-D89FFA917E60@mac.com> References: <78D8BC3C-C638-422A-9D8D-D89FFA917E60@mac.com> Message-ID: On 15 July 2013 19:30, Ronald Oussoren wrote: > > On 13 Jul, 2013, at 7:31, Nick Coghlan wrote: >> >> 3. 
That means there are two main options available to us that I still consider viable alternatives (the installer bundling idea was suggested in one of the off list comments I mentioned):
>>
>> * an explicit bootstrapping script
>> * bundling a *full* copy of pip with the Python installers for Windows and Mac OS X, but installing it to site-packages rather than to the standard library directory. That way pip can be used to upgrade itself as normal, rather than making it part of the standard library per se. This is then closer to the "bundled application" model adopted for IDLE in PEP 434 (we could, in fact, move to distributing idle the same way).
>
> Or automatically invoke the bootstrap script during installation (for the Python installers), so that the installers don't end up with a stale version of pip.

Yeah, I see pros and cons to either approach, with the main con of install time bootstrapping being the requirement for network access, while the main con of bundling is that you may end up needing to do "pip install --upgrade pip" immediately after installing Python anyway.

I currently have a slight preference for actual bundling, but could probably be persuaded to endorse an install time bootstrap instead. It's only the bootstrap-on-first-use approach that I've decided is asking for trouble.

I don't believe either Martin (von Löwis) or Brian (Curtin) is on this list, so I'll email them directly to see if they have a preference.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From vinay_sajip at yahoo.co.uk  Mon Jul 15 13:03:48 2013
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Mon, 15 Jul 2013 12:03:48 +0100 (BST)
Subject: [Distutils] Current status of PEP 439 (pip boostrapping)
In-Reply-To: <5661DE4D-D148-44B5-9C40-C61D1E4A8AE1@mac.com>
References: <5661DE4D-D148-44B5-9C40-C61D1E4A8AE1@mac.com>
Message-ID: <1373886228.36583.YahooMailNeo@web171401.mail.ir2.yahoo.com>

> Not installing in /usr/local/bin is a feature that makes it easier to maintain
> several python installs without worrying about contamination (for example Python
> 3
> and Python 2, or even 2.6 and 2.7).

I thought it might be behaving as designed, but wasn't sure.

Regards,

Vinay Sajip

From p.f.moore at gmail.com  Mon Jul 15 13:24:10 2013
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 15 Jul 2013 12:24:10 +0100
Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers
In-Reply-To: 
References: 
Message-ID: 

On 14 July 2013 07:13, Nick Coghlan wrote:

> * ensuring that pip is automatically available in virtual environments
> created with pyvenv
>

Is the proposal here for pyvenv to download pip or to install a locally stored copy? Using a locally stored copy is what virtualenv does, and I'd prefer it to avoid issues where the user's PC has no internet access (as well as avoiding the need to worry about secure downloads and bundling certs, which was why virtualenv took this route).

Paul
-------------- next part --------------
An HTML attachment was scrubbed...
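One way to picture the "locally stored copy" approach Paul describes, using nothing but the venv machinery already in 3.3: subclass EnvBuilder and install a bundled wheel in post_setup(). This is only a sketch -- the wheel filename is a placeholder and the use of pip.main() assumes pip's current entry point -- not a statement of how pyvenv will actually do it.

    import subprocess
    import venv

    class PipEnvBuilder(venv.EnvBuilder):
        """Create a venv, then install pip from a wheel stored next to Python."""

        pip_wheel = "pip-1.4-py2.py3-none-any.whl"   # placeholder for the bundled copy

        def post_setup(self, context):
            # pip is pure Python, so the wheel itself can go on sys.path and be
            # used to install itself into the new environment: no network access
            # and no download verification needed.
            code = ("import sys; sys.path.insert(0, {whl!r}); "
                    "import pip; sys.exit(pip.main(['install', {whl!r}]))"
                    ).format(whl=self.pip_wheel)
            subprocess.check_call([context.env_exe, "-c", code])

    if __name__ == "__main__":
        PipEnvBuilder().create("demo-env")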
URL: From ncoghlan at gmail.com Mon Jul 15 15:12:07 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 15 Jul 2013 23:12:07 +1000 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers In-Reply-To: References: Message-ID: On 15 July 2013 21:24, Paul Moore wrote: > > On 14 July 2013 07:13, Nick Coghlan wrote: >> >> * ensuring that pip is automatically available in virtual environments >> created with pyvenv > > > Is the proposal here for pyvenv to download pip or to install a locally > stored copy? Using a locally stored copy is what virtualenv does, and I'd > prefer it to avoid issues where the user's PC has no internet access (as > well as avoiding the need to worry about secure downloads and bundling > certs, which was why virtualenv took this route). Using a locally stored copy. I'm also considering a new trick for the import system (in the general vein of *.egg-link files) that would let us achieve that without copying, and perhaps even let us eventually deprecate *.pth files entirely. If that scheme comes to fruition it will be as an independent PEP, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Mon Jul 15 15:39:30 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 14:39:30 +0100 Subject: [Distutils] Replacing pip.exe with a Python script Message-ID: I'm looking at the possibility of replacing the current setuptools entry point based pip executables with Python scripts. The biggest problem is that a script "pip.py" shadows the pip package, making "import pip" fail. I can get round this by deleting sys.path[0] (the location of the currently running script) but how robust is that? Are there any corner cases where it would break? Or alternatively, is there a better way to do this rather than messing directly with sys.path? I suspect this is a fairly common question, but my Google-fu is failing me :-( Sorry, I know this is a basic Python coding question - in my defence, it's for something related to the current pip discussions :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Mon Jul 15 15:44:43 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Mon, 15 Jul 2013 14:44:43 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: On 15 July 2013 14:39, Paul Moore wrote: > > I'm looking at the possibility of replacing the current setuptools entry > point based pip executables with Python scripts. The biggest problem is that > a script "pip.py" shadows the pip package, making "import pip" fail. > > I can get round this by deleting sys.path[0] (the location of the currently > running script) but how robust is that? Are there any corner cases where it > would break? Or alternatively, is there a better way to do this rather than > messing directly with sys.path? I suspect this is a fairly common question, > but my Google-fu is failing me :-( Can you not just rename the pip module to _pip? Oscar From brett at python.org Mon Jul 15 15:53:52 2013 From: brett at python.org (Brett Cannon) Date: Mon, 15 Jul 2013 09:53:52 -0400 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: On Mon, Jul 15, 2013 at 9:39 AM, Paul Moore wrote: > I'm looking at the possibility of replacing the current setuptools entry > point based pip executables with Python scripts. 
The biggest problem is > that a script "pip.py" shadows the pip package, making "import pip" fail. > > I can get round this by deleting sys.path[0] (the location of the > currently running script) but how robust is that? Are there any corner > cases where it would break? Or alternatively, is there a better way to do > this rather than messing directly with sys.path? I suspect this is a fairly > common question, but my Google-fu is failing me :-( > > Sorry, I know this is a basic Python coding question - in my defence, it's > for something related to the current pip discussions :-) > As long as you make sure that sys.path[0] is actually the script location then it will work (other things like .pth files, PYTHONSTARTUP, etc. could have changed things before your script started execution). But realize that a) in Python 3.3 the scripts location will be ./pip.py, not just pip.py, and b) if I get my way all paths will be absolute for __file__, so you will have to just associate '' with os.getcwd() and then search for the proper directory on sys.path. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Jul 15 16:08:10 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 15:08:10 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: On 15 July 2013 14:44, Oscar Benjamin wrote: > Can you not just rename the pip module to _pip? That's a far more intrusive change, and I'm not sure I want to go there. It *will* break existing code that (rightly or wrongly) imports pip. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Jul 15 16:16:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 15:16:56 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: On 15 July 2013 14:53, Brett Cannon wrote: > As long as you make sure that sys.path[0] is actually the script location > then it will work (other things like .pth files, PYTHONSTARTUP, etc. could > have changed things before your script started execution). But realize that > a) in Python 3.3 the scripts location will be ./pip.py, not just pip.py, > and b) if I get my way all paths will be absolute for __file__, so you will > have to just associate '' with os.getcwd() and then search for the proper > directory on sys.path. OK, that pretty much tells me that this is a bad idea. It's never going to be robust enough to work. I'm amazed actually that there's no way to say "don't add the script location to sys.path", even as a command line option. It seems like the sort of thing you'd want to make scripts robust, a bit like -S and -E. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Mon Jul 15 16:16:38 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Mon, 15 Jul 2013 15:16:38 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On 13 July 2013 20:59, Paul Moore wrote: > It would be nice to get feedback from "normal users" on this. I suspect that > the scientific community would make a good cross-section (AIUI there's quite > a lot of Windows use, and for many people in the community Python is very > much a tool, rather than a way of life :-)). Does anyone have links into the > scipy groups? 
I lurk on the IPython lists, so I could ask there, at a > pinch... I don't know if I really count as a normal user but I can describe how Python is installed on the Windows machines in my faculty for scientific use. All our Windows machines have the Enthought Python Distribution (EPD) installed. This bundles CPython with numpy, scipy, matplotlib, wxpython, setuptools, pip and a whole load more. Ordinary users do not need to install numpy etc. since these are pre-installed. The bootstrap process is probably irrelevant since EPD installs easy_install and that can be used to install pip if desired. Ordinary users do not have write access to the EPD installation directory and so can only use pip/easy_install with --user anyway. On my own desktop machine which runs Windows all of the Python installations I use are inside my user directory so there is no meaningful difference between 'pip install' and 'pip install --user'. The real problem for us with using e.g. pip to install something like numpy is that it will not install the appropriate non-Python external libraries. For example, numpy ships with a just functional BLAS library but you really want to install and have it link against proper BLAS/LAPACK libraries. The good free libraries should be compiled on the target machine and pypi/pip/distutils do not currently help much with doing this. Debian (or at least Ubuntu) provides for example the ATLAS library as a source only package. This means that you can compile it on the target machine and get the most out of your CPU capabilities while still using the Debian tools to obtain the dependencies and manage the build process. The new wheel format will not help with this since even if there were an ATLAS wheel it would probably be a generic 686 binary without e.g. SSE. This is another advantage of using EPD which ships the non-free Intel MKL library. Python(x, y) is similar to EPD but is GPL'd and ships with OpenBLAS. Both distributions also ship MinGW which is useful since it's likely that our Windows machines will not have the appropriate MSVC version to match up with the CPython version. (They also don't suffer from Issue12641 so MinGW works). Oscar From p.f.moore at gmail.com Mon Jul 15 16:21:58 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 15:21:58 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On 15 July 2013 15:16, Oscar Benjamin wrote: > I don't know if I really count as a normal user but I can describe how > Python is installed on the Windows machines in my faculty for > scientific use. > Thanks, that's interesting. Do people typically write command-line Python scripts? If so, do they expect to be able to put them on PATH and run them? What command processor is typically used? Powershell or cmd? I suspect that there isn't much that's "typical" about anyone :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jul 15 16:32:49 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 00:32:49 +1000 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: On 16 July 2013 00:16, Paul Moore wrote: > > On 15 July 2013 14:53, Brett Cannon wrote: >> >> As long as you make sure that sys.path[0] is actually the script location >> then it will work (other things like .pth files, PYTHONSTARTUP, etc. 
could
>> have changed things before your script started execution). But realize that
>> a) in Python 3.3 the scripts location will be ./pip.py, not just pip.py, and
>> b) if I get my way all paths will be absolute for __file__, so you will have
>> to just associate '' with os.getcwd() and then search for the proper
>> directory on sys.path.
>
> OK, that pretty much tells me that this is a bad idea. It's never going to
> be robust enough to work.

Most of the stuff Brett mentioned there shouldn't be relevant for a directly executed script - doing sys.path.remove(os.path.dirname(os.path.abspath(__file__))) should be pretty robust in any scenario.

> I'm amazed actually that there's no way to say
> "don't add the script location to sys.path", even as a command line option.
> It seems like the sort of thing you'd want to make scripts robust, a bit
> like -S and -E.

You'd think that, but then you'd look at getpath.c and run away (or write something like PEP 432, as I did) :P

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From p.f.moore at gmail.com  Mon Jul 15 16:41:15 2013
From: p.f.moore at gmail.com (Paul Moore)
Date: Mon, 15 Jul 2013 15:41:15 +0100
Subject: [Distutils] Replacing pip.exe with a Python script
In-Reply-To: 
References: 
Message-ID: 

On 15 July 2013 15:32, Nick Coghlan wrote:

> Most of the stuff Brett mentioned there shouldn't be relevant for a
> directly executed script - doing
> sys.path.remove(os.path.dirname(os.path.abspath(__file__))) should be
> pretty robust in any scenario.
>

OK, well apart from the shadowing issue, my initial tests have looked relatively positive. So the questions are now more around whether this is how we want to go with pip, what backward compatibility issues it may have (the launcher isn't available in Python < 3.3) etc. So I'll work up a pull request for discussion by the pip devs.

> > I'm amazed actually that there's no way to say
> > "don't add the script location to sys.path", even as a command line
> option.
> > It seems like the sort of thing you'd want to make scripts robust, a bit
> > like -S and -E.
>
> You'd think that, but then you'd look at getpath.c and run away (or
> write something like PEP 432, as I did) :P
>

You're a better man than I, Gunga Din :-)

Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oscar.j.benjamin at gmail.com  Mon Jul 15 17:21:29 2013
From: oscar.j.benjamin at gmail.com (Oscar Benjamin)
Date: Mon, 15 Jul 2013 16:21:29 +0100
Subject: [Distutils] Expectations on how pip needs to change for Python 3.4
In-Reply-To: 
References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io>
Message-ID: 

On 15 July 2013 15:21, Paul Moore wrote:
>
> On 15 July 2013 15:16, Oscar Benjamin wrote:
>>
>> I don't know if I really count as a normal user but I can describe how
>> Python is installed on the Windows machines in my faculty for
>> scientific use.
>
> Thanks, that's interesting.
>
> Do people typically write command-line Python scripts?

A lot of researchers would run their scripts in an IDE such as Spyder (pre-installed by Enthought). This is the way that people are used to working if they are more familiar with Matlab. It's a bad idea in many ways but essentially rather than passing command line arguments a lot of people will just edit the variables at the top of their script and rerun it. Another method used is ipython which you can use to edit/run your code in a semi-interactive/semi-manual manner using the magic %edit command; this is similar to spyder.
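Coming back to the "Replacing pip.exe with a Python script" thread above: a plain pip.py front-end built on Nick's sys.path trick might look roughly like the following. It is only a sketch -- pip.main() is assumed to stay the entry point, and this is not what pip actually installs.

    #!/usr/bin/env python
    # Sketch of a plain "pip.py" wrapper script. It removes its own directory
    # from sys.path so this file does not shadow the installed pip package.
    import os
    import sys

    def _unshadow():
        here = os.path.dirname(os.path.abspath(__file__))
        sys.path[:] = [p for p in sys.path if os.path.abspath(p) != here]

    if __name__ == "__main__":
        _unshadow()
        import pip          # now resolves to the real package, not this script
        sys.exit(pip.main())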
We also have a number of Linux clusters that are used to farm out big computational jobs and for this people do need to write proper command line scripts and submit the jobs via ssh (using putty rather than a real terminal) but they probably edit/test this code on the target machines. > If so, do they expect to be able to put them on PATH and run them? Probably not. I think that most people make a folder full of scripts and either run them from an IDE or cd into the folder and run them there. Again this is basically how you would do it in Matlab. In scientific work the end user is someone who spends a lot of time writing quite small scripts that are often not really reusable and are tied in some sense to a wider project. I think that maybe 80% of the .py files I have written are command line scripts under 100 lines that produce a single figure with matplotlib. The majority of those scripts are tied to a LaTeX document and invoked by a Makefile with e.g. 'python scripts/fig1.py images/fig1.pdf'. Most of my colleagues would probably have a more manual/interactive approach than me though. > What command processor is typically used? Powershell or cmd? I haven't seen anyone use Powershell but I assume that it is installed. It's not on my machine but I use Console2/git-bash which means that shebang lines already work for me. I do often see (unfortunate) people using cmd.exe though. Oscar From alexis at notmyidea.org Mon Jul 15 18:27:26 2013 From: alexis at notmyidea.org (=?ISO-8859-1?Q?Alexis_M=E9taireau?=) Date: Mon, 15 Jul 2013 18:27:26 +0200 Subject: [Distutils] Request to add a trove classifier for pelican plugins Message-ID: <51E422EE.8060106@notmyidea.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, I hope this is the right place to ask for this. I would like to have a trove classifier for pelican [0] plugins. We plan to release them on PyPI and having a classifier to distinguish them from all the other packages sounds the way to go. We're not really a framework, but following the already established pattern, I guess "Framework :: Pelican" makes sense. Thanks! Alexis [0] http://getpelican.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQEcBAEBAgAGBQJR5CLuAAoJEAeIBdhn9W8SyqwH/3e64+X6KJ4WxX/zeO9isKqw F5XfeRieO2rthUd6ALREF+VYhgsIwrU6B6gwLjyDe5tFA8a0sPkXFIg7pCEXjNxt ufX7W8BhdjRcVOx//9/zP4v4+HeU9OUZBwpiNuMnE7N9jbq4iWtt3OQ0GtfVIl1h d9WJoxb+8aDGbes/jgNuTh4B/Jm9XjIm8fXP5mcyLkfj0vnyHJUTgm/GjnuYEF6K 9mtWMAMoZ6RgIS41JgI7yUt81pLBQrCJc5yVuG7lE3hdaFstKAdBZhOVSrIzj0eJ KN4TMABexVCsVfwklDPIifAneKQZEmONUJLWfzleU367kRGb5YNavrtpW/kQomY= =FomK -----END PGP SIGNATURE----- From chris.barker at noaa.gov Mon Jul 15 18:47:05 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 15 Jul 2013 09:47:05 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On Sat, Jul 13, 2013 at 12:59 PM, Paul Moore wrote: >> 2. This sounds like something that needs fixed on Windows. Even if you say >> ``-m`` for pip then things are still broken by default for any other package >> on PyPI that installs a script. So this feels like something wrong with >> Python on windows not wrong with the script approach. > > It is, and it should be fixed. But in many years, nobody has managed to come > up with an acceptable solution. 
I don't _think_ this is just Windows Bashing: MS has done very very little to improve the whole command line experience on Windows over the years. For example, as far as I know, even with Windows 7 (8), the standard system tool to edit PATH is a very, very old little text box that only holds maybe 50 characters -- it's really painful and pathetic. That, and I think the really, really old way of editing autoexec.bat is dead (editing a text file is easier than a really lousy GUI).

All that is a way to say that Python can only make it so easy for Windows users, but what's in place is not bad, and it really makes sense for pip to use what's been there for ages, i.e. a command called "pip" (and pip2, pip3...) that sits in the same place that all other third-party Python "scripts" are installed. No matter how you slice it, a user will need to put that on their PATH one way or another.

Of course, what MS is telling us is: don't rely on the command line! So a really nice thing to do for Windows users would be to provide a little GUI pip tool that's part of the standard install. (not that I'm volunteering to write it... has no one yet written a tkinter-based pip front-end?)

The current setuptools exe-wrapper feels really kludgy, but it works -- it seems the only real problematic issue is the self-update problem -- maybe there is a Windows guru somewhere that can fix that....

> The debates seem to be largely around what
> happens if you install multiple versions of Python and then remove some of
> them, and how badly your system PATH gets messed up by this.

Is this any better on *nix? When I use the OS-X installers, after a while, I get a pretty clunky pile-up of PATH-manipulating stuff in my .bash_profile...

> * Accept that Windows is a problem in this regard, but don't worry about it
> - install executable wrappers/scripts and let the user deal with path
> issues.

not so bad, really

> It would be nice to get feedback from "normal users" on this. I suspect that
> the scientific community would make a good cross-section (AIUI there's quite
> a lot of Windows use, and for many people in the community Python is very
> much a tool, rather than a way of life :-)).

True -- note that there are now two commercial Python distributions (Enthought Canopy, and Continuum Anaconda) that are heavily used by the scipy community -- they both provide their own package distribution solutions (though they ship pip, too, I'm pretty sure). The demand for those tells us something about packaging....

> Does anyone have links into the
> scipy groups?

Yes, but I don't know that a post from me would get you anything that a post from a core pip developer wouldn't get -- I'd post on the numpy list for best access to developers, maybe scipy and/or matplotlib for more it's-just-a-tool-to-me users. IPython's not a bad option for folks concerned about user experience, as well.

-Chris

--

Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From p.f.moore at gmail.com Mon Jul 15 19:11:30 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 18:11:30 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On 15 July 2013 17:47, Chris Barker - NOAA Federal wrote: > I don't _think_ this is just Windows Bashing: MS has done very very > little to improve the whole command line experience on Windows over > the years. > It's not MS-bashing. I agree with you, and I'm one of the least likely people around here to indulge in arbitrary MS-bashing. (With the exception of Steve Dower, I guess :-)) Powershell is a *great* step up from cmd, but there are still a lot of dodgy bits round the edges (mostly because of the whole "console vs GUI subsystems" thing). > All that is a way to say that Python can only make it so easy for > Windows users, but what's in place is not bad, and it really makes > sense for pip to use what's been there for ages, i.e. a command called > "pip" (and pip2, pip3...) that sits in the same place that all other > third-party Pyton "scripts" are installed. No matter how you slice it, > a user will need to put that on their PATH one way or another. > Agreed, PATH manipulation is a fact of life for everyone, both Unix and Windows. Of course, what MS is telling us is: don't rely on the command line! > So a really nice thing to do for Windows users would be to provide a > little GUI pip tool that's part of the standard install. (not that I'm > volunteering to write it...has no none yet written a tkInter-bsed pip > front-end?) > I don't think a GUI-based tool is the answer here - the command line is orders of magnitude more powerful. For simple cases yes, but we have bdist_wininst and bdist_msi for those, and they are clearly not enough. > the current setuptools exe-wrapper feels really kludgy, but it works > -- it seems the only real problematic issue is the self-update problem > The self-update issue is the big one. There are others (completely incomprehensible errors if the #! line in the script is wrong, for example) but it's certainly pretty much the best solution we have at the moment. Most of my problems with the setuptools wrappers are not actually with the exe, but rather with the actual script (and its dependency on pkg_resources) that lies behind it - and that's not a Windows problem per se. > -- maybe there is a Windows guru somewhere that can fix that.... > I think I'm that guru, unfortunately, and I'm not having a whole lot of luck :-) The real problem is not technical, actually - it's knowing what Windows users will actually be comfortable with. Unix users tend to assume Windows users are very uncomfortable on the command line (no offence meant to anyone by that) whereas the reality is that some are, some (like me...) really are not, and some are simply unfamiliar with the capabilities of the Windows command line through lack of need to use it (many of my colleagues, for example). I'm actually tempted to give up trying to please everyone, and just put together a solution that suits *me* and see how that flies. Second guessing what other people want makes my head hurt :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... 
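For what it's worth, the kind of "little GUI pip tool" Chris mentions is easy to mock up. A rough sketch - assuming a working pip command already on PATH, and with no output capture or error handling - might look like this:

    # Rough sketch of a minimal GUI front-end that just shells out to an
    # existing "pip" command; a real tool would capture and display the
    # output instead of letting it go to the console.
    import subprocess
    import tkinter as tk   # "Tkinter" on Python 2

    def install():
        pkg = entry.get().strip()
        if pkg:
            subprocess.call(["pip", "install", pkg])

    root = tk.Tk()
    root.title("pip front-end (sketch)")
    entry = tk.Entry(root, width=40)
    entry.pack(side=tk.LEFT, padx=5, pady=5)
    tk.Button(root, text="Install", command=install).pack(side=tk.LEFT, padx=5)
    root.mainloop()

Whether anything like this belongs in the standard install is, as Paul says below, a separate question from whether the command line itself can be made to work well.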
URL: From donald at stufft.io Mon Jul 15 19:26:00 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Jul 2013 13:26:00 -0400 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: Message-ID: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> On Jul 15, 2013, at 9:39 AM, Paul Moore wrote: > I'm looking at the possibility of replacing the current setuptools entry point based pip executables with Python scripts. The biggest problem is that a script "pip.py" shadows the pip package, making "import pip" fail. > > I can get round this by deleting sys.path[0] (the location of the currently running script) but how robust is that? Are there any corner cases where it would break? Or alternatively, is there a better way to do this rather than messing directly with sys.path? I suspect this is a fairly common question, but my Google-fu is failing me :-( > > Sorry, I know this is a basic Python coding question - in my defence, it's for something related to the current pip discussions :-) > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Maybe this is a crazy idea, but would a windows only extension work? .pye(executable) Then just associate .pye with the launcher. Python won't see .pye as importable so there's no shadow issues. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Mon Jul 15 20:23:37 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 19:23:37 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 15 July 2013 18:26, Donald Stufft wrote: > Maybe this is a crazy idea, but would a windows only extension work? > .pye(executable) Then just associate .pye with the launcher. Python won't > see .pye as importable so there's no shadow issues. That's actually a very good idea. The only downside is the proliferation of extensions involved, and the need to register them. That puts it into the territory of things the installer needs to do if we're to be able to assume it. But I may propose it to python-dev (Daniel proposed a "zipped Python app" extension a while back, as well. I'm not sure what happened with that one...) Actually, this and many of the other ideas fall foul of backward compatibility issues - we can't assume the Python launcher is available prior to Python 3.3, so #! support in .py files isn't available either. I think I'm coming to the conclusion that the best way forward is: 1. Continue using the setuptools exe launcher, but bundle our own copy. 2. Modify setup.py to install our own scripts run via the exe launcher, which don't rely on entry points and pkg_resources. 3. Special case the heck out of pip upgrading itself to ignore errors from trying to replace the exe (as long as the exe is unchanged, based on a size/checksum check) This covers the replacing-the-exe issue and the entry point script problems of vendoring setuptools. I don't *like* this option, but at least it's not going to break big chunks of our userbase... 
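To make item 2 concrete: the script sitting behind the exe launcher could be as small as the following sketch, assuming pip keeps exposing a main() callable for the wrapper to call (the setuptools-generated equivalent typically goes through pkg_resources' load_entry_point instead):

    #!python
    # Sketch of a plain wrapper script with no pkg_resources or
    # entry-point lookup at run time: import pip and call its main().
    import sys

    if __name__ == '__main__':
        from pip import main
        sys.exit(main())

The exe launcher's only job is then to read the #! line and hand the script to the right interpreter.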
There are some other options I'd still like to explore before settling on something, for example making pip install from sdist *always* build a temporary wheel and then always install from wheels - we can then introspect the wheel before installing and catch this type of issue before starting. That lets us easily avoid the "overwriting the exe" issue, as well as letting us cleanly roll back failed installs.

One thing is clear - this is a longer term effort, not a quick fix...

Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From setuptools at bugs.python.org Mon Jul 15 21:16:37 2013
From: setuptools at bugs.python.org (Naftuli Tzvi Kay)
Date: Mon, 15 Jul 2013 19:16:37 +0000
Subject: [Distutils] [issue155] 0.7 breaks existing packages with hyphens in 'name'
Message-ID: <1373915797.18.0.489917380748.issue155@psf.upfronthosting.co.za>

New submission from Naftuli Tzvi Kay:

Please see this on StackOverflow for a full explanation: http://stackoverflow.com/questions/17659561/hyphens-in-project-names-in-setuptools-0-7

I have a nice little script library called [buildout-starter](https://github.com/rfkrocktk/buildout-starter) which makes creating Buildout projects really easy. Then, the latest Buildout declared a dependency on `setuptools>=0.7`, so I had to upgrade my `setuptools` here to be compliant. After the upgrade, `setuptools` now fails along with my Buildout. Whereas the following would work before 0.7, it now fails:

    from setuptools import setup, find_packages

    setup(
        name = "tornado-chat-example",
        version = "0.0.1-SNAPSHOT",
        packages = find_packages('src'),
        package_dir = { '': 'src'},
        install_requires = [
            'setuptools',
        ],
    )

My `src` directory looks like this:

    src
    ├── tornadochatexample
    └── tornado_chat_example.egg-info

Here's the error I get:

    Develop: '/home/naftuli/tornado-chat-example/.'
    Installing python_section.
    Couldn't find index page for 'tornadochatexample' (maybe misspelled?)
    Getting distribution for 'tornadochatexample'.
    Couldn't find index page for 'tornadochatexample' (maybe misspelled?)
    While:
      Installing python_section.
      Getting distribution for 'tornadochatexample'.
    Error: Couldn't find a distribution for 'tornadochatexample'.

Like I mentioned before, this example would seemingly run on `setuptools` 0.6, but now fails in the latest `setuptools` 0.7. How can I get this to work? I'd like my project to be named `tornado-chat-example` but have a package of `tornadochatexample`. Is this a bug, as it used to work before?

----------
messages: 739
nosy: rfkrocktk
priority: bug
status: unread
title: 0.7 breaks existing packages with hyphens in 'name'

_______________________________________________
Setuptools tracker
_______________________________________________

From oscar.j.benjamin at gmail.com Mon Jul 15 21:47:23 2013
From: oscar.j.benjamin at gmail.com (Oscar Benjamin)
Date: Mon, 15 Jul 2013 20:47:23 +0100
Subject: [Distutils] Expectations on how pip needs to change for Python 3.4
In-Reply-To:
References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io>
Message-ID:

On 15 July 2013 18:11, Paul Moore wrote:
> The real problem is not technical, actually - it's knowing what Windows
> users will actually be comfortable with. Unix users tend to assume Windows
> users are very uncomfortable on the command line (no offence meant to anyone
> by that) whereas the reality is that some are, some (like me...)
really are > not, and some are simply unfamiliar with the capabilities of the Windows > command line through lack of need to use it (many of my colleagues, for > example). There's another significant group of users including myself who use the command line extensively but infect every Windows machine they touch with unixy tools. I use Console2 as my terminal GUI and git-bash (msys bash) as the shell. I don't really get on with it but a lot of other people use Cygwin for similar reasons. One consequence of this setup is that git-bash considers any file with a shebang to be executable and also .exe files but not .bat files. I commonly write a python script with a shebang (and no .py extension) and place a bat file with the same name next to it if, for example, I need to invoke it from something that doesn't understand my unixy setup (e.g. os.system). PATHEXT wouldn't solve the problem for me since the .py extension would still be there when I invoke the script from git-bash. > I'm actually tempted to give up trying to please everyone, and just put > together a solution that suits *me* and see how that flies. Second guessing > what other people want makes my head hurt :-) That's probably a good strategy :) Oscar From chris.barker at noaa.gov Tue Jul 16 00:09:31 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 15 Jul 2013 15:09:31 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On Mon, Jul 15, 2013 at 10:11 AM, Paul Moore wrote: > of Steve Dower, I guess :-)) Powershell is a *great* step up from cmd, could we use a powershell script to launch python scripts? Maybe it wouldn't be any easier to update than an exe, but it might be more accessible. >> Of course, what MS is telling us is: don't rely on the command line! >> So a really nice thing to do for Windows users would be to provide a >> little GUI pip tool that's part of the standard install. (not that I'm >> volunteering to write it...has no none yet written a tkInter-bsed pip >> front-end?) > I don't think a GUI-based tool is the answer here - the command line is > orders of magnitude more powerful. For simple cases yes, but we have > bdist_wininst and bdist_msi for those, and they are clearly not enough. but they are really widely used -- maybe when binary wheels become ubiqitous, I'll stop using them, but I'm no command line phobic, and I usually go first to look for an installer on Windows. Over in the Mac world, we have similar issues (except a proper command line under there if you want it...), and eggs were a real issue because there was nothing to launch if you point and clicked on one. That being said, you're only going to get so far programmin python if you can't run a simple command on the command line. So maybe any GUI front-end should be part of a larger tool -- perhaps provided by IDE developers, for instance. > Most > of my problems with the setuptools wrappers are not actually with the exe, > but rather with the actual script (and its dependency on pkg_resources) that > lies behind it - and that's not a Windows problem per se. I've never liked pkg_resources..... ;-) > The real problem is not technical, actually - it's knowing what Windows > users will actually be comfortable with. Unix users tend to assume Windows > users are very uncomfortable on the command line (no offence meant to anyone > by that) whereas the reality is that some are, some (like me...) 
really are > not, and some are simply unfamiliar with the capabilities of the Windows > command line through lack of need to use it (many of my colleagues, for > example). true -- and a simple command line solution is fine for most -- as I said, they'll need to deal with that one way or another if they are going to program..(and I say this as an instructor of intro to pyton classes...) > I'm actually tempted to give up trying to please everyone, and just put > together a solution that suits *me* and see how that flies. Second guessing > what other people want makes my head hurt :-) fine plan! -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From vinay_sajip at yahoo.co.uk Tue Jul 16 00:21:56 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 15 Jul 2013 22:21:56 +0000 (UTC) Subject: [Distutils] Replacing pip.exe with a Python script References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: Paul Moore gmail.com> writes: > I think I'm coming to the conclusion that the best way forward is: > 1. Continue using the setuptools exe launcher, but bundle our own copy. Wouldn't that be the case with a bundled setuptools/pip anyway? The launcher executables are part of setuptools AFAIK. > 2. Modify setup.py to install our own scripts run via the exe launcher, which don't rely on entry points and pkg_resources. FYI the distlib executable launchers (not based on setuptools, but work the same way) and the distlib script generation approach does not require distlib to run. You might be able to make use of that in some way :-) Regards, Vinay Sajip From p.f.moore at gmail.com Tue Jul 16 00:22:26 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 15 Jul 2013 23:22:26 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: On 15 July 2013 23:09, Chris Barker - NOAA Federal wrote: > > For simple cases yes, but we have > > bdist_wininst and bdist_msi for those, and they are clearly not enough. > > but they are really widely used -- maybe when binary wheels become > ubiqitous, I'll stop using them, but I'm no command line phobic, and I > usually go first to look for an installer on Windows. The killer issue with bdist_wininst and bdist_msi is that they don't work with virtualenvs. I was a fan of them till I started using virtualenv, at which point they become totally useless :-( I'm not against someone writing a GUI. But it won't be me :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Jul 16 00:24:40 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Jul 2013 18:24:40 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> Message-ID: <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> On Jul 15, 2013, at 6:22 PM, Paul Moore wrote: > > On 15 July 2013 23:09, Chris Barker - NOAA Federal wrote: > > For simple cases yes, but we have > > bdist_wininst and bdist_msi for those, and they are clearly not enough. > > but they are really widely used -- maybe when binary wheels become > ubiqitous, I'll stop using them, but I'm no command line phobic, and I > usually go first to look for an installer on Windows. 
> > The killer issue with bdist_wininst and bdist_msi is that they don't work with virtualenvs. I was a fan of them till I started using virtualenv, at which point they become totally useless :-( > > I'm not against someone writing a GUI. But it won't be me :-) > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig There is something like 200 total bdist_msi on PyPI and 5k bdist_wininst. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 16 00:28:41 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 15 Jul 2013 18:28:41 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> Message-ID: On Jul 15, 2013, at 6:24 PM, Donald Stufft wrote: > > On Jul 15, 2013, at 6:22 PM, Paul Moore wrote: > >> >> On 15 July 2013 23:09, Chris Barker - NOAA Federal wrote: >> > For simple cases yes, but we have >> > bdist_wininst and bdist_msi for those, and they are clearly not enough. >> >> but they are really widely used -- maybe when binary wheels become >> ubiqitous, I'll stop using them, but I'm no command line phobic, and I >> usually go first to look for an installer on Windows. >> >> The killer issue with bdist_wininst and bdist_msi is that they don't work with virtualenvs. I was a fan of them till I started using virtualenv, at which point they become totally useless :-( >> >> I'm not against someone writing a GUI. But it won't be me :-) >> >> Paul >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > There is something like 200 total bdist_msi on PyPI and 5k bdist_wininst. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig To put numbers into perspective, there are ~180k total files uploaded to PyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Tue Jul 16 00:39:07 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 15 Jul 2013 15:39:07 -0700 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> Message-ID: On Mon, Jul 15, 2013 at 3:28 PM, Donald Stufft wrote: > There is something like 200 total bdist_msi on PyPI and 5k bdist_wininst. 
> To put numbers into perspective, there are ~180k total files uploaded to > PyPI. I don't hink this means that the installers aren't widely used, I think it mean they aren't distributed on PyPI. Installers are really useful for packages that require compiled code that depends on external libs -- and most of the major such package maintainers provide them. Also, numbers aren't as important as the handful of widely used, but hard to build, packages.... But they are useless with virtualenv, so I'm looking forward to binary wheels... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From oscar.j.benjamin at gmail.com Tue Jul 16 01:12:55 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 16 Jul 2013 00:12:55 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> Message-ID: On 15 July 2013 23:39, Chris Barker - NOAA Federal wrote: > On Mon, Jul 15, 2013 at 3:28 PM, Donald Stufft wrote: >> There is something like 200 total bdist_msi on PyPI and 5k bdist_wininst. > >> To put numbers into perspective, there are ~180k total files uploaded to >> PyPI. > > I don't hink this means that the installers aren't widely used, I > think it mean they aren't distributed on PyPI. > > Installers are really useful for packages that require compiled code > that depends on external libs -- and most of the major such package > maintainers provide them. I second this. I use pip all the time for pure Python packages on Linux and Windows because it works very well for these. However when it comes to numpy, matplotlib, wxpython, PyQT4 et al. I wouldn't even attempt to use pip on Windows. Most commonly I would do the standard Windows thing of going to the project website and looking for the downloads page. I've also used Christian Gohlke's index of science-related Windows binaries [1] which are supplied as .exe files. He says that "Most binaries are built from source code found on PyPI..." or in other words if it were easy to build these with pip then his index would be unnecessary. When wheel distribution becomes common I hope that this situation will improve substantially though. Oscar [1] http://www.lfd.uci.edu/~gohlke/pythonlibs/ From ncoghlan at gmail.com Tue Jul 16 01:33:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 09:33:48 +1000 Subject: [Distutils] Call for PEP author/champion: Bundling pip with CPython installers In-Reply-To: References: Message-ID: On 15 Jul 2013 01:10, "Paul Nasrat" wrote: > > > > > On 14 July 2013 02:13, Nick Coghlan wrote: >> >> Based on the recent discussions, I now plan to reject the pip bootstrapping-on-first-invocation approach described in PEP 439 in favour of a new PEP that proposes: >> >> * bundling the latest version of pip with the CPython binary installers for Mac OS X and Windows for all future CPython releases (including maintenance releases) >> * aligns the proposal with the Python 3.4 release schedule by noting that CPython changes must be completed by the first 3.4 beta, while pip changes must be completed by the first 3.4 release candidate. 
>> * ensuring that, for Python 3.4, "python3" and "python3.4" are available for command line invocation of Python, even on Windows >> * ensuring that the bundled pip, for Python 3.4, ensures "pip", "pip3" and "pip3.4" are available for command line invocation of Python, even on Windows >> * ensuring that the bundled pip is able to upgrade/downgrade itself independent of the CPython release cycle >> * ensuring that pip is automatically available in virtual environments created with pyvenv >> * adding a "get-pip.py" script to Tools/scripts in the CPython repo for bootstrapping the latest pip release from PyPI into custom CPython source builds >> >> Note that there are still open questions to be resolved, which is why an author/champion is needed: > > > I've a bunch of contacts in various distros still - I've not championed a PEP before but I would be happy to take this on. Thanks, Paul, that sounds great. Once we have something written up I can run it by Fedora's python-devel list, too. Probably the easiest way to get started is to grab the PEP 439 source from https://hg.python.org/peps, file the serial numbers off and edit that into a proposal for bundling pip with the installers rather than using runtime bootstrapping. PEP 1 has more general guidance on the PEP process (although in this case feel free to send updates directly to me for posting). >> >> * what guidance, if any, should we provide to Linux distro packagers? >> >> * how should maintenance updates handle the presence of an existing pip installation? > > > There are various distro packaging specific ways of handling this. Hard requirements, recommends, obsoleting the standalone package and providing it virtually as part of I suspect we'll end up being fairly agnostic on the Linux details, and merely make it clear that at the very least "pip install --user " should be readily available. > >> Automatically upgrade older versions to the bundled version, while leaving newer versions alone? Force installation of the bundled version? > > > My personal experience is that forcing the bundled version to OS with strong in-built packaging (Linux, BSD, other *NIX) is likely to meet with some resistance. I can certainly talk with some people, my instinct is it's likely to be only bundle with installers, allow optional install as part of the cPython build which can then be subpackaged/ignored for seperate pip/bundled as distros so desire. Yes, you can take all my bundling comments as relating specifically to the Windows and Mac OS X installers we provide. Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Tue Jul 16 06:31:59 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 15 Jul 2013 21:31:59 -0700 Subject: [Distutils] pip and virtualenv release candidates In-Reply-To: References: Message-ID: pip-1.4rc5 and virtualenv-1.10rc8 are now available the changes from the previous RCs: - Applied security patch to pip's ssl support related to certificate DNS wildcard matching (http://bugs.python.org/issue17980) - Fixed index header processing bug: https://github.com/pypa/pip/pull/1047 here's the RC install instructions again: $ curl -L -O https://github.com/pypa/virtualenv/archive/1.10rc8.tar.gz $ tar zxf 1.10rc8.tar.gz $ python virtualenv-1.10rc8/virtualenv.py myVE $ myVE/bin/pip --version pip 1.4rc5 -------------- next part -------------- An HTML attachment was scrubbed... 
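For anyone trying the release candidates on Windows - the platform most of this thread is about - the equivalent smoke test looks roughly like this, assuming the 1.10rc8 archive has already been downloaded and unpacked by other means (curl and tar are generally not available there), and noting that virtualenv puts its executables in Scripts\ rather than bin/:

    > python virtualenv-1.10rc8\virtualenv.py myVE
    > myVE\Scripts\pip --version
    pip 1.4rc5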
URL: From holger at merlinux.eu Tue Jul 16 11:08:10 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 16 Jul 2013 09:08:10 +0000 Subject: [Distutils] devpi-0.9.3: new list/remove commands, bugfixes Message-ID: <20130716090810.GK3125@merlinux.eu> I just released new versions of the devpi system, which provides a self-updating pypi caching and index server and a ``devpi`` command line tool to help with common upload/test/release activities. devpi-0.9.3 comes with some bug fixes and two new sub commands to view and remove release files from a private index. For general docs see: http://doc.devpi.net and for the changelog see below. Special thanks to Anthon van der Neut for his contributions, in particular the "argcomplete" support allowing for completion on options and subcommands. best, holger 0.9.3 ---------------------------- server: - fixed issue9: caching of packages where upstream provides no last-modified header now works. - fixed issue8: only http/https archives are allowed and other schemes (such as ftp) are silently skipped - added support for REST DELETE methods of projects and versions on an index - added "argcomplete" support for tab completion on options (thanks to Anthon van der Neut) client: - new "devpi list" command to show projects of the in-use index or all release files of a project with "devpi list PROJECTNAME". - new "devpi remove" command to remove releases from the current index, including any contained release files - added "argcomplete" support for tab completion on options (thanks to Anthon van der Neut) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From holger at merlinux.eu Tue Jul 16 11:19:00 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 16 Jul 2013 09:19:00 +0000 Subject: [Distutils] vetting, signing, verification of release files Message-ID: <20130716091900.GL3125@merlinux.eu> I am considering implementing gpg-signing and verification of release files for devpi. Rather than requiring package authors to sign their release files, i am pondering a scheme where anyone can vet for a particular published release file by publishing a signature about it. This aims to help responsible companies to work together. I've heart from devops/admins that they manually download and check release files and then install it offline after some vetting. Wouldn't it be useful to turn this into a more collaborative effort? Any thoughts or pointers to existing efforts within the (Python) packaging ecologies? best, holger From jannis at leidel.info Tue Jul 16 12:21:41 2013 From: jannis at leidel.info (Jannis Leidel) Date: Tue, 16 Jul 2013 12:21:41 +0200 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <20130716091900.GL3125@merlinux.eu> References: <20130716091900.GL3125@merlinux.eu> Message-ID: <8AD857B0-F41E-448C-B639-B0EE033E5E7E@leidel.info> On 16.07.2013, at 11:19, holger krekel wrote: > Any thoughts or pointers to existing efforts within the (Python) > packaging ecologies? Erik Rose just released peep the other day [1], which admittedly doesn't use gpg but at least allows pip users to simplify the manual vetting process. 
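The per-file vetting holger describes maps naturally onto detached OpenPGP signatures. As a sketch of the mechanics (the file name is borrowed from later in this thread, and the verifier is assumed to already have the reviewer's public key in their keyring):

    # a reviewer signs the exact file they vetted
    $ gpg --armor --detach-sign Django-1.5.1.tar.gz   # writes Django-1.5.1.tar.gz.asc

    # anyone holding the reviewer's key can then verify the file
    $ gpg --verify Django-1.5.1.tar.gz.asc Django-1.5.1.tar.gz

Publishing several such .asc files per release file, from independent signers, is essentially the "vetting sharing" being proposed.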
Jannis 1: https://pypi.python.org/pypi/peep From p.f.moore at gmail.com Tue Jul 16 12:28:19 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 11:28:19 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 15 July 2013 23:21, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > > > I think I'm coming to the conclusion that the best way forward is: > > 1. Continue using the setuptools exe launcher, but bundle our own copy. > > Wouldn't that be the case with a bundled setuptools/pip anyway? The > launcher > executables are part of setuptools AFAIK. > > > 2. Modify setup.py to install our own scripts run via the exe launcher, > which don't rely on entry points and pkg_resources. > > FYI the distlib executable launchers (not based on setuptools, but work the > same way) and the distlib script generation approach does not require > distlib to run. You might be able to make use of that in some way :-) Yes, the physical executables aren't that much of an issue - grabbing them from somewhere and bundling them is easy enough and as you say we may already have them (I actually thought distlib included the exes, but the version bundled with pip at the moment doesn't have them). The fun bit is having to modify setuptools to do our own script wrapper management, because setuptools doesn't let us customise its process to remove the runtime dependency on a top-level pkg_resources. Having to make project-specific customisations to distutils feels like we're going in precisely the opposite direction from what the whole packaging process is trying to achieve, and it makes me feel vaguely unclean having to consider it :-) Two thoughts for the wider audience: 1. Should pip re-vendor a newer version of distlib, so we have the exe wrappers? We currently have 0.1.1 and 0.1.2 is on PyPI. 2. Would writing a distutils extension class in setup.py to make the exe wrappers using the vendored distlib.scripts package be acceptable to remove the runtime dependency on pkg_resources from the wrappers? Note: This is just one relatively small step towards removing some of our dependencies on an external setuptools. It's not the whole story, and it still leaves the "upgrading a running exe wrapper" problem to address. (This discussion may be worth migrating to pypa-dev, I'm not sure how much the wider distutils audience cares about this level of detail - feel free to switch lists anyone who thinks it's appropriate). Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Jul 16 12:37:21 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 11:37:21 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> Message-ID: On 16 July 2013 00:12, Oscar Benjamin wrote: > On 15 July 2013 23:39, Chris Barker - NOAA Federal > wrote: > > On Mon, Jul 15, 2013 at 3:28 PM, Donald Stufft wrote: > >> There is something like 200 total bdist_msi on PyPI and 5k > bdist_wininst. > > > >> To put numbers into perspective, there are ~180k total files uploaded to > >> PyPI. > > > > I don't hink this means that the installers aren't widely used, I > > think it mean they aren't distributed on PyPI. 
> > > > Installers are really useful for packages that require compiled code > > that depends on external libs -- and most of the major such package > > maintainers provide them. > > I second this. I use pip all the time for pure Python packages on > Linux and Windows because it works very well for these. However when > it comes to numpy, matplotlib, wxpython, PyQT4 et al. I wouldn't even > attempt to use pip on Windows. > > Most commonly I would do the standard Windows thing of going to the > project website and looking for the downloads page. I've also used > Christian Gohlke's index of science-related Windows binaries [1] which > are supplied as .exe files. He says that "Most binaries are built from > source code found on PyPI..." or in other words if it were easy to > build these with pip then his index would be unnecessary. > > When wheel distribution becomes common I hope that this situation will > improve substantially though. Precisely. At the moment, unless you need to compile code with external dependencies, pip install works fine (it's even fine for C code without dependencies if you have a compiler). But once the build process is even slightly complex, wininst installers are the only real answer. The fact that they don't work on virtualenvs is a pain, but there are 2 ways round this: 1. I believe that easy_install will install wininst installers. I've not tried myself. 2. You can use wheel convert to make wheels out of wininsts, and then pip install those. This is what I do, and it's really effective. I keep a local index of converted wheels to limit the download/convert overhead. I'd like to see more wheels on PyPI and sites like Christoph Gohlke's move to providing wheels, and preferably in a PyPI index style format, so people can pip install *anything*. But obviously that's the authors' (and people like Christoph's) choice. MSI is a lousy format in this context, because it's near-impossible to introspect, and hence convert to a wheel or anything similar. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Tue Jul 16 12:38:03 2013 From: christian at python.org (Christian Heimes) Date: Tue, 16 Jul 2013 12:38:03 +0200 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <8AD857B0-F41E-448C-B639-B0EE033E5E7E@leidel.info> References: <20130716091900.GL3125@merlinux.eu> <8AD857B0-F41E-448C-B639-B0EE033E5E7E@leidel.info> Message-ID: <51E5228B.3000904@python.org> Am 16.07.2013 12:21, schrieb Jannis Leidel: > On 16.07.2013, at 11:19, holger krekel wrote: > >> Any thoughts or pointers to existing efforts within the (Python) >> packaging ecologies? > > Erik Rose just released peep the other day [1], which admittedly doesn't use gpg but at least allows pip users to simplify the manual vetting process. Peep is a bit scary because the author doesn't have much confidence in his own crypto fu: "Proof of concept. Does all the crypto stuff. Should be secure." Peep doesn't protect you from at least on DoS attack scenario. The tool does neither verify nor limit the size of a downloaded file. In theory an active attacker could make you download an arbitrarily large file in order to clog your network pipes. Eventually your machine runs out of disk space, too. I'd feel much better if such a tool would verify both hashsum and file size. 
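The combined check Christian asks for is cheap to express. A sketch of the core of it (the function name and the idea of pinning a size/hash pair are illustrative only, not peep's actual format):

    # Sketch: refuse a downloaded release file unless both its size and its
    # SHA-256 digest match values pinned in advance by whoever vetted it.
    import hashlib
    import os

    def verify_release(path, expected_size, expected_sha256):
        actual_size = os.path.getsize(path)
        if actual_size != expected_size:
            raise ValueError("size mismatch: got %d bytes, expected %d"
                             % (actual_size, expected_size))
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_sha256:
            raise ValueError("sha256 mismatch for %s" % path)

    # verify_release("Django-1.5.1.tar.gz", expected_size=..., expected_sha256="...")

A real tool would enforce the size cap while the download is still streaming, rather than after the file is already on disk, but the shape of the check is the same.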
Christian From oscar.j.benjamin at gmail.com Tue Jul 16 13:04:39 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 16 Jul 2013 12:04:39 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 16 July 2013 11:28, Paul Moore wrote: > Two thoughts for the wider audience: > > 1. Should pip re-vendor a newer version of distlib, so we have the exe > wrappers? We currently have 0.1.1 and 0.1.2 is on PyPI. In what way would that affect anyone? > 2. Would writing a distutils extension class in setup.py to make the exe > wrappers using the vendored distlib.scripts package be acceptable to remove > the runtime dependency on pkg_resources from the wrappers? Does that mean that an end user would need a C compiler in a situation where they previously didn't? Oscar From holger at merlinux.eu Tue Jul 16 13:17:12 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 16 Jul 2013 11:17:12 +0000 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <8AD857B0-F41E-448C-B639-B0EE033E5E7E@leidel.info> References: <20130716091900.GL3125@merlinux.eu> <8AD857B0-F41E-448C-B639-B0EE033E5E7E@leidel.info> Message-ID: <20130716111712.GB1668@merlinux.eu> On Tue, Jul 16, 2013 at 12:21 +0200, Jannis Leidel wrote: > On 16.07.2013, at 11:19, holger krekel wrote: > > > Any thoughts or pointers to existing efforts within the (Python) > > packaging ecologies? > > Erik Rose just released peep the other day [1], which admittedly doesn't use gpg but at least allows pip users to simplify the manual vetting process. > > Jannis > > 1: https://pypi.python.org/pypi/peep thanks for the pointer, i actually saw that earlier. If i see it correctly it does not target "vetting sharing": if a 1000 careful people want to install Django-1.5.1.tar.gz they each need to do the verification work individually, each creating their particular "requirements.txt" with extra hashes. best, holger From p.f.moore at gmail.com Tue Jul 16 13:25:15 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 12:25:15 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 16 July 2013 12:04, Oscar Benjamin wrote: > On 16 July 2013 11:28, Paul Moore wrote: > > Two thoughts for the wider audience: > > > > 1. Should pip re-vendor a newer version of distlib, so we have the exe > > wrappers? We currently have 0.1.1 and 0.1.2 is on PyPI. > > In what way would that affect anyone? > Sorry, you're right - that's really for the pip developers. > > > 2. Would writing a distutils extension class in setup.py to make the exe > > wrappers using the vendored distlib.scripts package be acceptable to > remove > > the runtime dependency on pkg_resources from the wrappers? > > Does that mean that an end user would need a C compiler in a situation > where they previously didn't? > I don't believe so - distlib bundles the compiled code. On the other hand, I'm missing something, as I don't see how the *current* exe wrappers avoid meaning that there need to be separate 32-bit and 64-bit versions of pip... Paul -------------- next part -------------- An HTML attachment was scrubbed... 
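One way to answer the 32-bit/64-bit question empirically is to read the machine field from a wrapper's PE header. A small stdlib-only sketch, with the constants taken from the PE/COFF specification:

    # Sketch: report whether a Windows .exe is a 32-bit (i386) or 64-bit
    # (x86-64) image by reading the COFF machine field from its PE header.
    import struct

    MACHINE = {0x014c: "32-bit (i386)", 0x8664: "64-bit (x86-64)"}

    def exe_bitness(path):
        with open(path, "rb") as f:
            f.seek(0x3C)                          # offset of the PE header pointer
            pe_offset, = struct.unpack("<I", f.read(4))
            f.seek(pe_offset)
            if f.read(4) != b"PE\x00\x00":
                raise ValueError("%s is not a PE file" % path)
            machine, = struct.unpack("<H", f.read(2))
        return MACHINE.get(machine, hex(machine))

    # e.g. print(exe_bitness("pip.exe"))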
URL: From oscar.j.benjamin at gmail.com Tue Jul 16 13:42:31 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 16 Jul 2013 12:42:31 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 16 July 2013 12:25, Paul Moore wrote: >> >> > 2. Would writing a distutils extension class in setup.py to make the exe >> > wrappers using the vendored distlib.scripts package be acceptable to >> > remove >> > the runtime dependency on pkg_resources from the wrappers? >> >> Does that mean that an end user would need a C compiler in a situation >> where they previously didn't? > > I don't believe so - distlib bundles the compiled code. > > On the other hand, I'm missing something, as I don't see how the *current* > exe wrappers avoid meaning that there need to be separate 32-bit and 64-bit > versions of pip... I thought that 64 bit Windows could run 32 bit Windows .exe files (although I don't have a way to check this). Oscar From p.f.moore at gmail.com Tue Jul 16 14:23:02 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 13:23:02 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 16 July 2013 12:42, Oscar Benjamin wrote: > I thought that 64 bit Windows could run 32 bit Windows .exe files > (although I don't have a way to check this). > Yes, but there are 32-bit and 64-bit exe wrappers, which I suspect is because a 32-bit exe can't load a 64-bit DLL (and may be vice versa). As I said, I don't know for sure at the moment, but it needs investigating. Grumble. Next time the label on the can says "worms" I need to leave the can opener alone :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Tue Jul 16 14:42:06 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 16 Jul 2013 14:42:06 +0200 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: <1F12F353-7907-474E-9CAD-11D36A0A463A@mac.com> On 16 Jul, 2013, at 13:25, Paul Moore wrote: > > On the other hand, I'm missing something, as I don't see how the *current* exe wrappers avoid meaning that there need to be separate 32-bit and 64-bit versions of pip... Couldn't you just ship both variants of the exe wrappers in a single distribution and then use the correct one for the current installation? That's what I'm doing in py2app. Ronald From ncoghlan at gmail.com Tue Jul 16 14:39:37 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 22:39:37 +1000 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 16 July 2013 22:23, Paul Moore wrote: > > On 16 July 2013 12:42, Oscar Benjamin wrote: >> >> I thought that 64 bit Windows could run 32 bit Windows .exe files >> (although I don't have a way to check this). > > > Yes, but there are 32-bit and 64-bit exe wrappers, which I suspect is > because a 32-bit exe can't load a 64-bit DLL (and may be vice versa). As I > said, I don't know for sure at the moment, but it needs investigating. > > Grumble. Next time the label on the can says "worms" I need to leave the can > opener alone :-) I, for one, am happy you opened it now rather than in a few months time! 
If Paul Nasrat takes up the challenge of documenting all the practical issues associated with bundling pip with the CPython binary installers as a PEP, I expect he will need to work closely with you to capture the details on the Windows side of things. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Jul 16 15:08:43 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 14:08:43 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) Message-ID: On 16 July 2013 13:42, Ronald Oussoren wrote: > > On the other hand, I'm missing something, as I don't see how the > *current* exe wrappers avoid meaning that there need to be separate 32-bit > and 64-bit versions of pip... > > Couldn't you just ship both variants of the exe wrappers in a single > distribution and then use the correct one for the current installation? > That's what I'm doing in py2app. That's OK for source-style installs (which is what setuptools does, and what pip mostly cares about right now). But for bundling with Python it needs to be considered (although it's just getting the right one in the right installer). But for wheels it's a pain, because instead of having just a single pip wheel, we need 32-bit and 64-bit wheels solely for the wrappers. That sucks. Hard. (And it's not just for pip, nose will have the same problem, as will many other projects that use exe wrappers). And the bdist_wheel command currently doesn't recognise this *at all*. So wheels using wrappers are potentially broken. I think the correct solution is to explicitly have declarative support for "console script entry point" metadata in PEP 426, as well as having tools like bdist_wheel and distil do some explicit backward compatibility hacking to remove legacy-style exe wrappers. The wheel install code should then explicitly install appropriate wrappers for the target platform (which may be exe wrappers similar to the current ones, but moving forward may be some other mechanism if one is found). This is complex enough that I'm now concerned that we need reference "wheel install" code in the stdlib, just so that people don't make up their own on the basis that "wheel is simple to install manually" and screw it up. Also so that we only have one style of command line script wrapper to deal with going forward, not a multitude of mostly-compatible solutions. Nick: See the above point re PEP 426 - do you agree that this needs addressing in Metadata 2.0? Paul PS There is still the proviso that I haven't tested my assumption that the separate 32 and 64 bit wrappers are *needed* (setuptools and distlib use them, so I think it's a valid assumption, but I need to test). I will try to get time to check that ASAP. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Jul 16 15:21:17 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 23:21:17 +1000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: On 16 July 2013 23:08, Paul Moore wrote: > Nick: See the above point re PEP 426 - do you agree that this needs > addressing in Metadata 2.0? 
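For readers skimming the thread, the wrappers under discussion all originate from entry point declarations of roughly this shape in a project's setup.py (the names here are hypothetical):

    # Hypothetical example of where console-script wrappers come from: the
    # project declares a command name and the callable it maps to, and the
    # build/install tooling decides how to expose that on each platform.
    from setuptools import setup

    setup(
        name="example-tool",
        version="1.0",
        packages=["exampletool"],
        entry_points={
            "console_scripts": [
                "example-tool = exampletool.main:main",
            ],
        },
    )

It is this declaration, rather than any .exe built from it, that the next paragraph argues should travel declaratively in the metadata.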
I believe Daniel already covered it in PEP 427 - rather than baking the entry point wrappers into the wheel, installers can generate any needed entry point wrappers if the wheel includes Python scripts in {distribution}-{version}.data/scripts/ (see http://www.python.org/dev/peps/pep-0427/#recommended-installer-features) Now, there may be holes in that scheme, but it seemed solid enough when I approved the PEP. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Jul 16 15:29:01 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 14:29:01 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: On 16 July 2013 14:21, Nick Coghlan wrote: > On 16 July 2013 23:08, Paul Moore wrote: > > Nick: See the above point re PEP 426 - do you agree that this needs > > addressing in Metadata 2.0? > > I believe Daniel already covered it in PEP 427 - rather than baking > the entry point wrappers into the wheel, installers can generate any > needed entry point wrappers if the wheel includes Python scripts in > {distribution}-{version}.data/scripts/ (see > http://www.python.org/dev/peps/pep-0427/#recommended-installer-features) > > Now, there may be holes in that scheme, but it seemed solid enough > when I approved the PEP. > The big problem is that implementations don't do that :-( AFAIK, none of distlib, pip or wheel itself do anything with script wrappers except rewrite #! lines (which is the other, much easier, item in that section). At the moment, bdist_wheel just puts the exe wrappers generated from the source into the wheel itself, which again is probably wrong in the context what the PEP suggests. As I said in my other email, I think this is subtle enough that we need a stdlib implementation to stop people making mistakes like this. It's certainly not fair to expect a mostly-Unix development team to get this sort of Windows arcana right without some help. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Tue Jul 16 15:30:56 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 16 Jul 2013 15:30:56 +0200 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> On 16 Jul, 2013, at 15:08, Paul Moore wrote: > On 16 July 2013 13:42, Ronald Oussoren wrote: > > On the other hand, I'm missing something, as I don't see how the *current* exe wrappers avoid meaning that there need to be separate 32-bit and 64-bit versions of pip... > > Couldn't you just ship both variants of the exe wrappers in a single distribution and then use the correct one for the current installation? That's what I'm doing in py2app. > > That's OK for source-style installs (which is what setuptools does, and what pip mostly cares about right now). But for bundling with Python it needs to be considered (although it's just getting the right one in the right installer). But for wheels it's a pain, because instead of having just a single pip wheel, we need 32-bit and 64-bit wheels solely for the wrappers. That sucks. Hard. (And it's not just for pip, nose will have the same problem, as will many other projects that use exe wrappers). And the bdist_wheel command currently doesn't recognise this *at all*. So wheels using wrappers are potentially broken. 
You could just have a wheel that contains two data files: wrapper-win32.exe and wrapper-win64.exe, then select the one that gets used as the wrapper for a specific script when you create that wrapper. That's assuming that the wrapper .exe gets "created" when a wheel is installed and is not included in the wheel. > > I think the correct solution is to explicitly have declarative support for "console script entry point" metadata in PEP 426, as well as having tools like bdist_wheel and distil do some explicit backward compatibility hacking to remove legacy-style exe wrappers. The wheel install code should then explicitly install appropriate wrappers for the target platform (which may be exe wrappers similar to the current ones, but moving forward may be some other mechanism if one is found). Yikes, that means my assumption is wrong. The section on "Recommended installer features" in the wheel spec[1] says that the wrapper executable should be created on installation, does pip not do this? > > This is complex enough that I'm now concerned that we need reference "wheel install" code in the stdlib, just so that people don't make up their own on the basis that "wheel is simple to install manually" and screw it up. Also so that we only have one style of command line script wrapper to deal with going forward, not a multitude of mostly-compatible solutions. I'd love to see comprehensive wheel support in the stdlib, but that may have to wait for 3.5 because the entire packaging systeem (wheels, metadata, ...) is moving forward quickly at the moment. That said, it would be nice if distutils would grow support for creating wheels and modern metadata in sdists as that would mean I could drop usage of setuptools for most of my software (for python 3.4). > > Nick: See the above point re PEP 426 - do you agree that this needs addressing in Metadata 2.0? > > Paul > > PS There is still the proviso that I haven't tested my assumption that the separate 32 and 64 bit wrappers are *needed* (setuptools and distlib use them, so I think it's a valid assumption, but I need to test). I will try to get time to check that ASAP. That depends on what the wrapper does, if it launches a regular python with the right command-line you might be able to get away with a single wrapper, if it loads python.dll and executes the script directory you do need separate wrappers for 32 and 64 bit. [1] http://www.python.org/dev/peps/pep-0427/#recommended-installer-features From p.f.moore at gmail.com Tue Jul 16 15:32:27 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 14:32:27 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: On 16 July 2013 14:08, Paul Moore wrote: > PS There is still the proviso that I haven't tested my assumption that the > separate 32 and 64 bit wrappers are *needed* (setuptools and distlib use > them, so I think it's a valid assumption, but I need to test). I will try > to get time to check that ASAP. Hmm. I just did a quick test, and then based on the results checked the setuptools source code. I can see no reason why there needs to be 32 and 64 bit launcher exes. The launchers simply use CreateProcess to launch a separate Python process using the #! line of the script. So there's no DLL loading going on, and no reason that I can see for needing separate 32 and 64 bit builds. Jason - can you shed any light on why there are separate builds for 32 and 64 bits? 
Actually, the launcher is essentially identical to the "py" launcher for Windows, except that it gets a script name to execute from the name of the launcher. I'm wondering whether the correct approach here would be to enhance the launcher one more time to look for a suitably named script and auto-run it if it's present (i.e. merge the wrapper functionality into the launcher). Then we have a standard wrapper that everyone can use and not reinvent their own. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Jul 16 15:38:23 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 14:38:23 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 14:30, Ronald Oussoren wrote: > > I think the correct solution is to explicitly have declarative support > for "console script entry point" metadata in PEP 426, as well as having > tools like bdist_wheel and distil do some explicit backward compatibility > hacking to remove legacy-style exe wrappers. The wheel install code should > then explicitly install appropriate wrappers for the target platform (which > may be exe wrappers similar to the current ones, but moving forward may be > some other mechanism if one is found). > > Yikes, that means my assumption is wrong. The section on "Recommended > installer features" in the wheel spec[1] says that the wrapper executable > should be created on installation, does pip not do this? > Yes, Nick pointed me at that part of the PEP. Nobody's doing that at the moment, and exes are being added to the wheels at wheel build time, which is also wrong. That'll teach me to work from reality rather than specs :-( Daniel, Vinay, pip developers - it looks like we need to do some work in this area to make the code conform to the specs. The PEP only says this is a "recommended" feature, but frankly I think it needs to be mandatory, or script wrappers are going to be a mess we'll be dealing with for some time :-( > PS There is still the proviso that I haven't tested my assumption that > the separate 32 and 64 bit wrappers are *needed* (setuptools and distlib > use them, so I think it's a valid assumption, but I need to test). I will > try to get time to check that ASAP. > > That depends on what the wrapper does, if it launches a regular python > with the right command-line you might be able to get away with a single > wrapper, if it loads python.dll and executes the script directory you do > need separate wrappers for 32 and 64 bit. As I said in another message, looks like there's no real reason for separate wrappers. A 32-bit one should work regardless [1]. But wheels built on 64-bit systems at the moment won't work on 32-bit ones (because the wrappers will be 64-bit). [1] With the possible exception that Windows' magic shadowing of 32 and 64 bit "stuff" - the WOW64 magic that I know nothing about - could cause odd results in obscure cases. I propose ignoring this in the absence of actual bug reports :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Tue Jul 16 15:40:12 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 23:40:12 +1000 Subject: [Distutils] PEP 426 updated based on last round of discussion Message-ID: I actually pushed this to python.org on the weekend but forgot to announce it on the list. The latest version of PEP 426 is up at http://www.python.org/dev/peps/pep-0426/ It was a net deletion of content despite going into more depth in a few areas, so I'm counting that as a win for clarity :) Change details are at http://hg.python.org/peps/rev/067d3c3c1351 Significant changes: * serialisation prefix changed to "pydist". This metadata is the metadata that exists at the sdist level. Wheels and installation may add other metadata files, and PyPI will publish additional metadata extracted from the archive contents, so I decided "pymeta" didn't feel right (the name of the schema file changed as well, but I did that in a separate commit so the diff stayed readable) * all the *_may_require fields are gone (as previously discussed) * rather than "install specifiers" I went with "requirement specifiers" (install turned out not to read very well) * accordingly, the subfields of dependency specifiers are "requires", "extra" and "environment" * the abbreviated form (which has "requires" as a list) was easy enough to specify, so that's what I used. The unpacked form (where multiple entries in the same dependency list have the same extra/environment combination) is explicitly disallowed in order to encourage consistent formatting. * clarified that internal whitespace is permitted in requirement specifiers (there may be a simpler way to specify this, such as "all whitespace in requirement specifiers is ignored") * I made the change to explicitly distrust "Provides" data retrieved from a public index server, and noted that projects defining a virtual dependency should claim that name to define the default provider. * noted that Debian packagers may want to map extras to Recommended or Suggested packages * noted some possible use cases for metadata extensions * fixed and clarified various things in the reference copy of the JSON schema (it could still do with an audit against the current PEP text, though) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Jul 16 15:41:43 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 16 Jul 2013 23:41:43 +1000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: On 16 July 2013 23:29, Paul Moore wrote: > As I said in my other email, I think this is subtle enough that we need a > stdlib implementation to stop people making mistakes like this. It's > certainly not fair to expect a mostly-Unix development team to get this sort > of Windows arcana right without some help. Are we talking about the pip developers or python-dev, here? I think Martin and Brian feel pretty lonely, too :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Tue Jul 16 15:53:17 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 13:53:17 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: Paul Moore gmail.com> writes: > Yes, Nick pointed me at that part of the PEP.
Nobody's doing that at the > moment, and exes are being added to the wheels at wheel build time, which > is also wrong. Not true for distlib - it doesn't add .exe wrappers to wheels at build time :-) It adds them to the target directory when installing under Windows. (You can also choose not to install any wrappers.) Regards, Vinay Sajip From dholth at gmail.com Tue Jul 16 15:59:57 2013 From: dholth at gmail.com (Daniel Holth) Date: Tue, 16 Jul 2013 09:59:57 -0400 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: Need a script that visits all the console-script entry points to regenerate the wrappers. Then there are also the non-console-scripts scripts... I consider scripts as an optional convenience, but I suppose that isn't always the case. On Tue, Jul 16, 2013 at 9:53 AM, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> Yes, Nick pointed me at that part of the PEP. Nobody's doing that at the >> moment, and exes are being added to the wheels at wheel build time, which >> is also wrong. > > Not true for distlib - it doesn't add .exe wrappers to wheels at build time > :-) It adds them to the target directory when installing under Windows. (You > can also choose not to install any wrappers.) > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From vinay_sajip at yahoo.co.uk Tue Jul 16 16:01:56 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 14:01:56 +0000 (UTC) Subject: [Distutils] =?utf-8?q?Wheels_and_console_script_entry_point_wrapp?= =?utf-8?q?ers=09=28Was=3A_Replacing_pip=2Eexe_with_a_Python_script?= =?utf-8?q?=29?= References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: Vinay Sajip yahoo.co.uk> writes: > > Not true for distlib - it doesn't add .exe wrappers to wheels at build time > It adds them to the target directory when installing under Windows. (You > can also choose not to install any wrappers.) Sorry, some misinformation there - distlib does do this when invoked via distil. However, this can be turned off at the distlib level - I will update distil to not do this when adding wheels. Regards, Vinay Sajip From oscar.j.benjamin at gmail.com Tue Jul 16 17:09:58 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 16 Jul 2013 16:09:58 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 14:38, Paul Moore wrote: > On 16 July 2013 14:30, Ronald Oussoren wrote: >> >> > I think the correct solution is to explicitly have declarative support >> > for "console script entry point" metadata in PEP 426, as well as having >> > tools like bdist_wheel and distil do some explicit backward compatibility >> > hacking to remove legacy-style exe wrappers. The wheel install code should >> > then explicitly install appropriate wrappers for the target platform (which >> > may be exe wrappers similar to the current ones, but moving forward may be >> > some other mechanism if one is found). There are many other uses for console script entry point metadata. For example, it would be good to be able to query pip/pypi in order to find out which packages supply a particular console command. 
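To illustrate the kind of query Oscar describes, for packages that are already installed the setuptools entry point metadata written by today's tools can be walked directly; an index-wide version would need PyPI to expose the same data. A minimal sketch:

    import pkg_resources

    def providers_of(command_name):
        # Installed distributions that export a console script of this name.
        matches = []
        for ep in pkg_resources.iter_entry_points('console_scripts'):
            if ep.name == command_name:
                matches.append('%s %s' % (ep.dist.project_name, ep.dist.version))
        return matches

    # e.g. ['setuptools 0.8'] if setuptools is installed
    print(providers_of('easy_install'))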
One feature of Ubuntu that I really like is the way that it automatically tells you how to install any missing command: oscar:~$ pypy The program 'pypy' is currently not installed. You can install it by typing: sudo apt-get install pypy Obviously it's not as useful when the command and the package have exactly the same name :) >> >> Yikes, that means my assumption is wrong. The section on "Recommended >> installer features" in the wheel spec[1] says that the wrapper executable >> should be created on installation, does pip not do this? > > > Yes, Nick pointed me at that part of the PEP. Nobody's doing that at the > moment, and exes are being added to the wheels at wheel build time, which is > also wrong. > > That'll teach me to work from reality rather than specs :-( > > Daniel, Vinay, pip developers - it looks like we need to do some work in > this area to make the code conform to the specs. The PEP only says this is a > "recommended" feature, but frankly I think it needs to be mandatory, or > script wrappers are going to be a mess we'll be dealing with for some time > :-( I think that it should be mandatory. It should be possible for someone releasing a script via pypi to ensure that their program can be invoked after installation under a name of their choosing (assuming the user has the Python Scripts/bin directory in PATH). AFAIK the only bullet-proof way to do this on Windows is with an .exe wrapper. If you only want the program to be invokable from cmd and PowerShell* then a .bat file should be fine. Depending on file extension to invoke .py files with py.exe is subject to input/output redirection bugs on some windows systems (this is solveable when using .py in PATHEXT instead of file associations for cmd at least). However, if you also want the program name to be invokable from e.g. subprocess with shell=False or from git-bash or Cygwin or many other things then neither .bat files nor PATHEXT are sufficient. Wrapper .exes are necessary to ensure that this works properly. Oscar * I don't actually use PowerShell and cannot confirm that running .bat files works fully i.e. without screwing up sys.argv encoding or input/output redirection or anything else. From p.f.moore at gmail.com Tue Jul 16 17:12:41 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 16:12:41 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 15:01, Vinay Sajip wrote: > Vinay Sajip yahoo.co.uk> writes: > > > > Not true for distlib - it doesn't add .exe wrappers to wheels at build > time > > It adds them to the target directory when installing under Windows. (You > > can also choose not to install any wrappers.) > > Sorry, some misinformation there - distlib does do this when invoked via > distil. However, this can be turned off at the distlib level - I will > update > distil to not do this when adding wheels. OK. That sounds good. I'm starting to become uncertain as to whether we actually have an issue here. I think: (1) Builders can put anything they like into the scripts directory. That's more or less out of our control. From what I know of distil's approach, it doesn't actually execute setup.py so it keeps a lot more control than (say) bdist_wheel, which effectively runs setup.py install to a temporary directory then bundles what it finds. 
But in essence, as things stand right now, the scripts directory of an arbitrary wheel could contain arbitrary stuff. (2) Wheel installers should "make Python scripts work". All that I'm aware of fix up shebang lines and add execute bits. Only distlib adds exe wrappers, I believe. Others work on Windows because the builders put exe wrappers in place (see (1)) but that has some issues. I suspect there may be some "accidental" successes on Unix where setuptools scripts do or don't get a .py extension depending on the target platform, but I have no evidence of this myself. So there's a lot of potentially platform dependent stuff in "scripts" and wheel builders don't (can't) recognise and encode this in the tags. Wheels mostly work at the moment because not many people use them cross platform. But there's potential for issues down the line. FWIW, I believe that the whole "scripts" directory as a concept is too platform-specific. The only real use for it is to expose CLI applications, and a declarative approach like setuptools console_scripts entry points would be better. So longer term I'd argue for deprecating "scripts" altogether and replacing it with some form of "CLI entry point" metadata which may be exposed as part of metadata 2.0 or 3.0, or may simply be internal metadata communicated between the builder and the installer, but not exposed at the PyPI level. Oscar's email argues for exposing it as project metadata, though, and I can see the benefit. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue Jul 16 17:22:16 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 16:22:16 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 16:09, Oscar Benjamin wrote: > If you only want the program to be invokable from cmd and PowerShell* > then a .bat file should be fine. Depending on file extension to invoke > .py files with py.exe is subject to input/output redirection bugs on > some windows systems (this is solveable when using .py in PATHEXT > instead of file associations for cmd at least). > bat files have many, many problems. The worst ones are: * Not nestable. If pip is a bat file, saying "pip install foo" from within another bat file will fail silently (control never returns to the line after the pip command). * If you interrupt them you get the obnoxious "Do you want to terminate the batch file?" prompt. If anyone suggests using bat files, I'll cry :-) However, if you also want the program name to be invokable from e.g. > subprocess with shell=False or from git-bash or Cygwin or many other > things then neither .bat files nor PATHEXT are sufficient. Wrapper > .exes are necessary to ensure that this works properly. > Yes. I have been convinced that ultimately, wrapper exes are the only "transparent" means of writing command-line applications on Windows. Because of this, I'd quite like it if wrapper functionality were added to the py launcher (most of the functionality is already present, it would probably be a pretty small change) so that we had a "one obvious way" of writing wrappers. I may try to put together a patch for CPython to this effect... Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oscar.j.benjamin at gmail.com Tue Jul 16 17:40:18 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 16 Jul 2013 16:40:18 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 16:22, Paul Moore wrote: > On 16 July 2013 16:09, Oscar Benjamin wrote: > >> However, if you also want the program name to be invokable from e.g. >> subprocess with shell=False or from git-bash or Cygwin or many other >> things then neither .bat files nor PATHEXT are sufficient. Wrapper >> .exes are necessary to ensure that this works properly. > > Yes. I have been convinced that ultimately, wrapper exes are the only > "transparent" means of writing command-line applications on Windows. > > Because of this, I'd quite like it if wrapper functionality were added to > the py launcher (most of the functionality is already present, it would > probably be a pretty small change) so that we had a "one obvious way" of > writing wrappers. I may try to put together a patch for CPython to this > effect... I don't know whether or not you intend to have wrappers also work for Python 2.7 (in a third-party package perhaps) but there is a slightly subtle point to watch out for when non-ASCII characters in sys.argv come into play. Python 2.x uses GetCommandLineA and 3.x uses GetCommandLineW. A wrapper to launch 2.x should use GetCommandLineA and CreateProcessA to ensure that the 8-bit argument strings are passed through unaltered. To launch 3.x it should use the W versions. If not then the MSVC runtime (or the OS?) will convert between the 8-bit and 16-bit encodings using its own lossy routines. Oscar From vinay_sajip at yahoo.co.uk Tue Jul 16 17:51:09 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 15:51:09 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: Paul Moore gmail.com> writes: > FWIW, I believe that the whole "scripts" directory as a concept is too > platform-specific. The only real use for it is to expose CLI applications Well, you can also expose GUI applications this way, though the applications are fewer - Qt applications could easily be cross-platform, for example. Also, you can put in scripts which are not entry-point related, which would be essential if you e.g. have existing scripts to bundle as part of an application, which could be written in other languages, say. Not common, perhaps, but not a case you want to arbitrarily restrict given that it works now. Of course, there the distributor of the package is responsible for ensuring cross-platform workability of such scripts using e.g. including .cmd files for Windows or whatever. Regards, Vinay Sajip From p.f.moore at gmail.com Tue Jul 16 18:44:54 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 17:44:54 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: On 16 July 2013 16:51, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > > > FWIW, I believe that the whole "scripts" directory as a concept is too > > platform-specific. 
The only real use for it is to expose CLI applications > > Well, you can also expose GUI applications this way, though the > applications > are fewer - Qt applications could easily be cross-platform, for example. > Also, you can put in scripts which are not entry-point related, which would > be essential if you e.g. have existing scripts to bundle as part of an > application, which could be written in other languages, say. Not common, > perhaps, but not a case you want to arbitrarily restrict given that it > works > now. Of course, there the distributor of the package is responsible for > ensuring cross-platform workability of such scripts using e.g. including > .cmd files for Windows or whatever. You're right, I should have included GUI apps. I'm not aware of any cases of scripts that aren't entry-point related (and which couldn't be converted to entry points fairly easily) although that certainly doesn't mean there aren't any. OTOH, I do know that there used to be a *lot* of examples of scripts doing what entry points do that were either absolutely not cross-platform (Unix shell scripts, bat files) or were problematic/badly implemented (Python scripts without a .py extension, etc.). Many of these have now gone, thank goodness, migrated to setuptools entry points. If I were writing a firm proposal, I'd go for something like entry points as metadata, managed by installers, as the primary interface (with backward compatibility code to automatically migrate setuptools entry points so the impact on developers is minimal) and the scripts directory/setup argument being solely for backward compatibility, no management by installers at all, whatever is in there just gets dumped onto the target unchanged (and the contents of scripts is explicitly defined as *not* affecting the compatibility tags). So setuptools users get new entry point metadata automatically, which is processed at install time, and anyone still using the distutils scripts parameter still works as before but gets no particular support from the new tools. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Jul 16 19:34:41 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 13:34:41 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <8BF0BD6D-93A0-4478-B8D5-F810F128A415@stufft.io> <7D576421-E423-4FD5-A39F-BF3072A5B362@stufft.io> Message-ID: <99D5AEF3-045C-4F04-9378-5EE4B6F48F21@stufft.io> On Jul 15, 2013, at 6:39 PM, Chris Barker - NOAA Federal wrote: > On Mon, Jul 15, 2013 at 3:28 PM, Donald Stufft wrote: >> There is something like 200 total bdist_msi on PyPI and 5k bdist_wininst. > >> To put numbers into perspective, there are ~180k total files uploaded to >> PyPI. > > I don't think this means that the installers aren't widely used, I > think it means they aren't distributed on PyPI. > > Installers are really useful for packages that require compiled code > that depends on external libs -- and most of the major such package > maintainers provide them. > > Also, numbers aren't as important as the handful of widely used, but > hard to build, packages.... > > But they are useless with virtualenv, so I'm looking forward to binary wheels... > > -Chris > > > -- > > Christopher Barker, Ph.D.
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Sorry I should be more clear :) I wasn't claiming they weren't used (In fact I would guess but without looking that those 5k probably have a good bit of downloads). Just making a statement as to how many of them exist on PyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 16 19:40:25 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 13:40:25 -0400 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: <5E706F7D-0128-4FCB-8C93-7098331A0683@stufft.io> On Jul 16, 2013, at 9:40 AM, Nick Coghlan wrote: > I actually pushed this to python.org on the weekend but forgot to > announce it on the list. > > The latest version of PEP 426 is up at http://www.python.org/dev/peps/pep-0426/ > > It was a net deletion of content despite going into more depth in a > few areas, so I'm counting that as a win for clarity :) > > Change details are at http://hg.python.org/peps/rev/067d3c3c1351 > > Significant changes: > > * serialisation prefix changed to "pydist". This metadata is the > metadata that exists at the sdist level. Wheels and installation may > add other metadata files, and PyPI will publish additional metadata > extracted from the archive contents, so I decided "pymeta" didn't feel > write (the name of the schema file changed as well, but I did that in > a separate commit so the diff stayed readable) > > * all the *_may_require fields are gone (as previously discussed) > > * rather than "install specifiers" I went with "requirement > specifiers" (install turned out not to read very well) > > * accordingly, the subfields of dependency specifiers are "requires", > "extra" and "environment" > > * the abbreviated form (which has "requires" as a list) was easy > enough to specify, so that's what I used. The unpacked form (where > multiple entries in the same dependency list have the same > extra/environment combination) is explicitly disallowed in order to > encourage consistent formatting. So to be clear, this means it's { "requires": [ "foo", "bar" ] } ? And it means that having multiple combinations of the same extra/envs is disallowed so I'm going to have to collapse everything back down since it's not stored that way at all? > > * clarified that internal whitespace is permitted in requirement > specifiers (there may be a simpler way to specify this, such as "all > whitespace in requirement specifiers is ignored") > > * I made the change to explicitly distrust "Provides" data retrieved > from a public index server, and noted that projects defining a virtual > dependency should claim that name to define the default provider. 
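For readers following along, the abbreviated form under discussion would presumably load into something like the structure below; the containing field name and the requirement strings here are invented for illustration rather than taken from the PEP text.

    run_requires = [
        # unconditional dependencies, several names sharing one "requires" list
        {"requires": ["foo", "bar"]},
        # a conditional entry: only wanted for the 'ssl' extra, and only on Windows
        {"requires": ["sslhelper (>= 1.0)"],
         "extra": "ssl",
         "environment": "sys_platform == 'win32'"},
    ]

    # The disallowed "unpacked" form splits one extra/environment combination
    # over several entries...
    unpacked = [{"requires": ["foo"], "extra": "ssl"},
                {"requires": ["bar"], "extra": "ssl"}]
    # ...and has to be collapsed into a single entry instead:
    collapsed = [{"requires": ["foo", "bar"], "extra": "ssl"}]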
> > * noted that Debian packagers may want to map extras to Recommended or > Suggested packages > > * noted some possible use cases for metadata extensions > > * fixed and clarified various things in the reference copy of the JSON > schema (it could still do with an audit against the current PEP text, > though) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From doko at ubuntu.com Tue Jul 16 19:36:42 2013 From: doko at ubuntu.com (Matthias Klose) Date: Tue, 16 Jul 2013 19:36:42 +0200 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: Message-ID: <51E584AA.5060308@ubuntu.com> Am 13.07.2013 16:54, schrieb Paul Moore: > 1. Install to user-packages by default. > 2. Not depend on setuptools (??? - Nick's "inversion" idea) > 3. Possibly change the wrapper command name from pip to pip3 on Unix. > 4. Ensure that pip upgrading itself in-place is sufficiently robust and > reliable that users don't get "stuck" on the Python-supplied version. 5. Support cross-compilation of extensions by default. From vinay_sajip at yahoo.co.uk Tue Jul 16 19:54:21 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 17:54:21 +0000 (UTC) Subject: [Distutils] PEP 426 updated based on last round of discussion References: <5E706F7D-0128-4FCB-8C93-7098331A0683@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > So to be clear, this means it's > > { > "requires": [ > "foo", > "bar" > ] > } > > ? > > And it means that having multiple combinations of the same > extra/envs is disallowed so I'm going to have to collapse everything > back down since it's not stored that way at all? > I posted a working example [1] showing how there's no need to have the same structure at the RDBMS layer and the JSON layer. I asked for more information about modelling difficulties you said you had encountered, but didn't hear anything more about it. AFAICT the code you were talking about isn't public - at least, I couldn't see it in the branches on your GitHub repo. As my example shows, it's possible to have a sensible RDBMS structure which interoperates with multiple entries in "requires". If I've misunderstood something, please let me know what it is. Regards, Vinay Sajip [1] https://gist.github.com/vsajip/5929707 From donald at stufft.io Tue Jul 16 19:57:45 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 13:57:45 -0400 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <20130716091900.GL3125@merlinux.eu> References: <20130716091900.GL3125@merlinux.eu> Message-ID: <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> On Jul 16, 2013, at 5:19 AM, holger krekel wrote: > > I am considering implementing gpg-signing and verification of release files > for devpi. Rather than requiring package authors to sign their release > files, i am pondering a scheme where anyone can vet for a particular > published release file by publishing a signature about it. This aims > to help responsible companies to work together. 
I've heard from devops/admins > that they manually download and check release files and then install > it offline after some vetting. Wouldn't it be useful to turn this > into a more collaborative effort? > > Any thoughts or pointers to existing efforts within the (Python) > packaging ecologies? > > best, > holger > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig So I'm not entirely sure what your goals are here. What exactly are you verifying? What is going to verify signatures once you have a (theoretically) trusted set? What is going to keep a malicious actor from poisoning the well? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Tue Jul 16 20:18:24 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 19:18:24 +0100 (BST) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> >If I were writing a firm proposal, I'd go for something like entry points as metadata My extended metadata already covers this, though I use the name "exports" (suggested by PJE) because you can share not just code but data, and "entry points" generally implies code. The current version of distil creates wrappers for both gui and console scripts, and adds the appropriate native executable wrappers (32- or 64-bit, according to the running Python) on Windows.
Example of the metadata for setuptools 0.8: "exports": { "distutils.commands": [ "alias = setuptools.command.alias:alias", "bdist_egg = setuptools.command.bdist_egg:bdist_egg", "bdist_rpm = setuptools.command.bdist_rpm:bdist_rpm", "build_ext = setuptools.command.build_ext:build_ext", "build_py = setuptools.command.build_py:build_py", "develop = setuptools.command.develop:develop", "easy_install = setuptools.command.easy_install:easy_install", "egg_info = setuptools.command.egg_info:egg_info", "install = setuptools.command.install:install", "install_lib = setuptools.command.install_lib:install_lib", "rotate = setuptools.command.rotate:rotate", "saveopts = setuptools.command.saveopts:saveopts", "sdist = setuptools.command.sdist:sdist", "setopt = setuptools.command.setopt:setopt", "test = setuptools.command.test:test", "install_egg_info = setuptools.command.install_egg_info:install_egg_info", "install_scripts = setuptools.command.install_scripts:install_scripts", "register = setuptools.command.register:register", "bdist_wininst = setuptools.command.bdist_wininst:bdist_wininst", "upload_docs = setuptools.command.upload_docs:upload_docs" ], "scripts": { "console": [ "easy_install = setuptools.command.easy_install:main" ] }, "egg_info.writers": [ "PKG-INFO = setuptools.command.egg_info:write_pkg_info", "requires.txt = setuptools.command.egg_info:write_requirements", "entry_points.txt = setuptools.command.egg_info:write_entries", "eager_resources.txt = setuptools.command.egg_info:overwrite_arg", "namespace_packages.txt = setuptools.command.egg_info:overwrite_arg", "top_level.txt = setuptools.command.egg_info:write_toplevel_names", "depends.txt = setuptools.command.egg_info:warn_depends_obsolete", "dependency_links.txt = setuptools.command.egg_info:overwrite_arg" ], "setuptools.file_finders": [ "svn_cvs = setuptools.command.sdist:_default_revctrl" ], "distutils.setup_keywords": [ "eager_resources = setuptools.dist:assert_string_list", "namespace_packages = setuptools.dist:check_nsp", "extras_require = setuptools.dist:check_extras", "install_requires = setuptools.dist:check_requirements", "tests_require = setuptools.dist:check_requirements", "entry_points = setuptools.dist:check_entry_points", "test_suite = setuptools.dist:check_test_suite", "zip_safe = setuptools.dist:assert_bool", "package_data = setuptools.dist:check_package_data", "exclude_package_data = setuptools.dist:check_package_data", "include_package_data = setuptools.dist:assert_bool", "packages = setuptools.dist:check_packages", "dependency_links = setuptools.dist:assert_string_list", "test_loader = setuptools.dist:check_importable", "use_2to3 = setuptools.dist:assert_bool", "convert_2to3_doctests = setuptools.dist:assert_string_list", "use_2to3_fixers = setuptools.dist:assert_string_list", "use_2to3_exclude_fixers = setuptools.dist:assert_string_list" ], "setuptools.installation": [ "eggsecutable = setuptools.command.easy_install:bootstrap" ] } Regards, Vinay Sajip From p.f.moore at gmail.com Tue Jul 16 20:34:36 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 19:34:36 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 16 July 2013 19:18, Vinay Sajip wrote: > > >If I were writing a firm proposal, I'd go for something like entry points > as 
metadata > > My extended metadata already covers this, though I use the name "exports" > (suggested by PJE) because you can share not just code but data, and "entry > points" generally implies code. The current version of distil creates > wrappers for both gui and console scripts, and adds the appropriate native > executable wrappers (32- or 64-bit, according to the running Python) on > Windows. Nice. So we could say that wheel installers should create exe wrappers based on the 'scripts' key of the 'exports' metadata. Then we process the 'scripts' directory unaltered for backward compatibility. The only other part of the equation is to ensure that wheel *builders* do not put setuptools-generated entry point wrappers into the scripts directory. Before I embarrass myself again (:-)) is this also something you already have implemented in distlib? :-) When you're finished with Guido's time machine, make sure you put the keys back :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Tue Jul 16 21:04:56 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 19:04:56 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Paul Moore gmail.com> writes: > The only other part of the equation is to ensure that wheel *builders* do > not put setuptools-generated entry point wrappers into the scripts > directory. Before I embarrass myself again () is this also something you > already have implemented in distlib? No, because I didn't want to embarrass you ;-) Seriously - no, because that is policy rather than mechanism. The way distil builds wheels is to perform an installation into a working area, and then call Wheel.build in distlib pointing to the work area. The distlib code just copies what's in the work area to the wheel. Currently the install-scripts part of distil installs all scripts into the work area, including ones defined in exports; it should be a five-minute job to ensure that scripts in exports are excluded from this, when building wheels. A version of distil with these updates should appear soon - I'm waiting for the dust to settle on the most recent changes to PEP 426. Regards, Vinay Sajip From jaraco at jaraco.com Tue Jul 16 21:14:58 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Tue, 16 Jul 2013 19:14:58 +0000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: Message-ID: <88ead8c673d047baa494a03b46cd05ef@BLUPR06MB003.namprd06.prod.outlook.com> There are two versions of launchers primarily because of my naiveté when addressing the UAC issue. 64-bit launchers were exempt from the UAC restrictions that caused them to launch in a separate window. I believed this to be a proper fix, when in fact those still using 32-bit launchers were still experiencing the problem. See https://bitbucket.org/tarek/distribute/issue/143/easy_install-opens-new-console-cant-read for more detail. So I agree, it would probably be sufficient to only supply 32-bit executables. However, my preference would be to supply architecture-appropriate executables rather than relying on a compatibility layer.
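A sketch of the install-time selection being described, i.e. an installer that ships several stub executables and lays down the one matching the Python it is installing for; the stub file names are invented for illustration.

    import struct

    def launcher_stub_name():
        # Pointer size of the running interpreter: 32 or 64 bit.
        bits = struct.calcsize('P') * 8
        return 'launcher-win%d.exe' % bits

    print(launcher_stub_name())  # 'launcher-win32.exe' on a 32-bit Python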
Furthermore, I don't believe the ARM architecture has a compatibility layer (meaning 64-bit executables are required for 64-bit ARM builds), so architecture and word size distinction is necessary. I believe you're right about leveraging the py launcher. I'd like for setuptools to not have to supply launchers at all but depend on py launcher instead. The py launcher is bundled with Python 3.3 so should become ubiquitously available soon. I believe setuptools can begin to rely on it and not supply a launcher at all. The scripts currently installed by setuptools are suitable for launching by py launcher, so all that will need to happen is to stop supplying its own launcher. At least, that's how I imagine it happening. From: Paul Moore [mailto:p.f.moore at gmail.com] Sent: Tuesday, 16 July, 2013 09:32 To: Distutils; Jason R. Coombs Subject: Re: Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) On 16 July 2013 14:08, Paul Moore > wrote: PS There is still the proviso that I haven't tested my assumption that the separate 32 and 64 bit wrappers are *needed* (setuptools and distlib use them, so I think it's a valid assumption, but I need to test). I will try to get time to check that ASAP. Hmm. I just did a quick test, and then based on the results checked the setuptools source code. I can see no reason why there needs to be 32 and 64 bit launcher exes. The launchers simply use CreateProcess to launch a separate Python process using the #! line of the script. So there's no DLL loading going on, and no reason that I can see for needing separate 32 and 64 bit builds. Jason - can you shed any light on why there are separate builds for 32 and 64 bits? Actually, the launcher is essentially identical to the "py" launcher for Windows, except that it gets a script name to execute from the name of the launcher. I'm wondering whether the correct approach here would be to enhance the launcher one more time to look for a suitably named script and auto-run it if it's present (i.e. merge the wrapper functionality into the launcher). Then we have a standard wrapper that everyone can use and not reinvent their own. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From vinay_sajip at yahoo.co.uk Tue Jul 16 21:21:27 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 19:21:27 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Vinay Sajip yahoo.co.uk> writes: > defined in exports; it should be a five-minute job to ensure that scripts in > exports are excluded from this, when building wheels. It was a quick job, but thinking about it, I should probably update the Wheel.install API to take an optional process_exports=True argument, so that the exported-script processing can be done during installation from wheels in a standardised way.
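To make the proposal concrete, the processing being suggested would turn script exports into platform-appropriate wrappers at install time rather than shipping them inside the wheel. The sketch below assumes distlib's documented ScriptMaker interface (make_multiple() taking 'name = module:callable' specifications); the process_exports flag and the shape of the exports mapping are only the proposal under discussion, not an existing API.

    from distlib.scripts import ScriptMaker

    def process_script_exports(exports, scripts_dir, dry_run=False):
        # `exports` is assumed to look like
        # {"console": ["distil = distil.main:main"], "gui": [...]}.
        maker = ScriptMaker(None, scripts_dir, dry_run=dry_run)
        written = maker.make_multiple(exports.get('console', []))
        # GUI scripts are flagged so that no console window is attached on
        # Windows; on other platforms the flag is effectively a no-op.
        written += maker.make_multiple(exports.get('gui', []),
                                       options={'gui': True})
        return written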
Regards, Vinay Sajip From donald at stufft.io Tue Jul 16 21:34:01 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 15:34:01 -0400 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: <5E706F7D-0128-4FCB-8C93-7098331A0683@stufft.io> Message-ID: On Jul 16, 2013, at 1:54 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> So to be clear, this means it's >> >> { >> "requires": [ >> "foo", >> "bar" >> ] >> } >> >> ? >> >> And it means that having multiple combinations of the same >> extra/envs is disallowed so I'm going to have to collapse everything >> back down since it's not stored that way at all? >> > > I posted a working example [1] showing how there's no need to have the same > structure at the RDBMS layer and the JSON layer. I asked for more > information about modelling difficulties you said you had encountered, but > didn't hear anything more about it. AFAICT the code you were talking about > isn't public - at least, I couldn't see it in the branches on your GitHub repo. > > As my example shows, it's possible to have a sensible RDBMS structure which > interoperates with multiple entries in "requires". If I've misunderstood > something, please let me know what it is. > > Regards, > > Vinay Sajip > > [1] https://gist.github.com/vsajip/5929707 The dependency models are located at https://github.com/dstufft/warehouse/blob/f438bdcb17a5ee9de8e209d3eb6c93cc4aee9492/warehouse/packaging/models.py#L280-L380 It's completely possible and if I came across as saying it wasn't then I failed to clarify myself properly. My point was that it was simpler using a single list of dictionaries, not a list of dictionaries itself containing lists because there was less support code required to transform between them. Every additional piece of code comes with an overhead in the form of tests, mental overhead, potential bugs etc. I was trying to advocate for less required code because it makes things simpler :) I was asking for clarification here because my original plan if things were required to be a list was to make single entry lists, again to limit the need to include additional support code. It appears that this plan isn't inline with the current iteration of the PEP but I was making sure :) I have a preference for not introducing more nesting, and making things match the modeling better but I'll make it work either way. I hardly think PEP426 will fail if it's using deeper nesting even if I dislike it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Jul 16 22:13:03 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 21:13:03 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 16 July 2013 20:21, Vinay Sajip wrote: > > defined in exports; it should be a five-minute job to ensure that > scripts in > > exports are excluded from this, when building wheels. 
> > It was a quick job, but thinking about it, I should probably update the > Wheel.install API to take an optional process_exports=True argument, so > that > the exported-script processing can be done during installation from wheels > in a standardised way. I'm not 100% sure what your proposal is here - I'm confused about the precise roles of setup.py/setuptools as "builder" vs distil as "builder" vs distlib as "wheel builder" vs distlib as "wheel installer". I'll try to get some time in the next day or so to review the code and make sure we're not talking at cross purposes here. Bear with me. I understood that distlib simply built wheels from whatever is in the directories supplied. I'm not clear where it would get the "exports" metadata to store in the wheel. Assuming that's sorted, though, then whether or not distlib processes the exports on install is not an issue to me (I think it always should). What is more of an issue is what "the thing that puts stuff into the directories" does. If that's a setuptools-based build, it will process the exports data *itself* and put wrappers into the scripts directory. That is what I think we should be trapping and suppressing. But we have to be careful, as if setuptools is used to install directly, *not* going via a wheel, it has to generate wrappers itself somehow. Using distil to do the build is a whole other route. My main concern here is keeping a reasonable level of backward compatibility/interoperability with all the assorted tools around. And in particular with pip managing (but technically not *doing*) the builds and installs. I'll have a think, and do some code reviews, and try not get sucked into sending more emails until I know the facts a bit more clearly :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Tue Jul 16 23:41:40 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 21:41:40 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Paul Moore gmail.com> writes: > I'm not 100% sure what your proposal is here - I'm confused about the precise > roles of setup.py/setuptools as "builder" vs distil as "builder" vs distlib > as "wheel builder" vs distlib as "wheel installer". I'll try to get some time > in the next day or so to review the code and make sure we're not talking at > cross purposes here. Bear with me. I'm not sure I've got a concrete proposal yet - I'm still thinking things out, too :-) Currently, the wheel module in distlib doesn't do anything too clever - just the mechanical stuff of archiving and unarchiving. Because the exports stuff wasn't in the PEP, distil wrote the generated scripts and .exe wrappers to the work area, and so they ended up in the wheel. (This is the case for all released versions of distil). > I understood that distlib simply built wheels from whatever is in the > directories supplied. I'm not clear where it would get the "exports" metadata > to store in the wheel. Assuming that's sorted, though, then whether or not Right now it's getting the exports courtesy of distil when you use that to build the wheel - distlib can't rely on that information being available because it's not part of the PEP. If the PEP is updated to include the exports, they should be in the wheel no matter which tool builds it. 
Then in theory distlib could generate the scripts during installation, but there are a lot of options to consider - did setuptools put them in there already? Do we want native launchers? etc. which is perfectly doable in distlib, but I'm not sure that's the best place for it because I think wheel processing should be uncomplicated. Wheel.install already has quite a few kwargs: dry_run=False: Don't actually do anything executable=None:Custom executable for shebang lines (e.g. to support path searching) warner=None:Used to defer warning behaviour to calling application lib_only=False:Process site-packages contents only (you suggested this) I'd like to not have to add any more, unless it's unavoidable :-) Nevertheless, I will probably try implementing it in distlib as an experiment, too see how it looks. > distlib processes the exports on install is not an issue to me (I think it > always should). What is more of an issue is what "the thing that puts stuff > into the directories" does. If that's a setuptools-based build, it will > process the exports data *itself* and put wrappers into the scripts > directory. That is what I think we should be trapping and suppressing. But > we have to be careful, as if setuptools is used to install directly, *not* > going via a wheel, it has to generate wrappers itself somehow. Exactly why I'm so leery of putting this logic in distlib, until we think it through and add it to the PEP. At the moment distil does it the same way as setuptools/pip only to remain compatible, not for any other reason. > Using distil to do the build is a whole other route. Right. I'm aiming for distil to be able to do just about everything pip can (functionally, the code is pretty much there barring installs from DVCS URLs), but backward compatibility is always a concern and a challenge :-) Another complication for distlib is that I expect it to work in 2.6+, where you can't always rely on the py launcher being present - hence the wrappers in distlib, with flags to disable writing them out. Another area to consider for scripts is which of foo, fooX and foo-X.Y to write to the scripts folder. This is particularly important in user site-packages, where scripts for different Python versions will coexist in the same folder, and the possibility exists of overwriting, which sometimes leads to unexpected behaviour (e.g. if I install foo-dist using 2.x which installs foo and foo-2.x to ~/.local/bin, then install it using 3.x, it would write foo and foo-3.x to ~/.local/bin. Quite apart from headaches with native launchers, it could be that the foo script installed with 3.x doesn't work (e.g. if it tries to process Python code which is 2.x compatible but not 3.x compatible). Fun and games! Regards, Vinay Sajip From p.f.moore at gmail.com Wed Jul 17 00:13:57 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 16 Jul 2013 23:13:57 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 16 July 2013 22:41, Vinay Sajip wrote: > If the PEP is updated to include the exports, they should be in the wheel > no > matter which tool builds it. Then in theory distlib could generate the > scripts during installation, but there are a lot of options to consider - > did setuptools put them in there already? Do we want native launchers? etc. 
> which is perfectly doable in distlib, but I'm not sure that's the best > place > for it because I think wheel processing should be uncomplicated. > Wheel.install already has quite a few kwargs: > I really don't want the wrappers to be present in the wheel, because if they are the wheel becomes architecture-specific. Also, consider that Unix targets should have the actual scripts written with no extension, whereas Windows targets should have foo-script.py and foo.exe. That should be decided at install time, bot at wheel creation time. As regards version-specific scripts, I'd assume it's the project's job to specify precisely what scripts they want. On that one, I'm on the side of providing infrastructure, not setting policy. Although I could be persuaded otherwise if there was a PEP on what commands a distribution should provide. In that case, let the project provide the command names, and let the installer implement the standard versioned-executable policy. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Jul 17 00:43:11 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 16 Jul 2013 23:43:11 +0100 (BST) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <1374014591.78734.YahooMailNeo@web171401.mail.ir2.yahoo.com> >I really don't want the wrappers to be present in the wheel, because if they are the wheel becomes architecture-specific. Also, consider that Unix targets should have the actual scripts written with no extension, whereas Windows targets should have foo-script.py and foo.exe. That should be decided at install time, bot at wheel creation time. I think we're agreed on that as a desirable. I've already changed the distil code to omit writing launchers to the wheel, and that probably won't need to change. But it does generate the scripts for exports - to avoid doing that, we need to get the exports into the wheel in a standardised way (via pydist.json, perhaps, or perhaps a separate file). Iterating over the exports in a distribution path needs to be fast - note that the exports are not just scripts, and it's not yet clear whether the script exports need to be separated out from the rest. >As regards version-specific scripts, I'd assume it's the project's job to specify precisely what scripts they want. On that one, I'm on the side of providing infrastructure, not setting policy. This sounds like you mean that distlib needs to stay basic (mechanism/infrastructure), and the installer needs to do the script generation (policy - perhaps controlled by the user). Currently, distlib allows one to specify foo with optional variants fooX and foo-X.Y (before factoring native launchers into the mix). This is set on the ScriptMaker instance which generates the scripts in the target directory (including shebang rewriting, native launchers etc.) 
Regards, Vinay Sajip From ncoghlan at gmail.com Wed Jul 17 01:27:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 09:27:30 +1000 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: <5E706F7D-0128-4FCB-8C93-7098331A0683@stufft.io> Message-ID: On 17 Jul 2013 05:34, "Donald Stufft" wrote: > > > On Jul 16, 2013, at 1:54 PM, Vinay Sajip wrote: > > > Donald Stufft stufft.io> writes: > > > >> So to be clear, this means it's > >> > >> { > >> "requires": [ > >> "foo", > >> "bar" > >> ] > >> } > >> > >> ? > >> > >> And it means that having multiple combinations of the same > >> extra/envs is disallowed so I'm going to have to collapse everything > >> back down since it's not stored that way at all? > >> > > > > I posted a working example [1] showing how there's no need to have the same > > structure at the RDBMS layer and the JSON layer. I asked for more > > information about modelling difficulties you said you had encountered, but > > didn't hear anything more about it. AFAICT the code you were talking about > > isn't public - at least, I couldn't see it in the branches on your GitHub repo. > > > > As my example shows, it's possible to have a sensible RDBMS structure which > > interoperates with multiple entries in "requires". If I've misunderstood > > something, please let me know what it is. > > > > Regards, > > > > Vinay Sajip > > > > [1] https://gist.github.com/vsajip/5929707 > > The dependency models are located at https://github.com/dstufft/warehouse/blob/f438bdcb17a5ee9de8e209d3eb6c93cc4aee9492/warehouse/packaging/models.py#L280-L380 > > It's completely possible and if I came across as saying it wasn't then I failed to clarify myself properly. My point was that it was simpler using a single list of dictionaries, not a list of dictionaries itself containing lists because there was less support code required to transform between them. Every additional piece of code comes with an overhead in the form of tests, mental overhead, potential bugs etc. I was trying to advocate for less required code because it makes things simpler :) > > I was asking for clarification here because my original plan if things were required to be a list was to make single entry lists, again to limit the need to include additional support code. It appears that this plan isn't inline with the current iteration of the PEP but I was making sure :) > > I have a preference for not introducing more nesting, and making things match the modeling better but I'll make it work either way. I hardly think PEP426 will fail if it's using deeper nesting even if I dislike it. Yes, in this case I think improving the brevity of the serialisation format will be an aid to debuggability, even though the primary purpose of the format remains communicating between tools. I should add a section discussing this decision in "Rejected Design Ideas", though. Cheers, Nick. > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Wed Jul 17 01:56:57 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 19:56:57 -0400 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: <51E584AA.5060308@ubuntu.com> References: <51E584AA.5060308@ubuntu.com> Message-ID: On Jul 16, 2013, at 1:36 PM, Matthias Klose wrote: > 5. Support cross-compilation of extensions by default. TBH I don't know how much of this has anything to do with pip? As far as compiling goes all pip does is call setup.py install so people are compiling with either setuptools or distutils. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Jul 17 02:00:03 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 10:00:03 +1000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 17 Jul 2013 04:19, "Vinay Sajip" wrote: > > > > >If I were writing a firm proposal, I'd go for something like entry points as metadata > > My extended metadata already covers this, though I use the name "exports" (suggested by PJE) because you can share not just code but data, and "entry points" generally implies code. The current version of distil creates wrappers for both gui and console scripts, and adds the appropriate native executable wrappers (32- or 64-bit, according to the running Python) on Windows. Yeah, originally we were going to postpone dealing with entry points to a metadata extension (Daniel even had a proto-PEP kicking around in the pre-JSON days). However, I now think it makes more sense to standardise them as an "exports" field in PEP 426. So run with the assumption that something like that will be part of the standard metadata - either derived from entry_points.txt for existing metadata, or specified directly for next generation metadata. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 17 02:03:14 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 16 Jul 2013 20:03:14 -0400 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> On Jul 16, 2013, at 8:00 PM, Nick Coghlan wrote: > > On 17 Jul 2013 04:19, "Vinay Sajip" wrote: > > > > > > > > >If I were writing a firm proposal, I'd go for something like entry points as metadata > > > > My extended metadata already covers this, though I use the name "exports" (suggested by PJE) because you can share not just code but data, and "entry points" generally implies code. The current version of distil creates wrappers for both gui and console scripts, and adds the appropriate native executable wrappers (32- or 64-bit, according to the running Python) on Windows. 
> > Yeah, originally we were going to postpone dealing with entry points to a metadata extension (Daniel even had a proto-PEP kicking around in the pre-JSON days). > > However, I now think it makes more sense to standardise them as an "exports" field in PEP 426. So run with the assumption that something like that will be part of the standard metadata - either derived from entry_points.txt for existing metadata, or specified directly for next generation metadata. > > Cheers, > Nick. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Are these only the scripts portion of entry points, or the whole kit and caboodle of pluggable entry points? Because I think the first makes sense, the second I'm hesitant on. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Jul 17 02:17:22 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 10:17:22 +1000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> Message-ID: On 17 Jul 2013 10:03, "Donald Stufft" wrote: > > > On Jul 16, 2013, at 8:00 PM, Nick Coghlan wrote: > >> >> On 17 Jul 2013 04:19, "Vinay Sajip" wrote: >> > >> > >> > >> > >If I were writing a firm proposal, I'd go for something like entry points as metadata >> > >> > My extended metadata already covers this, though I use the name "exports" (suggested by PJE) because you can share not just code but data, and "entry points" generally implies code. The current version of distil creates wrappers for both gui and console scripts, and adds the appropriate native executable wrappers (32- or 64-bit, according to the running Python) on Windows. >> >> Yeah, originally we were going to postpone dealing with entry points to a metadata extension (Daniel even had a proto-PEP kicking around in the pre-JSON days). >> >> However, I now think it makes more sense to standardise them as an "exports" field in PEP 426. So run with the assumption that something like that will be part of the standard metadata - either derived from entry_points.txt for existing metadata, or specified directly for next generation metadata. >> >> Cheers, >> Nick. >> >> _______________________________________________ >> >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > > Are these only the scripts portion of entry points, or the whole kit and caboodle of pluggable entry points? Because I think the first makes sense, the second I'm hesitant on. Actually, it may be better to have a top level "scripts" field, distinct from a general export mechanism. I'm seeing value in an exports mechanism, though. Yes, *in theory* you can get the same effect with an extension, but extensions can do a lot of other things, too. 
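For readers who have not used them, the setuptools feature being generalised here is declared roughly like this in a setup.py; the project, module and group names below are invented for illustration:

    # Illustrative setup.py fragment: the existing setuptools entry points
    # that a standard "exports"/"scripts" field would replace.
    # All names here are made up.
    from setuptools import setup

    setup(
        name="exampletool",
        version="1.0",
        packages=["exampletool"],
        entry_points={
            # script wrappers generated at install time (the pip.exe case)
            "console_scripts": [
                "exampletool = exampletool.cli:main",
            ],
            # general plugin registration -- the part that would become a
            # plain "exports" entry rather than a script
            "exampletool.plugins": [
                "json = exampletool.plugins:JSONPlugin",
            ],
        },
    )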
Python's metaprogramming is built on a model of multiple tools with increasing levels of power, flexibility and complexity, so I'm thinking an exports vs extensions split may be a good approach in a similar vein. No decision on this front yet, but I think it's at least worth my trying it out to see how it looks in the context of the PEP. (After all, we already know entry points are quite a popular feature of the setuptools metadata) A couple of bonus features of standardisation are that we can tie it into the extras system and automatic analysis tools can check the exports can actually be imported without needing to understand arbitrary extensions. Cheers, Nick. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benzolius at yahoo.com Tue Jul 16 10:57:15 2013 From: benzolius at yahoo.com (Benedek Zoltan) Date: Tue, 16 Jul 2013 01:57:15 -0700 (PDT) Subject: [Distutils] buildout doesn't work as expected, please help me Message-ID: <1373965035.93141.YahooMailNeo@web121601.mail.ne1.yahoo.com> Hi, I'm relatively new to buildout, but until now I could manage to install python packages, supervisord, config files and run system commands by buildout, using the recipes: z3c.recipe.scripts collective.recipe.supervisor collective.recipe.template plone.recipe.command Sometimes I had a timeout error like: "Installing pypackages. While: Installing pypackages. Getting distribution for 'python-dateutil'. An internal error occurred due to a bug in either zc.buildout or in a recipe being used: Traceback (most recent call last): ..." and "timeout: timed out" but generally worked. Recently I get nearly always this error or another one: "Getting distribution for 'zc.buildout>=1.7.1,<2dev'. An internal error occurred due to a bug in either zc.buildout or in a recipe being used: Traceback (most recent call last): ..." and "timeout: timed out" I tried: 1. to run even with: bin/buildout -t 30 2. to put this in buildout.cfg: find-links = http://a.pypi.python.org/simple/ http://b.pypi.python.org/simple/ http://c.pypi.python.org/simple/ http://d.pypi.python.org/simple/ http://e.pypi.python.org/simple/ http://f.pypi.python.org/simple/ http://g.pypi.python.org/simple/ 3. I changed z3c.recipe.scripts to zc.recipe.egg Unfortunately the problem is not solved. Any help will be greatly appreciated. Thank you Zoli -------------- next part -------------- An HTML attachment was scrubbed... URL: From liamk at numenet.com Tue Jul 16 19:15:10 2013 From: liamk at numenet.com (Liam Kirsher) Date: Tue, 16 Jul 2013 10:15:10 -0700 Subject: [Distutils] distribute o.7.3 causing installation error? Message-ID: <51E57F9E.9020505@numenet.com> Hi, I ran into an error about a month ago caused by a change in the PyPI version of distribute. Thankfully, someone was able to roll back the change. Unfortunately, I'm getting a similar kind of problem now -- and I notice that 0.7.3 was released on 5 July, so... I'm wondering if it might be related. This is being included in a Chef recipe.
I'm attaching the pip.log, which shows it uninstalling distribute (which looks like version 0.6.49), and then failing to find it and attempting to install 0.7.3, and subsequent package installs failing. Anyway, I'm not quite sure what to do here! How can I fix this problem? (And also, how can I prevent it from happening in the future by pegging the version to something that works?) The pip recipe includes the following comments, which may be relevant. > # Ubuntu's python-setuptools, python-pip and py thon-virtualenv packages > # are broken...this feels like Rubygems! > # > http://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python > # https://bitbucket.org/ianb/pip/issue/104/pip-uninstall-on-ubuntu-linux > remote_file "#{Chef::Config[:file_cache_path]}/distribute_setup.py" do > source node['python']['distribute_script_url'] > mode "0644" > not_if { ::File.exists?(pip_binary) } > end > execute "install-pip" do > cwd Chef::Config[:file_cache_path] > command <<-EOF > #{node['python']['binary']} distribute_setup.py > --download-base=#{node['python']['distribute_option']['download_base']} > #{::File.dirname(pip_binary)}/easy_install pip > EOF > not_if { ::File.exists?(pip_binary) } > end Chef run log: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: > python::virtualenv > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > python_pip[virtualenv] action install > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - install package > python_pip[virtualenv] version latest > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: > supervisor::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > python_pip[supervisor] action upgrade > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ================================================================================ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Error executing > action `upgrade` on resource 'python_pip[supervisor]' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ================================================================================ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > Mixlib::ShellOut::ShellCommandFailed > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ------------------------------------ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Expected process to > exit with [0], but received '1' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- Begin output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDOUT: > Downloading/unpacking supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package supervisor > 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > distribute from > https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a > (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > meld3>=0.6.5 (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > setuptools>=0.7 (from distribute->supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing collected > packages: supervisor, distribute, meld3, setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Skipping > installation of > /usr/local/lib/python2.7/dist-packages/supervisor/__init__.py > (namespace package) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > /usr/local/lib/python2.7/dist-packages/supervisor-3.0b2-py2.7-nspkg.pth > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > echo_supervisord_conf script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > pidproxy script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisorctl script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisord script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Found existing > installation: distribute 0.6.49 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Uninstalling > distribute: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Successfully > uninstalled distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File > "", line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Complete output > from command /usr/bin/python -c "import > setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File "", > line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ---------------------------------------- > 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com Command > /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > -root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed failed with error code 1 in > /tmp/pip-build-root/meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Storing complete log > in /home/ubuntu/.pip/pip.log > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDERR: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- End output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Ran pip install > --upgrade supervisor returned 1 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Cookbook Trace: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com --------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:155:in `pip_cmd' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:139:in `install_package' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:144:in `upgrade_package' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:60:in `block (2 > levels) in class_from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:58:in `block in > class_from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Resource Declaration: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com --------------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com # In > /var/chef/cache/cookbooks/supervisor/recipes/default.rb > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 29: python_pip > "supervisor" do > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 30: action :upgrade > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 31: version > node['supervisor']['version'] if node['supervisor']['version'] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 32: end > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 33: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Compiled Resource: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ------------------ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com # Declared in > /var/chef/cache/cookbooks/supervisor/recipes/default.rb:29:in 
`from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > python_pip("supervisor") do > ec2-54-245-36-62.us-west-2.compute.amazonaws.com action [:upgrade] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com retries 0 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com retry_delay 2 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com cookbook_name > "supervisor" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com recipe_name "default" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com package_name > "supervisor" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com timeout 900 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com options " --upgrade" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com end > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: ntp::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * service[ntp] > action restart > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - restart service > service[ntp] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: rabbitmq::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > service[rabbitmq-server] action restart > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - restart service > service[rabbitmq-server] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:50+00:00] ERROR: Running exception handlers > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: Saving node information to > /var/chef/cache/failed-run-data.json > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] ERROR: Exception handlers complete > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Chef Client failed. 
> 35 resources updated > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: Stacktrace dumped to > /var/chef/cache/chef-stacktrace.out > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: > Mixlib::ShellOut::ShellCommandFailed: python_pip[supervisor] > (supervisor::default line 29) had an error: > Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with > [0], but received '1' > [... the FATAL message then repeats the same "pip install --upgrade supervisor" output already quoted above, ending with "Ran pip install --upgrade supervisor returned 1" ...] -- Liam Kirsher PGP: http://liam.numenet.com/pgp/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pip.log Type: text/x-log Size: 149041 bytes Desc: not available URL: From liamk at numenet.com Tue Jul 16 19:48:22 2013 From: liamk at numenet.com (Liam Kirsher) Date: Tue, 16 Jul 2013 10:48:22 -0700 Subject: [Distutils] distribute 0.7.3 causing installation error? follow up Message-ID: <51E58766.2040206@numenet.com> Hi, Also, just noticed that 0.7.3 is available as .zip, but not as .tar.gz like the others are. Liam -- Liam Kirsher PGP: http://liam.numenet.com/pgp/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Wed Jul 17 09:03:27 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 17 Jul 2013 07:03:27 +0000 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> Message-ID: <20130717070327.GN1668@merlinux.eu> On Tue, Jul 16, 2013 at 13:57 -0400, Donald Stufft wrote: > On Jul 16, 2013, at 5:19 AM, holger krekel wrote: > > > > > I am considering implementing gpg-signing and verification of release files > > for devpi. Rather than requiring package authors to sign their release > > files, i am pondering a scheme where anyone can vet for a particular > > published release file by publishing a signature about it. This aims > > to help responsible companies to work together. > > > So I'm not entirely sure what your goals are here. The goal is to facilitate collaboration between individuals and companies in vetting the integrity and, to some degree, authenticity of a published pypi package. > What exactly are you verifying?
What is going to verify signatures once you have a (theoretically) trusted set? What is going to keep a malicious actor from poisoning the well? These are typical questions which is why I asked if anyone knows about existing schemes/efforts. I guess most Linux distros do it already so if nothing comes up here PyPI-specific (what is the status of TUF, btw?) I am going to look into the distros' working models. One difference is that I want the vetting/signing to happen after publishing to allow for an incremental approach. cheers, holger -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From vinay_sajip at yahoo.co.uk Wed Jul 17 09:40:13 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 07:40:13 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > Actually, it may be better to have a top level "scripts" field, distinct from a general export mechanism. > I'm seeing value in an exports mechanism, though. The exports functionality is important and used enough to warrant support in the PEP, and not only for the scripts part. There should be some way that this data gets into .dist-info in a standardised way, so that there is an ability to query e.g. implementations of a particular interface. Currently, distlib supports this as an extension to the PEP by reading a file from .dist-info, which distil puts there. At the moment the PEP is silent on the subject, which could lead to fragmentation in the implementations - e.g. whether JSON or ini-style format is used for the data. I'd like to suggest that the whole of the exports information be included in the PEP 426 metadata, without singling out the scripts part. That way, it ends up in .dist-info via pydist.json. It has been suggested by PJE that the exports information should be in a separate file for speed of searching - though that suggestion was made in a pre-JSON world, where the speed of parsing the metadata wasn't C-assisted. Should performance still be an issue, then the exports dict could still be written out separately in .dist-info as e.g. exports.json by an installer. Regards, Vinay Sajip From donald at stufft.io Wed Jul 17 09:43:19 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 03:43:19 -0400 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> Message-ID: <3006DC94-C386-4110-A640-8CF98AC27C95@stufft.io> On Jul 17, 2013, at 3:40 AM, Vinay Sajip wrote: > It has been suggested by PJE that the > exports information should be in a separate file for speed of searching - > though that suggestion was made in a pre-JSON world, where the speed of > parsing the metadata wasn't C-assisted. I don't think it was speed of parsing the file that was his concern, rather that if it's a separate file that only exists when there are entry points, then you don't have to open a file for every installed distribution.
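The lookup pattern being described looks roughly like this; pydist.json and exports.json are the file names used in this thread, and the directory layout is assumed rather than specified anywhere yet:

    # Rough sketch: only distributions that actually ship an exports.json
    # need to be opened at all; everything else is skipped on a cheap
    # os.path.exists() check.
    import json
    import os
    import sysconfig

    def iter_exports(site_packages=None):
        site_packages = site_packages or sysconfig.get_paths()["purelib"]
        for entry in os.listdir(site_packages):
            if not entry.endswith(".dist-info"):
                continue
            path = os.path.join(site_packages, entry, "exports.json")
            if not os.path.exists(path):
                continue  # no exports declared, nothing to parse
            with open(path) as f:
                yield entry, json.load(f)

    for dist, exports in iter_exports():
        print(dist, sorted(exports))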
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Wed Jul 17 09:48:46 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 07:48:46 +0000 (UTC) Subject: [Distutils] vetting, signing, verification of release files References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> Message-ID: holger krekel merlinux.eu> writes: > about existing schemes/efforts. I guess most Linux distros do it already > so if nothing comes up here PyPI-specific (what is the status of TUF, btw?) > I am going to look into the distros' working models. ISTM it works for distros because they're the central authority guaranteeing the provenance of the software in their repos. It's harder with PyPI because it's not a central authority curating the content. Perhaps something like a web of trust would be needed. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Jul 17 09:53:51 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 07:53:51 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <1373998704.38172.YahooMailNeo@web171404.mail.ir2.yahoo.com> <64062226-5923-47A2-82F3-8999C2C77CC6@stufft.io> <3006DC94-C386-4110-A640-8CF98AC27C95@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I don't think it was speed of parsing the file that was his concern, rather that > if it's a separate file that only exists when there are entry points, then you > don't have to open a file for every installed distribution. OK. In that case, exports.json would avoid the need to open pydist.json to look for exports - if exports.json is missing, it would mean that dist has no exports. Regards, Vinay Sajip From reinout at vanrees.org Wed Jul 17 09:58:53 2013 From: reinout at vanrees.org (Reinout van Rees) Date: Wed, 17 Jul 2013 09:58:53 +0200 Subject: [Distutils] distribute o.7.3 causing installation error? In-Reply-To: <51E57F9E.9020505@numenet.com> References: <51E57F9E.9020505@numenet.com> Message-ID: On 16-07-13 19:15, Liam Kirsher wrote: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No module > named setuptools My guess is that there's a left-over distribute somewhere. Probably an egg-link in some dist-packages or site-packages directory. I had a problem like that too. What I did in that case: - Search&destroy any distribute/setuptools anywhere. - Install setuptools from scratch instead of trying to upgrade it. wget https://bitbucket.org/pypa/setuptools/raw/0.8/ez_setup.py sudo /usr/bin/python ez_setup.py Yes, this sucks if you want to maintain a nice clean OS-managed machine. Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "If you're not sure what to do, make something.
-- Paul Graham" From holger at merlinux.eu Wed Jul 17 10:16:40 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 17 Jul 2013 08:16:40 +0000 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> Message-ID: <20130717081640.GR1668@merlinux.eu> On Wed, Jul 17, 2013 at 07:48 +0000, Vinay Sajip wrote: > holger krekel merlinux.eu> writes: > > > about existing schemes/efforts. I guess most Linux distros do it already > > so if nothing comes up here PyPI-specific (what is the status of TUF, btw?) > > i am going to look into the distro's working models. > > ISTM it works for distros because they're the central authority guaranteeing > the provenance of the software in their repos. It's harder with PyPI because > it's not a central authority curating the content. Perhaps something like a > web of trust would be needed. I am thinking about curating release files _after_ publishing and then configuring install activities to require "signed-off" release files. Basically giving companies and devops the possibility to organise their vetting processes and collaborate, without requiring PyPI to change first. This certainly involves the question of trust but if nothing else an entity can at least trust its own signatures :) best, holger -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From ncoghlan at gmail.com Wed Jul 17 10:50:16 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 18:50:16 +1000 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <20130717081640.GR1668@merlinux.eu> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> Message-ID: On 17 Jul 2013 18:17, "holger krekel" wrote: > > On Wed, Jul 17, 2013 at 07:48 +0000, Vinay Sajip wrote: > > holger krekel merlinux.eu> writes: > > > > > about existing schemes/efforts. I guess most Linux distros do it already > > > so if nothing comes up here PyPI-specific (what is the status of TUF, btw?) > > > i am going to look into the distro's working models. > > > > ISTM it works for distros because they're the central authority guaranteeing > > the provenance of the software in their repos. It's harder with PyPI because > > it's not a central authority curating the content. Perhaps something like a > > web of trust would be needed. > > I am thinking about curating release files _after_ publishing and > then configuring install activities to require "signed-off" release files. > Basically giving companies and devops the possibility to organise their > vetting processes and collaborate, without requiring PyPI to change first. > This certainly involves the question of trust but if nothing else an entity > can at least trust its own signatures :) Note that Linux distros don't trust each other's keys and nor do app stores trust other. Secure collaborative vetting of software is an Unsolved Problem. The Update Framework provides a solid technical basis for such collaboration, but even it doesn't solve the fundamental trust issues. Those issues are why we still rely on the CA model for SSL, despite its serious flaws: nobody has come up with anything else that scales effectively. 
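As a very rough sketch of the after-the-fact sign-off being proposed, an installer could refuse a release file unless some locally trusted key has published a detached signature for it; the file names below are hypothetical and the index/server side is not shown:

    # Minimal sketch, assuming a detached signature has been fetched
    # alongside the release file and that the vetting keys are already
    # in the local gpg keyring.
    import subprocess

    def is_vetted(release_file, signature_file):
        """True if gpg accepts signature_file as a good signature over
        release_file using the local keyring."""
        return subprocess.call(["gpg", "--verify", signature_file, release_file]) == 0

    if not is_vetted("example-1.0.tar.gz", "example-1.0.tar.gz.asc"):
        raise SystemExit("no acceptable sign-off for this release file; refusing to install")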
The use of JSON for metadata 2.0 is enough to make it TUF friendly, but there are significant key management issues to be addressed before TUF could be used on PyPI itself. That's no reason to avoid experimenting with private TUF enabled PyPI servers, though - a private server alleviates most of the ugly key management problems. Cheers, Nick. > > best, > holger > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > > iQEcBAEBAgAGBQJR5lLnAAoJEI47A6J5t3LWUkYIAJ1qqyAc185R7NrXqJyEpNo6 > erDSfCMROcMqxtkqLCeoaiKSSBnhyJJpcLJ9a5P2/z8hBsYVTKM54NdOpvJEcgb/ > s/sepYI3vTIXFtUyRTxXPmhUZoxgh+GdvatCWw+7EA8pcAPs3YvrdKPYqHOm3xup > Z1KWAUrPWhVxoUY8laUBaHkHxX3WJ88Hj0buJfzsKEbQvytT8sRO9Nq03VE5EsjL > 85boVh4UIA0KUMtEgzxgRGDjD9Cc47ukFrmN/ViYKdmV6gmIBV1h30dcRXhvof5W > QSuuROqXjQ466Vm5aaE7rfLzIAOtxOvjBuZLygr2bMbZYY8WtHDJD7e0VYFJPCw= > =vZ9n > -----END PGP SIGNATURE----- > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Jul 17 12:00:02 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 11:00:02 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <51E584AA.5060308@ubuntu.com> Message-ID: On 17 July 2013 00:56, Donald Stufft wrote: > > On Jul 16, 2013, at 1:36 PM, Matthias Klose wrote: > > 5. Support cross-compilation of extensions by default. > > TBH I don't know how much of this has anything to do with pip? As far as > compiling goes all pip does is call setup.py install so people are compiling > with either setuptools or distutils. I'm not involved in the current packaging work so someone else can correct me if I'm wrong but: What I'm really looking forward to as a result of all of this work on wheels and packaging metadata is that there will no longer be any need for people to use either setuptools or distutils to compile anything. Once this work is ready it will be possible for developers to build wheels using whatever tools they like and then upload them to pypi. pip will be able to select the appropriate wheel for the end user's OS, CPU, Python version etc. and install the binary wheel without needing any compiler support on the target system. What this means is that it will be possible to do all the compilation on developers machines so that the tools to do so don't need to be in the stdlib any more. Then someone will be able to release the "fancycompiler" package on pypi that will support features like cross-compilation, other developers can use it, and the end user doesn't need to care how the wheels were made. Oscar From p.f.moore at gmail.com Wed Jul 17 12:17:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 11:17:38 +0100 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <51E584AA.5060308@ubuntu.com> Message-ID: On 17 July 2013 00:56, Donald Stufft wrote: > On Jul 16, 2013, at 1:36 PM, Matthias Klose wrote: > > 5. Support cross-compilation of extensions by default. > > > TBH I don't know how much of this has anything to do with pip? As far as > compiling goes all pip does is call setup.py install so people are > compiling with either setuptools or distutils. > Nothing at all. Cross-compilation is not handled by pip, nor is it relevant to the discussions about bundling pip with Python. 
The much longer-term goal of the packaging discussions around decoupling "builders" and "installers" might allow for the development of better 3rd party build tools with cross-compilation support (things in the space that tools like bento occupy) but it will never be something that pip needs to concern itself with. I doubt that the standard library (i.e. distutils or any successor) will be involved in this either - it's specialised enough that 3rd party tools are the correct way to handle it. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Jul 17 13:01:12 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 12:01:12 +0100 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 16 July 2013 14:40, Nick Coghlan wrote: > > The latest version of PEP 426 is up at http://www.python.org/dev/peps/pep-0426/ Just looking at the "Build requires" section I found myself wondering: is there any way to say that e.g. a C compiler is required for building, or a Fortran compiler or any other piece of software that isn't a "Python distribution"? The example shows Cython which is commonly built and used with MinGW on Windows. I guess that it would be possible to create a pypi distribution that would install MinGW and set it up as part of a Python installation so that a project such as Cython could depend on it with e.g.: "name": "Cython", "build_requires": [ { "requires": ["pymingw"], "environment": "sys.platform == 'win32'" } ] "run_requires": [ { "requires": ["pymingw"], "environment": "sys.platform == 'win32'" } ] But it would be unfortunate to depend on MinGW in the event that the user actually has the appropriate MSVC version. Or perhaps there could be a meta-distribution called "CCompiler" that installs MinGW only if the the appropriate MSVC version is not available. Or could there be an environment marker to indicate the presence of particularly common requirements such as having a C compiler? Oscar From p.f.moore at gmail.com Wed Jul 17 13:10:42 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 12:10:42 +0100 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 12:01, Oscar Benjamin wrote: > On 16 July 2013 14:40, Nick Coghlan wrote: > > > > The latest version of PEP 426 is up at > http://www.python.org/dev/peps/pep-0426/ > > Just looking at the "Build requires" section I found myself wondering: > is there any way to say that e.g. a C compiler is required for > building, or a Fortran compiler or any other piece of software that > isn't a "Python distribution"? > [...] > Or perhaps there could be a meta-distribution called "CCompiler" that > installs MinGW only if the the appropriate MSVC version is not > available. Or could there be an environment marker to indicate the > presence of particularly common requirements such as having a C > compiler? I can't imagine it's practical to auto-install a C compiler - or even to check for one before building. But I can see it being useful for introspection purposes to know about this type of requirement. (A C compiler could be necessary, or optional for speedups, a particular external library could be needed, etc) The data would likely only be as good as what project developers provide, but nevertheless having standard places to record the data could encourage doing so... 
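As a rough illustration of how an installer might act on conditional entries like the "pymingw" example above, the environment field can be checked against the running interpreter before the requirement is considered at all. This is a deliberately simplified check, not the real environment marker grammar from the PEP:

    # Simplified sketch: filter conditional build requirements by a very
    # restricted environment marker.  Only "sys.platform == '...'" is
    # handled; "pymingw" is the hypothetical package from the example.
    import sys

    build_requires = [
        {"requires": ["pymingw"], "environment": "sys.platform == 'win32'"},
        {"requires": ["setuptools"]},
    ]

    def applies(entry):
        marker = entry.get("environment")
        if marker is None:
            return True
        lhs, _, rhs = marker.partition("==")
        if lhs.strip() == "sys.platform":
            return sys.platform == rhs.strip().strip("'\"")
        return True  # unknown marker: assume it applies

    needed = [req for entry in build_requires if applies(entry)
              for req in entry["requires"]]
    print(needed)  # ['setuptools'] everywhere except Windows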
OTOH, maybe this is metadata 3.0 stuff - I feel like at the moment we need to get what we have now out of the door rather than continually adding extra capabilities. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Jul 17 13:45:37 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 12:45:37 +0100 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 12:10, Paul Moore wrote: > > I can't imagine it's practical to auto-install a C compiler Why not? > - or even to check for one before building. > > But I can see it being useful for > introspection purposes to know about this type of requirement. (A C compiler > could be necessary, or optional for speedups, a particular external library > could be needed, etc) Perhaps instead the installer tool would give you a way to clarify that you do have a C compiler and to warn if not. Alternatively a meta-package could be used to indicate (when installed) that a compatible C-compiler is available and then other distributions could depend on it for building. > The data would likely only be as good as what project developers provide, > but nevertheless having standard places to record the data could encourage > doing so... > > OTOH, maybe this is metadata 3.0 stuff - I feel like at the moment we need > to get what we have now out of the door rather than continually adding extra > capabilities. I wasn't proposing to hold anything up or add new capabilities. I'm just trying to see how far these changes go towards making non-pure Python software automatically installable. Everything I would want to build "build requires" software that is not on pypi. It would be great if e.g. the instructions for installing Cython on Windows could just be "pip install cython" instead of this: http://wiki.cython.org/InstallingOnWindows Oscar From ncoghlan at gmail.com Wed Jul 17 13:56:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 21:56:18 +1000 Subject: [Distutils] Expectations on how pip needs to change for Python 3.4 In-Reply-To: References: <51E584AA.5060308@ubuntu.com> Message-ID: On 17 July 2013 20:00, Oscar Benjamin wrote: > On 17 July 2013 00:56, Donald Stufft wrote: >> >> On Jul 16, 2013, at 1:36 PM, Matthias Klose wrote: >> >> 5. Support cross-compilation of extensions by default. >> >> TBH I don't know how much of this has anything to do with pip? As far as >> compiling goes all pip does is call setup.py install so people are compiling >> with either setuptools or distutils. > > I'm not involved in the current packaging work so someone else can > correct me if I'm wrong but: > > What I'm really looking forward to as a result of all of this work on > wheels and packaging metadata is that there will no longer be any need > for people to use either setuptools or distutils to compile anything. > Once this work is ready it will be possible for developers to build > wheels using whatever tools they like and then upload them to pypi. > pip will be able to select the appropriate wheel for the end user's > OS, CPU, Python version etc. and install the binary wheel without > needing any compiler support on the target system. > > What this means is that it will be possible to do all the compilation > on developers machines so that the tools to do so don't need to be in > the stdlib any more. 
Then someone will be able to release the > "fancycompiler" package on pypi that will support features like > cross-compilation, other developers can use it, and the end user > doesn't need to care how the wheels were made. Yep, one of the key goals of current package efforts is to decouple builders from installers so that the developer's choice of build system has no impact on the end user's choice of installer. However, as a concession to practical reality, we're continuing with setuptools as the "official" build system for now, as devising a more tool neutral metabuild system to replace it will be a significant challenge in its own right, and we can only sensibly work on so many things at once. Cross compilation is one of the things the metabuild system would need to take into account, which is one of the reasons it has been deferred (see http://www.python.org/dev/peps/pep-0426/#metabuild-system - that draft design *doesn't* handle cross compilation at all) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Jul 17 14:17:24 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 22:17:24 +1000 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 21:45, Oscar Benjamin wrote: > On 17 July 2013 12:10, Paul Moore wrote: >> >> I can't imagine it's practical to auto-install a C compiler > > Why not? > >> - or even to check for one before building. >> >> But I can see it being useful for >> introspection purposes to know about this type of requirement. (A C compiler >> could be necessary, or optional for speedups, a particular external library >> could be needed, etc) > > Perhaps instead the installer tool would give you a way to clarify > that you do have a C compiler and to warn if not. > > Alternatively a meta-package could be used to indicate (when > installed) that a compatible C-compiler is available and then other > distributions could depend on it for building. > >> The data would likely only be as good as what project developers provide, >> but nevertheless having standard places to record the data could encourage >> doing so... >> >> OTOH, maybe this is metadata 3.0 stuff - I feel like at the moment we need >> to get what we have now out of the door rather than continually adding extra >> capabilities. > > I wasn't proposing to hold anything up or add new capabilities. I'm > just trying to see how far these changes go towards making non-pure > Python software automatically installable. Everything I would want to > build "build requires" software that is not on pypi. > > It would be great if e.g. the instructions for installing Cython on > Windows could just be "pip install cython" instead of this: > http://wiki.cython.org/InstallingOnWindows External dependencies are platform specific, so the task of defining them has been deliberately left to metadata extensions. Depending on a Windows exe or MSI, an OS X app bundle, a Fedora/RHEL/EPEL/Suse RPM or a Debian/Ubuntu DEB are all very different things not easily accommodated in a cross platform standard. That said, the new metadata standard does deliberately include a few pieces intended to make such things easier to define: 1. The extensions concept - using a structured data format like JSON makes it much easier for platform specific tools (or even pip itself) to say "declare this metadata, and we will run these commands automatically" 2. 
The "direct reference" format as a tool for linking a name with a specific URL 3. The "install hooks" system for running arbitrary additional commands post-install (if the installation tool is instructed to allow that) 4. A more robust mechanism for defining supported platforms 5. The extras system extended across all the different kinds of dependency (so if you don't want to build optional C accelerators, you can express that clearly and skip all the associated dependencies) Essentially, the standard tries to provide a better toolkit for solving these kinds of problems, without trying to solve them directly. It's still going to be up to other projects like zc.buildout and conda to provide a way to package that kind of thing up at least somewhat neatly. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From holger at merlinux.eu Wed Jul 17 14:18:51 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 17 Jul 2013 12:18:51 +0000 Subject: [Distutils] devpi-0.9.4: bug fixes for upload/install activities In-Reply-To: <20130716090810.GK3125@merlinux.eu> References: <20130716090810.GK3125@merlinux.eu> Message-ID: <20130717121851.GU1668@merlinux.eu> I just released 0.9.4 packages of the devpi system, providing a self-updating pypi caching and index server and an optional ``devpi`` command line tool to help with common upload/test/release activities. For general docs see: http://doc.devpi.net For the 0.9.4 bug fixes see below. best, holger krekel CHANGES 0.9.4 --------------------------- server: - fix issue where lookups into subpages of the simple index (simple/NAME/VER) would not trigger a 404 as they should. client: - fix uploading by adding setup.py's dir to sys.path: setup.py files that import modules/packages for obtaining versions etc. now work. Thanks jbasko. - fix automatic devpi-server startup on python26/windows -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From benzolius at yahoo.com Wed Jul 17 11:43:04 2013 From: benzolius at yahoo.com (Benedek Zoltan) Date: Wed, 17 Jul 2013 02:43:04 -0700 (PDT) Subject: [Distutils] buildout doesn't work as expected, please help me Message-ID: <1374054184.35101.YahooMailNeo@web121603.mail.ne1.yahoo.com> Hi, I suspect the problem is with the changing infrastructure of setuptools-distribute. I'm happy to hear of this merge, thank you for all of you, who contributed to these tools. https://python-packaging-user-guide.readthedocs.org/en/latest/current.html I tried to clean everyting buildout made: rm -rf bin/ eggs/ parts/ rm bootstrap.py.installed.cfg wget http://downloads.buildout.org/1/bootstrap.py python bootstrap.py and after several tries finally worked. Could be helpful too: http://reinout.vanrees.org/weblog/2013/07/08/new-setuptools-buildout.html http://www.coresoftwaregroup.com/blog/distribute-setuptools-and-a-failing-bootstrap Thanks Zoli -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Jul 17 14:40:02 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 13:40:02 +0100 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 13:17, Nick Coghlan wrote: > That said, the new metadata standard does deliberately include a few > pieces intended to make such things easier to define: > > 1. 
The extensions concept - using a structured data format like JSON > makes it much easier for platform specific tools (or even pip itself) > to say "declare this metadata, and we will run these commands > automatically" Okay, so this is where you can put the "I need a [specific] C-compiler" information. Then a pip alternative (or a future pip) that knew more about C compilation could respond appropriately. The PEP doesn't explicitly say anything about how a tool should handle unrecognised metadata extensions; it seems fairly obvious to me that they are supposed to be ignored but perhaps this should be explicitly stated. On the other hand it would be useful to be able to say: if you don't understand my "fortran" metadata extension then you don't know how to install/build this distribution. Is there a way e.g. to indicate a build/install dependency on the tool understanding some section in the extension metadata, or that an extension is compulsory somehow? Then a user could do: $ pip install autocont Error installing "autocont": required extension "fortran" not understood. See http://pypa.org/list_of_known_extensions.htm for more information. Oscar From ncoghlan at gmail.com Wed Jul 17 15:05:44 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 23:05:44 +1000 Subject: [Distutils] buildout doesn't work as expected, please help me In-Reply-To: <1374054184.35101.YahooMailNeo@web121603.mail.ne1.yahoo.com> References: <1374054184.35101.YahooMailNeo@web121603.mail.ne1.yahoo.com> Message-ID: On 17 July 2013 19:43, Benedek Zoltan wrote: > Hi, > > I suspect the problem is with the changing infrastructure of > setuptools-distribute. > I'm happy to hear of this merge, thank you for all of you, who contributed > to these tools. > > https://python-packaging-user-guide.readthedocs.org/en/latest/current.html > > > I tried to clean everyting buildout made: > > rm -rf bin/ eggs/ parts/ > rm bootstrap.py .installed.cfg > > wget http://downloads.buildout.org/1/bootstrap.py > > python bootstrap.py > > and after several tries finally worked. Thanks for the update! Glad you were able to get it working, and sorry for the last few remnants of the setuptools/distribute fork process causing you trouble :( Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From theller at ctypes.org Wed Jul 17 15:13:16 2013 From: theller at ctypes.org (Thomas Heller) Date: Wed, 17 Jul 2013 15:13:16 +0200 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: Am 15.07.2013 19:26, schrieb Donald Stufft: > Maybe this is a crazy idea, but would a windows only extension work? > .pye(executable) Then just associate .pye with the launcher. Python > won't see .pye as importable so there's no shadow issues. pip.bat? Thomas From ncoghlan at gmail.com Wed Jul 17 15:09:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 17 Jul 2013 23:09:58 +1000 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 22:40, Oscar Benjamin wrote: > On 17 July 2013 13:17, Nick Coghlan wrote: >> That said, the new metadata standard does deliberately include a few >> pieces intended to make such things easier to define: >> >> 1. 
The extensions concept - using a structured data format like JSON >> makes it much easier for platform specific tools (or even pip itself) >> to say "declare this metadata, and we will run these commands >> automatically" > > Okay, so this is where you can put the "I need a [specific] > C-compiler" information. Then a pip alternative (or a future pip) that > knew more about C compilation could respond appropriately. > > The PEP doesn't explicitly say anything about how a tool should handle > unrecognised metadata extensions; it seems fairly obvious to me that > they are supposed to be ignored but perhaps this should be explicitly > stated. > > On the other hand it would be useful to be able to say: if you don't > understand my "fortran" metadata extension then you don't know how to > install/build this distribution. Is there a way e.g. to indicate a > build/install dependency on the tool understanding some section in the > extension metadata, or that an extension is compulsory somehow? Yes, you can include a build_requires on something that understands the extension and make setup.py a shim that invokes that build system rather than setuptools/distutils (similar to the way d2to1 allows setup.cfg based projects work with tools that are expecting a setup.py file) It's not as elegant as a proper metabuild system (since there's a fair bit of legacy cruft in the setup.py CLI), but it works :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Wed Jul 17 15:17:37 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 14:17:37 +0100 Subject: [Distutils] PEP 426 updated based on last round of discussion In-Reply-To: References: Message-ID: On 17 July 2013 12:45, Oscar Benjamin wrote: > > OTOH, maybe this is metadata 3.0 stuff - I feel like at the moment we > need > > to get what we have now out of the door rather than continually adding > extra > > capabilities. > > I wasn't proposing to hold anything up or add new capabilities. I'm > just trying to see how far these changes go towards making non-pure > Python software automatically installable. Everything I would want to > build "build requires" software that is not on pypi. > Sorry - my comment was more directed at myself suggesting extra capabilities in the metadata, than at you for asking the question. The more questions we ask about "how would this work in practice, for such-and-such use case" the better... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jul 17 15:33:15 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 14:33:15 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 17 July 2013 14:13, Thomas Heller wrote: > Am 15.07.2013 19:26, schrieb Donald Stufft: > > Maybe this is a crazy idea, but would a windows only extension work? >> .pye(executable) Then just associate .pye with the launcher. Python >> won't see .pye as importable so there's no shadow issues. >> > > pip.bat? > That's my cue to cry :-) If you missed my earlier comments about bat files, then no - bat files have a significant number of failings that make them unsuitable for this sort of thing. The simplest example I can give is that bat files don't nest. 
So if the "pip" command is implemented as a bat file, and you have a script to build your virtualenv that looks like this: @echo off virtualenv foo foo\scripts\activate pip install wibble echo Complete! then the final message would never be executed as the pip bat file would not return to its caller. (Actually, the activate command is also a bat file, so pip would never be executed either, but you get my point). That's the worst sort of silent failure, and I have spent significant time in the past debugging scripts that fell foul of this behaviour. To fix this you have to use "call pip install wibble". And once you admit the possibility that certain commands could be implemented as bat files you have to either check everything, or use "call" everywhere, even when not necessary (e.g. "call python") just to be safe. If you don't think this problem is sufficient, I can offer more :-( I'm afraid exe files as wrappers are probably the only viable option. The basic reason is that the OS recognises exe files in all context, with no special configuration needed. This is not true of any other file type. So anything else will have corner cases that will give unexpected results. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Wed Jul 17 15:49:38 2013 From: theller at ctypes.org (Thomas Heller) Date: Wed, 17 Jul 2013 15:49:38 +0200 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: Am 17.07.2013 15:33, schrieb Paul Moore: > On 17 July 2013 14:13, Thomas Heller > wrote: > > Am 15.07.2013 19:26, schrieb Donald Stufft: > > Maybe this is a crazy idea, but would a windows only extension work? > .pye(executable) Then just associate .pye with the launcher. Python > won't see .pye as importable so there's no shadow issues. > > > pip.bat? > > > That's my cue to cry :-) > > If you missed my earlier comments about bat files, then no - bat files > have a significant number of failings that make them unsuitable for this > sort of thing. Yes, missed them. Sorry that I did not read enough of the discussion, I just stumbled over some messages in this thread and .bat popped up in my head. I still use them for quite some things although I have been bitten by their problems myself often enough. > I'm afraid exe files as wrappers are probably the only viable option. > The basic reason is that the OS recognises exe files in all context, > with no special configuration needed. This is not true of any other file > type. So anything else will have corner cases that will give unexpected > results. Thomas From p.f.moore at gmail.com Wed Jul 17 15:50:50 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 14:50:50 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On 17 July 2013 14:49, Thomas Heller wrote: > Yes, missed them. Sorry that I did not read enough of the discussion, > I just stumbled over some messages in this thread and .bat popped up in > my head. > No problem. It's probably useful to have a summary of the issues with bat files (more accurately, the reasons we need exe wrappers) written down here in any case. Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Wed Jul 17 17:12:01 2013 From: brett at python.org (Brett Cannon) Date: Wed, 17 Jul 2013 11:12:01 -0400 Subject: [Distutils] Q about best practices now (or near future) Message-ID: I'm going to be pushing an update to one of my projects to PyPI this week and so I figured I could use this opportunity to help with patches to the User Guide's packaging tutorial. But to do that I wanted to ask what the current best practices are. * Are we even close to suggesting wheels for source distributions? * Are we promoting (weakly, strongly?) the signing of distributions yet? * Are we saying "use setuptools" for everyone, or still only if you need it (asking since there is a stub about installing setuptools but the simple example doesn't have a direct need for it ATM, but could use find_packages() and such)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Jul 17 17:46:05 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 17 Jul 2013 11:46:05 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: > I'm going to be pushing an update to one of my projects to PyPI this week > and so I figured I could use this opportunity to help with patches to the > User Guide's packaging tutorial. > > But to do that I wanted to ask what the current best practices are. > > * Are we even close to suggesting wheels for source distributions? No, wheels don't replace source distributions at all. They just let you install something without having to have whatever built the wheel from its sdist. It is currently nice to have them available. I'd like to see an ambitious person begin uploading wheels that have no traditional sdist. > * Are we promoting (weakly, strongly?) the signing of distributions yet? No change. > * Are we saying "use setuptools" for everyone, or still only if you need it > (asking since there is a stub about installing setuptools but the simple > example doesn't have a direct need for it ATM, but could use find_packages() > and such)? Setuptools is the preferred distutils-derived system. distutils should no longer be considered morally superior. The MEBS idea, or a simple setup.py emulator and a contract with the installer on which commands it will actually call, will eventually let you do a proper job of choosing build systems. From ronaldoussoren at mac.com Wed Jul 17 17:49:45 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 17 Jul 2013 17:49:45 +0200 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: <16EB0FEF-6EBA-476C-A451-59FC3C1D0B5F@mac.com> On 17 Jul, 2013, at 17:46, Daniel Holth wrote: > On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: >> I'm going to be pushing an update to one of my projects to PyPI this week >> and so I figured I could use this opportunity to help with patches to the >> User Guide's packaging tutorial. >> >> But to do that I wanted to ask what the current best practices are. >> >> * Are we even close to suggesting wheels for source distributions? > > No, wheels don't replace source distributions at all. They just let > you install something without having to have whatever built the wheel > from its sdist. It is currently nice to have them available. > > I'd like to see an ambitious person begin uploading wheels that have > no traditional sdist. Do you mean an sdist without a setup.py? 
That will likely take some time, for the time being projects will still need a setup.py that just prints information on how to build them (or bootstraps the actual wheel building tool). Ronald From p.f.moore at gmail.com Wed Jul 17 17:55:45 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 16:55:45 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 July 2013 16:46, Daniel Holth wrote: > > * Are we saying "use setuptools" for everyone, or still only if you need > it > > (asking since there is a stub about installing setuptools but the simple > > example doesn't have a direct need for it ATM, but could use > find_packages() > > and such)? > > Setuptools is the preferred distutils-derived system. distutils should > no longer be considered morally superior. > Personally, I still reserve judgement on setuptools. But that's mainly if you actually use its features (you should carefully consider and understand the implications if you use its script wrapper functionality, for example). I see no reason to knee-jerk use it if you don't use any of its functionality, though. I may be in a minority on that, though :-) > The MEBS idea, or a simple setup.py emulator and a contract with the > installer on which commands it will actually call, will eventually let > you do a proper job of choosing build systems. By the way, what *does* MEBS mean? I've seen a few people use the term, but never found an explanation... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Jul 17 17:59:43 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 17 Jul 2013 11:59:43 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Wed, Jul 17, 2013 at 11:55 AM, Paul Moore wrote: > > On 17 July 2013 16:46, Daniel Holth wrote: >> >> > * Are we saying "use setuptools" for everyone, or still only if you need >> > it >> > (asking since there is a stub about installing setuptools but the simple >> > example doesn't have a direct need for it ATM, but could use >> > find_packages() >> > and such)? >> >> Setuptools is the preferred distutils-derived system. distutils should >> no longer be considered morally superior. > > > Personally, I still reserve judgement on setuptools. But that's mainly if > you actually use its features (you should carefully consider and understand > the implications if you use its script wrapper functionality, for example). > > I see no reason to knee-jerk use it if you don't use any of its > functionality, though. I may be in a minority on that, though :-) One code path. Plus all your pip-using users are using it anyway. Many have seemed to not realize that "having dependencies" is one of "its features". >> >> The MEBS idea, or a simple setup.py emulator and a contract with the >> installer on which commands it will actually call, will eventually let >> you do a proper job of choosing build systems. > > > By the way, what *does* MEBS mean? I've seen a few people use the term, but > never found an explanation... It stands for the "Meta Build System (not an actual project)" which I proposed last September. A suitably nuts person could just layout their project like a wheel, edit the .dist-info by hand, zip and publish that. 
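For the suitably nuts, the mechanics look roughly like this. Everything below is made up for illustration (a hypothetical pure-Python "mypkg 1.0" with minimal METADATA contents) - it is just a sketch of PEP 427's layout, RECORD hashing and zip step, not a recommended tool:

    # Hand-rolling a minimal pure-Python wheel: lay out the files, write the
    # dist-info by hand, hash everything into RECORD, then zip it up.
    import base64, hashlib, zipfile

    name, version = "mypkg", "1.0"                      # hypothetical project
    dist_info = "%s-%s.dist-info" % (name, version)

    files = {
        "mypkg/__init__.py": b"__version__ = '1.0'\n",
        dist_info + "/METADATA": b"Metadata-Version: 2.0\nName: mypkg\nVersion: 1.0\n",
        dist_info + "/WHEEL": (b"Wheel-Version: 1.0\nGenerator: by-hand\n"
                               b"Root-Is-Purelib: true\nTag: py2-none-any\nTag: py3-none-any\n"),
    }

    def record_hash(data):
        # PEP 427: urlsafe base64 of the sha256 digest, trailing '=' stripped
        digest = hashlib.sha256(data).digest()
        return "sha256=" + base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")

    record_lines = ["%s,%s,%d" % (path, record_hash(data), len(data))
                    for path, data in sorted(files.items())]
    record_lines.append(dist_info + "/RECORD,,")        # RECORD itself carries no hash
    record_data = ("\n".join(record_lines) + "\n").encode("utf-8")

    wheel_name = "%s-%s-py2.py3-none-any.whl" % (name, version)
    with zipfile.ZipFile(wheel_name, "w", zipfile.ZIP_DEFLATED) as whl:
        for path, data in sorted(files.items()):
            whl.writestr(path, data)
        whl.writestr(dist_info + "/RECORD", record_data)

The result should be installable by any wheel-aware installer; whether anyone should actually publish wheels built by hand like this is a separate question.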
From ronaldoussoren at mac.com Wed Jul 17 18:01:42 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 17 Jul 2013 18:01:42 +0200 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 Jul, 2013, at 17:55, Paul Moore wrote: > > On 17 July 2013 16:46, Daniel Holth wrote: > > * Are we saying "use setuptools" for everyone, or still only if you need it > > (asking since there is a stub about installing setuptools but the simple > > example doesn't have a direct need for it ATM, but could use find_packages() > > and such)? > > Setuptools is the preferred distutils-derived system. distutils should > no longer be considered morally superior. > > Personally, I still reserve judgement on setuptools. But that's mainly if you actually use its features (you should carefully consider and understand the implications if you use its script wrapper functionality, for example). > > I see no reason to knee-jerk use it if you don't use any of its functionality, though. I may be in a minority on that, though :-) I agree, and if metadata 2.0 and bdist_wheel support were added to distutils there'd be even less reason to use setuptools. I primarily use setuptools for its dependency system on installation, and that's nicely covered by using metadata 2.0, wheels and pip. > > The MEBS idea, or a simple setup.py emulator and a contract with the > installer on which commands it will actually call, will eventually let > you do a proper job of choosing build systems. > > By the way, what *does* MEBS mean? I've seen a few people use the term, but never found an explanation... MEta Build System. Ronald > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From brett at python.org Wed Jul 17 18:39:41 2013 From: brett at python.org (Brett Cannon) Date: Wed, 17 Jul 2013 12:39:41 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth wrote: > On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: > > I'm going to be pushing an update to one of my projects to PyPI this week > > and so I figured I could use this opportunity to help with patches to the > > User Guide's packaging tutorial. > > > > But to do that I wanted to ask what the current best practices are. > > > > * Are we even close to suggesting wheels for source distributions? > > No, wheels don't replace source distributions at all. They just let > you install something without having to have whatever built the wheel > from its sdist. It is currently nice to have them available. > Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip what is it for, just binary files alone? -Brett > > I'd like to see an ambitious person begin uploading wheels that have > no traditional sdist. > > > * Are we promoting (weakly, strongly?) the signing of distributions yet? > > No change. 
> > > * Are we saying "use setuptools" for everyone, or still only if you need > it > > (asking since there is a stub about installing setuptools but the simple > > example doesn't have a direct need for it ATM, but could use > find_packages() > > and such)? > > Setuptools is the preferred distutils-derived system. distutils should > no longer be considered morally superior. > > The MEBS idea, or a simple setup.py emulator and a contract with the > installer on which commands it will actually call, will eventually let > you do a proper job of choosing build systems. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 17 18:45:27 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 12:45:27 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Jul 17, 2013, at 12:39 PM, Brett Cannon wrote: > > > > On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth wrote: > On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: > > I'm going to be pushing an update to one of my projects to PyPI this week > > and so I figured I could use this opportunity to help with patches to the > > User Guide's packaging tutorial. > > > > But to do that I wanted to ask what the current best practices are. > > > > * Are we even close to suggesting wheels for source distributions? > > No, wheels don't replace source distributions at all. They just let > you install something without having to have whatever built the wheel > from its sdist. It is currently nice to have them available. > > Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip what is it for, just binary files alone? > > -Brett You *can* publish only Wheels, especially i your package is pure python. However it's a "built" package. You should still publish the sdist (and sdist 2.0 when that happens) because a Wheel is (essentially) derived from a sdist. It is easier for the tooling to install and in general you'll want to use them, but not everything supports Wheel and some people will want to build their own wheels. Think of Wheel as a debian package and the sdist as the source package. Ideally the majority of the time people will be installing from the Wheel but the sdist is still there for those who don't. > > > > I'd like to see an ambitious person begin uploading wheels that have > no traditional sdist. > > > * Are we promoting (weakly, strongly?) the signing of distributions yet? > > No change. > > > * Are we saying "use setuptools" for everyone, or still only if you need it > > (asking since there is a stub about installing setuptools but the simple > > example doesn't have a direct need for it ATM, but could use find_packages() > > and such)? > > Setuptools is the preferred distutils-derived system. distutils should > no longer be considered morally superior. > > The MEBS idea, or a simple setup.py emulator and a contract with the > installer on which commands it will actually call, will eventually let > you do a proper job of choosing build systems. 
> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Wed Jul 17 18:59:16 2013 From: brett at python.org (Brett Cannon) Date: Wed, 17 Jul 2013 12:59:16 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Wed, Jul 17, 2013 at 12:45 PM, Donald Stufft wrote: > > On Jul 17, 2013, at 12:39 PM, Brett Cannon wrote: > > > > > On Wed, Jul 17, 2013 at 11:46 AM, Daniel Holth wrote: > >> On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: >> > I'm going to be pushing an update to one of my projects to PyPI this >> week >> > and so I figured I could use this opportunity to help with patches to >> the >> > User Guide's packaging tutorial. >> > >> > But to do that I wanted to ask what the current best practices are. >> > >> > * Are we even close to suggesting wheels for source distributions? >> >> No, wheels don't replace source distributions at all. They just let >> you install something without having to have whatever built the wheel >> from its sdist. It is currently nice to have them available. >> > > Then I'm thoroughly confused since the Wheel PEP says in its rationale > that "Python needs a package format that is easier to install than sdist". > That would suggest a wheel would work for a source distribution and replace > sdist zip/tar files. If wheels aren't going to replace what sdist spits out > as the installation file format of choice for pip what is it for, just > binary files alone? > > -Brett > > > You *can* publish only Wheels, especially i your package is pure python. > However it's a "built" package. You should still publish the sdist (and > sdist 2.0 when that happens) because a Wheel is (essentially) derived from > a sdist. > > It is easier for the tooling to install and in general you'll want to use > them, but not everything supports Wheel and some people will want to build > their own wheels. Think of Wheel as a debian package and the sdist as the > source package. Ideally the majority of the time people will be installing > from the Wheel but the sdist is still there for those who don't. > > OK, that makes sense and what I understood wheels to be.Thanks for the clarification! Daniel's wording made me think suddenly that wheel files were only for distributions that had an extension or something. But it also sounds like that project providing wheel distributions is too early to include in the User's Guide. -Brett > > > > >> >> I'd like to see an ambitious person begin uploading wheels that have >> no traditional sdist. >> >> > * Are we promoting (weakly, strongly?) the signing of distributions yet? >> >> No change. >> >> > * Are we saying "use setuptools" for everyone, or still only if you >> need it >> > (asking since there is a stub about installing setuptools but the simple >> > example doesn't have a direct need for it ATM, but could use >> find_packages() >> > and such)? >> >> Setuptools is the preferred distutils-derived system. distutils should >> no longer be considered morally superior. 
>> >> The MEBS idea, or a simple setup.py emulator and a contract with the >> installer on which commands it will actually call, will eventually let >> you do a proper job of choosing build systems. >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Jul 17 19:06:27 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 17:06:27 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: Brett Cannon python.org> writes: > Then I'm thoroughly confused since the Wheel PEP says in its rationale that "Python needs a package format that is easier to install than sdist". That would suggest a wheel would work for a source distribution and replace sdist zip/tar files. If wheels aren't going to replace what sdist spits out as the installation file format of choice for pip what is it for, just binary files alone? Another way to look at it: The wheel contains all the code needed to use a distribution at run or build time - Python code, .so files, header files, data files, scripts. "Just stuff - no fluff" :-) The sdist generally contains all the files in the wheel, plus those needed to build the wheel (e.g. .pyx, .f, .c), + docs, tests, test data etc. but not the built files. This isn't hard and fast, though - an sdist could e.g. choose to include a .c file created from a .pyx, so that the user doesn't need to have Cython installed, but just a C compiler. Of course some people bundle their test code in a tests subpackage which would then end up in the wheel, but hopefully I've given the gist of the distinction. Regards, Vinay Sajip From p.f.moore at gmail.com Wed Jul 17 19:28:55 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 18:28:55 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 July 2013 17:59, Brett Cannon wrote: > It is easier for the tooling to install and in general you'll want to use >> them, but not everything supports Wheel and some people will want to build >> their own wheels. Think of Wheel as a debian package and the sdist as the >> source package. Ideally the majority of the time people will be installing >> from the Wheel but the sdist is still there for those who don't. >> >> > OK, that makes sense and what I understood wheels to be.Thanks for the > clarification! Daniel's wording made me think suddenly that wheel files > were only for distributions that had an extension or something. > I think that's the best way for people to think of sdist/wheel - it's precisely equivalent to srpm/rpm (or the debian equivalent as Donald points out) in the Unix world. And ultimately, the expectation is that people install from wheels even for pure-python projects that could just as easily be installed from source, for precisely the same reasons as people use rpms rather than srpms. Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tk47 at students.poly.edu Wed Jul 17 19:17:48 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Thu, 18 Jul 2013 01:17:48 +0800 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> Message-ID: <51E6D1BC.8010305@students.poly.edu> On 07/17/2013 04:50 PM, Nick Coghlan wrote: > > On 17 Jul 2013 18:17, "holger krekel" > wrote: > > > > On Wed, Jul 17, 2013 at 07:48 +0000, Vinay Sajip wrote: > > > holger krekel merlinux.eu > writes: > > > > > > > about existing schemes/efforts. I guess most Linux distros do > it already > > > > so if nothing comes up here PyPI-specific (what is the status of > TUF, btw?) > > > > i am going to look into the distro's working models. > > > > > > ISTM it works for distros because they're the central authority > guaranteeing > > > the provenance of the software in their repos. It's harder with > PyPI because > > > it's not a central authority curating the content. Perhaps > something like a > > > web of trust would be needed. > > > > I am thinking about curating release files _after_ publishing and > > then configuring install activities to require "signed-off" release > files. > > Basically giving companies and devops the possibility to organise their > > vetting processes and collaborate, without requiring PyPI to change > first. > > This certainly involves the question of trust but if nothing else an > entity > > can at least trust its own signatures :) > > Note that Linux distros don't trust each other's keys and nor do app > stores trust other. Secure collaborative vetting of software is an > Unsolved Problem. The Update Framework provides a solid technical > basis for such collaboration, but even it doesn't solve the > fundamental trust issues. Those issues are why we still rely on the CA > model for SSL, despite its serious flaws: nobody has come up with > anything else that scales effectively. > > The use of JSON for metadata 2.0 is enough to make it TUF friendly, > but there are significant key management issues to be addressed before > TUF could be used on PyPI itself. That's no reason to avoid > experimenting with private TUF enabled PyPI servers, though - a > private server alleviates most of the ugly key management problems. > > Thank you, Nick. Indeed, we think that TUF (designed in collaboration with some of chief designers of the Tor project) offers a secure yet usable way to address many classes of attacks on package managers, many previously left unconsidered in the Linux distribution community until we pointed it out to them, at which point they adopted our security proposals (https://isis.poly.edu/~jcappos/papers/cappos_mirror_ccs_08.pdf). We are delighted to see that JSON is being used for PyPI metadata 2.0, which would certainly lend itself very easily for integration with TUF. Speaking of which, let me answer some questions about the current status of PyPI and pip over TUF. TLDR: We now have a pretty good scheme balancing key management with security for PyPI. At the time of writing, I have an almost identical version of pip ready anytime to read metadata off a TUF-secured PyPI mirror. There is just one thing left to do: I need to just compress the metadata as much as possible (a problem common to all package managers). 
I expect this to be done in the next two weeks, by which time we should have a slightly modified version of pip which would securely download packages from an up-to-date TUF-secured PyPI mirror. (Aside: let me say that we are discussing all things related to PyPI, pip and TUF on the TUF mailing list (https://groups.google.com/forum/?fromgroups#!forum/theupdateframework). I welcome you to join our mailing list so that we can continue the discussion. I did not want to incessantly copy our discussions to the DistUtils mailing list because I was not sure whether it would always be relevant to the DistUtils SIG which is already busy with a number of other projects. In retrospect, perhaps I should have summarized our findings every now and then on this list, because I can understand that it looks to some people as though we have been silent, when in fact that was not the case.) To very briefly summarize our status without going into tangential details: 1. We previously found and reported on this mailing list that if we naively assigned a key to every PyPI project, then the metadata would not scale. We would have security with little usability. This looks like an insoluble key management problem, but we think we have a pretty good solution. 2. The solution is briefly this: we now propose just two targets roles for all PyPI files. 2.1. The first role --- called the "unstable" targets role --- will have completely online keys (meaning that it can be kept on the server for automated release purposes). The unstable role will sign for all PyPI files being added, updated or deleted without question. The metadata for this role will change all the time. 2.2. The second role --- called the "stable" targets role --- will have completely offline keys (meaning that keys are kept as securely as possible and only used with manual human intervention). The stable role will sign for only the PyPI files which have been vetted and deemed trustworthy. The metadata for this role is expected to change much less frequently than the unstable role. Okay, sounds too abstract to some. What does this mean in practice? We want to make key management simple. Preferably, as Nick Coghlan and others have proposed before, we would want PyPI to initially, at least, sign for all packages, because managing keys for every single project right off the bat is potentially painful. Therefore, with that view in mind --- which is to first accommodate PyPI signing for packages, and gradually allowing projects to sign for their own packages --- we then consider what our proposal above would do. Firstly, it would make key management so much simpler. There is a sufficient number of offline keys used to sign metadata for a valuable and trustworthy set of packages (done only every now and then), and an online key used to make continuous release of PyPI packages possible (done all the time). 1. Now suppose that the top-level targets role says: when you download a package, you must first always ask the stable role about it. If it has something to say about it, then use that information (and just ignore the unstable role). Otherwise, ask the unstable role about it. 2. Fine, what about that? Now suppose that both the stable and unstable roles have signed for some very popular package called FooBar 2.0. Suppose further that attackers have broken into the TUF-secured PyPI repository.
Oh, they can't find the keys to the stable role, so they can't mess with the stable role metadata without getting caught, but since the unstable keys are online, they could make it sign for malicious versions of the FooBar 2.0 package. 3. But no problem there! Since we have instructed that the stable role must always be consulted first, then valid metadata about the intended, trusted FooBar 2.0 package cannot be modified (not without getting all the human owners of the keys to collude). The unstable role may be tampered with to offer bogus metadata, but the security impact will be limited with *prior* metadata about packages in the way-harder-to-attack stable role. More details, should you be interested, are available here: https://groups.google.com/forum/?fromgroups#!topic/theupdateframework/pocW9bacwuc I hope that answers a number of questions. Let us know if you have more questions, and I think I can safely conclude that I can start discussing TUF on this mailing list again! PS: Pardon any delay in my response in the next couple of days, as I will be flying for a day or so to New York in approximately 24 hours. -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Jul 17 20:46:58 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 17 Jul 2013 14:46:58 -0400 Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: <20130717144658.37679d79@anarchist> On Jul 17, 2013, at 11:46 AM, Daniel Holth wrote: >I'd like to see an ambitious person begin uploading wheels that have >no traditional sdist. You're not getting rid of sdists are you? Please note that without source distributions (preferably .tar.gz) your package will never get distributed on a Linux distro. Maybe the keyword here is "traditional" though. In that case, keep in mind that at least in Debian and its derivatives, we have a lot of tools that make it pretty trivial to package something setup.py based from PyPI. If/when that goes away, it will be more difficult to get new package updates, until the distro's supporting tools catch up. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From p.f.moore at gmail.com Wed Jul 17 20:56:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 19:56:39 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <20130717144658.37679d79@anarchist> References: <20130717144658.37679d79@anarchist> Message-ID: On 17 July 2013 19:46, Barry Warsaw wrote: > You're not getting rid of sdists are you? > There are as-yet unspecified plans for a sdist 2.0 format. It is expected to fulfil the same role as current sdist, though, so no need to worry. > Please note that without source distributions (preferably .tar.gz) your > package will never get distributed on a Linux distro. > Understood. I expect Nick is fully aware of the implications here :-) > Maybe the keyword here is "traditional" though. In that case, keep in mind > that at least in Debian and its derivatives, we have a lot of tools that > make > it pretty trivial to package something setup.py based from PyPI. If/when > that > goes away, it will be more difficult to get new package updates, until the > distro's supporting tools catch up. > The long-term intent is to remove executable setup.py. 
When this happens, definitely consumers (end users, Python tools like pip, and distro packaging systems) will have some migration work to do. Keeping that manageable will definitely be important. But doing nothing and staying where we are isn't really an option, so we'll have to accept and manage the pain. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Jul 17 21:11:05 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 17 Jul 2013 15:11:05 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <20130717144658.37679d79@anarchist> Message-ID: <20130717151105.4cc11efe@anarchist> On Jul 17, 2013, at 07:56 PM, Paul Moore wrote: >The long-term intent is to remove executable setup.py. When this happens, >definitely consumers (end users, Python tools like pip, and distro >packaging systems) will have some migration work to do. Keeping that >manageable will definitely be important. But doing nothing and staying >where we are isn't really an option, so we'll have to accept and manage the >pain. Definitely. And if that leads to a declarative equivalent that we can reason about without executing, all the better. the-setup.cfg-is-dead,-long-live-the-setup.cfg-ly y'rs, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From p.f.moore at gmail.com Wed Jul 17 21:12:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 20:12:56 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: On 17 July 2013 19:55, Steve Dower wrote: > > I'm afraid exe files as wrappers are probably the only viable option. > The basic > > reason is that the OS recognises exe files in all context, with no > special > > configuration needed. This is not true of any other file type. So > anything else > > will have corner cases that will give unexpected results. > > No reason to be afraid of this, exe wrappers are totally the best option. > > As for updating .exes while they're running, the best approach is to > rename the running one (e.g. 'pip.exe' -> 'pip.exe.deleteme') in the same > folder and either: > * delete any existing .deleteme files on next run, or > * delete an existing pip.exe.deleteme file immediately before trying to > rename to it > > Any other approach will also have corner cases, but this will be the most > reliable in the context of multiple users/permissions/environment variables. > Cool. I'm not happy about the clutter of '.deleteme' files, and I'll still look for a way to delete them straight after the upgrade process terminates, but I may have to settle for lazy deletion. The problem issue remaining is recognising when we need to do this. In terms of code paths, pip install -U pip is no different from (for example) pip install -U flask. But it needs to be handled specially just because it's pip. And it *doesn't* need to be handled specially if it's "python -m pip install -U pip". 
That's not a Windows issue, though, I was just using the Windows issue to put off having to think about it :-) One thought, while I have a Windows expert's attention ;-) Is there a way (I'm not too bothered how complex it might be) of doing the equivalent of Unix exec in Windows? That is, start a new process and then have my initial process exit *without* the shell (or whatever started the first process) getting control back until the child completes? I'm assuming not, as otherwise solving the issue would be as easy as exec-ing "python -m pip" from the wrapper. But no harm in asking :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Jul 17 21:13:44 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 17 Jul 2013 20:13:44 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <20130717151105.4cc11efe@anarchist> References: <20130717144658.37679d79@anarchist> <20130717151105.4cc11efe@anarchist> Message-ID: On 17 July 2013 20:11, Barry Warsaw wrote: > the-setup.cfg-is-dead,-long-live-the-setup.cfg-ly y'rs, > setup.json. Get with the program, man ;-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 17 21:20:48 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 15:20:48 -0400 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: <817DE9DA-E6B6-4B2D-93A6-F04C405FA074@stufft.io> On Jul 17, 2013, at 3:12 PM, Paul Moore wrote: > The problem issue remaining is recognising when we need to do this. In terms of code paths, pip install -U pip is no different from (for example) pip install -U flask. But it needs to be handled specially just because it's pip. And it *doesn't* need to be handled specially if it's "python -m pip install -U pip". That's not a Windows issue, though, I was just using the Windows issue to put off having to think about it :-) OTOH it doesn't *hurt* to do it so if a conditional is hard to sort out (I'm thinking explicitly about pip vs python -mpip) we can just genericize it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ronaldoussoren at mac.com Wed Jul 17 21:24:04 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 17 Jul 2013 21:24:04 +0200 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <51E6D1BC.8010305@students.poly.edu> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> Message-ID: On 17 Jul, 2013, at 19:17, Trishank Karthik Kuppusamy wrote: > > To very briefly summarize our status without going into tangential details: > > 1. We previously found and reported on this mailing list that if we naively assigned a key to every PyPI project, then the metadata would not scale. We would have security with little usability. 
This looks like an insoluble key management problem, but we think we have a pretty good solution. > 2. The solution is briefly this: we now propose just two targets roles for all PyPI files. > 2.1. The first role --- called the "unstable" targets role --- will have completely online keys (meaning that it can be kept on the server for automated release purposes). The unstable role will sign for all PyPI files being added, updated or deleted without question. The metadata for this role will change all the time. > 2.2. The second role --- called the "stable" targets role --- will have completely offline keys (meaning that keys are kept as securely as possible and only used with manual human intervention). The stable role will sign for only the PyPI files which have been vetted and deemed trustworthy. The metadata for this role is expected to change much less frequently than the unstable role. > > Okay, sounds too abstract to some. What does this mean in practice? We want to make key management simple. Preferably, as Nick Coghlan and others have proposed before, we would want PyPI to initially, at least, sign for all packages, because managing keys for every single project right off the bat is potentially painful. Therefore, with that view in mind --- which is to first accommodate PyPI signing for packages, and gradually allowing projects to sign for their own packages --- we then consider what our proposal above would do. > > Firstly, it would make key management so much simpler. There is a sufficient number of offline keys used to sign metadata for a valuable and trustworthy set of packages (done only every now and then), and an online key used to make continuous release of PyPI packages possible (done all the time). > > 1. Now suppose that the top-level targets role says: when you download a package, you must first always ask the stable role about it. If it has something to say about it, then use that information (and just ignore the unstable role). Otherwise, ask the unstable role about it. > 2. Fine, what about that? Now suppose that both the stable and unstable roles have signed for some very popular package called FooBar 2.0. Suppose further that attackers have broken into the TUF-secured PyPI repository. Oh, they can't find the keys to the stable role, so they can't mess with the stable role metadata without getting caught, but since the unstable keys are online, they could make it sign for malicious versions of the FooBar 2.0 package. > 3. But no problem there! Since we have instructed that the stable role must always be consulted first, then valid metadata about the intended, trusted FooBar 2.0 package cannot be modified (not without getting all the human owners of the keys to collude). The unstable role may be tampered with to offer bogus metadata, but the security impact will be limited with *prior* metadata about packages in the way-harder-to-attack stable role. I'm trying to understand what this means for package maintainers. If I understand you correctly, maintainers would upload packages just like they do now, and packages are then automatically signed by the "unstable" role. Then some manual process by the PyPI maintainers can sign a package with the stable role. Is that correct? If it is, how is this supposed to scale? The contents of PyPI are currently not vetted at all, and it seems to me that manually vetting uploads for even the most popular packages would be a significant amount of work that would have to be done by what's likely a small set of volunteers.
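Just to check my reading of the lookup rule, here is a small sketch of how I understand it (the dicts are only stand-ins for the signed stable/unstable targets metadata, not real TUF structures, and the hashes are placeholders):

    # Stand-ins for the signed targets metadata of the two proposed roles.
    stable_targets = {
        "FooBar-2.0.tar.gz": {"sha256": "aaa...", "length": 10000},
    }
    unstable_targets = {
        "FooBar-2.0.tar.gz": {"sha256": "eee...", "length": 9999},   # possibly tampered
        "FooBar-2.0.1.tar.gz": {"sha256": "bbb...", "length": 10050},
    }

    def lookup(filename):
        # The stable role is always consulted first; unstable is only a fallback.
        if filename in stable_targets:
            return "stable", stable_targets[filename]
        if filename in unstable_targets:
            return "unstable", unstable_targets[filename]
        raise KeyError(filename)

    print(lookup("FooBar-2.0.tar.gz"))    # stable metadata wins, tampering is ignored
    print(lookup("FooBar-2.0.1.tar.gz"))  # only the online unstable role knows about it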
Also, what are you supposed to do when FooBar 2.0 is signed by the stable role and FooBar 2.0.1 is only signed by the unstable role, and you try to fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? Ronald From oscar.j.benjamin at gmail.com Wed Jul 17 21:34:11 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 20:34:11 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <20130717144658.37679d79@anarchist> References: <20130717144658.37679d79@anarchist> Message-ID: On 17 July 2013 19:46, Barry Warsaw wrote: >On Jul 17, 2013, at 11:46 AM, Daniel Holth wrote: >> >>I'd like to see an ambitious person begin uploading wheels that have >>no traditional sdist. > > You're not getting rid of sdists are you? > > Please note that without source distributions (preferably .tar.gz) your > package will never get distributed on a Linux distro. > > Maybe the keyword here is "traditional" though. Yeah, I think what Daniel means is that the sdist->wheel transformation could be done by a tool unlike distutils and setuptools. The sdist as supplied would not be something that could be directly installed with 'python setup.py install' but it could be turned into a wheel by bento/waf/yaku/scons etc. > In that case, keep in mind > that at least in Debian and its derivatives, we have a lot of tools that make > it pretty trivial to package something setup.py based from PyPI. If/when that > goes away, it will be more difficult to get new package updates, until the > distro's supporting tools catch up. I imagined that distro packaging tools would end up using the wheel as an intermediate format when building a deb from a source deb. Would that not make things easier long-term? In the short term, you can expect that whatever solution people use is likely to be convertible to a traditional sdist in some straight-forward way e.g. 'bentomaker sdist'. Oscar From barry at python.org Wed Jul 17 21:39:09 2013 From: barry at python.org (Barry Warsaw) Date: Wed, 17 Jul 2013 15:39:09 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <20130717144658.37679d79@anarchist> Message-ID: <20130717153909.446ca09a@anarchist> On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote: >I imagined that distro packaging tools would end up using the wheel as >an intermediate format when building a deb from a source deb. Do you mean, the distro would download the wheel or that it would build it during the build step for the archive? Probably not the former, as any binary blobs in a wheel would both violate policy and likely be inappropriate for all the platforms we build for. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From oscar.j.benjamin at gmail.com Wed Jul 17 21:44:47 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 20:44:47 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <20130717153909.446ca09a@anarchist> References: <20130717144658.37679d79@anarchist> <20130717153909.446ca09a@anarchist> Message-ID: On 17 July 2013 20:39, Barry Warsaw wrote: > On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote: > >>I imagined that distro packaging tools would end up using the wheel as >>an intermediate format when building a deb from a source deb. 
> > Do you mean, the distro would download the wheel or that it would build it > during the build step for the archive? Probably not the former, as any binary > blobs in a wheel would both violate policy and likely be inappropriate for all > the platforms we build for. I meant the latter. The source deb would comprise the sdist (that may or may not be "traditional") and other distro files. The author of the sdist designed it with the intention that it could be turned into a wheel in some way (perhaps not the traditional one). So the natural way to build it is to use the author's intended build mechanism, end up with a wheel, and then convert that to an installable deb. Oscar From donald at stufft.io Wed Jul 17 21:47:55 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 15:47:55 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <20130717144658.37679d79@anarchist> <20130717153909.446ca09a@anarchist> Message-ID: On Jul 17, 2013, at 3:44 PM, Oscar Benjamin wrote: > I meant the latter. The source deb would comprise the sdist (that may > or may not be "traditional") and other distro files. The author of the > sdist designed it with the intention that it could be turned into a > wheel in some way (perhaps not the traditional one). So the natural > way to build it is to use the author's intended build mechanism, end > up with a wheel, and then convert that to an installable deb. As far as I know that's not how distros package things. They'll take the source and package it into a source package for their platform, and then their build machines will build binary packages for all the archectures they support. I don't suspect the distros to use Wheels at all. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mail at timgolden.me.uk Wed Jul 17 21:22:37 2013 From: mail at timgolden.me.uk (Tim Golden) Date: Wed, 17 Jul 2013 20:22:37 +0100 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: <51E6EEFD.5050609@timgolden.me.uk> On 17/07/2013 20:12, Paul Moore wrote: > On 17 July 2013 19:55, Steve Dower wrote: > >>> I'm afraid exe files as wrappers are probably the only viable option. >> The basic >>> reason is that the OS recognises exe files in all context, with no >> special >>> configuration needed. This is not true of any other file type. So >> anything else >>> will have corner cases that will give unexpected results. >> >> No reason to be afraid of this, exe wrappers are totally the best option. >> >> As for updating .exes while they're running, the best approach is to >> rename the running one (e.g. 'pip.exe' -> 'pip.exe.deleteme') in the same >> folder and either: >> * delete any existing .deleteme files on next run, or >> * delete an existing pip.exe.deleteme file immediately before trying to >> rename to it >> >> Any other approach will also have corner cases, but this will be the most >> reliable in the context of multiple users/permissions/environment variables. 
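To make that concrete, the sort of thing I have in mind is below - only a rough sketch (the staging path in the comment is hypothetical, and a real tool would verify the RECORD hashes and handle the WHEEL tags properly rather than just unpacking):

    # Treat a wheel as plain data: read its metadata and unpack it into a
    # staging directory, with no setup.py execution involved.
    import os, zipfile

    def stage_wheel(wheel_path, destdir):
        with zipfile.ZipFile(wheel_path) as whl:
            dist_info = next(name.split("/")[0] for name in whl.namelist()
                             if name.endswith(".dist-info/WHEEL"))
            wheel_meta = whl.read(dist_info + "/WHEEL").decode("utf-8")
            if "Root-Is-Purelib: true" not in wheel_meta:
                raise ValueError("this sketch only handles purelib wheels")
            whl.extractall(destdir)
        return os.path.join(destdir, dist_info)

    # e.g. stage_wheel("mypkg-1.0-py2.py3-none-any.whl",
    #                  "debian/tmp/usr/lib/python2.7/dist-packages")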
>> > The problem issue remaining is recognising when we need to do this. In > terms of code paths, pip install -U pip is no different from (for example) > pip install -U flask. But it needs to be handled specially just because > it's pip. I don't think it does: in essence, any distribution which installs an .exe wrapper for some entry point can (possibly should) be treated the same way. Assuming that flask installed some kind of "run-flask.exe" in scripts/ you'd do the same thing: rename in-place so that the old one could keep running; write the new one under the old name; delete the old one (if you can). The issue of deleting the .deleteme versions of arbitrary scripts is that you don't know when they might be delete-able. In the Flask example, assume that someone was actually using run-flask.exe to run a Flask app, you can rename the .exe but you won't be able to delete it until it had finished running. Unless you have a watchdog script which watches the file / folder then you just have to call it .deleteme and clean up as best you can the next time round. TJG From dholth at gmail.com Wed Jul 17 21:52:40 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 17 Jul 2013 15:52:40 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <20130717153909.446ca09a@anarchist> References: <20130717144658.37679d79@anarchist> <20130717153909.446ca09a@anarchist> Message-ID: On Wed, Jul 17, 2013 at 3:39 PM, Barry Warsaw wrote: > On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote: > > >>I imagined that distro packaging tools would end up using the wheel as >>an intermediate format when building a deb from a source deb. > > Do you mean, the distro would download the wheel or that it would build it > during the build step for the archive? Probably not the former, as any binary > blobs in a wheel would both violate policy and likely be inappropriate for all > the platforms we build for. > > -Barry The distro packager will likely only have to type "python -m some_tool install ... " instead of "setup.py install ...". IIRC distro packaging normally does installation into some temporary directory which is then archived to create the distro package. The existence of wheel probably doesn't make any difference. However a pure-Python wheel on pypi might be something a distro could work with or it could be an intermediate format compiled just-in-time by the distro. The new json metadata probably will affect the distros more. From Steve.Dower at microsoft.com Wed Jul 17 20:55:39 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 17 Jul 2013 18:55:39 +0000 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> > I'm afraid exe files as wrappers are probably the only viable option. The basic > reason is that the OS recognises exe files in all context, with no special > configuration needed. This is not true of any other file type. So anything else > will have corner cases that will give unexpected results. No reason to be afraid of this, exe wrappers are totally the best option. As for updating .exes while they're running, the best approach is to rename the running one (e.g. 
'pip.exe' -> 'pip.exe.deleteme') in the same folder and either: * delete any existing .deleteme files on next run, or * delete an existing pip.exe.deleteme file immediately before trying to rename to it Any other approach will also have corner cases, but this will be the most reliable in the context of multiple users/permissions/environment variables. > Paul Steve From zooko at zooko.com Wed Jul 17 21:58:54 2013 From: zooko at zooko.com (zooko) Date: Wed, 17 Jul 2013 23:58:54 +0400 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <20130716091900.GL3125@merlinux.eu> References: <20130716091900.GL3125@merlinux.eu> Message-ID: <20130717195852.GC18066@zooko.com> In my opinion it is a good idea to embed, not just the *name* of the package that your package depends on, but also the public key or public keys that your package requires the depended-upon package to be signed by. There was a time when wheel did this, using Ed25519 keys (which are nice and small so it is easy to embed them directly into the metadata next to things like URLs and Author Names). I don't know if it still does. There's a PEP that mentions JWS signatures: http://www.python.org/dev/peps/pep-0427/ Regards, Zooko From oscar.j.benjamin at gmail.com Wed Jul 17 22:08:31 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 21:08:31 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <20130717144658.37679d79@anarchist> <20130717153909.446ca09a@anarchist> Message-ID: On 17 July 2013 20:52, Daniel Holth wrote: > On Wed, Jul 17, 2013 at 3:39 PM, Barry Warsaw wrote: >> On Jul 17, 2013, at 08:34 PM, Oscar Benjamin wrote: >> >>>I imagined that distro packaging tools would end up using the wheel as >>>an intermediate format when building a deb from a source deb. >> >> Do you mean, the distro would download the wheel or that it would build it >> during the build step for the archive? Probably not the former, as any binary >> blobs in a wheel would both violate policy and likely be inappropriate for all >> the platforms we build for. >> > > The distro packager will likely only have to type "python -m some_tool > install ... " instead of "setup.py install ...". IIRC distro packaging > normally does installation into some temporary directory which is then > archived to create the distro package. The existence of wheel probably > doesn't make any difference. Currently sdists provides a relatively uniform interface in the way that the setup.py can be used for build/installation. If non-traditional sdists become commonplace then that will not be the case any more. On the other hand the wheel format provides not just a uniform interface but a formally specified one that I imagine is more suitable for the kind of automated processing that is done by distros. I'm not a distro packager but I imagined that they would find it more convenient to have tools that turn one formally specified format into another than to run the installation in a monkey-patched environment. 
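By way of illustration only (this is not any distro's actual tooling, and the paths and file names are placeholders), the two routes being compared here both end with a staging directory that the distro tool then archives into its package:

    import subprocess, sys

    BUILDROOT = "/tmp/pkgroot"  # placeholder staging directory

    # traditional route: setup.py installs straight into the staging root
    subprocess.check_call(
        [sys.executable, "setup.py", "install", "--root", BUILDROOT])

    # wheel-as-intermediate route: build a wheel first, then install that
    # wheel into the same staging root (assumes the wheel project is
    # installed for bdist_wheel, and a pip recent enough to install wheels)
    subprocess.check_call([sys.executable, "setup.py", "bdist_wheel"])
    subprocess.check_call(
        ["pip", "install", "--no-deps", "--root", BUILDROOT,
         "dist/example-1.0-py33-none-any.whl"])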
Oscar From donald at stufft.io Wed Jul 17 22:14:22 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 16:14:22 -0400 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: <20130717195852.GC18066@zooko.com> References: <20130716091900.GL3125@merlinux.eu> <20130717195852.GC18066@zooko.com> Message-ID: On Jul 17, 2013, at 3:58 PM, zooko wrote: > In my opinion it is a good idea to embed, not just the *name* of the package > that your package depends on, but also the public key or public keys that your > package requires the depended-upon package to be signed by. The problem with this is it makes it more difficult to replace a library with a patched copy. Example: I want to install the library Foo, Foo depends on Bar, and Bar depends on Broken. Broken is well, broken and I want to use a patched version of it locally. So I fix Broken, upload it to my private index server and I pip install from that. If public keys are encoded as part of the dependency chain, not only do I need to patch Broken but I also need to patch Foo and Bar _and_ anything else that depends on Foo, Bar, or Broken _and_ anything else that depends on those, so on until we reach the leaves. Packages should have signatures. Dependency should be by name. End tooling should provide a method to make a set of requirements with certain signatures or hashes for a specific instance of this installation. (E.g. Awesome, Inc could have a set of requirements that contain Foo, Bar and their own patched version of Broken along with the keys used to sign all of them). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oscar.j.benjamin at gmail.com Wed Jul 17 22:20:13 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 17 Jul 2013 21:20:13 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 July 2013 17:59, Brett Cannon wrote: > > But it also sounds like that project providing wheel distributions is too > early to include in the User's Guide. There are already many guides showing how to use distutils/setuptools to do things the old way. There are also confused bits of documentation/guides referring to now obsolete projects that at one point were touted as the future. It would be really good to have a guide that shows how the new working with wheels and metadata way is expected to work from the perspective of end users and package authors even if this isn't fully ready yet. I've been loosely following the packaging work long enough to see it change direction more than once. I still find it hard to see the complete picture for how pip, pypi, metadata, setuptools, setup.py, setup.json, wheels and sdists are expected to piece together in terms of what a package author is expected to do and how it affects end users. A guide (instead of a load of PEPs) would be a great way to clarify this for me and for the many others who haven't been following the progress of this at all. 
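For reference, the author-facing piece that those formats are all still generated from today is a small setuptools-based setup.py along these lines (the names, version and dependency below are placeholders, not a recommendation):

    from setuptools import setup, find_packages

    setup(
        name="example-dist",            # project name as it would appear on PyPI
        version="0.1.0",
        packages=find_packages(),
        install_requires=["requests"],  # runtime dependencies, resolved by the installer
    )

The same file drives both "setup.py sdist" and (with the wheel project installed) "setup.py bdist_wheel", which is why a guide could start from it and then explain where the newer formats and metadata fit in.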
Oscar From Steve.Dower at microsoft.com Wed Jul 17 22:12:52 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 17 Jul 2013 20:12:52 +0000 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: <477707cf02834c6fa4e596e1b709a389@BLUPR03MB199.namprd03.prod.outlook.com> > The problem issue remaining is recognising when we need to do this. In terms of > code paths, pip install -U pip is no different from (for example) pip install -U > flask. But it needs to be handled specially just because it's pip. And it > *doesn't* need to be handled specially if it's "python -m pip install -U pip". > That's not a Windows issue, though, I was just using the Windows issue to put > off having to think about it :-) Not especially. You wouldn't want to do it for all files, but you can do the rename for all .exe files and then try and delete the .deleteme immediately. (Or you can try and replace and gracefully handle the exception.) As long as failing to delete the old .exe doesn't cause installation to fail, 99% of the time the end result is identical to now (and the other 1% is better, because install succeeded where it would currently fail). > One thought, while I have a Windows expert's attention ;-) Is there a way (I'm > not too bothered how complex it might be) of doing the equivalent of Unix exec > in Windows? That is, start a new process and then have my initial process exit > *without* the shell (or whatever started the first process) getting control back > until the child completes? I'm assuming not, as otherwise solving the issue > would be as easy as exec-ing "python -m pip" from the wrapper. But no harm in > asking :-) I had this discussion with a colleague earlier today, and there really isn't. You'd need to be cooperating with the process that started you initially (cmd.exe/powershell.exe/whatever) and if there is a way to do it, (a) I don't know what it is, and (b) it's unlikely to be consistent/reliable/documented. At best, you could start a new process at the end of installation that keeps trying to delete the .deleteme file until it succeeds (or waits for a specific process to exit). However, my gut says that it's safer just to leave the file around and try and delete it later, especially since pip is going to be used in such a wide variety of contexts. Being clever is more likely to get you into trouble. Steve From jcappos at poly.edu Wed Jul 17 22:31:39 2013 From: jcappos at poly.edu (Justin Cappos) Date: Wed, 17 Jul 2013 16:31:39 -0400 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> Message-ID: Essentially, nothing changes from the user's standpoint or from the standpoint of the package developer (except they sign their package). The reason why we have multiple roles is to be robust against attacks in case the main PyPI repo is hacked. (Trishank can chime in with more complete / precise information once he's back.) Thanks, Justin On Wed, Jul 17, 2013 at 3:24 PM, Ronald Oussoren wrote: > > On 17 Jul, 2013, at 19:17, Trishank Karthik Kuppusamy < > tk47 at students.poly.edu> wrote: > > > > To very briefly summarize our status without going into tangential > details: > > > > 1. 
We previously found and reported on this mailing list that if we
> naively assigned a key to every PyPI project, then the metadata would not
> scale. We would have security with little usability. This looks like an
> insoluble key management problem, but we think we have a pretty good
> solution.
> > 2. The solution is briefly this: we now propose just two targets roles
> for all PyPI files.
> > 2.1. The first role --- called the "unstable" targets role --- will have
> completely online keys (meaning that it can be kept on the server for
> automated release purposes). The unstable role will sign for all PyPI files
> being added, updated or deleted without question. The metadata for this
> role will change all the time.
> > 2.2. The second role --- called the "stable" targets role --- will have
> completely offline keys (meaning that keys are kept as securely as possible
> and only used with manual human intervention). The stable role will sign
> for only the PyPI files which have been vetted and deemed trustworthy. The
> metadata for this role is expected to change much less frequently than the
> unstable role.
> >
> > Okay, sounds too abstract to some. What does this mean in practice? We
> want to make key management simple. Preferably, as Nick Coghlan and others
> have proposed before, we would want PyPI to initially, at least, sign for
> all packages, because managing keys for every single project right off the
> bat is potentially painful. Therefore, with that view in mind --- which is
> to first accommodate PyPI signing for packages, and gradually allowing
> projects to sign for their own packages --- we then consider what our
> proposal above would do.
> >
> > Firstly, it would make key management so much simpler. There is a
> sufficient number of offline keys used to sign metadata for a valuable and
> trustworthy set of packages (done only every now and then), and an online
> key used to make continuous release of PyPI packages possible (done all the
> time).
> >
> > 1. Now suppose that the top-level targets role says: when you download a
> package, you must first always ask the stable role about it. If it has
> something to say about it, then use that information (and just ignore the
> unstable role). Otherwise, ask the unstable role about it.
> > 2. Fine, what about that? Now suppose that both the stable and
> unstable roles have signed for some very popular package called FooBar 2.0.
> Suppose further that attackers have broken into the TUF-secured PyPI
> repository. Oh, they can't find the keys to the stable role, so they can't
> mess with the stable role metadata without getting caught, but since the
> unstable keys are online, they could make it sign for malicious versions of
> the FooBar 2.0 package.
> > 3. But no problem there! Since we have instructed that the stable role
> must always be consulted first, then valid metadata about the intended,
> trusted FooBar 2.0 package cannot be modified (not without getting all the
> human owners of the keys to collude). The unstable role may be tampered
> with to offer bogus metadata, but the security impact will be limited with
> *prior* metadata about packages in the way-harder-to-attack stable role.
>
> I'm trying to understand what this means for package maintainers. If I
> understand you correctly maintainers would upload packages just like they
> do now, and packages are then automatically signed by the "unstable" role.
> Then some manual process by the PyPI maintainers can sign a package with a
> stable role. Is that correct?
If it is, how is this supposed to scale? The > contents of PyPI is currently not vetted at all, and it seems to me that > manually vetting uploads for even the most popular packages would be a > significant amount of work that would have to be done by what's likely a > small set of volunteers. > > Also, what are you supposed to do when FooBar 2.0 is signed by the stable > role and FooBar 2.0.1 is only signed by the unstable role, and you try to > fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? > > Ronald > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Jul 17 22:35:58 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 17 Jul 2013 16:35:58 -0400 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <20130717195852.GC18066@zooko.com> Message-ID: Wheel provides a "wheel keygen" and "wheel sign" command and if you set WHEEL_TOOL=/path/to/wheel then bdist_wheel will automatically sign all the packages you create. Ideally wheel would sign every package, reducing the problem from "how do we force people to use PGP" to "how do we derive value from existing signatures." It also allows multiple signers per package. Readers of this list mostly use pypi for library management during development. Zooko's use case is different and appropriate for an application publisher. You trust the application publisher and want to get versions of dependencies they trust/tested or that are signed by people that they trust. As an end user you do not want parties unknown to "fix" dependencies. In any case it wasn't ever expected that people would embed keys in setup.py's abstract dependencies, rather they would go into requirements.txt used to install complete applications. You would also have had the option to trust any number of signing keys (n signers out of m possible signers, likely at a minimum both the publisher's and your own signing key would be accepted for any particular package). There has been a focus on deciding whether a package is malicious. I think that's wrong / too hard. It's better to focus on making sure everyone at least gets the same packages so targeted attacks via the pypi system don't work. I also feel it's much more important to make signatures widespread than to make them individually as secure as possible. On Wed, Jul 17, 2013 at 4:14 PM, Donald Stufft wrote: > > On Jul 17, 2013, at 3:58 PM, zooko wrote: > > In my opinion it is a good idea to embed, not just the *name* of the package > that your package depends on, but also the public key or public keys that > your > package requires the depended-upon package to be signed by. > > > The problem with this is it makes it more difficult to replace a library > with a patched copy. > > Example: > I want to install the library Foo, Foo depends on Bar, and Bar depends on > Broken. Broken > is well, broken and I want to use a patched version of it locally. > So I fix Broken, upload > it to my private index server and I pip install from that. > > If public keys are encoded as part of the dependency chain, not only > do I need to patch Broken > but I also need to patch Foo and Bar _and_ anything else that > depends on Foo, Bar, or Broken > _and_ anything else that depends on those, so on until we reach the > leaves. 
> > > Packages should have signatures. Dependency should be by name. End tooling > should provide a method to make a set of requirements with certain > signatures or hashes for a specific instance of this installation. (E.g. > Awesome, Inc could have a set of requirements that contain Foo, Bar and > their own patched version of Broken along with the keys used to sign all of > them). > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From ncoghlan at gmail.com Wed Jul 17 23:43:03 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 07:43:03 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 01:46, "Daniel Holth" wrote: > > On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: > > I'm going to be pushing an update to one of my projects to PyPI this week > > and so I figured I could use this opportunity to help with patches to the > > User Guide's packaging tutorial. > > > > But to do that I wanted to ask what the current best practices are. > > > > * Are we even close to suggesting wheels for source distributions? > > No, wheels don't replace source distributions at all. They just let > you install something without having to have whatever built the wheel > from its sdist. It is currently nice to have them available. > > I'd like to see an ambitious person begin uploading wheels that have > no traditional sdist. Argh, don't even suggest that. Such projects could never be included in a Linux distribution - we need the original source to push into a trusted build system. Cheers, Nick. > > > * Are we promoting (weakly, strongly?) the signing of distributions yet? > > No change. > > > * Are we saying "use setuptools" for everyone, or still only if you need it > > (asking since there is a stub about installing setuptools but the simple > > example doesn't have a direct need for it ATM, but could use find_packages() > > and such)? > > Setuptools is the preferred distutils-derived system. distutils should > no longer be considered morally superior. > > The MEBS idea, or a simple setup.py emulator and a contract with the > installer on which commands it will actually call, will eventually let > you do a proper job of choosing build systems. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 18 00:12:01 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 08:12:01 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 06:24, "Oscar Benjamin" wrote: > > On 17 July 2013 17:59, Brett Cannon wrote: > > > > But it also sounds like that project providing wheel distributions is too > > early to include in the User's Guide. > > There are already many guides showing how to use distutils/setuptools > to do things the old way. There are also confused bits of > documentation/guides referring to now obsolete projects that at one > point were touted as the future. 
It would be really good to have a > guide that shows how the new working with wheels and metadata way is > expected to work from the perspective of end users and package authors > even if this isn't fully ready yet. > > I've been loosely following the packaging work long enough to see it > change direction more than once. I still find it hard to see the > complete picture for how pip, pypi, metadata, setuptools, setup.py, > setup.json, wheels and sdists are expected to piece together in terms > of what a package author is expected to do and how it affects end > users. A guide (instead of a load of PEPs) would be a great way to > clarify this for me and for the many others who haven't been following > the progress of this at all. That's exactly what the packaging guide is for. It just needs volunteers to help write it. PEP 426 goes into a lot of detail on the various things that are supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 time frame idea, purely for resourcing reasons. The bundling proposed for 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not the *only* way to do it (other build systems like d2to1 work, they just need a suitable setup.py shim, and other installers are possible too), just the obvious way. For better or for worse, I don't believe we have any more chances to ask developers to switch to a different front end (heck, quite a few projects still recommend easy_install or even downloading the sdist and running setup.py directly). Instead, we need to clearly document the current status of things, and start working towards *incremental*, *non-disruptive* changes in the way the back end operates. If we do it right, most users *shouldn't even notice* when the various tools are updated to produce and consume metadata 2.0 (which can be distributed in parallel with the existing metadata formats), unless they decide to use the additional features the enhanced schema makes possible. It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip. Updating both pip and setuptools to use distlib as a common backend may be a good idea in the long run (and probably a better notion than pip growing a programmatic API of its own), but that's not something I see as urgently needed. Cheers, Nick. > > > Oscar > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Jul 18 00:18:10 2013 From: brett at python.org (Brett Cannon) Date: Wed, 17 Jul 2013 18:18:10 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Wed, Jul 17, 2013 at 6:12 PM, Nick Coghlan wrote: > > On 18 Jul 2013 06:24, "Oscar Benjamin" wrote: > > > > On 17 July 2013 17:59, Brett Cannon wrote: > > > > > > But it also sounds like that project providing wheel distributions is > too > > > early to include in the User's Guide. > > > > There are already many guides showing how to use distutils/setuptools > > to do things the old way. There are also confused bits of > > documentation/guides referring to now obsolete projects that at one > > point were touted as the future. 
It would be really good to have a > > guide that shows how the new working with wheels and metadata way is > > expected to work from the perspective of end users and package authors > > even if this isn't fully ready yet. > > > > I've been loosely following the packaging work long enough to see it > > change direction more than once. I still find it hard to see the > > complete picture for how pip, pypi, metadata, setuptools, setup.py, > > setup.json, wheels and sdists are expected to piece together in terms > > of what a package author is expected to do and how it affects end > > users. A guide (instead of a load of PEPs) would be a great way to > > clarify this for me and for the many others who haven't been following > > the progress of this at all. > > That's exactly what the packaging guide is for. It just needs volunteers > to help write it. > > PEP 426 goes into a lot of detail on the various things that are > supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 > time frame idea, purely for resourcing reasons. The bundling proposed for > 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not > the *only* way to do it (other build systems like d2to1 work, they just > need a suitable setup.py shim, and other installers are possible too), just > the obvious way. > As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch. I think for the rewrite to move forward someone's going to need to own each part and specify upfront what assumptions are being made about the audience (e.g. they know what a package is and how to create one, etc.) and their abilities (can you say ``curl | python`` to them and thus just link to the setuptools docs for installation?). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 18 00:21:14 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 08:21:14 +1000 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> <2fb31ca0107648cbae71d6f3a88c277d@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: On 18 Jul 2013 05:13, "Paul Moore" wrote: > > On 17 July 2013 19:55, Steve Dower wrote: >> >> > I'm afraid exe files as wrappers are probably the only viable option. The basic >> > reason is that the OS recognises exe files in all context, with no special >> > configuration needed. This is not true of any other file type. So anything else >> > will have corner cases that will give unexpected results. >> >> No reason to be afraid of this, exe wrappers are totally the best option. >> >> As for updating .exes while they're running, the best approach is to rename the running one (e.g. 'pip.exe' -> 'pip.exe.deleteme') in the same folder and either: >> * delete any existing .deleteme files on next run, or >> * delete an existing pip.exe.deleteme file immediately before trying to rename to it >> >> Any other approach will also have corner cases, but this will be the most reliable in the context of multiple users/permissions/environment variables. > > > Cool. I'm not happy about the clutter of '.deleteme' files, and I'll still look for a way to delete them straight after the upgrade process terminates, but I may have to settle for lazy deletion. 
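A sketch of what that lazy deletion could look like on a later run of the wrapper (illustrative only, not pip's code):

    import glob, os

    def cleanup_stale_wrappers(scripts_dir):
        # best-effort removal of wrappers renamed aside by earlier upgrades;
        # anything still running (and therefore still locked) is simply skipped
        for leftover in glob.glob(os.path.join(scripts_dir, "*.deleteme")):
            try:
                os.unlink(leftover)
            except OSError:
                pass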
Call them ".previous" and it looks less ugly, in my opinion. You can also just try deletion first and only if that fails do the renaming trick. EAFP and all that :) Cheers, Nick. > > The problem issue remaining is recognising when we need to do this. In terms of code paths, pip install -U pip is no different from (for example) pip install -U flask. But it needs to be handled specially just because it's pip. And it *doesn't* need to be handled specially if it's "python -m pip install -U pip". That's not a Windows issue, though, I was just using the Windows issue to put off having to think about it :-) > > One thought, while I have a Windows expert's attention ;-) Is there a way (I'm not too bothered how complex it might be) of doing the equivalent of Unix exec in Windows? That is, start a new process and then have my initial process exit *without* the shell (or whatever started the first process) getting control back until the child completes? I'm assuming not, as otherwise solving the issue would be as easy as exec-ing "python -m pip" from the wrapper. But no harm in asking :-) > > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 18 00:28:08 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 08:28:08 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 08:18, "Brett Cannon" wrote: > > > > > On Wed, Jul 17, 2013 at 6:12 PM, Nick Coghlan wrote: >> >> >> On 18 Jul 2013 06:24, "Oscar Benjamin" wrote: >> > >> > On 17 July 2013 17:59, Brett Cannon wrote: >> > > >> > > But it also sounds like that project providing wheel distributions is too >> > > early to include in the User's Guide. >> > >> > There are already many guides showing how to use distutils/setuptools >> > to do things the old way. There are also confused bits of >> > documentation/guides referring to now obsolete projects that at one >> > point were touted as the future. It would be really good to have a >> > guide that shows how the new working with wheels and metadata way is >> > expected to work from the perspective of end users and package authors >> > even if this isn't fully ready yet. >> > >> > I've been loosely following the packaging work long enough to see it >> > change direction more than once. I still find it hard to see the >> > complete picture for how pip, pypi, metadata, setuptools, setup.py, >> > setup.json, wheels and sdists are expected to piece together in terms >> > of what a package author is expected to do and how it affects end >> > users. A guide (instead of a load of PEPs) would be a great way to >> > clarify this for me and for the many others who haven't been following >> > the progress of this at all. >> >> That's exactly what the packaging guide is for. It just needs volunteers to help write it. >> >> PEP 426 goes into a lot of detail on the various things that are supported, but a key thing to keep in mind is that metadata 2.0 is a 3.4.1 time frame idea, purely for resourcing reasons. The bundling proposed for 3.4 is about blessing setuptools & pip as the "obvious way to do it". Not the *only* way to do it (other build systems like d2to1 work, they just need a suitable setup.py shim, and other installers are possible too), just the obvious way. 
> > > As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch. > > I think for the rewrite to move forward someone's going to need to own each part and specify upfront what assumptions are being made about the audience (e.g. they know what a package is and how to create one, etc.) and their abilities (can you say ``curl | python`` to them and thus just link to the setuptools docs for installation?). It would make sense to have targeted sections for "I am...": -... a new developer on Windows -... a new developer on Mac OS X -... a new developer on Linux -... an experienced Python developer on Windows -... an experienced Python developer on Mac OS X -... an experienced Python developer on Linux -... an experienced developer, new to Python, on Windows -... an experienced developer, new to Python, on Mac OS X -... an experienced developer, new to Python, on Linux Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 00:30:52 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 22:30:52 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: Nick Coghlan gmail.com> writes: > It's good that distil exists as a proof of concept, but the ship has sailed on the default language level installer: it will be pip. I understand that it's your call as the packaging czar, but was there any discussion about this before the decision was made? Any pros and cons of different approaches weighed up? Python 3.4 beta is still 5-6 months away. Call me naive, but I would normally have expected a PEP on the bundling of pip to be produced by an interested party/champion, then that people would discuss and refine the PEP on the mailing list, and *then* a pronouncement would be made. This is what PEP 1 describes as the PEP process. Instead, it seems a decision has already been made, and now an author/champion for a PEP is being sought ex post facto. With all due respect, this seems back to front - so it would be good to have a better understanding of the factors that went into the decision, including the timing of it. Can you shed some light on this? Thanks and regards, Vinay Sajip From donald at stufft.io Thu Jul 18 00:40:00 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 18:40:00 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: <063872B3-B340-4419-8497-DD596A2601B9@stufft.io> On Jul 17, 2013, at 6:30 PM, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> It's good that distil exists as a proof of concept, but the ship has sailed > on the default language level installer: it will be pip. > > I understand that it's your call as the packaging czar, but was there any > discussion about this before the decision was made? Any pros and cons of > different approaches weighed up? Python 3.4 beta is still 5-6 months away. > Call me naive, but I would normally have expected a PEP on the bundling of pip > to be produced by an interested party/champion, then that people would discuss > and refine the PEP on the mailing list, and *then* a pronouncement would be > made. This is what PEP 1 describes as the PEP process. 
Instead, it seems a > decision has already been made, and now an author/champion for a PEP is being > sought ex post facto. With all due respect, this seems back to front - so it > would be good to have a better understanding of the factors that went into the > decision, including the timing of it. Can you shed some light on this? > > Thanks and regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I think bundling pip or bundling nothing is the only thing that makes sense. There actually *is* a PEP however it took a different approach that has been (during the discussions about it) decided that a different way would be less error prone and more suitable. So now someone to write a PEP for that *new* way is being sought out. So it's not so much that a pronouncement was made prior to a PEP being written, but that a PEP was written, discussed, and a better way was found during that discussion. As far as I know you're free to make a competing PEP if you'd like. However I think the chances of it getting accepted are very low because the goal here is user convenience. It's hard to argue that pip isn't the installer with the most buy in in the community and thus bundling it (as opposed to a different installer) is the most convient thing for the most users. In many ways this makes things better for alternative installers because it gives a simple unified command to installing that third party installer without needing to handle bootstrapping. However because pip is bundled an alternative installer will likely need to provide significant benefits over pip in order to gain critical mass. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 00:58:59 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 22:58:59 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <063872B3-B340-4419-8497-DD596A2601B9@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I think bundling pip or bundling nothing is the only thing that makes sense. There > actually *is* a PEP however it took a different approach that has been (during the > discussions about it) decided that a different way would be less error prone and > more suitable. So now someone to write a PEP for that *new* way is being sought > out. So it's not so much that a pronouncement was made prior to a PEP being > written, but that a PEP was written, discussed, and a better way was found during > that discussion. Which specific PEP are you referring to? I'm not aware of any PEP which refers to bundling anything with Python. If whichever PEP it was took a fairly different approach to the one being discussed, and no conclusion could be reached about it, that doesn't mean that PEP 1 shouldn't be followed - just that a new PEP needs to be written, espousing the new approach, and it needs to go through the PEP workflow. For example, PEP 386 was supplanted by PEP 440, because the earlier PEP had some flaws which the later PEP took care to address. The earlier metadata PEPs all built on one another, with PEP 426 being the latest. 
> As far as I know you're free to make a competing PEP if you'd like. What would be the point, given that the decision has already been made by the packaging BDFL? If someone else had put forward the pip bundling PEP, I would certainly have commented on it like anyone else and participated in the discussions. I'm more concerned that the PEP process is not being followed than I'm concerned about "my particular approach vs. your particular approach vs. his/her particular approach". Regards, Vinay Sajip From ncoghlan at gmail.com Thu Jul 18 01:00:11 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 09:00:11 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 08:31, "Vinay Sajip" wrote: > > Nick Coghlan gmail.com> writes: > > > It's good that distil exists as a proof of concept, but the ship has sailed > on the default language level installer: it will be pip. > > I understand that it's your call as the packaging czar, but was there any > discussion about this before the decision was made? Any pros and cons of > different approaches weighed up? Python 3.4 beta is still 5-6 months away. > Call me naive, but I would normally have expected a PEP on the bundling of pip > to be produced by an interested party/champion, then that people would discuss > and refine the PEP on the mailing list, and *then* a pronouncement would be > made. This is what PEP 1 describes as the PEP process. Instead, it seems a > decision has already been made, and now an author/champion for a PEP is being > sought ex post facto. With all due respect, this seems back to front - so it > would be good to have a better understanding of the factors that went into the > decision, including the timing of it. Can you shed some light on this? Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice. However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone "use distribute over setuptools and pip over easy_install". The former sort of caught on (but it was subtle, since Linux distros all packaged distribute as setuptools anyway), and the latter has been quite effective amongst those that didn't need the binary egg format support. We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools. However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition. Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. 
We need to remain mindful of that, and take it into account when deciding how to proceed. Cheers, Nick. > > Thanks and regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 01:36:22 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 17 Jul 2013 23:36:22 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: Nick Coghlan gmail.com> writes: > Technically the decision *hasn't* been made - there is, as yet, no bundling PEP for me to consider for any installer, and I've decided not to accept Richard's bootstrapping PEP due to the issues around delaying the download to first use. I'd just like to have a bundling PEP posted before I make that official, so I can refer to it in the rejection notice. Technically? Well, "that ship has sailed" seems pretty well decided to me. I know that "technically is the best kind of correct" :-) But IIUC, your reservations on PEP 439 (I didn't realise that was what Donald was referring to in his response) related to Richard's specific implementation. I posted an example getpip.py (very simple, I grant you) which would get setuptools and pip for users, without the need for bundling anything, plus proposed an equivalent upgrade for pyvenv which would do the same for venvs. There has been no discussion around getpip.py whatsoever, AFAIK. > However, even without a PEP, I consider pip the only acceptable option, as I believe we have no credibility left to burn with the broader Python development community on tool choices. We've spent years telling everyone We don't need to burn any credibility at all. Perhaps python-dev lost some credibility when packaging got pulled from 3.3, even though it was a good decision made for the right reasons. But you only ask people to believe you when you have some new story to tell them, and pip is hardly new. > We're now telling people, OK setuptools is actually fine, but you should still use pip instead of easy_install and start using wheels instead of eggs. This is defensible, since even people using distribute were still importing setuptools. This is something which arose from the coming together of setuptools and Distribute. There was no credibility lost promoting Distribute, since setuptools never supported Python 3 - until now. There's no credibility lost now promoting setuptools, since it is essentially now the same as Distribute without the need for compatibility workarounds. > However, I simply see *no way* we could pull off a migration to a new recommended installer when the migration from the previous one to the current one is still far from complete :P I'm certainly not suggesting the time is right for migrating to a new recommended installer we have always promoted pip (over easy_install), and that doesn't need to change. It doesn't mean we have to bundle pip with Python - just make it easier to get it on Windows and OS X. Just a few days ago you were saying that python -m getpip would be good to have, then I created a getpip module, and now AFAICT it hasn't even been looked at, while people gear up to do shed-loads of work to bundle pip with Python. 
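The getpip.py referred to here isn't reproduced in the thread; purely as a sketch of the general shape such a bootstrapper could take (the download URL is a placeholder rather than an official location, and error handling is omitted):

    import os, subprocess, sys, tempfile, urllib.request

    BOOTSTRAP_URL = "https://example.invalid/get-pip.py"  # placeholder

    def main():
        with urllib.request.urlopen(BOOTSTRAP_URL) as resp:
            script = resp.read()
        fd, path = tempfile.mkstemp(suffix=".py")
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(script)
            # run the downloaded installer with the same interpreter,
            # so pip ends up in this Python's site-packages
            subprocess.check_call([sys.executable, path])
        finally:
            os.unlink(path)

    if __name__ == "__main__":
        main()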
> Adding in the distutils2/packaging digression just lowers our collective credibility even further, and we also get some significant spillover from the Python 3 transition. Haters gonna hate. What're you gonna do? :-) > Essentially, don't underestimate how thin the ice we're currently walking on is community-wise: people are irritated and even outright angry with the Python core development team, and they have good reasons to be. We need to remain mindful of that, and take it into account when deciding how to proceed. Who are these angry, entitled people? Have they forgotten that Python is a volunteer project? Why do we owe such people anything? I'm not convinced that such people are representative of the wider community. To judge from the video of the packaging panel at PyCon 2013, people are perhaps disappointed that we haven't got further, but there was no animosity that I could detect. The atmosphere was pretty positive and what I saw was an endearing faith and hope that we would, in time, get things right. None of what you have said answers why the PEP process shouldn't be followed in this case. No compelling case has been made AFAICT for bundling pip as opposed to enabling python -m getpip, especially given that (a) the work involved in one is very small compared to the other, and (b) the result for the user is the same - they get to use setuptools and pip. Regards, Vinay Sajip From ncoghlan at gmail.com Thu Jul 18 01:46:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 09:46:19 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 09:37, "Vinay Sajip" wrote: > > Nick Coghlan gmail.com> writes: > > > Technically the decision *hasn't* been made - there is, as yet, no > bundling PEP for me to consider for any installer, and I've decided not to > accept Richard's bootstrapping PEP due to the issues around delaying the > download to first use. I'd just like to have a bundling PEP posted before I > make that official, so I can refer to it in the rejection notice. > > Technically? Well, "that ship has sailed" seems pretty well decided to me. I > know that "technically is the best kind of correct" :-) > > But IIUC, your reservations on PEP 439 (I didn't realise that was what > Donald was referring to in his response) related to Richard's specific > implementation. I posted an example getpip.py (very simple, I grant you) > which would get setuptools and pip for users, without the need for bundling > anything, plus proposed an equivalent upgrade for pyvenv which would do the > same for venvs. There has been no discussion around getpip.py whatsoever, > AFAIK. No, my reservations are about delaying the installation of pip to first use (or any time after the installation of Python). I don't care that much about the distinction between bundling and install-time bootstrapping and would appreciate a PEP that explicitly weighed up the pros and cons of those two approaches (at the very least bundling means you don't need a reliable network connection at install time, while install time bootstrapping avoids the problem of old versions of pip, and also gives a way to bootstrap older Python installations). Cheers, Nick. > > > However, even without a PEP, I consider pip the only acceptable option, as > I believe we have no credibility left to burn with the broader Python > development community on tool choices. We've spent years telling everyone > > We don't need to burn any credibility at all. 
Perhaps python-dev lost some > credibility when packaging got pulled from 3.3, even though it was a good > decision made for the right reasons. But you only ask people to believe you > when you have some new story to tell them, and pip is hardly new. > > > We're now telling people, OK setuptools is actually fine, but you should > still use pip instead of easy_install and start using wheels instead of > eggs. This is defensible, since even people using distribute were still > importing setuptools. > > This is something which arose from the coming together of setuptools and > Distribute. There was no credibility lost promoting Distribute, since > setuptools never supported Python 3 - until now. There's no credibility lost > now promoting setuptools, since it is essentially now the same as Distribute > without the need for compatibility workarounds. > > > However, I simply see *no way* we could pull off a migration to a new > recommended installer when the migration from the previous one to the > current one is still far from complete :P > > I'm certainly not suggesting the time is right for migrating to a new > recommended installer we have always promoted pip (over easy_install), and > that doesn't need to change. It doesn't mean we have to bundle pip with > Python - just make it easier to get it on Windows and OS X. Just a few days > ago you were saying that python -m getpip would be good to have, then I > created a getpip module, and now AFAICT it hasn't even been looked at, while > people gear up to do shed-loads of work to bundle pip with Python. > > > Adding in the distutils2/packaging digression just lowers our collective > credibility even further, and we also get some significant spillover from > the Python 3 transition. > > Haters gonna hate. What're you gonna do? :-) > > > Essentially, don't underestimate how thin the ice we're currently walking > on is community-wise: people are irritated and even outright angry with the > Python core development team, and they have good reasons to be. We need to > remain mindful of that, and take it into account when deciding how to > proceed. > > Who are these angry, entitled people? Have they forgotten that Python is a > volunteer project? Why do we owe such people anything? I'm not convinced > that such people are representative of the wider community. > > To judge from the video of the packaging panel at PyCon 2013, people are > perhaps disappointed that we haven't got further, but there was no animosity > that I could detect. The atmosphere was pretty positive and what I saw was > an endearing faith and hope that we would, in time, get things right. > > None of what you have said answers why the PEP process shouldn't be followed > in this case. No compelling case has been made AFAICT for bundling pip as > opposed to enabling python -m getpip, especially given that (a) the work > involved in one is very small compared to the other, and (b) the result for > the user is the same - they get to use setuptools and pip. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Thu Jul 18 01:56:35 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 09:56:35 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 09:37, "Vinay Sajip" wrote: > > Nick Coghlan gmail.com> writes: > > > Technically the decision *hasn't* been made - there is, as yet, no > bundling PEP for me to consider for any installer, and I've decided not to > accept Richard's bootstrapping PEP due to the issues around delaying the > download to first use. I'd just like to have a bundling PEP posted before I > make that official, so I can refer to it in the rejection notice. > > Technically? Well, "that ship has sailed" seems pretty well decided to me. I > know that "technically is the best kind of correct" :-) > > But IIUC, your reservations on PEP 439 (I didn't realise that was what > Donald was referring to in his response) related to Richard's specific > implementation. I posted an example getpip.py (very simple, I grant you) > which would get setuptools and pip for users, without the need for bundling > anything, plus proposed an equivalent upgrade for pyvenv which would do the > same for venvs. There has been no discussion around getpip.py whatsoever, > AFAIK. > > > However, even without a PEP, I consider pip the only acceptable option, as > I believe we have no credibility left to burn with the broader Python > development community on tool choices. We've spent years telling everyone > > We don't need to burn any credibility at all. Perhaps python-dev lost some > credibility when packaging got pulled from 3.3, even though it was a good > decision made for the right reasons. But you only ask people to believe you > when you have some new story to tell them, and pip is hardly new. > > > We're now telling people, OK setuptools is actually fine, but you should > still use pip instead of easy_install and start using wheels instead of > eggs. This is defensible, since even people using distribute were still > importing setuptools. > > This is something which arose from the coming together of setuptools and > Distribute. There was no credibility lost promoting Distribute, since > setuptools never supported Python 3 - until now. There's no credibility lost > now promoting setuptools, since it is essentially now the same as Distribute > without the need for compatibility workarounds. > > > However, I simply see *no way* we could pull off a migration to a new > recommended installer when the migration from the previous one to the > current one is still far from complete :P > > I'm certainly not suggesting the time is right for migrating to a new > recommended installer we have always promoted pip (over easy_install), and > that doesn't need to change. It doesn't mean we have to bundle pip with > Python - just make it easier to get it on Windows and OS X. Just a few days > ago you were saying that python -m getpip would be good to have, then I > created a getpip module, and now AFAICT it hasn't even been looked at, while > people gear up to do shed-loads of work to bundle pip with Python. > > > Adding in the distutils2/packaging digression just lowers our collective > credibility even further, and we also get some significant spillover from > the Python 3 transition. > > Haters gonna hate. What're you gonna do? :-) It's not about haters - it's about not causing additional pain for people that we have already asked to put up with a lot. 
However solid our reasons for doing so were, we've deliberately created a bunch of additional work for various people. > > > Essentially, don't underestimate how thin the ice we're currently walking > on is community-wise: people are irritated and even outright angry with the > Python core development team, and they have good reasons to be. We need to > remain mindful of that, and take it into account when deciding how to > proceed. > > Who are these angry, entitled people? Have they forgotten that Python is a > volunteer project? Why do we owe such people anything? I'm not convinced > that such people are representative of the wider community. I'm talking about people who don't get mad, they just walk away. Or they even stick around, grin, and bear it without complaint. They matter, even if they don't complain. We have a duty of care to our users to find the least disruptive path forward (that's why Python 3 was such a big deal - we chose the disruptive path because we couldn't see any other solution). In the case of packaging, that means finding a way to let educators and Python developers safely assume that end users, experienced or otherwise, will have ready access to the pip CLI. Cheers, Nick. > > To judge from the video of the packaging panel at PyCon 2013, people are > perhaps disappointed that we haven't got further, but there was no animosity > that I could detect. The atmosphere was pretty positive and what I saw was > an endearing faith and hope that we would, in time, get things right. > > None of what you have said answers why the PEP process shouldn't be followed > in this case. No compelling case has been made AFAICT for bundling pip as > opposed to enabling python -m getpip, especially given that (a) the work > involved in one is very small compared to the other, and (b) the result for > the user is the same - they get to use setuptools and pip. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 18 01:57:13 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 19:57:13 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> On Jul 17, 2013, at 7:36 PM, Vinay Sajip wrote: > Just a few days > ago you were saying that python -m getpip would be good to have, then I > created a getpip module, and now AFAICT it hasn't even been looked at, while > people gear up to do shed-loads of work to bundle pip with Python. There was discussion around ``python -m getpip`` and the general thinking of that thread was that expecting users to type in an explicit command was adding extra steps into the process (and placing a dependency on the network connection being available whenever they happen to want to install something) and that was less than desirable. On top of that it was also the general thinking of that thread that implicitly bootstrapping during the first run was too magical and too prone to breakages related to the network connection. Bundling at creation of the release files or during install time is what's in play at the moment. Personally I feel that bundling is the least error prone and most likely to work in the largest number of cases. 
Given that one major target of this is beginners, minimizing the number of places something can fail seems to be the most useful option. Throw in the fact that it makes offline installations match the online installations better, and I think it's the way it should go. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 02:03:21 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 00:03:21 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: Nick Coghlan gmail.com> writes: > No, my reservations are about delaying the installation of pip to first use (or any time after the installation of Python). I don't care that much about the distinction between bundling and install-time bootstrapping and would appreciate a PEP that explicitly weighed up the pros and cons of those two approaches (at the very least bundling means you don't need a reliable network connection at install time, while install time bootstrapping avoids the problem of old versions of pip, and also gives a way to bootstrap older Python installations). Leaving aside specialised corporate setups with no access to PyPI, any installer is of very limited use without a reliable network connection. Most of the people we're expecting to reach with these changes will have always-on network connections, or as near as makes no difference. However, pip and setuptools will change over time, and "-m getpip" allows upgrades to be done fairly easily, under user control. So ISTM we're really talking about an initial "python -m getpip" before lots and lots of "pip install this", "pip install that" etc. Did you (or anyone else) look at my getpip.py? In what way might it not be fit for purpose as a bootstrapper? If it can be readily modified to do what's needed (and I'll put in the work if I can), then given that bootstrapping was the original impetus, lacking only an implementation which passed the "simple enough to explain, so a good idea" criterion, perhaps that situation can be rectified. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Jul 18 02:16:49 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 00:16:49 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > There was discussion around ``python -m getpip`` and the general thinking of that > thread was that expecting users to type in an explicit command was adding extra > steps into the process (and placing a dependency on the network connection > being available whenever they happen to want to install something) and that was Well, it's just one additional command to type in - it's really neither here nor there as long as it's well documented. And the network connection argument is a bit of a straw man. Even if pip is present already, a typical pip invocation will fail if there is no network connection - hardly a good user experience. No reasonable user is going to complain if the instructions about installing packages include having a working network connection as a precondition.
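(For concreteness, an explicit bootstrapper of the kind being discussed here can be very small. The sketch below is purely illustrative - it is not Vinay's getpip.py, which is not reproduced in this thread - and it assumes Python 3, a working network connection, and reuse of the get-pip.py URL quoted elsewhere in the thread.)

    # getpip.py - illustrative sketch of an explicit bootstrapper, NOT the
    # getpip.py under discussion.  Assumes Python 3 and network access; the
    # URL is the pip project's own get-pip.py bootstrap script.
    import os
    import runpy
    import tempfile
    import urllib.request

    GET_PIP_URL = "https://raw.github.com/pypa/pip/master/contrib/get-pip.py"

    def main():
        # Fetch the upstream bootstrap script and save it to a temporary file.
        response = urllib.request.urlopen(GET_PIP_URL)
        script_path = os.path.join(tempfile.mkdtemp(), "get-pip.py")
        with open(script_path, "wb") as f:
            f.write(response.read())
        response.close()
        # Run it as if invoked directly; the fetched script performs the
        # actual installation work.
        runpy.run_path(script_path, run_name="__main__")

    if __name__ == "__main__":
        main()

Shipping something along these lines would make ``python -m getpip`` the single extra command per installation that the rest of this exchange argues over.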
Whatever the technical merits of approach A vs. approach B, remember that my initial post was about following the process. Regards, Vinay Sajip From donald at stufft.io Thu Jul 18 02:21:03 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 20:21:03 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: <84C22195-BF7E-41E2-9D53-C21263359D20@stufft.io> On Jul 17, 2013, at 8:03 PM, Vinay Sajip wrote: > Leaving aside specialised corporate setups with no access to PyPI, any > installer is of very limited use without a reliable network connection. Most > of the people we're expecting to reach with these changes will have always-on > network connections, or as near as makes no difference. However, pip and > setuptools will change over time, and "-m getpip" allows upgrades to be done > fairly easily, under user control. So ISTM we're really talking about an > initial "python -m getpip" before lots and lots of "pip install this", "pip > install that" etc. It's hardly true that this is limited to specialized corporate setups. Another situation off the top of my head would be at various meetups or conferences where people are trying to teach new people and might have little or no network access. Even assuming they *do* have access to the network, accessing the network includes a number of extra failure conditions. For instance, pip 1.3+ is the first version of pip to include SSL verification, and we've had a fair number of people need help making pip able to reach PyPI through their particular setups. Sometimes it's because the version of OpenSSL is old, other times they don't have OpenSSL at all, or they have a proxy between them and PyPI which is blocking them or requires additional configuration to make it work. Each possible failure condition is another thing that can go wrong for users, each one is another point of frustration and another reason not to fetch it if it can be helped. You state that an installer is of limited use without a network connection, but that's not particularly true either. Especially with Wheels, the removal of the simple "setup.py install", and the current focus on having a local cache of pre-built wheels, I suspect there will be a decent number of people wanting to install from local wheels. It is true that each problem has a solution, but they are different solutions for each problem and generally require that the person be aware of the problem and the solution prior to having it in order to work around it. > > Did you (or anyone else) look at my getpip.py? In what way might it not be fit > for purpose as a bootstrapper? If it can be readily modified to do what's > needed (and I'll put in the work if I can), then given that bootstrapping was > the original impetus, lacking only an implementation which passed the "simple > enough to explain, so a good idea" criterion, perhaps that situation can be > rectified. I did not look at your getpip.py. I've always believed that an explicit "fetch pip" step was not a reasonable step in the process. However, bootstrapping had an implementation; its major issue was that it was implicit, and that was deemed inappropriate. If you post it again I'll review it, but I'll still be against actually using it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu Jul 18 02:25:03 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 20:25:03 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> Message-ID: <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> On Jul 17, 2013, at 8:16 PM, Vinay Sajip wrote: > Well, it's just one additional command to type in - it's really neither here > nor there as long as it's well documented. There is already a getpip.py that's just not distributed with Python. So if "There is only one additional command to type" was the excuse, then we already have that: curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python But for various reasons many projects have decided that expecting people to install the tools is difficult, especially for beginners, and that simply documenting the command to install them was not enough. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 02:33:23 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 00:33:23 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: Message-ID: Nick Coghlan gmail.com> writes: > It's not about haters - it's about not causing additional pain for people I used the term loosely in response to your comment about irritated and angry people. > I'm talking about people who don't get mad, they just walk away. Or they even stick around, grin, and bear it without complaint. They matter, even if they don't complain. > We have a duty of care to our users to find the least disruptive path forward (that's why Python 3 was such a big deal - we chose the disruptive path because we couldn't see any other solution). > In the case of packaging, that means finding a way to let educators and Python developers safely assume that end users, experienced or otherwise, will have ready access to the pip CLI. I'm not arguing that people shouldn't have access to the pip CLI. It's not about pip vs. something else. I'm saying there's no real evidence that people having to run "python -m getpip" once per Python installation is any kind of deal-breaker, or that a lack of network connection is somehow a problem when getting pip, but not a problem when getting things off PyPI. More importantly, it doesn't seem like the PEP process has been followed, as other proposed alternatives (I mean the approach of "python -m getpip", as well as my specific suggested getpip.py) have not received adequate review or obvious negative feedback, nor have the pros and cons of bootstrapping vs. bundling been presented coherently and then pronounced upon. I'll stop going on about this topic now, though I will be happy to have technical discussions if there's really any point.
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Jul 18 02:38:51 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 00:38:51 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python Well it doesn't work on Windows, which would be a reasonable objection to using that specific approach. > But for various reasons many projects have decided that expecting people to > install the tools is difficult, especially for beginners and that simply documenting > the command to install it was not enough. If it's that obvious, then why did Richard spend so long writing a bootstrap script, drafting PEP 439 etc.? Do you have any numbers on the "many projects"? Regards, Vinay Sajip From dholth at gmail.com Thu Jul 18 02:40:26 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 17 Jul 2013 20:40:26 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> Message-ID: On Wed, Jul 17, 2013 at 8:16 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> There was discussion around ``python -m getpip`` and the general thinking of > that >> thread was that expecting users to type in an explicit command was adding > extra >> steps into the process (and placing a dependency on the network connection >> being available whenever they happen to want to install something) and that > was > > Well, it's just one additional command to type in - it's really neither here > nor there as long as it's well documented. > > And the network connection argument is a bit of a straw man. Even if pip is > present already, a typical pip invocation will fail if there is no network > connection - hardly a good user experience. No reasonable user is going to > complain if the instructions about installing packages include having a > working network connection as a precondition. > > Whatever the technical merits of approach A vs. approach B, remember that my > initial post was about following the process. > > Regards, > > Vinay Sajip I didn't realize the current option was about bundling pip itself rather than including a simple bootstrap. I have favored the bootstrap approach (being any intentionally limited installer that you would be daft to use generally). The rationale is that we would want to avoid bundling a soon outdated "good enough" tool that people use instead of letting better pypi-hosted tools thrive. Setuptools is an example of a project that has this problem. Projects might use the [even more*] terrible distutils in preference, admonishing others to do the same, often without understanding why apart from "it's in the standard library". I didn't believe in the pip command that installs itself because I would have been irritated if pip was installed by surprise - maybe I have a reason to install it a different way - perhaps from source or from a system package. A bundled get-pip that avoids also having to install setuptools first, and that is secure, and easy to remember, would be super handy. The normal way to get pip these days is to install virtualenv. After you get it it's just one command to run and pretty convenient.
* for the haters From donald at stufft.io Thu Jul 18 03:03:33 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 21:03:33 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> Message-ID: <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> On Jul 17, 2013, at 8:38 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> >> curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python > > Well it doesn't work on Windows, which would be a reasonable objection to > using that specific approach. > >> But for various reasons many projects have decided that expecting people to >> install the tools is difficult, especially for beginners and that simply > documenting >> the command to install it was not enough. > > If it's that obvious, then why did Richard spend so long writing a bootstrap > script, drafting PEP 439 etc.? Do you have any numbers on the "many projects"? I never stated it was *obvious*. To me requiring an explicit bootstrap step was always a bad idea. It's an unfriendly UX that requires people to either know ahead of time if they already have pip installed, or try to use pip, notice it fail, run the bootstrapper, and then run the command they originally wanted to run. It also places a burden on every other project in the ecosystem to document that they need to first run `python -m getpip` and then run ``pip install project``. However, Richard's implementation and the PEP were not an explicit bootstrap. It was an implicit bootstrap that upon the first execution of ``pip`` would fetch and install pip and setuptools. The implicit bootstrap approach was more or less decided against for fear of being too magical and users not really being aware if they have or don't have pip. So to recap:

Bootstrapping over the Network in General
- Requires network access
- Extra failure points
  - OpenSSL Age
  - OpenSSL Available at all?
  - Proxies?
  - SSL Intercept Devices?

Explicit bootstrapping
- Everything from Bootstrapping over the network
- Requires users (and projects) to use/document an explicit command

Implicit Bootstrapping
- Everything from Bootstrapping over the network
- Users unsure if pip is installed or not (or at what point it will install)
- "Magical"

Bootstrap at Python Install Time
- Everything from Bootstrapping over the network
- Users possibly unaware that installer reaches the network
- Some users tend to not be fans of installers "Phoning Home"
  - Privacy implications?

Pre-Installation at Release Creation Time
- Users might possibly have an older version of pip
- ???

The older version of pip is just about the only real downside *for the users* of Python/pip that I can think of. This is already the case for most people using the pip provided by their Linux distribution, and it's simple to upgrade pip if the user requires a newer version using ``pip install --upgrade pip``. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu Jul 18 03:12:47 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 21:12:47 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> Message-ID: On Jul 17, 2013, at 8:40 PM, Daniel Holth wrote: > > I didn't realize the current option was about bundling pip itself > rather than including a simple bootstrap. I have favored the bootstrap > approach (being any intentionally limited installer that you would be > daft to use generally). The rationale is that we would want to avoid > bundling a soon outdated "good enough" tool that people use instead of > letting better pypi-hosted tools thrive. Is the argument here that, by including pip pre-installed, these other tools will be unable to compete? Because the same thing could be said for installing a bootstrapper as well. In fact, in either option I expect the way an alternative installer would be installed is via ``pip install foo``, regardless of whether the person needs to type ``python -m getpip`` first or not. > > Setuptools is an example of a project that has this problem. Projects > might use the [even more*] terrible distutils in preference, > admonishing others to do the same, often without understanding why > apart from "it's in the standard library". It's for more reasons than just that it's in the standard library. setuptools has had a lot of misfeatures, and a good bit of the angst against using setuptools was due to easy_install, not setuptools itself. > > I didn't believe in the pip command that installs itself because I > would have been irritated if pip was installed by surprise - maybe I > have a reason to install it a different way - perhaps from source or > from a system package. > > A bundled get-pip that avoids also having to install setuptools first, > and that is secure, and easy to remember, would be super handy. For the record I'm not against including a method for fetching pip. I expect Linux distributions to uninstall pip from the Python they ship, and it would still of course be possible to uninstall the provided pip, so an easy method to (re)install it if users happen to do that and wish to get it back doesn't seem like a bad idea to me. > > The normal way to get pip these days is to install virtualenv. After > you get it it's just one command to run and pretty convenient. > > * for the haters > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Let us not forget that the pre-installed approach is hardly a new thing for package managers. Both Ruby and Node do this with their respective package managers in order to make it simpler for their users to install packages. So it's been shown that this type of setup can work. Do we really need to add extra tedium for users? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jcappos at poly.edu Thu Jul 18 03:34:06 2013 From: jcappos at poly.edu (Justin Cappos) Date: Wed, 17 Jul 2013 21:34:06 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: <51E744FA.9040106@students.poly.edu> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> Message-ID: My impression is this only holds for things signed directly by PyPI because the developers have not registered a key. I think that developers who register keys won't have this issue. Let's talk about this when you return, but it's really projects / developers that will be stable in the common case, not packages, right? Justin On Wed, Jul 17, 2013 at 9:29 PM, Trishank Karthik Kuppusamy < tk47 at students.poly.edu> wrote: > On 07/18/2013 03:24 AM, Ronald Oussoren wrote: > >> I'm trying to understand what this means for package maintainers. If I >> understand you correctly maintainers would upload packages just like they >> do now, and packages are then automaticly signed by the "unstable" role. >> Then some manual process by the PyPI maintainers can sign a package with a >> stable row. Is that correct? If it is, how is this supposed to scale? The >> contents of PyPI is currently not vetted at all, and it seems to me that >> manually vetting uploads for even the most popular packages would be a >> significant amount of work that would have to be done by what's likely a >> small set of volunteers. >> > > I think Daniel put it best when he said that we have been focusing too > much on deciding whether or not a package is malicious. As he said, it is > important that any security proposal must limit what targeted attacks on > the PyPI infrastructure can do. > > You are right that asking people to vet through packages for inclusion > into the stable role would be generally unscalable. I think the best way to > think about it is that we can mostly decide a "stable" set of packages with > a simple rule, and then *choose* to interfere (if necessary) with decisions > on which packages go in or out of the stable role. The stable role simply > has to sign this automatically computed set of "stable" packages every now > and then, so that the impacts of attacks on the PyPI infrastructure are > limited. Users who install the same set of stable packages will see the > installation of the same set of intended packages. > > Presently, I use a simple heuristic to compute a nominal set of stable > packages: all files older than 3 months are considered to be "stable". > There is no consideration of whether a package is malicious here; just that > it has not changed long enough to be considered mature. > > > Also, what are you supposed to do when FooBar 2.0 is signed by the stable >> role and FooBar 2.0.1 is only signed by the unstable role, and you try to >> fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? >> >> > In this case, I expect that since we have asked pip to install FooBar > 2.0.*, it will first fetch the /simple/FooBar/ PyPI metadata (distinct from > TUF metadata) to see what versions of the FooBar package are available. If > FooBar 2.0.1 was recently added, then the latest version of the > /simple/FooBar/ metadata would have been signed for the unstable role. 
> There are two cases for the stable role: > > 1. The stable role has also signed for the FooBar 2.0.1 package. In this > case, pip would find FooBar 2.0.1 and install it. > 2. The stable role has not yet signed for the FooBar 2.0.1 package. In > this case, pip would find FooBar 2.0 and install it. > > Why would this happen? In this case, we have specified in the TUF metadata > that if the same file (in this case, the /simple/FooBar/ HTML file) has > been signed for by both the stable and unstable roles, then the client must > prefer the version from the stable role. > > Of course, there are questions about timeliness. Sometimes users want the > latest packages, or the developers of the packages themselves may want this > to be the case. For the purposes of bootstrapping PyPI with TUF, we have > presently decided to simplify key management and allow for the protection > of some valuable packages on PyPI (with limited timeliness trade-off) while > allowing for the majority of the packages to be continuously released. > > There are a few ways to ensure that the latest intended versions of the > FooBar package will be installed: > 1. Do not nominate FooBar into the "stable" set of packages, which should > ideally be reserved --- for initial bootstrapping purposes at least --- for > perhaps what the community thinks are the "canonical" packages that must > initially be protected from attacks. > 2. The stable role may delegate its responsibility about information on > the FooBar package to the FooBar package developers themselves. > 3. Explore different rules (other than just ordering roles by trust) to > balance key management, timeliness and other issues without significantly > sacrificing security. > > We welcome your thoughts here. For the moment, we are planning to wrap up > as soon as possible our experiments on how PyPI+pip perform with and > without TUF with this particular scheme of stable and unstable roles. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tk47 at students.poly.edu Thu Jul 18 03:29:30 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Thu, 18 Jul 2013 09:29:30 +0800 Subject: [Distutils] vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> Message-ID: <51E744FA.9040106@students.poly.edu> On 07/18/2013 03:24 AM, Ronald Oussoren wrote: > I'm trying to understand what this means for package maintainers. If I understand you correctly maintainers would upload packages just like they do now, and packages are then automaticly signed by the "unstable" role. Then some manual process by the PyPI maintainers can sign a package with a stable row. Is that correct? If it is, how is this supposed to scale? The contents of PyPI is currently not vetted at all, and it seems to me that manually vetting uploads for even the most popular packages would be a significant amount of work that would have to be done by what's likely a small set of volunteers. I think Daniel put it best when he said that we have been focusing too much on deciding whether or not a package is malicious. As he said, it is important that any security proposal must limit what targeted attacks on the PyPI infrastructure can do. You are right that asking people to vet through packages for inclusion into the stable role would be generally unscalable. 
I think the best way to think about it is that we can mostly decide a "stable" set of packages with a simple rule, and then *choose* to interfere (if necessary) with decisions on which packages go in or out of the stable role. The stable role simply has to sign this automatically computed set of "stable" packages every now and then, so that the impacts of attacks on the PyPI infrastructure are limited. Users who install the same set of stable packages will see the installation of the same set of intended packages. Presently, I use a simple heuristic to compute a nominal set of stable packages: all files older than 3 months are considered to be "stable". There is no consideration of whether a package is malicious here; just that it has not changed long enough to be considered mature. > Also, what are you supposed to do when FooBar 2.0 is signed by the stable role and FooBar 2.0.1 is only signed by the unstable role, and you try to fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? > In this case, I expect that since we have asked pip to install FooBar 2.0.*, it will first fetch the /simple/FooBar/ PyPI metadata (distinct from TUF metadata) to see what versions of the FooBar package are available. If FooBar 2.0.1 was recently added, then the latest version of the /simple/FooBar/ metadata would have been signed for the unstable role. There are two cases for the stable role: 1. The stable role has also signed for the FooBar 2.0.1 package. In this case, pip would find FooBar 2.0.1 and install it. 2. The stable role has not yet signed for the FooBar 2.0.1 package. In this case, pip would find FooBar 2.0 and install it. Why would this happen? In this case, we have specified in the TUF metadata that if the same file (in this case, the /simple/FooBar/ HTML file) has been signed for by both the stable and unstable roles, then the client must prefer the version from the stable role. Of course, there are questions about timeliness. Sometimes users want the latest packages, or the developers of the packages themselves may want this to be the case. For the purposes of bootstrapping PyPI with TUF, we have presently decided to simplify key management and allow for the protection of some valuable packages on PyPI (with limited timeliness trade-off) while allowing for the majority of the packages to be continuously released. There are a few ways to ensure that the latest intended versions of the FooBar package will be installed: 1. Do not nominate FooBar into the "stable" set of packages, which should ideally be reserved --- for initial bootstrapping purposes at least --- for perhaps what the community thinks are the "canonical" packages that must initially be protected from attacks. 2. The stable role may delegate its responsibility about information on the FooBar package to the FooBar package developers themselves. 3. Explore different rules (other than just ordering roles by trust) to balance key management, timeliness and other issues without significantly sacrificing security. We welcome your thoughts here. For the moment, we are planning to wrap up as soon as possible our experiments on how PyPI+pip perform with and without TUF with this particular scheme of stable and unstable roles. 
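(To make the selection rule described above concrete: the following is a rough, illustrative sketch of the behaviour Trishank outlines, not the actual TUF client code or metadata format. The 90-day cutoff stands in for the "older than 3 months" heuristic, and the function names are invented for this example.)

    # Illustrative sketch only - not the real TUF implementation or metadata.
    import time

    NINETY_DAYS = 90 * 24 * 60 * 60  # stands in for "older than 3 months"

    def is_nominally_stable(upload_time, now=None):
        # The proposed heuristic: a file that has gone unchanged for long
        # enough is eligible for the stable role's snapshot.
        now = time.time() if now is None else now
        return (now - upload_time) > NINETY_DAYS

    def choose_signed_copy(stable_copy, unstable_copy):
        # The client-side rule described above: when both roles have signed
        # the same target (e.g. the /simple/FooBar/ index), prefer the stable
        # role's copy; otherwise fall back to the unstable role's copy.
        return stable_copy if stable_copy is not None else unstable_copy

In the FooBar example this means a request for 2.0.* resolves to 2.0.1 only once the stable role has signed the index that lists it; until then the client sees the stable-signed index and installs 2.0.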
From donald at stufft.io Thu Jul 18 03:46:21 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 21:46:21 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: <51E744FA.9040106@students.poly.edu> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> Message-ID: <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> On Jul 17, 2013, at 9:29 PM, Trishank Karthik Kuppusamy wrote: > On 07/18/2013 03:24 AM, Ronald Oussoren wrote: >> I'm trying to understand what this means for package maintainers. If I understand you correctly maintainers would upload packages just like they do now, and packages are then automaticly signed by the "unstable" role. Then some manual process by the PyPI maintainers can sign a package with a stable row. Is that correct? If it is, how is this supposed to scale? The contents of PyPI is currently not vetted at all, and it seems to me that manually vetting uploads for even the most popular packages would be a significant amount of work that would have to be done by what's likely a small set of volunteers. > > I think Daniel put it best when he said that we have been focusing too much on deciding whether or not a package is malicious. As he said, it is important that any security proposal must limit what targeted attacks on the PyPI infrastructure can do. As I've mentioned before an online key (as is required by PyPI) means that if someone compromises PyPI they compromise the key. It seems to me that TUF is really designed to handle the case of the Linux distribution (or similar) where you have vetted maintainers who are given a subsection of the total releases. However PyPI does not have vetted authors nor the man power to sign authors keys offline. PyPI and a Linux Distro repo solve problems that appear similar but are actually quite different under the surface. I do agree however that PyPI should not attempt to discern what is malicious or not. > > You are right that asking people to vet through packages for inclusion into the stable role would be generally unscalable. I think the best way to think about it is that we can mostly decide a "stable" set of packages with a simple rule, and then *choose* to interfere (if necessary) with decisions on which packages go in or out of the stable role. The stable role simply has to sign this automatically computed set of "stable" packages every now and then, so that the impacts of attacks on the PyPI infrastructure are limited. Users who install the same set of stable packages will see the installation of the same set of intended packages. > > Presently, I use a simple heuristic to compute a nominal set of stable packages: all files older than 3 months are considered to be "stable". There is no consideration of whether a package is malicious here; just that it has not changed long enough to be considered mature. > >> Also, what are you supposed to do when FooBar 2.0 is signed by the stable role and FooBar 2.0.1 is only signed by the unstable role, and you try to fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? >> > > In this case, I expect that since we have asked pip to install FooBar 2.0.*, it will first fetch the /simple/FooBar/ PyPI metadata (distinct from TUF metadata) to see what versions of the FooBar package are available. 
If FooBar 2.0.1 was recently added, then the latest version of the /simple/FooBar/ metadata would have been signed for the unstable role. There are two cases for the stable role: > > 1. The stable role has also signed for the FooBar 2.0.1 package. In this case, pip would find FooBar 2.0.1 and install it. > 2. The stable role has not yet signed for the FooBar 2.0.1 package. In this case, pip would find FooBar 2.0 and install it. And things are stable after 3 months? This sounds completely insane. So if a package releases a security update it'll be 3 months until people get that fix by default? > > Why would this happen? In this case, we have specified in the TUF metadata that if the same file (in this case, the /simple/FooBar/ HTML file) has been signed for by both the stable and unstable roles, then the client must prefer the version from the stable role. > > Of course, there are questions about timeliness. Sometimes users want the latest packages, or the developers of the packages themselves may want this to be the case. For the purposes of bootstrapping PyPI with TUF, we have presently decided to simplify key management and allow for the protection of some valuable packages on PyPI (with limited timeliness trade-off) while allowing for the majority of the packages to be continuously released. > > There are a few ways to ensure that the latest intended versions of the FooBar package will be installed: > 1. Do not nominate FooBar into the "stable" set of packages, which should ideally be reserved --- for initial bootstrapping purposes at least --- for perhaps what the community thinks are the "canonical" packages that must initially be protected from attacks. > 2. The stable role may delegate its responsibility about information on the FooBar package to the FooBar package developers themselves. > 3. Explore different rules (other than just ordering roles by trust) to balance key management, timeliness and other issues without significantly sacrificing security. > > We welcome your thoughts here. For the moment, we are planning to wrap up as soon as possible our experiments on how PyPI+pip perform with and without TUF with this particular scheme of stable and unstable roles. > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tk47 at students.poly.edu Thu Jul 18 03:50:12 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Thu, 18 Jul 2013 09:50:12 +0800 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> Message-ID: <51E749D4.5080508@students.poly.edu> On 07/18/2013 09:34 AM, Justin Cappos wrote: > My impression is this only holds for things signed directly by PyPI > because the developers have not registered a key. I think that > developers who register keys won't have this issue. Let's talk about > this when you return, but it's really projects / developers that will > be stable in the common case, not packages, right? 
> > Yes, developers who register keys and have the stable role delegate their packages to themselves will not have this issue. When I say "package", I mean what gets downloaded and installed when pip goes to PyPI to get a package with exactly the given name. I am not aware of a way to guide pip to install packages by projects (could you clarify what you mean by this?) or developers, but perhaps this might change in the future with PyPI metadata 2.0. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcappos at poly.edu Thu Jul 18 03:52:09 2013 From: jcappos at poly.edu (Justin Cappos) Date: Wed, 17 Jul 2013 21:52:09 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: If there is not a compromise of PyPI, then all updates happen essentially instantly. Developers that do not sign packages and so PyPI signs them, may have their newest packages remain unavailable for a period of up to 3 months *if there is a compromise of PyPI*. Thanks, Justin On Wed, Jul 17, 2013 at 9:46 PM, Donald Stufft wrote: > > On Jul 17, 2013, at 9:29 PM, Trishank Karthik Kuppusamy < > tk47 at students.poly.edu> wrote: > > > On 07/18/2013 03:24 AM, Ronald Oussoren wrote: > >> I'm trying to understand what this means for package maintainers. If I > understand you correctly maintainers would upload packages just like they > do now, and packages are then automaticly signed by the "unstable" role. > Then some manual process by the PyPI maintainers can sign a package with a > stable row. Is that correct? If it is, how is this supposed to scale? The > contents of PyPI is currently not vetted at all, and it seems to me that > manually vetting uploads for even the most popular packages would be a > significant amount of work that would have to be done by what's likely a > small set of volunteers. > > > > I think Daniel put it best when he said that we have been focusing too > much on deciding whether or not a package is malicious. As he said, it is > important that any security proposal must limit what targeted attacks on > the PyPI infrastructure can do. > > As I've mentioned before an online key (as is required by PyPI) means that > if someone compromises PyPI they compromise the key. It seems to me that > TUF is really designed to handle the case of the Linux distribution (or > similar) where you have vetted maintainers who are given a subsection of > the total releases. However PyPI does not have vetted authors nor the man > power to sign authors keys offline. > > PyPI and a Linux Distro repo solve problems that appear similar but are > actually quite different under the surface. > > I do agree however that PyPI should not attempt to discern what is > malicious or not. > > > > > You are right that asking people to vet through packages for inclusion > into the stable role would be generally unscalable. I think the best way to > think about it is that we can mostly decide a "stable" set of packages with > a simple rule, and then *choose* to interfere (if necessary) with decisions > on which packages go in or out of the stable role. 
The stable role simply > has to sign this automatically computed set of "stable" packages every now > and then, so that the impacts of attacks on the PyPI infrastructure are > limited. Users who install the same set of stable packages will see the > installation of the same set of intended packages. > > > > Presently, I use a simple heuristic to compute a nominal set of stable > packages: all files older than 3 months are considered to be "stable". > There is no consideration of whether a package is malicious here; just that > it has not changed long enough to be considered mature. > > > >> Also, what are you supposed to do when FooBar 2.0 is signed by the > stable role and FooBar 2.0.1 is only signed by the unstable role, and you > try to fetch FooBar 2.0.* (that is, 2.0 or any 2.0.x point release)? > >> > > > > In this case, I expect that since we have asked pip to install FooBar > 2.0.*, it will first fetch the /simple/FooBar/ PyPI metadata (distinct from > TUF metadata) to see what versions of the FooBar package are available. If > FooBar 2.0.1 was recently added, then the latest version of the > /simple/FooBar/ metadata would have been signed for the unstable role. > There are two cases for the stable role: > > > > 1. The stable role has also signed for the FooBar 2.0.1 package. In this > case, pip would find FooBar 2.0.1 and install it. > > 2. The stable role has not yet signed for the FooBar 2.0.1 package. In > this case, pip would find FooBar 2.0 and install it. > > And things are stable after 3 months? This sounds completely insane. So if > a package releases a security update it'll be 3 months until people get > that fix by default? > > > > > Why would this happen? In this case, we have specified in the TUF > metadata that if the same file (in this case, the /simple/FooBar/ HTML > file) has been signed for by both the stable and unstable roles, then the > client must prefer the version from the stable role. > > > > Of course, there are questions about timeliness. Sometimes users want > the latest packages, or the developers of the packages themselves may want > this to be the case. For the purposes of bootstrapping PyPI with TUF, we > have presently decided to simplify key management and allow for the > protection of some valuable packages on PyPI (with limited timeliness > trade-off) while allowing for the majority of the packages to be > continuously released. > > > > There are a few ways to ensure that the latest intended versions of the > FooBar package will be installed: > > 1. Do not nominate FooBar into the "stable" set of packages, which > should ideally be reserved --- for initial bootstrapping purposes at least > --- for perhaps what the community thinks are the "canonical" packages that > must initially be protected from attacks. > > 2. The stable role may delegate its responsibility about information on > the FooBar package to the FooBar package developers themselves. > > 3. Explore different rules (other than just ordering roles by trust) to > balance key management, timeliness and other issues without significantly > sacrificing security. > > > > We welcome your thoughts here. For the moment, we are planning to wrap > up as soon as possible our experiments on how PyPI+pip perform with and > without TUF with this particular scheme of stable and unstable roles. > > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Thu Jul 18 03:54:25 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 17 Jul 2013 21:54:25 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: On Jul 17, 2013, at 9:52 PM, Justin Cappos wrote: > If there is not a compromise of PyPI, then all updates happen essentially instantly. > > Developers that do not sign packages and so PyPI signs them, may have their newest packages remain unavailable for a period of up to 3 months *if there is a compromise of PyPI*. Can you go into details about how things will graduate from unstable to stable instantly in a way that a compromise of PyPI doesn't also allow that? > > Thanks, > Justin > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jcappos at poly.edu Thu Jul 18 04:06:04 2013 From: jcappos at poly.edu (Justin Cappos) Date: Wed, 17 Jul 2013 22:06:04 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: Sure. The "stable" key is kept offline (not on PyPI). It knows who the developers for projects are and delegates trust to them. So Django (for example), has its key signed by this offline key. The "bleeding-edge" key is kept online on PyPI. It is used to sign project keys for projects newer than the last use of the stable key. If I register new project "mycoolnewpypiproject" and choose to sign my packages then it delegates trust to me. Importantly, if the stable and bleeding-edge roles trust the same project name with different keys, the stable role's key is used. A malicious attacker that can hack PyPI can get access to the bleeding-edge key and also some other items that say how timely the data is and similar things. They could say that "mycoolnewpypiproject" is actually signed by a different key than mine because they possess the bleeding-edge role. However, they can't (convincingly) say that Django is signed by a different key because the stable key already has this role listed. Sorry for any confusion about this. We will provide a bunch of other information soon (should we do this as a PEP?) along with example metadata and working code. We definitely appreciate any feedback. Thanks, Justin On Wed, Jul 17, 2013 at 9:54 PM, Donald Stufft wrote: > > On Jul 17, 2013, at 9:52 PM, Justin Cappos wrote: > > > If there is not a compromise of PyPI, then all updates happen > essentially instantly. > > > > Developers that do not sign packages and so PyPI signs them, may have > their newest packages remain unavailable for a period of up to 3 months *if > there is a compromise of PyPI*. > > Can you go into details about how things will graduate from unstable to > stable instantly in a way that a compromise of PyPI doesn't also allow that? 
> > > > > Thanks, > > Justin > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 05:37:20 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 17 Jul 2013 20:37:20 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: > As of right now the User's Guide doesn't mention using setuptools for > building (beyond an empty header listing) and goes with the old distutils > setup.py approach. It also words things like you don't know how to really > use Python and are starting a project entirely from scratch. > Although most of the text from the original Hitchhiker Guide is gone at this point (since the "fork" a few months back), the "Packaging Tutorial" as it is, is mostly still carryover from that. Don't take it is intentional new writing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 18 05:44:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 13:44:58 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 July 2013 10:33, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > More importantly, it doesn't seem like the PEP process has been followed, as > other proposed alternatives (I mean the approach of "python -m getpip", as > well as my specific suggested getpip.py) have not received adequate review > or obvious negative feedback, nor have the pros and cons of bootstrapping > vs. bundling been presented coherently and then pronounced upon. Then (help) write the missing PEP! PEP's don't appear out of nowhere, they happen because people write them. That's why I sent a request to the list explicitly asking for someone to write a competitor to PEP 439 *because* I wasn't going to accept it, so we need something else to champion one or more of the alternatives. So far, Paul Nasrat is the only person who offered to take on that task, and he has yet to respond to my acceptance of that offer (which I'm not reading too much into at this point - I only sent that reply a day or two ago, and I expect that like the rest of us, Paul has plenty of other things to be working on in addition to Python packaging). There are only two approaches that are completely out of the running at this point: * implicit bootstrapping of pip at first use (as PEP 439 proposed) * promoting anything other than pip as the default installer Various other options for "how do we make it easier for end users to get started with pip" are all still technically on the table, including: * explicit command line based bootstrapping of pip by end users (just slightly cleaned up from the status quo) * creating Windows and Mac OS X installers for pip (since using wget/curl to run a script is either not possible or just an entirely strange notion there and forms a major part of the bootstrapping problem - after all, we expect people to be able to use the CPython Windows and Mac OS X installers just fine, why should they have any more trouble with an installer for pip?) 
* implicit bootstrapping of pip by the CPython Windows and Mac OS X installers * implicit bootstrapping of pip by the Python Launcher for Windows * bundling pip with the CPython Windows and Mac OS X installers (and using it to upgrade itself) * bundling pip with the Python Launcher for Windows (and using it to upgrade itself) Yes, I have my opinions and will try to nudge things in particular directions that I think are better, but until someone sits down and *actually writes the PEP for it*, I won't know how justified those opinions are. Even though I have already stated my dislike for some of these approaches (up to and including misstating that dislike as "not going to happen"), that just means the arguments in favour would need to be a bit more persuasive to convince me I am wrong. The problem statement also needs to be updated to cover the use case of an instructor running a class and wanting to offer a local PyPI server (or other cache) without a reliable network connection to the outside world, since *that* is the main argument against the bootstrapping based solutions. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Jul 18 05:52:35 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 13:52:35 +1000 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: On 18 July 2013 12:06, Justin Cappos wrote: > Sorry for any confusion about this. We will provide a bunch of other > information soon (should we do this as a PEP?) along with example metadata > and working code. We definitely appreciate any feedback. It's probably too early for a PEP (since we already have way too many other things in motion for people to sensibly keep track of), but this certainly sounds promising - a post summarising your efforts to date would be really helpful. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jcappos at poly.edu Thu Jul 18 06:03:47 2013 From: jcappos at poly.edu (Justin Cappos) Date: Thu, 18 Jul 2013 00:03:47 -0400 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: Okay, we'll get this together once Trishank returns and we've had a chance to write up the latest. Justin On Wed, Jul 17, 2013 at 11:52 PM, Nick Coghlan wrote: > On 18 July 2013 12:06, Justin Cappos wrote: > > Sorry for any confusion about this. We will provide a bunch of other > > information soon (should we do this as a PEP?) along with example > metadata > > and working code. We definitely appreciate any feedback. > > It's probably too early for a PEP (since we already have way too many > other things in motion for people to sensibly keep track of), but this > certainly sounds promising - a post summarising your efforts to date > would be really helpful. > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 06:30:29 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 17 Jul 2013 21:30:29 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: > > > But it also sounds like that project providing wheel distributions is too > early to include in the User's Guide. > My intention is for the user guide to cover building and installing wheels. https://bitbucket.org/pypa/python-packaging-user-guide/issue/11/include-instructions-on-wheel-building-and -------------- next part -------------- An HTML attachment was scrubbed... URL: From liamk at numenet.com Thu Jul 18 06:28:13 2013 From: liamk at numenet.com (Liam Kirsher) Date: Wed, 17 Jul 2013 21:28:13 -0700 Subject: [Distutils] distribute 0.7.3 causing installation error? Message-ID: <51E76EDD.7040009@numenet.com> Hi, I ran into an error about a month ago caused by a change in the PyPi version of distribute. Thankfully, someone was able to roll back the change. Unfortunately, I'm getting a similar kind of problem now -- and I notice that 0.7.3 was released on 5 July, so... I'm wondering if it might be related. This is being included in a Chef recipe. I'm attaching the pip.log, which shows it uninstalling distribute (which looks like version 0.6.49), and then failing to find it and attempting to install 0.7.3, and subsequent package installs failing. Anyway, I'm not quite sure what to do here! How can I fix this problem? (And also, how can I prevent it from happening in the future by pegging the version to something that works?) The pip recipe includes the following comments, which may be relevant. > # Ubuntu's python-setuptools, python-pip and py thon-virtualenv packages > # are broken...this feels like Rubygems! 
> # > http://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python > # https://bitbucket.org/ianb/pip/issue/104/pip-uninstall-on-ubuntu-linux > remote_file "#{Chef::Config[:file_cache_path]}/distribute_setup.py" do > source node['python']['distribute_script_url'] > mode "0644" > not_if { ::File.exists?(pip_binary) } > end > execute "install-pip" do > cwd Chef::Config[:file_cache_path] > command <<-EOF > #{node['python']['binary']} distribute_setup.py > --download-base=#{node['python']['distribute_option']['download_base']} > #{::File.dirname(pip_binary)}/easy_install pip > EOF > not_if { ::File.exists?(pip_binary) } > end Chef run log: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: > python::virtualenv > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > python_pip[virtualenv] action install > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - install package > python_pip[virtualenv] version latest > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: > supervisor::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > python_pip[supervisor] action upgrade > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ================================================================================ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Error executing > action `upgrade` on resource 'python_pip[supervisor]' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ================================================================================ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > Mixlib::ShellOut::ShellCommandFailed > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ------------------------------------ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Expected process to > exit with [0], but received '1' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- Begin output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDOUT: > Downloading/unpacking supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > distribute from > https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a > (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > meld3>=0.6.5 (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running 
setup.py > egg_info for package meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > setuptools>=0.7 (from distribute->supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing collected > packages: supervisor, distribute, meld3, setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Skipping > installation of > /usr/local/lib/python2.7/dist-packages/supervisor/__init__.py > (namespace package) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > /usr/local/lib/python2.7/dist-packages/supervisor-3.0b2-py2.7-nspkg.pth > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > echo_supervisord_conf script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > pidproxy script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisorctl script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisord script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Found existing > installation: distribute 0.6.49 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Uninstalling > distribute: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Successfully > uninstalled distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File > "", line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Complete output > from command /usr/bin/python -c "import > setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File "", > line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ---------------------------------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Command > /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > -root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed failed with error code 1 in > /tmp/pip-build-root/meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Storing complete log > in /home/ubuntu/.pip/pip.log > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDERR: > 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- End output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Ran pip install > --upgrade supervisor returned 1 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Cookbook Trace: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com --------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:155:in `pip_cmd' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:139:in `install_package' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:144:in `upgrade_package' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:60:in `block (2 > levels) in class_from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > /var/chef/cache/cookbooks/python/providers/pip.rb:58:in `block in > class_from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Resource Declaration: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com --------------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com # In > /var/chef/cache/cookbooks/supervisor/recipes/default.rb > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 29: python_pip > "supervisor" do > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 30: action :upgrade > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 31: version > node['supervisor']['version'] if node['supervisor']['version'] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 32: end > ec2-54-245-36-62.us-west-2.compute.amazonaws.com 33: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Compiled Resource: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ------------------ > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com # Declared in > /var/chef/cache/cookbooks/supervisor/recipes/default.rb:29:in `from_file' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > python_pip("supervisor") do > ec2-54-245-36-62.us-west-2.compute.amazonaws.com action [:upgrade] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com retries 0 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com retry_delay 2 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com cookbook_name > "supervisor" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com recipe_name "default" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com package_name > "supervisor" > 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com timeout 900 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com options " --upgrade" > ec2-54-245-36-62.us-west-2.compute.amazonaws.com end > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: ntp::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * service[ntp] > action restart > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - restart service > service[ntp] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Recipe: rabbitmq::default > ec2-54-245-36-62.us-west-2.compute.amazonaws.com * > service[rabbitmq-server] action restart > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com - restart service > service[rabbitmq-server] > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:50+00:00] ERROR: Running exception handlers > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: Saving node information to > /var/chef/cache/failed-run-data.json > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] ERROR: Exception handlers complete > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Chef Client failed. > 35 resources updated > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: Stacktrace dumped to > /var/chef/cache/chef-stacktrace.out > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > [2013-07-16T02:36:51+00:00] FATAL: > Mixlib::ShellOut::ShellCommandFailed: python_pip[supervisor] > (supervisor::default line 29) had an error: > Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with > [0], but received '1' > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- Begin output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDOUT: > Downloading/unpacking supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > distribute from > https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a > (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > meld3>=0.6.5 (from supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Downloading/unpacking > setuptools>=0.7 (from distribute->supervisor) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > egg_info for package setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing collected > packages: supervisor, distribute, meld3, setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for supervisor > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Skipping > installation of > /usr/local/lib/python2.7/dist-packages/supervisor/__init__.py > (namespace package) > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > /usr/local/lib/python2.7/dist-packages/supervisor-3.0b2-py2.7-nspkg.pth > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > echo_supervisord_conf script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > pidproxy script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisorctl script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Installing > supervisord script to /usr/local/bin > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Found existing > installation: distribute 0.6.49 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Uninstalling > distribute: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Successfully > uninstalled distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for distribute > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Running setup.py > install for meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File > "", line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Complete output > from command /usr/bin/python -c "import > setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Traceback (most > recent call last): > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com File " ec2-54-245-36-62.us-west-2.compute.amazonaws.com g>", line 1, in > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ImportError: No > module named setuptools > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ec2-54-245-36-62.us-west-2.compute.amazonaws.com > ---------------------------------------- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Command > /usr/bin/python -c "import > setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', > '\n'), __file__, 'exec'))" install --record > /tmp/pip-mDCOBa-record/install-record.txt > --single-version-externally-managed failed with error code 1 in > /tmp/pip-build-root/meld3 > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Storing complete log > in /home/ubuntu/.pip/pip.log > ec2-54-245-36-62.us-west-2.compute.amazonaws.com STDERR: > ec2-54-245-36-62.us-west-2.compute.amazonaws.com ---- End output of > pip install --upgrade supervisor ---- > ec2-54-245-36-62.us-west-2.compute.amazonaws.com Ran pip install > --upgrade supervisor returned 1 -- Liam Kirsher PGP: http://liam.numenet.com/pgp/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pip.log
Type: text/x-log
Size: 149042 bytes
Desc: not available
URL: 

From liamk at numenet.com  Thu Jul 18 06:28:34 2013
From: liamk at numenet.com (Liam Kirsher)
Date: Wed, 17 Jul 2013 21:28:34 -0700
Subject: [Distutils] distribute 0.7.3 causing installation error? follow up
Message-ID: <51E76EF2.6080300@numenet.com>

Hi,

Also, just noticed that 0.7.3 is available as .zip, but not as .tar.gz like
the others are.

Liam

-- 
Liam Kirsher
PGP: http://liam.numenet.com/pgp/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From qwcode at gmail.com  Thu Jul 18 07:21:53 2013
From: qwcode at gmail.com (Marcus Smith)
Date: Wed, 17 Jul 2013 22:21:53 -0700
Subject: [Distutils] distribute 0.7.3 causing installation error?
In-Reply-To: <51E76EDD.7040009@numenet.com>
References: <51E76EDD.7040009@numenet.com>
Message-ID: 

Hello Liam:
The problem and solutions are explained here:
https://github.com/pypa/pip/issues/1033#issuecomment-20546202
Btw, the issue includes comments from chef maintainers about a similar (or
the same) supervisor recipe.
Marcus

On Wed, Jul 17, 2013 at 9:28 PM, Liam Kirsher wrote:

> Hi,
>
> I ran into an error about a month ago caused by a change in the PyPI
> version of distribute.  Thankfully, someone was able to roll back the
> change.  Unfortunately, I'm getting a similar kind of problem now -- and I
> notice that 0.7.3 was released on 5 July, so... I'm wondering if it might
> be related.  This is being included in a Chef recipe.
>
> I'm attaching the pip.log, which shows it uninstalling distribute (which
> looks like version 0.6.49), and then failing to find it and attempting to
> install 0.7.3, and subsequent package installs failing.
>
> Anyway, I'm not quite sure what to do here!  How can I fix this problem?
> (And also, how can I prevent it from happening in the future by pegging the
> version to something that works?)
>
> The pip recipe includes the following comments, which may be relevant.
>
> # Ubuntu's python-setuptools, python-pip and python-virtualenv packages
> # are broken...this feels like Rubygems!
> # http://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python
> # https://bitbucket.org/ianb/pip/issue/104/pip-uninstall-on-ubuntu-linux
> remote_file "#{Chef::Config[:file_cache_path]}/distribute_setup.py" do
>   source node['python']['distribute_script_url']
>   mode "0644"
>   not_if { ::File.exists?(pip_binary) }
> end
> execute "install-pip" do
>   cwd Chef::Config[:file_cache_path]
>   command <<-EOF
>   #{node['python']['binary']} distribute_setup.py --download-base=#{node['python']['distribute_option']['download_base']}
>   #{::File.dirname(pip_binary)}/easy_install pip
>   EOF
>   not_if { ::File.exists?(pip_binary) }
> end
>
> Chef run log:
>
> [Chef run log snipped; it is identical to the log in Liam's original
> message above.]
>
> --
> Liam Kirsher
> PGP: http://liam.numenet.com/pgp/
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Jul 18 08:45:24 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 07:45:24 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> Message-ID: On 18 July 2013 02:03, Donald Stufft wrote: > it's simple to upgrade the pip if the user requires a newer version > of pip using ``pip install --upgrade pip` > Please don't gloss over the potential issues with upgrading in the face of in-use exe wrappers. We have a design for a solution, but as yet no working code. I expect to work on this, but my time is limited and I'm not at all sure there won't be issues still to resolve. (Obviously, anyone else is welcome to help, but it's a "windows issue", so I don't know how much interest there will be from non-Windows developers). Prior to the setuptools move away from 2to3, my standard response to anyone reporting issues with in-place upgrades of setuptools or pip (certainly on Windows, and in general anywhere else too) was "well, don't do that - remove and reinstall manually". Things are better now, but not yet perfect and I don't believe that there is a consensus that this is acceptable for a bundled pip. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 18 08:51:23 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 02:51:23 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> Message-ID: <57801DDF-4DA2-4F97-A2DF-686CAEE8005C@stufft.io> On Jul 18, 2013, at 2:45 AM, Paul Moore wrote: > On 18 July 2013 02:03, Donald Stufft wrote: > it's simple to upgrade the pip if the user requires a newer version > of pip using ``pip install --upgrade pip` > > Please don't gloss over the potential issues with upgrading in the face of in-use exe wrappers. We have a design for a solution, but as yet no working code. I expect to work on this, but my time is limited and I'm not at all sure there won't be issues still to resolve. (Obviously, anyone else is welcome to help, but it's a "windows issue", so I don't know how much interest there will be from non-Windows developers). That's a bug ;) And will be worked around one way or another even if I need to install Windows to make it happen in time. > > Prior to the setuptools move away from 2to3, my standard response to anyone reporting issues with in-place upgrades of setuptools or pip (certainly on Windows, and in general anywhere else too) was "well, don't do that - remove and reinstall manually". Things are better now, but not yet perfect and I don't believe that there is a consensus that this is acceptable for a bundled pip. I consider "remove and reinstall" to be a terrible UX and if that's the best answer pip can give we need to fix that regardless. But as I said I don't mind ``python -mgetpip`` existing for one reason or another. I just don't think a bootstrap command is our best option for providing the most streamlined user experience. 
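
[The "in-use exe wrappers" issue above comes down to a Windows file-locking
rule: an executable that is currently running cannot be rewritten. The sketch
below is illustrative only -- it is not pip's code, and sys.executable merely
stands in for a running Scripts\pip.exe wrapper.]

    # Illustration only: why an in-place "pip install --upgrade pip" is awkward
    # on Windows. While an .exe is executing, Windows keeps the file locked, so
    # pip.exe cannot replace itself; "python -m pip install --upgrade pip"
    # avoids the problem because the locked image is python.exe instead.
    import sys

    def can_rewrite(path):
        """Try to open a file for in-place writing, as an upgrade would need to."""
        try:
            with open(path, "r+b"):
                return True
        except (IOError, OSError) as exc:  # PermissionError for a locked .exe
            print("cannot rewrite %r: %s" % (path, exc))
            return False

    if __name__ == "__main__":
        # On Windows this reports access denied for the running interpreter;
        # a running Scripts\pip.exe wrapper behaves the same way.
        can_rewrite(sys.executable)
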
Either way running an old pip is hardly that big of a deal. Anyone using a Linux distro is likely to be running an older version unless they've gone out of their way to upgrade it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Jul 18 09:20:58 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 08:20:58 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 July 2013 23:18, Brett Cannon wrote: > As of right now the User's Guide doesn't mention using setuptools for > building (beyond an empty header listing) and goes with the old distutils > setup.py approach. It also words things like you don't know how to really > use Python and are starting a project entirely from scratch. > Just picking up on this question: 1. As Brett says, is the recommendation that everyone should use setuptools? 2. If that's the case, why aren't we bundling setuptools in the same way that we are bundling pip? 3. If we were bundling setuptools, pip wouldn't need to go through the rigmarole of vendoring it. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 18 09:29:29 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 03:29:29 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On Jul 18, 2013, at 3:20 AM, Paul Moore wrote: > On 17 July 2013 23:18, Brett Cannon wrote: > As of right now the User's Guide doesn't mention using setuptools for building (beyond an empty header listing) and goes with the old distutils setup.py approach. It also words things like you don't know how to really use Python and are starting a project entirely from scratch. > > Just picking up on this question: > 1. As Brett says, is the recommendation that everyone should use setuptools? > 2. If that's the case, why aren't we bundling setuptools in the same way that we are bundling pip? > 3. If we were bundling setuptools, pip wouldn't need to go through the rigmarole of vendoring it. Personally I think pip should be vendoring setuptools regardless. A package manager with dependencies is strange and there have been quite a few problems caused by setuptools getting in a bad state. > > Paul. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From vinay_sajip at yahoo.co.uk  Thu Jul 18 09:44:45 2013
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 18 Jul 2013 08:44:45 +0100 (BST)
Subject: [Distutils] Fw: Q about best practices now (or near future)
In-Reply-To: <1374133287.86696.YahooMailNeo@web171405.mail.ir2.yahoo.com>
References: <1374133287.86696.YahooMailNeo@web171405.mail.ir2.yahoo.com>
Message-ID: <1374133485.10736.YahooMailNeo@web171405.mail.ir2.yahoo.com>

Sorry, accidentally left distutils-sig off when replying.

----- Forwarded Message -----
> From: Vinay Sajip
> To: Nick Coghlan
> Cc: 
> Sent: Thursday, 18 July 2013, 8:41
> Subject: Re: [Distutils] Q about best practices now (or near future)
>
>> From: Nick Coghlan
>>
>> Then (help) write the missing PEP! PEPs don't appear out of nowhere,
>
> I think I have been helping as and when I can by participating in the various
> discussions, but the PEP has to be written by a champion. I clearly can't be
> a champion for this, else why would I be working on distlib? That's what I
> currently see as the way forward, obviously, but it's premature to look at a
> PEP for it because it hasn't had enough exposure or peer review.
>
> I have no particular axe to grind against pip - I did a lot of the core work for
> the single code-base port, speeded up the test suite a fair bit, and have
> contributed other bits and bobs. However, it is the past and present of
> packaging, as I see it, and not a worthy long-term future - it has too much
> technical debt. As the de facto installer for Python, pip needs no additional
> new endorsement, in my view. If I had to choose, I would say I find none of the
> choices especially appetising, but I would choose an explicit bootstrap over the
> others. Note that installing Distribute/pip was explicitly removed from the
> pyvenv script before 3.3 beta, because of python-dev concerns about promoting
> specific third-party solutions in the stdlib (even though they were the de facto
> tools for Python 3.x, and endorsed as such by python-dev).
>
> Nothing has essentially changed from the 3.3 beta time frame. People still use
> pip, just as they always did. The recommendation from python-dev is as it always
> was (use pip), with a slight alteration on the Distribute front due to the merge
> with setuptools. Neither pip nor setuptools are *significantly* better than they
> were in functional terms, and if they weren't the right solution when
> distutils2/packaging was mooted, I don't see why that should have changed now.
>
>> these approaches (up to and including misstating that dislike as "not
>> going to happen"), that just means the arguments in favour would need
>> to be a bit more persuasive to convince me I am wrong.
>
> That's not how "not going to happen" comes across. You're saying it's a
> misstatement in this off-list mail, but as you are the packaging BDFL,
> some people on-list would just give up when they saw that.
>
>> The problem statement also needs to be updated to cover the use case
>> of an instructor running a class and wanting to offer a local PyPI
>> server (or other cache) without a reliable network connection to the
>> outside world, since *that* is the main argument against the
>> bootstrapping based solutions.
>
> How widespread is that scenario, really, in this day and age? I consider
> this a straw man.
> If that really is a case to cover, you can make a getpip script cover
> this contingency with a command-line argument, the pip and setuptools packages
> can be stored on the local PyPI cache, and so on. It's no more onerous than
> explaining to the students, for example, the pip command line parameters you
> would need to specify to access a local PyPI cache. From my experience, over the
> course of a class students will run many commands, some of which they don't
> fully understand, under the guidance of the instructor.
>
> I have to say, I'm not comfortable with the *level* of some of the
> arguments/points put forward - for example, that "we already had a get-pip
> command, using curl URL | python". They come across as unconsidered, more
> like rationalisations for a course already set, and it's hard to engage in a
> debate which doesn't feel right.
>
> Regards,
>
> Vinay Sajip
>

From p.f.moore at gmail.com  Thu Jul 18 09:45:23 2013
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 18 Jul 2013 08:45:23 +0100
Subject: [Distutils] Q about best practices now (or near future)
In-Reply-To: 
References: 
Message-ID: 

On 18 July 2013 08:29, Donald Stufft wrote:
> Personally I think pip should be vendoring setuptools regardless. A
> package manager with dependencies is strange and there have been quite a
> few problems caused by setuptools getting in a bad state.

Agreed on the dependency point (but I don't consider "depends on something
bundled with Python" as being an external dependency, hence my question).
As regards vendoring, I'm reserving judgement until I see the code - I
think getting something working is more important than discussing what
might be hard to implement...

Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vinay_sajip at yahoo.co.uk  Thu Jul 18 09:50:49 2013
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Thu, 18 Jul 2013 08:50:49 +0100 (BST)
Subject: [Distutils] Q about best practices now (or near future)
In-Reply-To: 
References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io>
Message-ID: <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com>

> It's for more reasons than it's in the standard library. setuptools has
> had a lot of misfeatures and a good bit of the angst against not using
> setuptools was due to easy_install not setuptools itself.

It's hard to disentangle the two - it's not as if the easy_install
functionality is completely separate, or as if its behaviour could be
changed independently. Another thing about setuptools which some don't
especially like is that generated scripts reference pkg_resources, for no
particularly good reason.

Regards,

Vinay Sajip

From ncoghlan at gmail.com  Thu Jul 18 09:57:14 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 18 Jul 2013 17:57:14 +1000
Subject: [Distutils] Q about best practices now (or near future)
In-Reply-To: 
References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io>
	<407ACD37-78F5-496E-A596-434C821B0773@stufft.io>
	<7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io>
Message-ID: 

On 18 July 2013 16:45, Paul Moore wrote:
> On 18 July 2013 02:03, Donald Stufft wrote:
>> it's simple to upgrade the pip if the user requires a newer version
>> of pip using ``pip install --upgrade pip``
>
> Please don't gloss over the potential issues with upgrading in the face of
> in-use exe wrappers. We have a design for a solution, but as yet no working
> code. I expect to work on this, but my time is limited and I'm not at all
> sure there won't be issues still to resolve.
(Obviously, anyone else is > welcome to help, but it's a "windows issue", so I don't know how much > interest there will be from non-Windows developers). > > Prior to the setuptools move away from 2to3, my standard response to anyone > reporting issues with in-place upgrades of setuptools or pip (certainly on > Windows, and in general anywhere else too) was "well, don't do that - remove > and reinstall manually". Things are better now, but not yet perfect and I > don't believe that there is a consensus that this is acceptable for a > bundled pip. Making in-place upgrades using "pip install --upgrade pip" reliable on Windows is definitely the preferred solution, but it isn't a show stopper if it isn't ready for 3.4. Requiring that in-place upgrades be run as "python -m pip install --upgrade pip" would be acceptable, so long as the direct invocation ("pip install --upgrade pip") was detected and a clear error thrown suggesting the other command (this would be mildly annoying, but it's still a substantial improvement over the status quo). Something like: "Due to an unfortunate limitation of pip on Windows, direct upgrades are not supported. Please run 'python -m pip install --upgrade pip' to work around the problem." Shipping an msi installer for pip (perhaps bundling with setuptools) would also be an acceptable alternative. Bundling both with the "Python launcher for Windows" installer is definitely something we should consider for older versions (rather than updating the CPython installer). Either way, Windows users are used to downloading and running installers to get Python upgrades :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Thu Jul 18 10:10:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 09:10:53 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> Message-ID: On 18 July 2013 08:57, Nick Coghlan wrote: > Shipping an msi installer for pip (perhaps bundling with setuptools) > would also be an acceptable alternative. > -1. I would suggest that this approach, if it were considered seriously, should be reviewed carefully by someone who understands MSI installers (not me!). Specifically, if I install pip via an MSI, then use "python -m pip install -U pip", will the "Add/Remove Programs" entry created by the MSI still uninstall cleanly? Broken uninstall options and incomplete package removals are a perennial problem on Windows, usually caused by messing with installed files outside control of the installer. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Jul 18 10:22:56 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 18:22:56 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> Message-ID: On 18 July 2013 18:10, Paul Moore wrote: > On 18 July 2013 08:57, Nick Coghlan wrote: >> >> Shipping an msi installer for pip (perhaps bundling with setuptools) >> would also be an acceptable alternative. > > > -1. 
> > I would suggest that this approach, if it were considered seriously, should > be reviewed carefully by someone who understands MSI installers (not me!). > Specifically, if I install pip via an MSI, then use "python -m pip install > -U pip", will the "Add/Remove Programs" entry created by the MSI still > uninstall cleanly? Broken uninstall options and incomplete package removals > are a perennial problem on Windows, usually caused by messing with installed > files outside control of the installer. This potential problem needs to be taken into account for any bundling solution as well. Explicit bootstrapping (with an install time option to invoke it in the CPython and Python launcher for Windows installers) is looking better all the time :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Jul 18 10:25:03 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 18:25:03 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 18 July 2013 17:50, Vinay Sajip wrote: >> It's for more reasons than it's in the standard library. setuptools has > >> had a lot of misfeatures and a good bit of the angst against not using >> setuptools was due to easy_install not setuptools itself. > > It's hard to disentangle the two - it's not as if the easy_install functionality is completely separate, and it's possible to change its behaviour independently. Another thing about setuptools which some don't especially like is that generated scripts reference pkg_resources, for no particularly good reason. It would actually be nice if "pkg_resources" and "setuptools-core" were available as separate PyPI distributions, and setuptools bundled them together with easy_install. It's a *long* way down the priority list thing (and will likely never make it to the top, although it may be more practical once pip vendors the bits it needs). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Thu Jul 18 10:30:42 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 04:30:42 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> < CACac1F_NuLhRwYzqUgxiZWSyLqV6uBOh8Ue4PBR1xUBrJUuKhg@mail.gmail.com> Message-ID: On Jul 18, 2013, at 4:22 AM, Nick Coghlan wrote: > On 18 July 2013 18:10, Paul Moore wrote: >> On 18 July 2013 08:57, Nick Coghlan wrote: >>> >>> Shipping an msi installer for pip (perhaps bundling with setuptools) >>> would also be an acceptable alternative. >> >> >> -1. >> >> I would suggest that this approach, if it were considered seriously, should >> be reviewed carefully by someone who understands MSI installers (not me!). >> Specifically, if I install pip via an MSI, then use "python -m pip install >> -U pip", will the "Add/Remove Programs" entry created by the MSI still >> uninstall cleanly? Broken uninstall options and incomplete package removals >> are a perennial problem on Windows, usually caused by messing with installed >> files outside control of the installer. > > This potential problem needs to be taken into account for any bundling > solution as well. 
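
[Oscar's question about how many commands a non-distutils setup.py would have
to support can be made concrete with a rough sketch. This is not an
established convention and is not taken from any of the projects discussed;
"otherbuildtool" and the chosen command subset are placeholders only.]

    #!/usr/bin/env python
    # A sketch of a minimal "shim" setup.py: it accepts only an explicit subset
    # of the commands listed above and hands the real work to some other build
    # system. "otherbuildtool" is a made-up placeholder, not a real project.
    import subprocess
    import sys

    SUPPORTED = ("build", "sdist", "bdist_wheel", "install", "clean")

    def main(argv):
        if not argv or argv[0] not in SUPPORTED:
            sys.exit("setup.py shim: unsupported command %r (supported: %s)"
                     % (argv[0] if argv else None, ", ".join(SUPPORTED)))
        # Forward the command and any remaining options unchanged.
        return subprocess.call(["otherbuildtool", argv[0]] + argv[1:])

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))
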
Explicit bootstrapping (with an install time option > to invoke it in the CPython and Python launcher for Windows > installers) is looking better all the time :) That's only a problem if we make a MSI installer. Which I don't think we need to do. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Jul 18 10:35:52 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 18:35:52 +1000 Subject: [Distutils] Fw: Q about best practices now (or near future) In-Reply-To: <1374133485.10736.YahooMailNeo@web171405.mail.ir2.yahoo.com> References: <1374133287.86696.YahooMailNeo@web171405.mail.ir2.yahoo.com> <1374133485.10736.YahooMailNeo@web171405.mail.ir2.yahoo.com> Message-ID: On 18 July 2013 17:44, Vinay Sajip wrote: > Sorry, accidentally left distutils-sig off when replying. Since I already replied off list, there's no way this dual conversation over the same emails will get confusing, nope, uh-uh :) > ----- Forwarded Message ----- >> From: Vinay Sajip >> To: Nick Coghlan >> Cc: >> Sent: Thursday, 18 July 2013, 8:41 >> Subject: Re: [Distutils] Q about best practices now (or near future) >> >>> From: Nick Coghlan >> >> >> >>> Then (help) write the missing PEP! PEP's don't appear out of >> nowhere, >> >> I think I have been helping as and when I can by participating in the various >> discussions, I apologise for that crack - you deserve better than that. > but the PEP has to be written by a champion. I clearly can't be >> a champion for this, As I said off-list, I think you might make a good champion for an explicit bootstrapping PEP. I've already stated I'm no longer a fan of implicit bootstrapping at first use, but some valid concerns have been raised about the bundling approach as well. A middle ground where: 1. We provide an explicit bootstrapping script as a "getpip" module 2. This is added to the standard library in 3.4+ 3. This is added to the Python Launcher for Windows for earlier versions 4. The following installers gain a "Boostrap pip?" option: * Python Launcher for Windows * CPython Windows installer * CPython Mac OS X installer >>> these approaches (up to and including misstating that dislike as "not >> >>> going to happen"), that just means the arguments in favour would need >>> to be a bit more persuasive to convince me I am wrong. >> >> That's not how "not going to happen" comes across. You're >> saying it's a misstatement in this off-list mail, but as you are the >> packaging BDFL, some people on-list would just give up when they saw that. Agreed, that's why I'm correcting the record now :) >>> The problem statement also needs to be updated to cover the use case >>> of an instructor running a class and wanting to offer a local PyPI >>> server (or other cache) without a reliable network connection to the >>> outside world, since *that* is the main argument against the >>> bootstrapping based solutions. >> >> >> How widespread is that scenario, really, in this day and age? I consider this a >> straw man. 
If that really is a case to cover, you can make a getpip script cover >> this contingency with a command-line argument, the pip and setuptools packages >> can be stored on the local PyPI cache, and so on. It's no more onerous than >> explaining to the students, for example, the pip command line parameters you >> would need to specify to access a local PyPI cache. From my experience, over the >> course of a class students will run many commands, some of which they don't >> fully understand, under the guidance of the instructor. Yes, an explicit bootstrapping PEP could definitely make a case for being able to handle that scenario. The reason it's important is that I really want for us to be able to provide relatively straightforward instructions to handle the following two cases: * Tutorials, etc, with unreliable conference and other venue uplinks (but reasonable local connectivity or just passing USB keys around) * Remote events where the uplink has a strict download quota, so minimising bandwidth usage is important >> I have to say, I'm not comfortable with the *level* of some of the >> arguments/points put forward - for example, that "we already had a get-pip >> command, using curl URL | python". They come across as unconsidered, more >> like rationalisations for a course already set, and it's hard to engage in a >> debate which doesn't feel right. I think a large part of that is the natural disappointment from thinking we were close to having everything resolved through the implicit bootstrap mechanism, and then realising that wasn't going to happen after all and we still have more work to do. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From holger at merlinux.eu Thu Jul 18 10:36:47 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 18 Jul 2013 08:36:47 +0000 Subject: [Distutils] [tuf] Re: vetting, signing, verification of release files In-Reply-To: <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> References: <20130716091900.GL3125@merlinux.eu> <70D36543-935E-4749-9D0F-7B106E2D04E3@stufft.io> <20130717070327.GN1668@merlinux.eu> <20130717081640.GR1668@merlinux.eu> <51E6D1BC.8010305@students.poly.edu> <51E744FA.9040106@students.poly.edu> <8DEBD390-4D53-4535-9774-2779CEB92AAE@stufft.io> Message-ID: <20130718083647.GX1668@merlinux.eu> On Wed, Jul 17, 2013 at 21:46 -0400, Donald Stufft wrote: > As I've mentioned before an online key (as is required by PyPI) means > that if someone compromises PyPI they compromise the key. It seems to > me that TUF is really designed to handle the case of the Linux > distribution (or similar) where you have vetted maintainers who are > given a subsection of the total releases. However PyPI does not have > vetted authors nor the man power to sign authors keys offline. If we had a person with a master key present at Pycon conferences, package maintainers could walk up and have their key signed. Given the many activities of the PSF and the community, i don't think it's off-limits. If we have sig-verified installs, there would be an incentive for authors to go for that little effort. best, holger -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 12:26:06 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 11:26:06 +0100 (BST) Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <1374143166.85725.YahooMailNeo@web171403.mail.ir2.yahoo.com> > It would actually be nice if "pkg_resources" and? > "setuptools-core" > were available as separate PyPI distributions, and setuptools bundled > them together with easy_install. This would seem to require quite a sizeable refactoring of setuptools, since the?easy_install?command is just an entry point for setuptools.command.easy_install.main(). Regards, Vinay Sajip From oscar.j.benjamin at gmail.com Thu Jul 18 13:48:31 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 18 Jul 2013 12:48:31 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 17 July 2013 22:43, Nick Coghlan wrote: > > On 18 Jul 2013 01:46, "Daniel Holth" wrote: >> >> On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: >> > I'm going to be pushing an update to one of my projects to PyPI this >> > week >> > and so I figured I could use this opportunity to help with patches to >> > the >> > User Guide's packaging tutorial. >> > >> > But to do that I wanted to ask what the current best practices are. >> > >> > * Are we even close to suggesting wheels for source distributions? >> >> No, wheels don't replace source distributions at all. They just let >> you install something without having to have whatever built the wheel >> from its sdist. It is currently nice to have them available. >> >> I'd like to see an ambitious person begin uploading wheels that have >> no traditional sdist. > > Argh, don't even suggest that. Such projects could never be included in a > Linux distribution - we need the original source to push into a trusted > build system. What do you mean by this? I interpret Daniel's comment as meaning that there's no setup.py in the sdist. And I think it's a great idea and that lots of others would be very happy to ditch the setup.py concept in favour of something entirely different from the distutils way of doing things. In another thread you mentioned the idea that someone would build without using distutils/setuptools by using a setup.py that simply invokes an alternate build system that is build-required by the sdist. That's fine for simple cases but how many 'python setup.py 's should the setup.py support? Setuptools setup() supports the following: build, build_py, build_ext, build_clib, build_scripts, clean, install, install_lib, install_headers, install_scripts, install_data, sdist, register, bdist, bdist_dumb, bdist_rpm, bdist_wininst, upload, check, rotate, develop, setopt, saveopts, egg_info, upload_docs, install_egg_info, alias, easy_install, bdist_egg, test (Presumably bdist_wheel would be there if I had a newer setuptools). 
Oscar From ncoghlan at gmail.com Thu Jul 18 14:13:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 18 Jul 2013 22:13:23 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul 2013 21:48, "Oscar Benjamin" wrote: > > On 17 July 2013 22:43, Nick Coghlan wrote: > > > > On 18 Jul 2013 01:46, "Daniel Holth" wrote: > >> > >> On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: > >> > I'm going to be pushing an update to one of my projects to PyPI this > >> > week > >> > and so I figured I could use this opportunity to help with patches to > >> > the > >> > User Guide's packaging tutorial. > >> > > >> > But to do that I wanted to ask what the current best practices are. > >> > > >> > * Are we even close to suggesting wheels for source distributions? > >> > >> No, wheels don't replace source distributions at all. They just let > >> you install something without having to have whatever built the wheel > >> from its sdist. It is currently nice to have them available. > >> > >> I'd like to see an ambitious person begin uploading wheels that have > >> no traditional sdist. > > > > Argh, don't even suggest that. Such projects could never be included in a > > Linux distribution - we need the original source to push into a trusted > > build system. > > What do you mean by this? > > I interpret Daniel's comment as meaning that there's no setup.py in > the sdist. And I think it's a great idea and that lots of others would > be very happy to ditch the setup.py concept in favour of something > entirely different from the distutils way of doing things. No, that's not what he said, he said no sdist at all. Wheel fills the role of a prebuilt binary format, it's not suitable as the *sole* upload format for a project. Tarball, sdist, wheel. Three different artifacts for three different phases of distribution. > In another thread you mentioned the idea that someone would build > without using distutils/setuptools by using a setup.py that simply > invokes an alternate build system that is build-required by the sdist. > That's fine for simple cases but how many 'python setup.py 's > should the setup.py support? Please read PEP 426, as I cover this in detail. If anything needs further clarification, please let me know. Cheers, Nick. > > Setuptools setup() supports the following: > build, build_py, build_ext, build_clib, build_scripts, clean, install, > install_lib, install_headers, install_scripts, install_data, sdist, > register, bdist, bdist_dumb, bdist_rpm, bdist_wininst, upload, check, > rotate, develop, setopt, saveopts, egg_info, upload_docs, > install_egg_info, alias, easy_install, bdist_egg, test > > (Presumably bdist_wheel would be there if I had a newer setuptools). > > > Oscar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Thu Jul 18 14:17:27 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Thu, 18 Jul 2013 14:17:27 +0200 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 Jul, 2013, at 13:48, Oscar Benjamin wrote: > On 17 July 2013 22:43, Nick Coghlan wrote: >> >> On 18 Jul 2013 01:46, "Daniel Holth" wrote: >>> >>> On Wed, Jul 17, 2013 at 11:12 AM, Brett Cannon wrote: >>>> I'm going to be pushing an update to one of my projects to PyPI this >>>> week >>>> and so I figured I could use this opportunity to help with patches to >>>> the >>>> User Guide's packaging tutorial. 
>>>> >>>> But to do that I wanted to ask what the current best practices are. >>>> >>>> * Are we even close to suggesting wheels for source distributions? >>> >>> No, wheels don't replace source distributions at all. They just let >>> you install something without having to have whatever built the wheel >>> from its sdist. It is currently nice to have them available. >>> >>> I'd like to see an ambitious person begin uploading wheels that have >>> no traditional sdist. >> >> Argh, don't even suggest that. Such projects could never be included in a >> Linux distribution - we need the original source to push into a trusted >> build system. > > What do you mean by this? > > I interpret Daniel's comment as meaning that there's no setup.py in > the sdist. And I think it's a great idea and that lots of others would > be very happy to ditch the setup.py concept in favour of something > entirely different from the distutils way of doing things. > > In another thread you mentioned the idea that someone would build > without using distutils/setuptools by using a setup.py that simply > invokes an alternate build system that is build-required by the sdist. > That's fine for simple cases but how many 'python setup.py 's > should the setup.py support? I don't think that's clear at the moment. It could be as little as "bdist_wheel", that could be enough to interface to get from an extracted sdist to a wheel. The current focus is on defining a common metadata format (the metadata 2.0 JSON files) and a binary distribution format, and that's enough to keep the folks doing the actual work occupied for now. In the long run we'll probably end up with something like this: * Sources from a VCS (that is, project in the layout used by those doing development) | [tool specific] | V * sdist archive (sources + metadata.json + ???, to be specified) | [to be specified interface] | V * wheel archive | ["pip", PEP 376(?)] * installed package If I recall correctly the transformation from sdist to wheel is currently not specified because getting the last steps (binary distribution and installation) right is more important right now. The exact format of an sdist, and the interface for specifying how to build a wheel from an sdist is still open for discussion and experimentation. That is, what's the minimal tool that could be used to create wheels for distributions that contain one or more python packages with dependency information? And what would be needed for a more complex distribution with (optional) C extensions, data files, custom compilers, ...? The initial interface to the build system could well be a setup.py file that the build system will only invoke as "python setup.py bdist_wheel --bdist-dir=DIR" (with build-time depedencies specified in the metdata file) because that's easy to arrange for distutils/setuptools, and it should be easy enough to provide a dummy setup.py file with just that interface for alternative build systems. Ronald From dholth at gmail.com Thu Jul 18 15:03:01 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 09:03:01 -0400 Subject: [Distutils] entry points PEP Message-ID: Abstract This PEP proposes a way to represent the setuptools ?entry points? feature in standard Python metadata. Entry points are a useful mechanism for advertising or discovering plugins or other exported functionality without having to depend on the module namespace. 
Since the feature is used by many existing Python distributions and not everyone wants to use setuptools, it is useful to have a way to represent the functionality that is not tied to setuptools itself. The proposed feature defines an extension field for the standard Python distribution metadata and some basic semantics for its use.

Overview

Entry points are represented as an extension in the metadata:

{ ...
  "extensions": {
    "entry_points" : {
      ...
    }
  }
}

The extension contains the data in a dictionary format similar to that accepted by setuptools' setup(entry_points={}) keyword argument. It is a dictionary of "group" : [ "key=module.name:attrs.attrs [extra, extra2, ...]", ].

"group" is the name of this class of entry points. Values in common use include "console_scripts" and "sqlalchemy.dialects". During discovery, clients usually iterate over all the entry points in a particular group.

"key" is the name of a particular entry point in the group. It must be locally unique within this distribution's group.

"module.name" is the Python module that defines the entry point. Client code would import the module to use the entry point.

":attrs.attrs" are the optional attributes of the module that define the entry point (the module itself can be the entry point). Client code uses normal attribute access on the imported module to use the entry point.

"[extra, extra2, ...]" are any optional features of the distribution (declared with "extras") that must be installed to use the declared entry point. This is not common.

Complete example

{ ...
  "extensions": {
    "entry_points" : {
      {
        "sqlalchemy.dialects" : [ "mysql = sqlalchemy_mysql:base.dialect" ]
      }
    }
  }
}

Use

Client code reads every distribution's metadata file on sys.path, filtering for the entry point group name desired, and, if applicable, the individual entry point name. Once the desired entry point has been found, a utility function imports the necessary module and attribute to return an object which can be inspected or called. distlib and setuptools' pkg_resources provide APIs for this functionality.

From p.f.moore at gmail.com Thu Jul 18 15:27:13 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 14:27:13 +0100 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID:

On 18 July 2013 14:03, Daniel Holth wrote:
> Abstract
>
> This PEP proposes a way to represent the setuptools "entry points"
> feature in standard Python metadata. Entry points are a useful
> mechanism for advertising or discovering plugins or other exported
> functionality without having to depend on the module namespace. Since
> the feature is used by many existing Python distributions and not
> everyone wants to use setuptools, it is useful to have a way to
> represent the functionality that is not tied to setuptools itself.
>

1. I think that console (and GUI) scripts should be top-level metadata, not an extension. Installers need to be able to create wrappers based on these, and it is useful data for introspection.

2. distlib calls these "exports" and I think that's a better name. But if names are all that we argue over, I'm happy :-)

3. Someone (I think it was PJE) pointed out that having entry points in a separate metadata file was good because it allowed a fast check of "does this distribution expose entry points?" Note that this isn't a useful thing to check for script wrappers, which again argues that those should be handled separately.

4. You seem to have an extra set of curly braces in a few places.
You say the value of "entry_points" is a dictionary, but you show it as a set containing one dictionary in the set (and of course sets aren't valid JSON). I'll assume this is just a typo, and you meant { ? ?extensions?: { ?entry_points? : { ?sqlalchemy.dialects? : [ ?mysql = sqlalchemy_mysql:base.dialect?] , ... } } } 5. What's the logic for having the values (I don't see a good term for these - the "mysql = ..." bit above) be a structured string that the user has to parse? Either it should be completely free format (which I suspect makes it of limited use for introspection, if nothing else) or it should be broken up into JSON - no point in having a blob of data that needs parsing in the middle of an already structured format! Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu Jul 18 15:49:05 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 09:49:05 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 9:27 AM, Paul Moore wrote: > On 18 July 2013 14:03, Daniel Holth wrote: >> >> Abstract >> >> This PEP proposes a way to represent the setuptools ?entry points? >> feature in standard Python metadata. Entry points are a useful >> mechanism for advertising or discovering plugins or other exported >> functionality without having to depend on the module namespace. Since >> the feature is used by many existing Python distributions and not >> everyone wants to use setuptools, it is useful to have a way to >> represent the functionality that is not tied to setuptools itself. > > > 1. I think that console (and GUI) scripts should be top-level metadata, not > an extension. Installers need to be able to create wrappers based on these, > and it is useful data for introspection. It is an extension so it can be a separate PEP, since there's enough to talk about in the main PEP. The document tries to write down what setuptools does in a straightforward json way. > 2. distlib calls these "exports" and I think that's a better name. But if > names are all that we argue over, I'm happy :-) > > 3. Someone (I think it was PJE) pointed out that having entry points in a > separate metadata file was good because it allowed a fast check of "does > this distribution expose entry points?" Note that this isn't a useful thing > to check for script wrappers, which again argues that those should be > handled separately. I am more interested in seeing the installer update some kind of index like a sqlite database. I don't know if it would be faster since I haven't tried it. > 4. You seem to have an extra set of curly braces in a few places. You say > the value of "entry_points" is a dictionary, but you show it as a set > containing one dictionary in the set (and of course sets aren't valid JSON). > I'll assume this is just a typo, and you meant > > { ? > ?extensions?: { > ?entry_points? : { > ?sqlalchemy.dialects? : [ ?mysql = sqlalchemy_mysql:base.dialect?] , > ... > } > } > } > > 5. What's the logic for having the values (I don't see a good term for these > - the "mysql = ..." bit above) be a structured string that the user has to > parse? Either it should be completely free format (which I suspect makes it > of limited use for introspection, if nothing else) or it should be broken up > into JSON - no point in having a blob of data that needs parsing in the > middle of an already structured format! For one thing you can have more than one mysql = in the same sqlalchemy.dialects. 
I think in this instance the string parsing is probably simpler than defining a more JSONy version. FWIW the PEP 426 metadata is also full of structured strings. From mal at egenix.com Thu Jul 18 16:10:33 2013 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 18 Jul 2013 16:10:33 +0200 Subject: [Distutils] API for registering/managing URLs for a package Message-ID: <51E7F759.1060707@egenix.com> I would like to write a script to automatically register release URLs for PyPI packages. Is the REST API documented somewhere, or is the implementation the spec ? ;-) And related to this: Will there be an option to tell PyPI's CDN to cache the release URL's contents ? Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From oscar.j.benjamin at gmail.com Thu Jul 18 16:40:01 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 18 Jul 2013 15:40:01 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: On 18 July 2013 13:13, Nick Coghlan wrote: > > On 18 Jul 2013 21:48, "Oscar Benjamin" wrote: > >> In another thread you mentioned the idea that someone would build >> without using distutils/setuptools by using a setup.py that simply >> invokes an alternate build system that is build-required by the sdist. >> That's fine for simple cases but how many 'python setup.py 's >> should the setup.py support? > > Please read PEP 426, as I cover this in detail. If anything needs further > clarification, please let me know. Okay, I have actually read that before but I forgot about that bit. It says: ''' In the meantime, the above operations will be handled through the distutils/setuptools command system: python setup.py dist_info python setup.py sdist python setup.py build_ext --inplace python setup.py test python setup.py bdist_wheel ''' That seems a sufficiently minimal set of commands. What I wonder when reading it is whether any other command line options are expected to be supported. For example if the setup.py is using distutils/setuptools then you could do something like: python setup.py sdist --dist-dir=some_dir Should it be explicitly not required that the setup.py should support any other invocation than those listed and should just report success/failure by error code? Also in the event of failure is it the job of setup.py to clean up after itself (since there's no clean command)? Oscar From noah at coderanger.net Thu Jul 18 17:06:55 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Thu, 18 Jul 2013 08:06:55 -0700 Subject: [Distutils] API for registering/managing URLs for a package In-Reply-To: <51E7F759.1060707@egenix.com> References: <51E7F759.1060707@egenix.com> Message-ID: On Jul 18, 2013, at 7:10 AM, M.-A. Lemburg wrote: > I would like to write a script to automatically register release URLs > for PyPI packages. > > Is the REST API documented somewhere, or is the implementation the > spec ? 
;-) > > And related to this: > > Will there be an option to tell PyPI's CDN to cache the release > URL's contents ? I think you are perhaps confused, the use of external URLs on PyPI is formally deprecated. The way you inform the PyPI and the CDN network about your package is you upload it to PyPI. pip 1.4 effectively disables "unsafe" external URLs, and all external URLs will follow soon. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Thu Jul 18 18:12:58 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 09:12:58 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > > It would actually be nice if "pkg_resources" and "setuptools-core" > were available as separate PyPI distributions, and setuptools bundled > them together with easy_install. It's a *long* way down the priority > list thing (and will likely never make it to the top, although it may > be more practical once pip vendors the bits it needs). > the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources. you're left with 2 poor solutions: 1) rewriting script import lines, or 2) still installing setuptools anyway so, having a separate pkg_resources is higher up on the list I think for that reason. without a separate pkg_resources, I think the "dynamic install of setuptools" idea wins out, or no change at all. -------------- next part -------------- An HTML attachment was scrubbed... URL: From noah at coderanger.net Thu Jul 18 18:42:50 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Thu, 18 Jul 2013 09:42:50 -0700 Subject: [Distutils] API for registering/managing URLs for a package In-Reply-To: References: <51E7F759.1060707@egenix.com> Message-ID: <7B228AA9-038F-440D-BF6D-17FCAFD5542F@coderanger.net> On Jul 18, 2013, at 8:06 AM, Noah Kantrowitz wrote: > > On Jul 18, 2013, at 7:10 AM, M.-A. Lemburg wrote: > >> I would like to write a script to automatically register release URLs >> for PyPI packages. >> >> Is the REST API documented somewhere, or is the implementation the >> spec ? ;-) >> >> And related to this: >> >> Will there be an option to tell PyPI's CDN to cache the release >> URL's contents ? > > I think you are perhaps confused, the use of external URLs on PyPI is formally deprecated. The way you inform the PyPI and the CDN network about your package is you upload it to PyPI. pip 1.4 effectively disables "unsafe" external URLs, and all external URLs will follow soon. Someone reminded me that I'm only partially correct, the external URL stuffs will continue to be supported, but only as a convenience during package registration/upload. From the PoV of clients (and the CDN) everything will be local. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 18:49:26 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 16:49:26 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Marcus Smith gmail.com> writes: > the idea to have pip vendor setuptools crumbles a bit due to console scripts needing pkg_resources. They don't *need* pkg_resources. All they're doing is taking a module name and the name of a nested object in the form 'a.b.c', and distlib-generated scripts show that no external references are needed. Here's the template for a distlib-generated script:

SCRIPT_TEMPLATE = '''%(shebang)s
if __name__ == '__main__':
    import sys, re

    def _resolve(module, func):
        __import__(module)
        mod = sys.modules[module]
        parts = func.split('.')
        result = getattr(mod, parts.pop(0))
        for p in parts:
            result = getattr(result, p)
        return result

    try:
        sys.argv[0] = re.sub('-script.pyw?$', '', sys.argv[0])
        func = _resolve('%(module)s', '%(func)s')
        rc = func()  # None interpreted as 0
    except Exception as e:  # only supporting Python >= 2.6
        sys.stderr.write('%%s\\n' %% e)
        rc = 1
    sys.exit(rc)
'''

I don't see any reason why setuptools couldn't be updated to use this approach. Regards, Vinay Sajip

From dholth at gmail.com Thu Jul 18 18:53:09 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 12:53:09 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 12:12 PM, Marcus Smith wrote: > >> >> It would actually be nice if "pkg_resources" and "setuptools-core" >> were available as separate PyPI distributions, and setuptools bundled >> them together with easy_install. It's a *long* way down the priority >> list thing (and will likely never make it to the top, although it may >> be more practical once pip vendors the bits it needs). > > > the idea to have pip vendor setuptools crumbles a bit due to console scripts > needing pkg_resources. > you're left with 2 poor solutions: 1) rewriting script import lines, or 2) > still installing setuptools anyway > > so, having a separate pkg_resources is higher up on the list I think for > that reason. > without a separate pkg_resources, I think the "dynamic install of > setuptools" idea wins out, or no change at all. I think it's still useful to have pip vendor just pkg_resources (as pip.pkg_resources). It's easy, it gives you enough to install wheels, and it's not the only thing you would do. It shouldn't make much difference whether the vendoring happens before or after pkg_resource's separation. The trickiest parts might be adding the undeclared pkg_resources / setuptools dependency when appropriate and figuring out whether we can install setuptools even if it's not available as a wheel. Meanwhile someone might add a flag or a plugin to setuptools' console_scripts handler to generate them in a different way. I am not worried that 99.9% of pypi-hosted packages depend on setuptools or distutils. It is enough to introduce only the possibility of getting along without it. For the rest it is appropriate to install and use setuptools to build packages that were in fact designed to use it.
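To make the "generate them in a different way" idea concrete, here is a rough sketch of how an installer could turn a console_scripts spec such as "pip = pip:main" into a wrapper that never imports pkg_resources at run time. The spec parsing, the write_script helper, and the wrapper template are assumptions for illustration only; this is not the code that setuptools, distlib, or pip actually ships.

# Sketch only: turn "name = package.module:attrs" into a standalone wrapper
# script with no installation-metadata lookup at run time.
import os
import re

WRAPPER = """\
#!/usr/bin/env python
if __name__ == '__main__':
    import sys
    from importlib import import_module

    # Resolve the callable by plain import and attribute access.
    target = import_module('{module}')
    for attr in '{attrs}'.split('.'):
        target = getattr(target, attr)
    sys.exit(target())
"""

SPEC = re.compile(r'^(?P<name>[\w.-]+)\s*=\s*(?P<module>[\w.]+):(?P<attrs>[\w.]+)$')

def write_script(spec, target_dir):
    """Write a wrapper for one entry-point spec and return its path."""
    match = SPEC.match(spec)
    if match is None:
        raise ValueError('not a valid entry point spec: %r' % spec)
    path = os.path.join(target_dir, match.group('name'))
    with open(path, 'w') as f:
        f.write(WRAPPER.format(module=match.group('module'),
                               attrs=match.group('attrs')))
    os.chmod(path, 0o755)
    return path

# Example: write_script('pip = pip:main', '/tmp/bin') produces a script that
# simply imports pip and calls pip.main(), with no version pinning.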
From dholth at gmail.com Thu Jul 18 19:05:48 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 13:05:48 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: I tried it out. pip can install setuptools when only pkg_resources is installed. The only thing stopping it is a small check for whether the current setuptools is of the distribute variety. On Thu, Jul 18, 2013 at 12:53 PM, Daniel Holth wrote: > On Thu, Jul 18, 2013 at 12:12 PM, Marcus Smith wrote: >> >>> >>> It would actually be nice if "pkg_resources" and "setuptools-core" >>> were available as separate PyPI distributions, and setuptools bundled >>> them together with easy_install. It's a *long* way down the priority >>> list thing (and will likely never make it to the top, although it may >>> be more practical once pip vendors the bits it needs). >> >> >> the idea to have pip vendor setuptools crumbles a bit due to console scripts >> needing pkg_resources. >> you're left with 2 poor solutions: 1) rewriting script import lines, or 2) >> still installing setuptools anyway >> >> so, having a separate pkg_resources is higher up on the list I think for >> that reason. >> without a separate pkg_resources, I think the "dynamic install of >> setuptools" idea wins out, or no change at all. > > I think it's still useful to have pip vendor just pkg_resources (as > pip.pkg_resources). It's easy, it gives you enough to install wheels, > and it's not the only thing you would do. It shouldn't make much > difference whether the vendoring happens before or after > pkg_resource's separation. The trickiest parts might be adding the > undeclared pkg_resources / setuptools dependency when appropriate and > figuring out whether we can install setuptools even if it's not > available as a wheel. > > Meanwhile someone might add a flag or a plugin to setuptools' > console_scripts handler to generate them in a different way. > > I am not worried that 99.9% of pypi-hosted packages depend on > setuptools or distutils. It is enough to introduce only the > possibility of getting along without it. For the rest it is > appropriate to install and use setuptools to build packages that were > in fact designed to use it. From qwcode at gmail.com Thu Jul 18 19:01:32 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 10:01:32 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 9:49 AM, Vinay Sajip wrote: > Marcus Smith gmail.com> writes: > > > the idea to have pip vendor setuptools crumbles a bit due to console > scripts > needing pkg_resources. > > They don't *need* pkg_resources. All they're doing is taking a module name > and the name of a nested object in the form 'a.b.c', and distlib-generated > scripts show that no external references are needed. Here's the template > for > a distlib-generated script: > pkg_resources scripts confirm the version. don't see that here? not necessary? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Thu Jul 18 19:24:10 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 10:24:10 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > I think it's still useful to have pip vendor just pkg_resources (as > pip.pkg_resources). It's easy, it gives you enough to install wheels, > and it's not the only thing you would do. I agree. there's 2 problems to be solved here 1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent) 2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions for #2, we have a few paths I think 1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project) 2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?) 3) dynamic install of setuptools from wheel when pip needs to instal sdists (which is 99.9% of the time, so this feels a bit silly) 4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?) -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Jul 18 19:50:11 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 17:50:11 +0000 (UTC) Subject: [Distutils] entry points PEP References: Message-ID: Daniel Holth gmail.com> writes: > On Thu, Jul 18, 2013 at 9:27 AM, Paul Moore gmail.com> wrote: > It is an extension so it can be a separate PEP, since there's enough > to talk about in the main PEP. The document tries to write down what > setuptools does in a straightforward json way. If the JSON we're talking about goes in the main metadata dictionary, perhaps it should go into PEP 426, so that everything is in one place. As we're talking about special handling of script generation by installers, it may make sense to not consider them as extensions but as core metadata. > > 2. distlib calls these "exports" and I think that's a better name. But if > > names are all that we argue over, I'm happy The reason for picking "exports" is that you can export data, not just code, and "entry point" has a connotation of being code. PJE suggested "exports" and I think it fits the bill better than "entry_points". > > 3. Someone (I think it was PJE) pointed out that having entry points in a > > separate metadata file was good because it allowed a fast check of "does > > this distribution expose entry points?" Note that this isn't a useful thing > > to check for script wrappers, which again argues that those should be > > handled separately. That seems generally true, except that there might be applications out there that want to walk over the scripts associated with different installed distributions. That seems a legitimate, though uncommon, use case. In any case, I think the script metadata should be structured as "scripts": { "console": [spec1, spec2] "gui": [spec1, spec2] } Because that allows a sensible place for e.g. script generation options, as in "scripts": { "console": [spec1, spec2] "gui": [spec1, spec2] "options": { ... } } > I am more interested in seeing the installer update some kind of index > like a sqlite database. I don't know if it would be faster since I > haven't tried it. 
That (a SQLite installation database) is probably some way off, and would require more significant changes elsewhere. The big advantage of the current setup with text files is that every thing is human readable - very handy when things go wrong. I don't know whether this area is a performance bottleneck, but we will be able to deal with it using a separate exports.json file in the short term. > For one thing you can have more than one mysql = in the same > sqlalchemy.dialects. I think in this instance the string parsing is Don't you say in the PEP about the key that "It must be locally unique within this distribution?s group."? > probably simpler than defining a more JSONy version. I think Paul's point is that if it was JSON, you wouldn't need to parse anything. The current format of entries is name = module:attrs [flag1,flag2] which could be { "name": name, "module": module, "attrs": attrs, "flags": ["flag1", "flag2"] } which is obviously more verbose. Note that I don't see necessarily a connection between extras and flags, though you've mentioned that they're extras. Does setuptools store, against an installed distribution, the extras it was installed with? AFAIK it doesn't. (Even if it did, it would need to keep that updated if one of the extras' dependencies were later uninstalled.) And if not, how would having extras in the specification help, since you can't test the "must be installed" part? On the other hand, you might want to use generalised flags that provide control over how the exported entry is processed. One reason for keeping the format as-is might be in case any migration issues come up (i.e. people depending on this specific format in some way), but I'm not sure whether there are any such issues or what they might be. > FWIW the PEP 426 metadata is also full of structured strings. True - the dependency specifier, if nothing else. One other thing is that the PEP needs to state that the exports metadata must be written to exports.json in the .dist-info folder by an installer - that isn't mentioned anywhere. Also, whether it should contain the scripts part, or just the other exports (but see my comment above as to why scripts might be considered exports worth iterating over, like any other). Regards, Vinay Sajip From dholth at gmail.com Thu Jul 18 19:59:49 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 13:59:49 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 1:01 PM, Marcus Smith wrote: > > > On Thu, Jul 18, 2013 at 9:49 AM, Vinay Sajip > wrote: >> >> Marcus Smith gmail.com> writes: >> >> > the idea to have pip vendor setuptools crumbles a bit due to console >> scripts >> needing pkg_resources. >> >> They don't *need* pkg_resources. All they're doing is taking a module name >> and the name of a nested object in the form 'a.b.c', and distlib-generated >> scripts show that no external references are needed. Here's the template >> for >> a distlib-generated script: > > > pkg_resources scripts confirm the version. don't see that here? not > necessary? It's useful when you have more than one version of things installed as eggs. pkg_resources does the full dependency resolution and adds everything to the sys.path. When you are not doing that then it's not needed. 
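For contrast with the template shown earlier in the thread, this is approximately the shape of the pkg_resources-based wrapper that setuptools generated at the time. It is reconstructed from memory, so details may differ between setuptools releases, and the pinned pip==1.4rc5 requirement is only an example chosen to match the traceback quoted later in the thread. The __requires__ line and the load_entry_point() call are what trigger the full dependency resolution Daniel describes.

#!/usr/bin/env python
# Approximate shape of a setuptools-generated wrapper (not copied from any
# specific setuptools release); the pinned requirement is an example only.
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==1.4rc5','console_scripts','pip'
__requires__ = 'pip==1.4rc5'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    # load_entry_point() resolves 'pip==1.4rc5' and its dependencies,
    # adjusting sys.path as needed, before returning the 'pip' entry point
    # from the 'console_scripts' group.  If that exact requirement cannot
    # be satisfied, pkg_resources raises DistributionNotFound.
    sys.exit(load_entry_point('pip==1.4rc5', 'console_scripts', 'pip')())

A wrapper like the plain-import sketch earlier avoids both the pin and the pkg_resources import, at the cost of giving up multi-version egg support, which is exactly the trade-off under discussion here.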
From vinay_sajip at yahoo.co.uk Thu Jul 18 20:02:47 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 18:02:47 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Marcus Smith gmail.com> writes: > pkg_resources scripts confirm the version. ?don't see that here? ?not necessary? The load_entry_point needs the dist name because of how it's implemented - it defers to the distribution instance. AFAICT there are no actual checks. def load_entry_point(dist, group, name): """Return `name` entry point of `group` for `dist` or raise ImportError""" return get_distribution(dist).load_entry_point(group, name) Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Jul 18 20:08:25 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 18:08:25 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Marcus Smith gmail.com> writes: > > > > I think it's still useful to have pip vendor just pkg_resources (as > pip.pkg_resources). It's easy, it gives you enough to install wheels, > and it's not the only thing you would do. > > I agree. ?there's 2 problems to be solved here > > 1) making pip a self-sufficient wheel installer ?(which requires some internal pkg_resources equivalent) > 2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions > > for #2, we have a few paths I think > > 1) bundle setuptools ?(and have pip install "pkg_resources" for console scripts, if it existed as a separate project) > 2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?) > 3) dynamic install of setuptools from wheel when pip needs to instal sdists (which is 99.9% of the time, so this feels a bit silly) > 4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?) If setuptools changes the script generation, the need for pkg_resources is gone at least from that part of the picture. Perhaps you're forgetting that there already is an internal pkg_resources equivalent in my pip-distlib branch - this is a pkg_resources compatibility shim using pip.vendor.distlib which passed all the pip tests when it was submitted as a PR. Regards, Vinay Sajip From dholth at gmail.com Thu Jul 18 20:10:55 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 14:10:55 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 1:50 PM, Vinay Sajip wrote: > Daniel Holth gmail.com> writes: > >> On Thu, Jul 18, 2013 at 9:27 AM, Paul Moore gmail.com> > wrote: > >> It is an extension so it can be a separate PEP, since there's enough >> to talk about in the main PEP. The document tries to write down what >> setuptools does in a straightforward json way. > > If the JSON we're talking about goes in the main metadata dictionary, > perhaps > it should go into PEP 426, so that everything is in one place. As we're > talking > about special handling of script generation by installers, it may make sense > to > not consider them as extensions but as core metadata. > >> > 2. distlib calls these "exports" and I think that's a better name. 
But > if >> > names are all that we argue over, I'm happy > > The reason for picking "exports" is that you can export data, not just code, > and "entry point" has a connotation of being code. PJE suggested "exports" > and > I think it fits the bill better than "entry_points". > >> > 3. Someone (I think it was PJE) pointed out that having entry points in > a >> > separate metadata file was good because it allowed a fast check of "does >> > this distribution expose entry points?" Note that this isn't a useful > thing >> > to check for script wrappers, which again argues that those should be >> > handled separately. > > That seems generally true, except that there might be applications out there > that want to walk over the scripts associated with different installed > distributions. That seems a legitimate, though uncommon, use case. > > In any case, I think the script metadata should be structured as > > "scripts": { > "console": [spec1, spec2] > "gui": [spec1, spec2] > } > > Because that allows a sensible place for e.g. script generation options, as > in > > "scripts": { > "console": [spec1, spec2] > "gui": [spec1, spec2] > "options": { ... } > } > > >> I am more interested in seeing the installer update some kind of index >> like a sqlite database. I don't know if it would be faster since I >> haven't tried it. > > That (a SQLite installation database) is probably some way off, and would > require more significant changes elsewhere. The big advantage of the current > setup with text files is that every thing is human readable - very handy > when > things go wrong. I don't know whether this area is a performance bottleneck, > but we will be able to deal with it using a separate exports.json file in > the > short term. Who knows. On some filesystems stat() is painfully slow and you could be better off just parsing the single metadata file. >> For one thing you can have more than one mysql = in the same >> sqlalchemy.dialects. I think in this instance the string parsing is > > Don't you say in the PEP about the key that "It must be locally unique > within > this distribution?s group."? The kind of entry point has to be unique, but the name inside the spec does not: dialects : [ "mysql = first mysql driver...", "mysql = second mysql driver..." ] >> probably simpler than defining a more JSONy version. > > I think Paul's point is that if it was JSON, you wouldn't need to parse > anything. The current format of entries is > > name = module:attrs [flag1,flag2] > > which could be { "name": name, "module": module, "attrs": attrs, "flags": > ["flag1", "flag2"] } which is obviously more verbose. > > Note that I don't see necessarily a connection between extras and flags, > though > you've mentioned that they're extras. Does setuptools store, against an > installed distribution, the extras it was installed with? AFAIK it doesn't. > (Even if it did, it would need to keep that updated if one of the extras' > dependencies were later uninstalled.) And if not, how would having extras in > the specification help, since you can't test the "must be installed" part? > On > the other hand, you might want to use generalised flags that provide control > over how the exported entry is processed. You might mean the document's mention that in setuptools loading an entry point can require a particular extra. In setuptools this would mean additional eggs could be added to sys.path as a result of loading the entry point. > One reason for keeping the format as-is might be in case any migration > issues > come up (i.e. 
people depending on this specific format in some way), but I'm > not sure whether there are any such issues or what they might be. > >> FWIW the PEP 426 metadata is also full of structured strings. > > True - the dependency specifier, if nothing else. > > One other thing is that the PEP needs to state that the exports metadata > must > be written to exports.json in the .dist-info folder by an installer - that > isn't mentioned anywhere. Also, whether it should contain the scripts part, > or > just the other exports (but see my comment above as to why scripts might be > considered exports worth iterating over, like any other). I would like to see it timed with and without exports.json Why not keep a single API for iterating over console scripts and other entry points and exports. Seems harmless. > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From qwcode at gmail.com Thu Jul 18 20:17:08 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 11:17:08 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > The load_entry_point needs the dist name because of how it's implemented - > it > defers to the distribution instance. AFAICT there are no actual checks. > > def load_entry_point(dist, group, name): > """Return `name` entry point of `group` for `dist` or raise > ImportError""" > return get_distribution(dist).load_entry_point(group, name) > > it checks the version. you get this. I have pip-1.5dev1 in this case, but a script wrapper referencing 1.4rc5 (pip)qwcode at qwcode:~/p/pypa/pip$ pip --version Traceback (most recent call last): File "/home/qwcode/.qwdev/pip/bin/pip", line 5, in from pkg_resources import load_entry_point File "/home/qwcode/.qwdev/pip/lib/python2.6/site-packages/pkg_resources.py", line 3011, in parse_requirements(__requires__), Environment() File "/home/qwcode/.qwdev/pip/lib/python2.6/site-packages/pkg_resources.py", line 626, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: pip==1.4rc5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 20:19:29 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 11:19:29 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > > Perhaps you're forgetting that there already is an internal pkg_resources > equivalent in my pip-distlib branch - this is a pkg_resources compatibility > shim using pip.vendor.distlib which passed all the pip tests when it was > submitted as a PR. > : ) no I haven't forgotten. I actually bring it up with others pretty often. my use of "pkg_resource equivalent" was actually a reference to your PR work. Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 20:41:50 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 11:41:50 -0700 Subject: [Distutils] distribute 0.7.3 causing installation error? 
In-Reply-To: <51E82CD1.6010000@numenet.com> References: <51E76EDD.7040009@numenet.com> <51E82CD1.6010000@numenet.com> Message-ID: On Thu, Jul 18, 2013 at 10:58 AM, Liam Kirsher wrote: > Marcus, > > Thanks! After reading that I think I can fix this by installing pip 1.4. > you can also make your recipe run "pip install -U setuptools" separately before moving on to the supervisor upgrade. that will work and is maybe easier > However, some questions remain. Pip is currently being installed via > distribute_setup.py, which is retrieved from here: > http://python-distribute.org/distribute_setup.py > a setuptools person should probably speak to this, but I would say, don't use this anymore. use "ez_setup.py" which is for setuptools. if you're starting an environment from scratch, just use the new setuptools. https://pypi.python.org/pypi/setuptools/0.9.6#installation-instructions Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 18 21:00:10 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 15:00:10 -0400 Subject: [Distutils] API for registering/managing URLs for a package In-Reply-To: <7B228AA9-038F-440D-BF6D-17FCAFD5542F@coderanger.net> References: <51E7F759.1060707@egenix.com> <7B228AA9-038F-440D-BF6D-17FCAFD5542F@coderanger.net> Message-ID: <64C8442E-6D77-4753-9749-5632B43A1127@stufft.io> On Jul 18, 2013, at 12:42 PM, Noah Kantrowitz wrote: > > On Jul 18, 2013, at 8:06 AM, Noah Kantrowitz wrote: > >> >> On Jul 18, 2013, at 7:10 AM, M.-A. Lemburg wrote: >> >>> I would like to write a script to automatically register release URLs >>> for PyPI packages. >>> >>> Is the REST API documented somewhere, or is the implementation the >>> spec ? ;-) >>> >>> And related to this: >>> >>> Will there be an option to tell PyPI's CDN to cache the release >>> URL's contents ? >> >> I think you are perhaps confused, the use of external URLs on PyPI is formally deprecated. The way you inform the PyPI and the CDN network about your package is you upload it to PyPI. pip 1.4 effectively disables "unsafe" external URLs, and all external URLs will follow soon. > > Someone reminded me that I'm only partially correct, the external URL stuffs will continue to be supported, but only as a convenience during package registration/upload. From the PoV of clients (and the CDN) everything will be local. > > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Noah, External urls are still supported (Although discouraged). Marc-Andre, There is documentation in the PEP, however I have another PEP coming up for a more streamlined upload process that also contains a much nicer method of sending external urls as well. So you might want to wait for that. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Thu Jul 18 22:03:46 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 16:03:46 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 2:19 PM, Marcus Smith wrote: > >> >> Perhaps you're forgetting that there already is an internal pkg_resources >> equivalent in my pip-distlib branch - this is a pkg_resources >> compatibility >> shim using pip.vendor.distlib which passed all the pip tests when it was >> submitted as a PR. > > > : ) no I haven't forgotten. I actually bring it up with others pretty > often. > my use of "pkg_resource equivalent" was actually a reference to your PR > work. > > Marcus > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > From dholth at gmail.com Thu Jul 18 22:20:25 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 16:20:25 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 1:24 PM, Marcus Smith wrote: > >> I think it's still useful to have pip vendor just pkg_resources (as >> pip.pkg_resources). It's easy, it gives you enough to install wheels, >> and it's not the only thing you would do. > > > I agree. there's 2 problems to be solved here > > 1) making pip a self-sufficient wheel installer (which requires some > internal pkg_resources equivalent) > 2) removing the user headache of a setuptools build *dependency* for > practically all current pypi distributions > > for #2, we have a few paths I think > > 1) bundle setuptools (and have pip install "pkg_resources" for console > scripts, if it existed as a separate project) > 2) bundle setuptools (and rewrite the console script wrapper logic to not > need pkg_resources?) > 3) dynamic install of setuptools from wheel when pip needs to instal sdists > (which is 99.9% of the time, so this feels a bit silly) > 4) just be happy that the pip bootstrap/bundle efforts will alleviate the > pain in new versions of python (by pre-installing setuptools?) virtualenv /tmp/builder /tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt virtualenv /tmp/no-setuptools /tmp/no-setuptools/bin/pip install --use-wheel --find-links=/tmp/wheels --no-index -r requirements.txt That is the anti-setuptools workflow I envision. The build environment has an appropriate amount of setuptools and the no-setuptools environment has none. This gives you the option of not having setuptools if you don't want it, something that some people will appreciate. It does not try to avoid the non-problem of installing setuptools when you actually need it. Eventually there may be more sophisticated build requirements handling, for whatever that's worth, so that you might not have to have an explicit setuptools virtualenv. System packaging certainly doesn't install build requirements into their own isolated environment. 
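As a sketch of what "more sophisticated build requirements handling" could eventually look like, the following builds an unpacked sdist inside a throwaway environment that holds only its declared build requirements, so the target environment never needs setuptools at all. Everything here, including the build_wheel helper, the default requirement list, the POSIX bin/ layout, and the assumption that virtualenv is importable as a module, is a hypothetical illustration rather than a description of pip's current behaviour.

# Hypothetical sketch: build a wheel from an unpacked sdist inside a
# temporary environment that carries the build requirements, so the target
# environment never needs setuptools.  Paths and defaults are assumptions.
import os
import subprocess
import sys
import tempfile

def build_wheel(sdist_dir, wheel_dir, build_requires=("setuptools", "wheel")):
    """Build a wheel for the sdist unpacked at sdist_dir into wheel_dir."""
    env_dir = tempfile.mkdtemp(prefix="build-env-")
    # A disposable environment just for the build tools.
    subprocess.check_call([sys.executable, "-m", "virtualenv", env_dir])
    env_python = os.path.join(env_dir, "bin", "python")
    env_pip = os.path.join(env_dir, "bin", "pip")
    # Install the declared build requirements into the build environment only.
    subprocess.check_call([env_pip, "install"] + list(build_requires))
    # The only thing asked of the sdist's setup.py is to produce a wheel.
    subprocess.check_call(
        [env_python, "setup.py", "bdist_wheel", "-d", wheel_dir],
        cwd=sdist_dir,
    )
    return wheel_dir

# e.g. build_wheel("/tmp/unpacked/example-1.0", "/tmp/wheels")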
From p.f.moore at gmail.com Thu Jul 18 22:36:21 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 21:36:21 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 18 July 2013 18:24, Marcus Smith wrote: > > I think it's still useful to have pip vendor just pkg_resources (as >> pip.pkg_resources). It's easy, it gives you enough to install wheels, >> and it's not the only thing you would do. > > > I agree. there's 2 problems to be solved here > > 1) making pip a self-sufficient wheel installer (which requires some > internal pkg_resources equivalent) > 2) removing the user headache of a setuptools build *dependency* for > practically all current pypi distributions > > for #2, we have a few paths I think > > 1) bundle setuptools (and have pip install "pkg_resources" for console > scripts, if it existed as a separate project) > 2) bundle setuptools (and rewrite the console script wrapper logic to not > need pkg_resources?) > 3) dynamic install of setuptools from wheel when pip needs to instal > sdists (which is 99.9% of the time, so this feels a bit silly) > 4) just be happy that the pip bootstrap/bundle efforts will alleviate the > pain in new versions of python (by pre-installing setuptools?) > As you say, for #1 using an internal pkg_resources (probably distlib's, why bother vendoring a second one?) works. Given that pip forces use of setuptools for *all* sdist builds, I think we have to bundle it for that purpose. I really dislike the need to do this, but I don't see a way round it. And if we do, we can just as easily use the real pkg_resources as distlib's emulation. As regards console scripts, I think they should be rewritten to remove the dependency on pkg_resources. That should be a setuptools fix, maybe triggered by a flag (possibly implied by --single-version-externally-managed, as the pkg_resources complexity is only needed when multi-versions are involved). If Jason's not comfortable with the change, then we'll probably have to find some way of doing it within pip, which will likely to be a fairly gross hack (unless we go really bleeding-edge and don't pit scripts into a wheel *at all* (or just omit exes and -script.py files, I don't know) and put the exports metadata in the wheel assuming that it's the wheel installer's job to create the wrappers. We can do that for pip install, and we just have to assume that other tools (wheel install, distlib) will do the same. TBH, my preference is for the metadata approach, do it correctly from the start. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Jul 18 22:41:36 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 16:41:36 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <9C86E803-0A66-48C1-893C-4004BD54BEC6@stufft.io> On Jul 18, 2013, at 4:36 PM, Paul Moore wrote: > On 18 July 2013 18:24, Marcus Smith wrote: > > I think it's still useful to have pip vendor just pkg_resources (as > pip.pkg_resources). It's easy, it gives you enough to install wheels, > and it's not the only thing you would do. > > I agree. 
there's 2 problems to be solved here > > 1) making pip a self-sufficient wheel installer (which requires some internal pkg_resources equivalent) > 2) removing the user headache of a setuptools build *dependency* for practically all current pypi distributions > > for #2, we have a few paths I think > > 1) bundle setuptools (and have pip install "pkg_resources" for console scripts, if it existed as a separate project) > 2) bundle setuptools (and rewrite the console script wrapper logic to not need pkg_resources?) > 3) dynamic install of setuptools from wheel when pip needs to instal sdists (which is 99.9% of the time, so this feels a bit silly) > 4) just be happy that the pip bootstrap/bundle efforts will alleviate the pain in new versions of python (by pre-installing setuptools?) > > As you say, for #1 using an internal pkg_resources (probably distlib's, why bother vendoring a second one?) works. > > Given that pip forces use of setuptools for *all* sdist builds, I think we have to bundle it for that purpose. I really dislike the need to do this, but I don't see a way round it. And if we do, we can just as easily use the real pkg_resources as distlib's emulation. > > As regards console scripts, I think they should be rewritten to remove the dependency on pkg_resources. That should be a setuptools fix, maybe triggered by a flag (possibly implied by --single-version-externally-managed, as the pkg_resources complexity is only needed when multi-versions are involved). If Jason's not comfortable with the change, then we'll probably have to find some way of doing it within pip, which will likely to be a fairly gross hack (unless we go really bleeding-edge and don't pit scripts into a wheel *at all* (or just omit exes and -script.py files, I don't know) and put the exports metadata in the wheel assuming that it's the wheel installer's job to create the wrappers. We can do that for pip install, and we just have to assume that other tools (wheel install, distlib) will do the same. > > TBH, my preference is for the metadata approach, do it correctly from the start. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Console scripts aren't the only use of entry points fwiw. There are other entry points that programs use. I don't know if they all depend on setuptools or if they just assume it's there. Technically they should depend but that would break things for those people. I think either way pkg_resources is going to need to be installed, but setuptools won't. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu Jul 18 22:49:16 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 21:49:16 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <9C86E803-0A66-48C1-893C-4004BD54BEC6@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <9C86E803-0A66-48C1-893C-4004BD54BEC6@stufft.io> Message-ID: On 18 July 2013 21:41, Donald Stufft wrote: > Console scripts aren't the only use of entry points fwiw. 
There are other > entry points that programs use. I don't know if they all depend on setuptools or > if they just assume it's there. Technically they should depend but that would > break things for those people. > > I think either way pkg_resources is going to need to be installed, but > setuptools won't. > If a project uses setuptools features at runtime, it should declare setuptools as a dependency. The difference with script wrappers is that the project didn't write the code, setuptools itself did. Any other use of entry points requires "import pkg_resources" in the user-written code, and should therefore be supported by having setuptools in the runtime dependency list. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 23:08:37 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 14:08:37 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > > virtualenv /tmp/builder > /tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt > people will expect to be able to do this globally (i.e not in a virtualenv). that's when the headache starts It does not try to avoid the non-problem of installing setuptools when you > actually need it > it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip) -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Thu Jul 18 23:39:16 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 14:39:16 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: fyi, I'm updating donald's original setuptools bundle issue with all of this as the choices become clearer https://github.com/pypa/pip/issues/1049 On Thu, Jul 18, 2013 at 2:08 PM, Marcus Smith wrote: > > virtualenv /tmp/builder >> /tmp/builder/bin/pip wheel -w /tmp/wheels -r requirements.txt >> > > people will expect to be able to do this globally (i.e not in a > virtualenv). that's when the headache starts > > It does not try to avoid the non-problem of installing setuptools when you >> actually need it >> > > it's a practical problem for users, due to being currently responsible for > fulfilling the setuptools dependency themselves in non-virtualenv > environments > IMO, we need to bundle or install it for them (through dynamic installs, > or add the logic to get-pip) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Thu Jul 18 23:34:44 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 18 Jul 2013 17:34:44 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 1:50 PM, Vinay Sajip wrote: > Daniel Holth gmail.com> writes: >> For one thing you can have more than one mysql = in the same >> sqlalchemy.dialects. I think in this instance the string parsing is > > Don't you say in the PEP about the key that "It must be locally unique > within this distribution's group."? 
Setuptools requires this per-distribution uniqueness, but note that uniqueness is not required across distributions. So more than one distribution can export a 'mysql' in the 'sqlalchemy.dialects' group. It's up to the application to decide how to handle multiple definitions; typically one either uses all of them or the first one found on sys.path, or some other tie-breaking mechanism. The pkg_resources entry point APIs just provide operations for iterating over entry points defined on either a single distribution, or across all distributions on a specified set of directories. (Via the WorkingSet API.) > Note that I don't see necessarily a connection between extras and flags, > though > you've mentioned that they're extras. Does setuptools store, against an > installed distribution, the extras it was installed with? AFAIK it doesn't. > (Even if it did, it would need to keep that updated if one of the extras' > dependencies were later uninstalled.) And if not, how would having extras in > the specification help, since you can't test the "must be installed" part? The pkg_resources implementation does a require() for the extras at the time the entry point is loaded, i.e., just before importing. This allows it to dynamically add requirements to sys.path, or alternatively raise an error to indicate the extras aren't available. In addition, various entry point API functions take an 'installer' keyword argument, specifying a callback to handle installation of missing extras. Setuptools uses this feature internally, so that if you use a setup.py command whose entry point needs additional dependencies, those will be fetched on-the-fly. From p.f.moore at gmail.com Thu Jul 18 23:56:08 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 18 Jul 2013 22:56:08 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On 18 July 2013 22:08, Marcus Smith wrote: > it's a practical problem for users, due to being currently responsible for > fulfilling the setuptools dependency themselves in non-virtualenv > environments > IMO, we need to bundle or install it for them (through dynamic installs, > or add the logic to get-pip) > Seriously, we're talking here about bundling pip with the Python installer. Why not just bundle setuptools as well? Don't vendor it, don't jump through hoops, just bundle it too, so that all Python environments can be assumed to have pip and setuptools present. (Note that I'm one of the least likely people to advocate setuptools around here, and yet even I don't see why we're working so hard to avoid just having the thing available...) It seems to me that by bundling pip but not setuptools, we're just making unnecessary work for ourselves. Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Fri Jul 19 00:05:52 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 18:05:52 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <1F622F67-4C36-4B87-8032-65DAD1236682@stufft.io> On Jul 18, 2013, at 5:56 PM, Paul Moore wrote: > On 18 July 2013 22:08, Marcus Smith wrote: > it's a practical problem for users, due to being currently responsible for fulfilling the setuptools dependency themselves in non-virtualenv environments > IMO, we need to bundle or install it for them (through dynamic installs, or add the logic to get-pip) > > Seriously, we're talking here about bundling pip with the Python installer. Why not just bundle setuptools as well? Don't vendor it, don't jump through hoops, just bundle it too, so that all Python environments can be assumed to have pip and setuptools present. (Note that I'm one of the least likely people to advocate setuptools around here, and yet even I don't see why we're working so hard to avoid just having the thing available...) > > It seems to me that by bundling pip but not setuptools, we're just making unnecessary work for ourselves. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Because a significant number of people have had issues with things breaking because their setuptools install got messed up. Typically some combination of things convinced pip to uninstall setuptools which then breaks pip completely (due to a reliance on pkg_resources) and breaks installing from sdists (due to a reliance on setuptools). This isn't a problem for most tools because they could just use pip to fix their dependencies. However when it's the package manager that breaks you're stuck fixing things manually. While it's obvious to you or I what the problem is I've found that the bulk of people who have these issues have no idea why they are getting the error and how to fix it. Bundling this means that pip is either installed and works, or it isn't installed. It makes it much simpler for end users to deal with and makes it much more robust. Right now this is particularly troublesome because there's a huge bug that's causing this to happen and I think i've not gone a day without having someone different run into the problem. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Fri Jul 19 00:23:33 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 18 Jul 2013 18:23:33 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 4:36 PM, Paul Moore wrote: > As regards console scripts, I think they should be rewritten to remove the > dependency on pkg_resources. That should be a setuptools fix, As others have already mentioned, this is not a bug but a feature. 
Setuptools-generated scripts are linked to a specific version of the project, which means that you can install more than one version by renaming the scripts or installing the scripts to different directories. While other strategies are definitely possible, distlib's approach is not backward-compatible, as it means installing new versions of a project will change *existing scripts'* semantics, even if you installed the previous version's scripts to different locations and intended them to remain accessible. If you want an example of doing it right, see buildout, which hardcodes the entire sys.path of a script to refer to the exact versions of all dependencies; while this has different failure modes (i.e., dependence on absolute paths), it is more stable as to script semantics even than setuptools' default behavior. > maybe triggered by a flag (possibly implied by > --single-version-externally-managed, as the pkg_resources complexity is only > needed when multi-versions are involved). That option does not preclude the existence of multiple versions, or the possibility of installing the same script to different directories for different installed versions. If you *must* do this, I suggest using buildout's approach of hardwiring sys.path in the script, only strengthened by checking for the actual existence and versions, rather than distlib's anything-goes approach. (Of course, as Donald points out, this won't do anything for those scripts that themselves make use of other packages' entry points: they will have a runtime dependency on pkg_resources anyway.) From ncoghlan at gmail.com Fri Jul 19 00:42:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 08:42:42 +1000 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: I actually now plan to make scripts and exports first class citizens in PEP 426, with pydist-scripts.json and pydist-exports.json as extracted summary files (like the existing pydist-dependencies.json). They're important enough to include directly. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jul 19 00:46:43 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 18:46:43 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 5:56 PM, Paul Moore wrote: > On 18 July 2013 22:08, Marcus Smith wrote: >> >> it's a practical problem for users, due to being currently responsible for >> fulfilling the setuptools dependency themselves in non-virtualenv >> environments >> IMO, we need to bundle or install it for them (through dynamic installs, >> or add the logic to get-pip) > > > Seriously, we're talking here about bundling pip with the Python installer. > Why not just bundle setuptools as well? Don't vendor it, don't jump through > hoops, just bundle it too, so that all Python environments can be assumed to > have pip and setuptools present. (Note that I'm one of the least likely > people to advocate setuptools around here, and yet even I don't see why > we're working so hard to avoid just having the thing available...) > > It seems to me that by bundling pip but not setuptools, we're just making > unnecessary work for ourselves. I'll see if I can do a patch. I don't think it will be hard at all, and I do think it's work that will eventually become necessary. 
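For readers trying to follow the script-wrapper debate in the last few messages, here is a rough, purely illustrative comparison of the two styles being discussed; the project and callable names are invented, and neither snippet is the literal code any tool generates. The first pins the wrapper to a specific installed distribution via pkg_resources, the second just imports whatever is first on sys.path, and the third shows the non-script use of entry points Donald mentions, using the pkg_resources API PJ Eby describes above.

    import sys
    import pkg_resources
    from pkg_resources import load_entry_point

    # Style 1: roughly what a setuptools-generated wrapper does today.
    # Resolving the named distribution is what ties the script to
    # "ExampleProject==1.0" and pulls pkg_resources in at run time.
    def run_pinned():
        return load_entry_point("ExampleProject==1.0", "console_scripts", "example")()

    # Style 2: a wrapper with no pkg_resources dependency.  Whichever
    # version of example.cli is first on sys.path wins -- the behaviour
    # PJ Eby warns about when several versions are installed.
    def run_plain():
        from example.cli import main   # hypothetical entry point callable
        return main()

    # Non-script use of entry points: an application iterating a plugin
    # group across every installed distribution (group/name illustrative).
    def load_first(group, name):
        for ep in pkg_resources.iter_entry_points(group, name):
            return ep.load()   # imports the module, resolving declared extras first
        return None

    if __name__ == "__main__":
        sys.exit(run_plain())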
PJE is correct that if we surprise people with non-pkg_resources console_scripts then we will break things for people who are more interested in a working packaging experience. From ncoghlan at gmail.com Fri Jul 19 00:49:33 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 08:49:33 +1000 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 Jul 2013 08:42, "Nick Coghlan" wrote: > > I actually now plan to make scripts and exports first class citizens in PEP 426, with pydist-scripts.json and pydist-exports.json as extracted summary files (like the existing pydist-dependencies.json). > > They're important enough to include directly. The PEP that will define the updated dist-info contents will be the sdist 2.0 PEP. Things are probably stable enough for me to write that, now. Cheers, Nick. > > Cheers, > Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jul 19 00:51:54 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 18:51:54 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Thu, Jul 18, 2013 at 6:42 PM, Nick Coghlan wrote: > I actually now plan to make scripts and exports first class citizens in PEP > 426, with pydist-scripts.json and pydist-exports.json as extracted summary > files (like the existing pydist-dependencies.json). > > They're important enough to include directly. > > Cheers, > Nick. Must they be two separate features? One of the reasons I use entry_points scripts is that I forget that the scripts= command to setup() exists at all. From dholth at gmail.com Fri Jul 19 00:54:45 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 18:54:45 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: OH -scripts would be the distutils-style scrips. On Thu, Jul 18, 2013 at 6:51 PM, Daniel Holth wrote: > On Thu, Jul 18, 2013 at 6:42 PM, Nick Coghlan wrote: >> I actually now plan to make scripts and exports first class citizens in PEP >> 426, with pydist-scripts.json and pydist-exports.json as extracted summary >> files (like the existing pydist-dependencies.json). >> >> They're important enough to include directly. >> >> Cheers, >> Nick. > > Must they be two separate features? One of the reasons I use > entry_points scripts is that I forget that the scripts= command to > setup() exists at all. From qwcode at gmail.com Fri Jul 19 00:58:41 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 15:58:41 -0700 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: > > I'll see if I can do a patch. I don't think it will be hard at all, > and I do think it's work that will eventually become necessary. > patch for which solution? -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Jul 19 01:09:51 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 23:09:51 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: PJ Eby telecommunity.com> writes: > As others have already mentioned, this is not a bug but a feature. 
> Setuptools-generated scripts are linked to a specific version of the > project, which means that you can install more than one version by > renaming the scripts or installing the scripts to different > directories. > > While other strategies are definitely possible, distlib's approach is > not backward-compatible, as it means installing new versions of a Correct, because distlib does not support multiple installed versions of the same distribution, nor does it do the sys.path manipulations on the fly which have caused many people to have a problem with setuptools. Do people see this as a problem? I would have thought that venvs would allow people to deal with multiple versions in a less magical way. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Fri Jul 19 01:20:54 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 18 Jul 2013 23:20:54 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: Daniel Holth gmail.com> writes: > PJE is correct that if we surprise people with non-pkg_resources > console_scripts then we will break things for people who are more > interested in a working packaging experience. Do you mean that you think multiple versions have to be supported, and that's why console scripts should remain pkg_resources - dependent? If you don't think that multiple version support is needed, then the non- pkg_resources versions of the script should be able to locate the function to call from the script, assuming it can import the module. Are you saying that the import or function call will fail, because the distribution didn't reference setuptools as a dependency, and yet expects it to be there? Regards, Vinay Sajip From donald at stufft.io Fri Jul 19 01:24:16 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 19:24:16 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> On Jul 18, 2013, at 7:20 PM, Vinay Sajip wrote: > Daniel Holth gmail.com> writes: > >> PJE is correct that if we surprise people with non-pkg_resources >> console_scripts then we will break things for people who are more >> interested in a working packaging experience. > > Do you mean that you think multiple versions have to be supported, and that's > why console scripts should remain pkg_resources - dependent? > > If you don't think that multiple version support is needed, then the non- > pkg_resources versions of the script should be able to locate the function to > call from the script, assuming it can import the module. Are you saying that > the import or function call will fail, because the distribution didn't > reference setuptools as a dependency, and yet expects it to be there? > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I think the point is that people might be dependent on this functionality and changing it out from underneath them could break their world. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Fri Jul 19 01:37:33 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 00:37:33 +0100 (BST) Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> Message-ID: <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> > I think the point is that people might be dependent on this functionality and > changing it out from underneath them could break their world. I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention. A PEP would allow standardisation of the multiple-versions feature if it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general). If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-) Regards, Vinay Sajip From pje at telecommunity.com Fri Jul 19 02:10:03 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 18 Jul 2013 20:10:03 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: On Thu, Jul 18, 2013 at 7:09 PM, Vinay Sajip wrote: > PJ Eby telecommunity.com> writes: >> While other strategies are definitely possible, distlib's approach is >> not backward-compatible, as it means installing new versions of a > > Correct, because distlib does not support multiple installed versions of the > same distribution, nor does it do the sys.path manipulations on the fly which > have caused many people to have a problem with setuptools. > > Do people see this as a problem? I would have thought that venvs would allow > people to deal with multiple versions in a less magical way. So does buildout, which doesn't need venvs; it just (if you configure it that way) puts all your eggs in a giant cache directory and writes scripts with hardcoded sys.path to include the right ones. This is actually more explicit than venvs, since it doesn't depend on environment variables or on installation state. IOW, there are other choices available besides "implicit environment-based path" and "dynamically generated path". Even setuptools doesn't require that you have a dynamic path. > If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention. Distutils lets you install things wherever you want; in the naive case you could use install --root to install every package to a version-specific directory and then use something like Gnu Stow to create symlink farms. 
Python supports explicit sys.path construction and modification, and of course people certainly "vendor" (i.e. bundle) their dependencies directly in order to have a specific version of them. So, I don't think it's accurate to consider multi-version installation a totally new feature. (And AFAIK, the point of contention isn't that setuptools *supports* multi-version installation, it's that it's the *default* implementation.) In any event, wheels are designed to be able to be used in the same way as eggs for multi-version installs. The question of *how* has been brought up by Nick before, and I've thrown out some counter-proposals. It's still an open issue as to how much *active* support will be provided, but my impression of the discussion is that even if the stdlib isn't exactly *encouraging* multi-version installs, we don't want to *break* them. Hence my suggestion that if you want to drop pkg_resources use from generated scripts, you should use buildout's approach (explicit sys.path baked into the script) rather than distlib's current laissez-faire approach. Or you can just check versions, I'm not that picky. All I want is that if you install a new version of a package and still have an old copy of the script, the old script should still run the old version, or at least give you an error telling you the script wasn't updated, rather than silently running a different version. Buildout's approach accomplishes this by hardcoding egg paths, so as long as you don't delete the eggs, everything is fine, and if you do delete any of them, you can see what's wrong by looking at the script source. From donald at stufft.io Fri Jul 19 02:15:16 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 20:15:16 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> On Jul 18, 2013, at 7:37 PM, Vinay Sajip wrote: >> I think the point is that people might be dependent on this functionality and > >> changing it out from underneath them could break their world. > > > I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention. > > A PEP would allow standardisation of the multiple-versions feature it it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general). > > If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-) Yes I'm against implementation defined features. However this is already the status quo for this particular implementation. Basically I'm worried we are trying to fix too much at once. One of the major reasons for distutils/packaging failing was it tried to fix the world in one fell swoop. 
I see this same pattern starting to happen here. The problem is that each solution has a bunch of corner cases and gotchas, and the more things we try to fix at once, the fewer eyes we'll have on each individual one and the more rushed the entire toolchain is going to be. I think it's *really* important we limit the scope of what we fix at any one time. Right now we have PEP 426, PEP 440, PEP 439 and PEP 427; Nick is talking about an sdist 2.0 PEP; Daniel just posted another PEP I haven't looked at yet; and this is going to be another PEP. On top of that we have a number of issues related to those PEPs but not specifically part of those PEPs. A lot of things are being done right now and I am personally having trouble keeping up and keeping things straight. I know I'm not the only one because I've had a number of participants of these discussions privately tell me that they aren't sure how I'm keeping up (and I'm struggling to do so). I really don't want us to ship a bunch of half-baked / not entirely thought-through solutions. So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling, and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better. > > Regards, > > Vinay Sajip ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Fri Jul 19 02:20:32 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 18 Jul 2013 20:20:32 -0400 Subject: [Distutils] Replacing pip.exe with a Python script In-Reply-To: References: <565EE2A4-AFC2-4DC5-8E71-90FDBD2A021A@stufft.io> Message-ID: On Tue, Jul 16, 2013 at 8:23 AM, Paul Moore wrote: > > On 16 July 2013 12:42, Oscar Benjamin wrote: >> >> I thought that 64 bit Windows could run 32 bit Windows .exe files >> (although I don't have a way to check this). > > > Yes, but there are 32-bit and 64-bit exe wrappers, which I suspect is > because a 32-bit exe can't load a 64-bit DLL (and may be vice versa). As I > said, I don't know for sure at the moment, but it needs investigating. That's not why they exist; the .exe's don't load the Python DLL, they just invoke python.exe. The existence of separate 32- and 64-bit .exe's is a Distribute innovation, actually; setuptools 0.6 doesn't use them. Instead, setuptools writes a manifest file to tell Windows that it doesn't need privilege escalation or to create a separate console. This meant that only one (32-bit) .exe was needed. I forget what happened with the Distribute approach or why the 64-bit version was kept after the merge; ISTM there was some other use for it, or at least Jason thought so. But DLL loading is not the reason. (Actually, after searching my email, my guess is that there actually *isn't* any need for the 64-bit .exe; ISTM that it was introduced solely as a false fix for the privilege escalation problem, that only fixes it for 64-bit Windows and doesn't help 32-bit.) 
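Returning briefly to the script question: what PJ Eby describes for buildout a few messages back - wrappers with the dependency path baked in at install time and checked before running, instead of resolved dynamically - looks roughly like the sketch below. Every path and name here is invented for illustration; buildout's actual generated scripts differ in detail.

    import os
    import sys

    # Paths pinned when the script was generated, pointing at the exact
    # eggs this script was built against (buildout writes something similar,
    # pointing into its shared egg cache).
    PINNED_PATHS = [
        "/opt/app/eggs/ExampleProject-1.0-py2.7.egg",
        "/opt/app/eggs/ExampleDep-2.3-py2.7.egg",
    ]

    missing = [p for p in PINNED_PATHS if not os.path.exists(p)]
    if missing:
        # Fail loudly rather than silently running whatever other version
        # happens to be importable -- the guarantee asked for above.
        sys.exit("pinned dependencies are missing: " + ", ".join(missing))

    sys.path[0:0] = PINNED_PATHS

    if __name__ == "__main__":
        from example.cli import main   # hypothetical console entry point
        sys.exit(main())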
From donald at stufft.io Fri Jul 19 02:32:37 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 20:32:37 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: <3C3955CC-3F8A-484A-9700-E3EF1C6E8AE4@stufft.io> On Jul 18, 2013, at 8:15 PM, Donald Stufft wrote: > So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better. Let me just specify though that i'm not stating exactly where that line should be drawn. I just see things heading in this direction and I think we're letting scope creep hit us hard and it will absolutely kill our efforts. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Fri Jul 19 02:36:25 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 20:36:25 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On Jul 18, 2013, at 8:33 PM, Daniel Holth wrote: > We might as well allow happy setuptools users to continue using > setuptools. I don't care about making a pkg_resources console_scripts > handler that does the same thing because we can just use the existing > one. The more important contribution is to provide an alternative for > people who are not happy setuptools users. I generally agree with this :) I just think that we need to close the loop on our current efforts before adding more things into the fray. The only major change to the eco system we've made so far that has actually *shipped* to end users is the distribute/setuptools merge and that's causing a lot of pain to people. Soon we'll at least have a pip version with prelim wheel support but I don't even know if it supports metadata 2.0 at all or not yet? I think there's a pre-release of wheel that does though? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Fri Jul 19 02:33:29 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 20:33:29 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On Thu, Jul 18, 2013 at 8:15 PM, Donald Stufft wrote: > > On Jul 18, 2013, at 7:37 PM, Vinay Sajip wrote: > >>> I think the point is that people might be dependent on this functionality and >> >>> changing it out from underneath them could break their world. >> >> >> I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? It's not supported by distutils, and it has been a point of contention. >> >> A PEP would allow standardisation of the multiple-versions feature it it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general). >> >> If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-) > > Yes I'm against implementation defined features. However this is already the status quo for this particular implementation. Basically I'm worried we are trying to fix too much at once. > > One of the major reasons for distutils/packaging failing was it tried to fix the world in one fell swoop. I see this same pattern starting to happen here. The problem is each solution has a bunch of corner cases and gotchas and the more things we try to fix at once the less eyes we'll have on each individual one and the more rushed the entire toolchain is going to be. > > I think it's *really* important we limit the scope of what we fix at any one time. Right now we have PEP426, PEP440, PEP439, PEP427, Nick is talking about an Sdist 2.0 PEP, Daniel just posted another PEP I haven't looked at yet, this is going to be another PEP. On top of that we have a number of issues related to those PEPs but not specifically part of those PEPs. > > A lot of things is being done right now and I personally have having trouble keeping up and keeping things straight. I know i'm not the only one because I've had a number of participants of these discussions privately tell me that they aren't sure how I'm keeping up (and i'm struggling to do so). I really don't want us to ship a bunch of half baked / not entirely thought through solutions. > > So can we please limit our scope? Let's start by fixing the stuff we have now, punting on fixing some other problems by using the existing tooling and then let's come back to the things we've punted once we've closed the loop on some of these other outstanding things and fix them better. I feel your pain. We might as well allow happy setuptools users to continue using setuptools. 
I don't care about making a pkg_resources console_scripts handler that does the same thing because we can just use the existing one. The more important contribution is to provide an alternative for people who are not happy setuptools users. From dholth at gmail.com Fri Jul 19 02:40:51 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 18 Jul 2013 20:40:51 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On Thu, Jul 18, 2013 at 8:36 PM, Donald Stufft wrote: > > On Jul 18, 2013, at 8:33 PM, Daniel Holth wrote: > > We might as well allow happy setuptools users to continue using > setuptools. I don't care about making a pkg_resources console_scripts > handler that does the same thing because we can just use the existing > one. The more important contribution is to provide an alternative for > people who are not happy setuptools users. > > > I generally agree with this :) I just think that we need to close the loop > on > our current efforts before adding more things into the fray. The only major > change to the eco system we've made so far that has actually *shipped* > to end users is the distribute/setuptools merge and that's causing a lot > of pain to people. > > Soon we'll at least have a pip version with prelim wheel support but I don't > even know if it supports metadata 2.0 at all or not yet? I think there's a > pre-release of wheel that does though? bdist_wheel will produce json metadata that generally conforms to the current PEP but no consumer takes advantage of it just yet. I added the "generator" key to the metadata so it would be easy to throw out outdated or buggy json metadata. From vinay_sajip at yahoo.co.uk Fri Jul 19 02:47:12 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 01:47:12 +0100 (BST) Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> Message-ID: <1374194832.39396.YahooMailNeo@web171403.mail.ir2.yahoo.com> > version of them.? So, I don't think it's accurate to consider > multi-version installation a totally new feature.? (And AFAIK, the > point of contention isn't that setuptools *supports* multi-version > installation, it's that it's the *default* implementation.) That distutils features could be manipulated in some esoteric way doesn't mean that distutils supports multi-version installations - not by design, anyway. It's perfectly fine for setuptools, buildout and other third-party tools to support multi-version installations in whatever way they see fit ?- I only raised the question of a PEP because multi-version would be a significant new feature if in Python (leaving aside technicalities about whether something "bundled with Python" is "in Python"). Regards, Vinay Sajip From noah at coderanger.net Fri Jul 19 02:39:38 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Thu, 18 Jul 2013 17:39:38 -0700 Subject: [Distutils] Worry about lack of focus Message-ID: So we've recently seen a big resurgence in activity on improving Python packaging. First off, thats good, hopefully thats why we are all here. 
That said, I'm becoming worried about a possible lack of focus, and I know I'm not the only one. There have been many ideas floated, and many PEPs either sketched out, reworked, or are stated to be in planning. I think perhaps we should work out some kind of shortlist of what we think can and should be accomplished in the short term and just keep a running list of topics that need energy but are lower priority. This would reduce the chances of hitting the "fix the whole world at once" situation that we have run in to before in this attempt, which often results in burnout and frustration all around. Just to kick things off here are the rough topics I can think of that I've seen discussed recently (ignoring that many of these are dependent on each other): * Including pip with Python 3.4 * Bundling setuptools with pip * Splitting setuptools and pkg_resources * Replacing the executable generation in pip with something new * Working out how to let pip upgrade itself on Windows * Entrypoints in distutils/the stdlib * Executable generation in distlib * Signing/vetting of releases * General improvements to the wheel format * General improvements to package metadata Apologies for anything I have mis-paraphrased or missed, but that is definitely a lot of things to have up in the air. Just want to make sure we can get everything done without anyone going crazy(er) and that we keep sight of whats going on. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Fri Jul 19 03:26:24 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 21:26:24 -0400 Subject: [Distutils] Worry about lack of focus In-Reply-To: References: Message-ID: <949369A2-CCE9-498C-94E2-AE2B7D52E090@stufft.io> On Jul 18, 2013, at 8:39 PM, Noah Kantrowitz wrote: > So we've recently seen a big resurgence in activity on improving Python packaging. First off, thats good, hopefully thats why we are all here. That said, I'm becoming worried about a possible lack of focus, and I know I'm not the only one. There have been many ideas floated, and many PEPs either sketched out, reworked, or are stated to be in planning. I think perhaps we should work out some kind of shortlist of what we think can and should be accomplished in the short term and just keep a running list of topics that need energy but are lower priority. This would reduce the chances of hitting the "fix the whole world at once" situation that we have run in to before in this attempt, which often results in burnout and frustration all around. Just to kick things off here are the rough topics I can think of that I've seen discussed recently (ignoring that many of these are dependent on each other): > > * Including pip with Python 3.4 > * Bundling setuptools with pip > * Splitting setuptools and pkg_resources > * Replacing the executable generation in pip with something new > * Working out how to let pip upgrade itself on Windows > * Entrypoints in distutils/the stdlib > * Executable generation in distlib > * Signing/vetting of releases > * General improvements to the wheel format > * General improvements to package metadata > > Apologies for anything I have mis-paraphrased or missed, but that is definitely a lot of things to have up in the air. Just want to make sure we can get everything done without anyone going crazy(er) and that we keep sight of whats going on. 
> > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig As my last email said, I completely agree here. We don't just risk burn out but we also risk blessing solutions that haven't been thought through entirely. We also increase the "churn" of packaging making it more difficult for people who *aren't* following along to figure out what they are supposed to do. If we come in and try to advocate them changing huge swathes of their toolchain or how to do things they are going to get frustrated. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Fri Jul 19 03:49:28 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 18 Jul 2013 21:49:28 -0400 Subject: [Distutils] Worry about lack of focus In-Reply-To: References: <949369A2-CCE9-498C-94E2-AE2B7D52E090@stufft.io> Message-ID: <8B0DA96D-36A8-4442-8BF0-91B6E54C7D04@stufft.io> On Jul 18, 2013, at 9:47 PM, Robert Collins wrote: > I'm not too worried about whats in progress... > > I am worried about disruption when we rush things - e.g. the current > broken state of setuptools+pip. > > -Rob I'd argue that doing too much at once will lead to rushing things and other brokenness. It spreads people out and provides less eyes on each component and less going back and forth because people just don't have the energy to keep track of everything. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Fri Jul 19 03:47:30 2013 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 19 Jul 2013 13:47:30 +1200 Subject: [Distutils] Worry about lack of focus In-Reply-To: <949369A2-CCE9-498C-94E2-AE2B7D52E090@stufft.io> References: <949369A2-CCE9-498C-94E2-AE2B7D52E090@stufft.io> Message-ID: On 19 July 2013 13:26, Donald Stufft wrote: > > On Jul 18, 2013, at 8:39 PM, Noah Kantrowitz wrote: > >> So we've recently seen a big resurgence in activity on improving Python packaging. First off, thats good, hopefully thats why we are all here. That said, I'm becoming worried about a possible lack of focus, and I know I'm not the only one. There have been many ideas floated, and many PEPs either sketched out, reworked, or are stated to be in planning. I think perhaps we should work out some kind of shortlist of what we think can and should be accomplished in the short term and just keep a running list of topics that need energy but are lower priority. This would reduce the chances of hitting the "fix the whole world at once" situation that we have run in to before in this attempt, which often results in burnout and frustration all around. 
Just to kick things off here are the rough topics I can think of that I've seen discussed recently (ignoring that many of these are dependent on each other): >> >> * Including pip with Python 3.4 >> * Bundling setuptools with pip >> * Splitting setuptools and pkg_resources >> * Replacing the executable generation in pip with something new >> * Working out how to let pip upgrade itself on Windows >> * Entrypoints in distutils/the stdlib >> * Executable generation in distlib >> * Signing/vetting of releases >> * General improvements to the wheel format >> * General improvements to package metadata >> >> Apologies for anything I have mis-paraphrased or missed, but that is definitely a lot of things to have up in the air. Just want to make sure we can get everything done without anyone going crazy(er) and that we keep sight of whats going on. I'm not too worried about whats in progress... I am worried about disruption when we rush things - e.g. the current broken state of setuptools+pip. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From ncoghlan at gmail.com Fri Jul 19 06:06:43 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 14:06:43 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline Message-ID: We have a lot of initiatives going every which way at the moment, so I figured it would be a good idea to get a common perception of what we consider to be the important near term goals and a realistic timeline for improving the packaging ecosystem (in particular, the timing relative to the CPython 3.4 release cycle). One of the big things I'd like us to do is ensure we separate out "urgent" things that are coupled to the 3.4 release cycle (like ensuring in-place upgrades for pip work on Windows) from the "important but not urgent" things where we can take more time (like metadata 2.0) This is kinda long, but if people aren't used to long emails from me by now, they haven't been paying attention ;) (PyPA devs - please forward this to pypa-dev for discussion with the folks that don't frequent distutils-sig) My current impression is that the goals below should be fairly realistic. Already done or very close to done (Yay!): * improved PyPI SSL support * setuptools/distribute merger * easy_install SSL verification * setuptools support for additional hashes beyond md5 * pip 1.4 release with SSL verification and initial wheel support (soon!) Before Python 3.4 feature freeze (currently November 23, 2013) * decide on a bundling or explicit bootstrapping scheme for pip (this still needs a PEP to help clarify the pros and cons of the various alternatives) * get RM & installer builder consensus on that scheme * make any necessary updates to CPython (e.g. possibly adding Lib/getpip.py) * (hopefully) add support for indirect imports (see http://mail.python.org/pipermail/import-sig/2013-July/000645.html for the draft PEP - thanks Eric for taking this from a rough idea in email to a concrete proposal!) 
Before Python 3.4 first release candidate (currently Jan 18, 2014) * pip 1.5 available (or at least release candidates) * setuptools releases as needed * improved handling of in-place pip upgrades on Windows * improved handling of pip/setuptools/pkg_resources division of responsibility * both pip and setuptools available as cross platform wheel files on PyPI * Key requirement: "pip uninstall setuptools" must be supported & must not fundamentally break pip (but may disable installation from anything other than wheel files) * Highly desirable: possible to install pkg_resources without installing setuptools * Highly desirable: possible to install setuptools without the easy_install script (just the script, having the implementation in the setuptools.commands subpackage is fine) Following Python 3.4 final release (currently Feb 22, 2014) * further proposals target pip 1.6 - decoupled from CPython release cycle * metadata 2.0 (PEP 426/440) * sdist 2.0 and wheel 1.1 * installation database format update * revisit distlib-based pip (assuming 1.5 isn't based on a vendored distlib) * revisit TUF-for-PyPI (that's more likely to be pip 1.7 timeframe, though...) Independent activities & miscellaneous suggestions * maybe suggest "pip install distlib" over pip gaining its own programmatic API? * PEP 8 cleanup (including clarification of what constitutes an internal API) * improved PyPI upload API (Donald's working on this) * getting Warehouse to a point where it can be brought online as "pypi-next.python.org" * TUF-for-PyPI exploration (the TUF folks seems to have this well in hand) * improved local PyPI hosting (especially devpi) Specifically on the "bundle or bootstrap pip" front, I'll note that due to the concerns regarding how bundling pip with the CPython MSI installer may interact with in-place upgrades, I'm leaning back towards explicit bootstrapping, with an option to run the bootstrap as part of the installation process for both CPython 3.4+ and the Python Launcher for Windows. Doing that also gives Linux distros something they can patch in the system Python to direct users towards the appropriate system package manager command. Regardless, we still need the various bundling-or-bootstrap alternatives that aren't covered in PEP 439 extracted from the list archives and turned into a PEP that compares them and suggests a preferred solution. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Jul 19 06:23:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 14:23:09 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 19 July 2013 09:37, Vinay Sajip wrote: >> I think the point is that people might be dependent on this functionality and > >> changing it out from underneath them could break their world. > > > I got the point that Daniel made, and my question was about *how* their world would break, and whether we really need to support multiple versions of something installed side-by-side, with on-the-fly sys.path manipulation. If that is a real requirement which should be supported, shouldn't there be a PEP for it, if it's coming into Python? 
It's not supported by distutils, and it has been a point of contention. It's a real requirement - Linux distros need it to work around parallel installation of backwards incompatible libraries in the system Python. Yes, it's an implementation defined feature of pkg_resources (not setuptools per se), but it's one that works well enough even if the error message can be opaque and the configuration can get a little arcane :) > A PEP would allow standardisation of the multiple-versions feature it it's considered desirable, rather than definition by implementation (which I understand you're not in favour of, in general). > > If it's not considered desirable and doesn't need support, then we only need to consider if it's undeclared setuptools dependencies that we're concerned with, or some other failure mode not yet identified - hence, my questions. I like to get into specifics :-) I like the idea of switching to zc.buildout style entry points - it makes it easier to get pip to a point where "no setuptools" means "can only install from wheel files" rather than "can't install anything" (that way pip can install setuptools from a wheel if it needs to build something else from source). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Jul 19 06:30:15 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 14:30:15 +1000 Subject: [Distutils] Worry about lack of focus In-Reply-To: References: Message-ID: On 19 July 2013 10:39, Noah Kantrowitz wrote: > So we've recently seen a big resurgence in activity on improving Python packaging. First off, thats good, hopefully thats why we are all here. That said, I'm becoming worried about a possible lack of focus, and I know I'm not the only one. Indeed, I realised I had a timeline sketched in my head, but had never actually shared it (and many people seem to see metadata 2.0 becoming relevant to end users *far* earlier than I had in mind - I don't see it as becoming relevant until some time in the middle of next year). Posted now, though (I started working on it this morning and just hit send before seeing this thread). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Fri Jul 19 06:34:03 2013 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 18 Jul 2013 21:34:03 -0700 Subject: [Distutils] "ImportError: No module named setuptools" (when using "pip install --upgrade") Message-ID: If you're getting "ImportError: No module named setuptools" when using "pip install --upgrade", see here for an explanation and solution: https://github.com/pypa/pip/issues/1064 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Fri Jul 19 06:39:39 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 19 Jul 2013 00:39:39 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <0346ED5C-49A4-49D8-B54F-316C3D5CF763@stufft.io> On Jul 19, 2013, at 12:23 AM, Nick Coghlan wrote: > I like the idea of switching to zc.buildout style entry points - it > makes it easier to get pip to a point where "no setuptools" means "can > only install from wheel files" rather than "can't install anything" > (that way pip can install setuptools from a wheel if it needs to build > something else from source). I plan on making pip bundle setuptools regardless. To underline how important that is, it's been discovered (though we are still working out _why_) that pip 1.3.1 on python 3.x+ is broken with setuptools 0.7+. Historically we haven't tested old versions of pip against new versions of setuptools (and with how quickly setuptools is releasing now a days that matrix is going to become very big very fast). Bundling setuptools makes things way more stable and alleviates a lot of long term support headaches. Also just to be specific entry points don't require setuptools, they require pkg_resources which currently is installed as part of setuptools but can likely be split out. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Fri Jul 19 06:43:29 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 19 Jul 2013 00:43:29 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <0346ED5C-49A4-49D8-B54F-316C3D5CF763@stufft.io> References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <0346ED5C-49A4-49D8-B54F-316C3D5CF763@stufft.io> Message-ID: <0AE8D42F-966C-4164-8202-7B08F9C3697F@stufft.io> On Jul 19, 2013, at 12:39 AM, Donald Stufft wrote: > > On Jul 19, 2013, at 12:23 AM, Nick Coghlan wrote: > >> I like the idea of switching to zc.buildout style entry points - it >> makes it easier to get pip to a point where "no setuptools" means "can >> only install from wheel files" rather than "can't install anything" >> (that way pip can install setuptools from a wheel if it needs to build >> something else from source). > > I plan on making pip bundle setuptools regardless. > > To underline how important that is, it's been discovered (though we are still working out _why_) that pip 1.3.1 on python 3.x+ is broken with setuptools 0.7+. Historically we haven't tested old versions of pip against new versions of setuptools (and with how quickly setuptools is releasing now a days that matrix is going to become very big very fast). > > Bundling setuptools makes things way more stable and alleviates a lot of long term support headaches. 
> > Also just to be specific entry points don't require setuptools, they require pkg_resources which currently is installed as part of setuptools but can likely be split out. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Just to expand a bit here. I think the only reason this worked at all historically is because setuptools hadn't changed much in the last few years so there wasn't much chance for regression. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Fri Jul 19 07:07:27 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 15:07:27 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 14:06, Nick Coghlan wrote: > Already done or very close to done (Yay!): > > * improved PyPI SSL support > * setuptools/distribute merger > * easy_install SSL verification > * setuptools support for additional hashes beyond md5 > * pip 1.4 release with SSL verification and initial wheel support (soon!) Marcus pointed out that should be: * pip 1.3 release with SSL verification support * pip 1.4 release with initial wheel support (soon!) I also missed out: * PyPI relocation to OSU/OSL * PyPI CDN * Massive reduction in external link scraping (PEP 438) Good things have already been done, more good things are coming, we just need to pick a sensible order and time frame to avoid burning anyone out :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Fri Jul 19 10:13:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Jul 2013 09:13:53 +0100 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 05:06, Nick Coghlan wrote: > * both pip and setuptools available as cross platform wheel files on > PyPI > Just to point out, this is the one that needs a solution to the "script wrappers" issues. At the moment, nominally cross-platform/architecture wheels are actually not so, because the wrapper scripts are fundamentally platform specific. Apologies if everyone already understood this fact. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Jul 19 10:17:07 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Jul 2013 09:17:07 +0100 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 05:06, Nick Coghlan wrote: > * (hopefully) add support for indirect imports (see > http://mail.python.org/pipermail/import-sig/2013-July/000645.html for > the draft PEP - thanks Eric for taking this from a rough idea in email > to a concrete proposal!) > Looking at the import-sig archives, it looks like the list is essentially dead (I obviously unsubscribed some time ago, as I hadn't received the referenced email). Might be worth discussing this on python-dev? 
Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Jul 19 10:28:37 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Jul 2013 09:28:37 +0100 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 18 July 2013 23:51, Daniel Holth wrote: > On Thu, Jul 18, 2013 at 6:42 PM, Nick Coghlan wrote: > > I actually now plan to make scripts and exports first class citizens in > PEP > > 426, with pydist-scripts.json and pydist-exports.json as extracted > summary > > files (like the existing pydist-dependencies.json). > > > > They're important enough to include directly. > > > > Cheers, > > Nick. > > Must they be two separate features? One of the reasons I use > entry_points scripts is that I forget that the scripts= command to > setup() exists at all. This is just the metadata. I would assume that the console-scripts/gui-scripts entry pointdefinitions in setuptools would be extracted and put into the scripts metadata, and any non-script entry points would go into exports. The legacy scripts= arguments would also go into scripts. Nick - How would the 2 methods of specifying scripts (legacy name of a (potentially platform-specific) file, and setuptools module:function style spec requiring wrapper generation) be recorded, under your approach? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Jul 19 10:31:41 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 18:31:41 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 18:17, Paul Moore wrote: > On 19 July 2013 05:06, Nick Coghlan wrote: >> >> * (hopefully) add support for indirect imports (see >> http://mail.python.org/pipermail/import-sig/2013-July/000645.html for >> the draft PEP - thanks Eric for taking this from a rough idea in email >> to a concrete proposal!) > > > Looking at the import-sig archives, it looks like the list is essentially > dead (I obviously unsubscribed some time ago, as I hadn't received the > referenced email). There hasn't been a lot to talk about since the namespace package discussions :) > Might be worth discussing this on python-dev? Given the functionality it aims to replace, here would probably be a better place for preliminary discussions to thrash out a detailed proposal. Eric and I were actually wondering about whether or not to cross-post it. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Jul 19 10:35:27 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 18:35:27 +1000 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 July 2013 18:28, Paul Moore wrote: > This is just the metadata. I would assume that the > console-scripts/gui-scripts entry pointdefinitions in setuptools would be > extracted and put into the scripts metadata, and any non-script entry points > would go into exports. The legacy scripts= arguments would also go into > scripts. > > Nick - How would the 2 methods of specifying scripts (legacy name of a > (potentially platform-specific) file, and setuptools module:function style > spec requiring wrapper generation) be recorded, under your approach? As in "we install this script" vs "we need a script wrapper generated at install time for this"? Not sure, I hadn't even the idea of letting people register arbitrary "we install this script". 
Heck, I haven't even worked out what I want the format to look like :) I'll take that into account now, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Fri Jul 19 10:38:09 2013 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 19 Jul 2013 20:38:09 +1200 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 July 2013 01:03, Daniel Holth wrote: > Abstract > > This PEP proposes a way to represent the setuptools ?entry points? > feature in standard Python metadata. Entry points are a useful > mechanism for advertising or discovering plugins or other exported > functionality without having to depend on the module namespace. Since > the feature is used by many existing Python distributions and not > everyone wants to use setuptools, it is useful to have a way to > represent the functionality that is not tied to setuptools itself. > > The proposed feature defines an extension field for the standard > Python distribution metadata and some basic semantics for its use. So my question here would be - can we make it faster? We have just been diagnosing a performance problem in nova due to rootwrap being a pkg_resources scripts entry point : just getting to the first line of main() takes 200ms, and we make dozens of subprocess calls (has to be, we're escalating privileges) to the script in question : that time is nearly entirely doing introspection of metadata from disk. -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From vinay_sajip at yahoo.co.uk Fri Jul 19 11:24:56 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 09:24:56 +0000 (UTC) Subject: [Distutils] entry points PEP References: Message-ID: Robert Collins robertcollins.net> writes: > So my question here would be - can we make it faster? We have just > been diagnosing a performance problem in nova due to rootwrap being a > pkg_resources scripts entry point : just getting to the first line of > main() takes 200ms, and we make dozens of subprocess calls (has to be, > we're escalating privileges) to the script in question : that time is > nearly entirely doing introspection of metadata from disk. Is there more detailed information about where the time is being spent? e.g. os.stat(), file I/O, parsing of the actual metadata files, load_entry_point() etc. Regards, Vinay Sajip From p.f.moore at gmail.com Fri Jul 19 11:29:25 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Jul 2013 10:29:25 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 19 July 2013 05:23, Nick Coghlan wrote: > On 19 July 2013 09:37, Vinay Sajip wrote: > >> I think the point is that people might be dependent on this > functionality and > > > >> changing it out from underneath them could break their world. > > > > > > I got the point that Daniel made, and my question was about *how* their > world would break, and whether we really need to support multiple versions > of something installed side-by-side, with on-the-fly sys.path manipulation. > If that is a real requirement which should be supported, shouldn't there be > a PEP for it, if it's coming into Python? It's not supported by distutils, > and it has been a point of contention. 
> > It's a real requirement - Linux distros need it to work around > parallel installation of backwards incompatible libraries in the > system Python. Yes, it's an implementation defined feature of > pkg_resources (not setuptools per se), but it's one that works well > enough even if the error message can be opaque and the configuration > can get a little arcane :) > Just to be absolutely clear on my interest in this: 1. I believe (but cannot prove, so I'll accept others stating that I'm wrong) that many people using setuptools for the console-script entry point functionality, have no specific interest in or requirement for multi-version. As an example, take pip itself. So while it is true that functionality will be lost, I do not believe that users will actually be affected in the majority of cases. That's not to say that just removing the functionality without asking is valid. 2. Projects typically do not declare a runtime dependency on setuptools just because they use script wrappers. Maybe they should, but they don't. Again, pip is an example. So wheel-based installs of such projects can break on systems without setuptools (pkg_resources). This is going to be a bigger problem in future, as pip install from wheels does not need setuptools to be installed on the target (and if we vendor setuptools in pip, nor does install from sdist). Of course, after the first time you hit this, you install setuptools and it's never a problem again. But it's a bad user experience. 3. It's an issue for pip itself, as we explicitly do not want a dependency on a system installed setuptools. So we have to hack or replace the setuptools-generated wrappers. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Jul 19 11:32:37 2013 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 19 Jul 2013 21:32:37 +1200 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 July 2013 21:24, Vinay Sajip wrote: > Robert Collins robertcollins.net> writes: > >> So my question here would be - can we make it faster? We have just >> been diagnosing a performance problem in nova due to rootwrap being a >> pkg_resources scripts entry point : just getting to the first line of >> main() takes 200ms, and we make dozens of subprocess calls (has to be, >> we're escalating privileges) to the script in question : that time is >> nearly entirely doing introspection of metadata from disk. > > Is there more detailed information about where the time is being spent? e.g. > os.stat(), file I/O, parsing of the actual metadata files, load_entry_point() > etc. Not sure. Joe? -Rob -- Robert Collins Distinguished Technologist HP Cloud Services From p.f.moore at gmail.com Fri Jul 19 11:48:52 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 19 Jul 2013 10:48:52 +0100 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 July 2013 09:35, Nick Coghlan wrote: > Not sure, I hadn't even the idea of letting people register arbitrary > "we install this script". Heck, I haven't even worked out what I want > the format to look like :) > That's the big legacy issue. The old distutils script= argument just dumps arbitrary files into the scripts location. 
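To make the contrast concrete, the two styles look roughly like this in a setup.py (the names below are placeholders, not taken from any real project):

    # setup.py -- illustrative sketch only
    from setuptools import setup

    setup(
        name="mypkg",
        version="1.0",
        packages=["mypkg"],
        # distutils-style: ship a pre-written file verbatim into the
        # scripts directory; the installer treats it as an opaque file.
        scripts=["bin/mycli-legacy"],
        # setuptools-style: declare the callable and let the installer
        # generate a platform-appropriate wrapper at install time.
        entry_points={
            "console_scripts": [
                "mycli = mypkg.cli:main",
            ],
        },
    )

The scripts= file is installed as-is (which is why it ends up platform specific), whereas the entry point form only records "module:callable" and leaves wrapper generation to whoever does the install.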
I thought people only used that for the same thing that setuptools entry points can do (and so could safely be treated as "legacy, phase it out") but Daniel has scanned PyPI and tells me otherwise :-( There's also setuptools itself, which generates exes and py files (on Windows, on Unix presumably just a #! script) which are then effectively "legacy-style" scripts It could certainly be changed to just write metadata, but as per the other thread this would be a functionality change that people seem to feel would be unacceptable. > I'll take that into account now, though. > Best of luck :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From liamk at numenet.com Thu Jul 18 19:58:41 2013 From: liamk at numenet.com (Liam Kirsher) Date: Thu, 18 Jul 2013 10:58:41 -0700 Subject: [Distutils] distribute 0.7.3 causing installation error? In-Reply-To: References: <51E76EDD.7040009@numenet.com> Message-ID: <51E82CD1.6010000@numenet.com> Marcus, Thanks! After reading that I think I can fix this by installing pip 1.4. However, some questions remain. Pip is currently being installed via distribute_setup.py, which is retrieved from here: http://python-distribute.org/distribute_setup.py However, that doesn't seem to work to install 0.7.3 because it is looking for .tar.gz and what's actually there is .zip: https://pypi.python.org/packages/source/d/distribute/ It looks like distribute_setup.py will just install the DEFAULT_VERSION which is 0.6.49. I think that ends up making you install pip 1.3.1. So, am I right in thinking that I have to modify distribute_setup.py to deal with the .zip file -- or is it possible for someone to add a .tar.gz version? Obviously, the latter option is preferred! Secondly, I would have to create a different version of distribute_setup.py anyway that would default to 0.7.3, since it is being run via its main() function, and not via "from distribute_setup import use_setuptools; use_setuptools()" Thanks, Liam On 07/17/2013 10:21 PM, Marcus Smith wrote: > Hello Liam: > The problem and solutions are explained here: > https://github.com/pypa/pip/issues/1033#issuecomment-20546202 > Btw, the issue includes comments from chef maintainers about a similar > (or the same) supervisor recipe. > Marcus > > > On Wed, Jul 17, 2013 at 9:28 PM, Liam Kirsher > wrote: > > Hi, > > I ran into an error about a month ago caused by a change in the > PyPi version of distribute. Thankfully, someone was able to roll > back the change. Unfortunately, I'm getting a similar kind of > problem now -- and I notice that 0.7.3 was released on 5 July, > so... I'm wondering if it might be related. This is being > included in a Chef recipe. > > I'm attaching the pip.log, which shows it uninstalling distribute > (which looks like version 0.6.49), and then failing to find it and > attempting to install 0.7.3, and subsequent package installs failing. > > Anyway, I'm not quite sure what to do here! How can I fix this > problem? (And also, how can I prevent it from happening in the > future by pegging the version to something that works?) > > > The pip recipe includes the following comments, which may be > relevant. > >> # Ubuntu's python-setuptools, python-p ip and py thon-virtualenv >> packages >> # are broken...this feels like Rubygems! 
>> # >> http://stackoverflow.com/questions/4324558/whats-the-proper-way-to-install-pip-virtualenv-and-distribute-for-python >> >> # >> https://bitbucket.org/ianb/pip/issue/104/pip-uninstall-on-ubuntu-linux >> >> remote_file >> "#{Chef::Config[:file_cache_path]}/distribute_setup.py" do >> >> source node['python']['distribute_script_url'] >> >> mode "0644" >> not_if { ::File.exists?(pip_binary) } >> >> end >> execute "install-pip" do >> cwd Chef::Config[:file_cache_path] >> command <<-EOF >> >> #{node['python']['binary']} distribute_setup.py >> --download-base=#{node['python']['distribute_option']['download_base']} >> #{::File.dirname(pip_binary)}/easy_install pip >> >> EOF >> not_if { ::File.exists?(pip_binary) } >> end > > > > Chef run log: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Recipe: >> python::virtualenv >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> * >> python_pip[virtualenv] action install >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> - >> install package python_pip[virtualenv] version latest >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Recipe: >> supervisor::default >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> * >> python_pip[supervisor] action upgrade >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ================================================================================ >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Error >> executing action `upgrade` on resource 'python_pip[supervisor]' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ================================================================================ >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Mixlib::ShellOut::ShellCommandFailed >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ------------------------------------ >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Expected process to exit with [0], but received '1' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> ---- >> Begin output of pip install --upgrade supervisor ---- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> STDOUT: >> Downloading/unpacking supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking distribute from >> https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a >> (from supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package distribute >> 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking meld3>=0.6.5 (from supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking setuptools>=0.7 (from distribute->supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing collected packages: supervisor, distribute, meld3, >> setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Skipping installation of >> /usr/local/lib/python2.7/dist-packages/supervisor/__init__.py >> (namespace package) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing >> /usr/local/lib/python2.7/dist-packages/supervisor-3.0b2-py2.7-nspkg.pth >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing echo_supervisord_conf script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing pidproxy script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing supervisorctl script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing supervisord script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Found >> existing installation: distribute 0.6.49 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Uninstalling distribute: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Successfully uninstalled distribute >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for distribute >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Traceback (most recent call last): >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> File "", line 1, in >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ImportError: No module named setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Complete output from command /usr/bin/python -c "import >> setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', >> '\n'), __file__, 'exec'))" install --record >> /tmp/pip-mDCOBa-record/install-record.txt >> --single-version-externally-managed: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Traceback (most recent call last): >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> File >> "", line 1, in >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ImportError: No module named setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ---------------------------------------- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Command >> /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> 
-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', >> '\n'), __file__, 'exec'))" install --record >> /tmp/pip-mDCOBa-record/install-record.txt >> --single-version-externally-managed failed with error code 1 in >> /tmp/pip-build-root/meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Storing >> complete log in /home/ubuntu/.pip/pip.log >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> STDERR: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> ---- >> End output of pip install --upgrade supervisor ---- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Ran pip >> install --upgrade supervisor returned 1 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Cookbook Trace: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> --------------- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> /var/chef/cache/cookbooks/python/providers/pip.rb:155:in `pip_cmd' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> /var/chef/cache/cookbooks/python/providers/pip.rb:139:in >> `install_package' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> /var/chef/cache/cookbooks/python/providers/pip.rb:144:in >> `upgrade_package' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> /var/chef/cache/cookbooks/python/providers/pip.rb:60:in `block (2 >> levels) in class_from_file' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> /var/chef/cache/cookbooks/python/providers/pip.rb:58:in `block in >> class_from_file' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Resource Declaration: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> --------------------- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> # In >> /var/chef/cache/cookbooks/supervisor/recipes/default.rb >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> 29: >> python_pip "supervisor" do >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> 30: >> action :upgrade >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> 31: >> version node['supervisor']['version'] if >> node['supervisor']['version'] >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> 32: end >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> 33: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Compiled Resource: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ------------------ >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> # >> Declared in >> 
/var/chef/cache/cookbooks/supervisor/recipes/default.rb:29:in >> `from_file' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> python_pip("supervisor") do >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> action [:upgrade] >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> retries 0 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> retry_delay 2 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> cookbook_name "supervisor" >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> recipe_name "default" >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> package_name "supervisor" >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> timeout 900 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> options " --upgrade" >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> end >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Recipe: >> ntp::default >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> * >> service[ntp] action restart >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> - >> restart service service[ntp] >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Recipe: >> rabbitmq::default >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> * >> service[rabbitmq-server] action restart >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> - >> restart service service[rabbitmq-server] >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> [2013-07-16T02:36:50+00:00] ERROR: Running exception handlers >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> [2013-07-16T02:36:51+00:00] FATAL: Saving node information to >> /var/chef/cache/failed-run-data.json >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> [2013-07-16T02:36:51+00:00] ERROR: Exception handlers complete >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Chef >> Client failed. 
35 resources updated >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> [2013-07-16T02:36:51+00:00] FATAL: Stacktrace dumped to >> /var/chef/cache/chef-stacktrace.out >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> [2013-07-16T02:36:51+00:00] FATAL: >> Mixlib::ShellOut::ShellCommandFailed: python_pip[supervisor] >> (supervisor::default line 29) had an error: >> Mixlib::ShellOut::ShellCommandFailed: Expected process to exit >> with [0], but received '1' >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> ---- >> Begin output of pip install --upgrade supervisor ---- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> STDOUT: >> Downloading/unpacking supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking distribute from >> https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a >> (from supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package distribute >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking meld3>=0.6.5 (from supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Downloading/unpacking setuptools>=0.7 (from distribute->supervisor) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py egg_info for package setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing collected packages: supervisor, distribute, meld3, >> setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for supervisor >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Skipping installation of >> /usr/local/lib/python2.7/dist-packages/supervisor/__init__.py >> (namespace package) >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing >> /usr/local/lib/python2.7/dist-packages/supervisor-3.0b2-py2.7-nspkg.pth >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing echo_supervisord_conf script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing pidproxy script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing supervisorctl script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Installing supervisord script to /usr/local/bin >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Found >> existing installation: distribute 0.6.49 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Uninstalling distribute: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Successfully uninstalled distribute >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for distribute >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Running setup.py install for meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Traceback (most recent call last): >> 
ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> File "", line 1, in >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ImportError: No module named setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Complete output from command /usr/bin/python -c "import >> setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', >> '\n'), __file__, 'exec'))" install --record >> /tmp/pip-mDCOBa-record/install-record.txt >> --single-version-externally-managed: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> Traceback (most recent call last): >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> File >> "> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> g>", >> line 1, in >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ImportError: No module named setuptools >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> >> ---------------------------------------- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Command >> /usr/bin/python -c "import >> setuptools;__file__='/tmp/pip-build-root/meld3/setup.py';exec(compile(open(__file__).read().replace('\r\n', >> '\n'), __file__, 'exec'))" install --record >> /tmp/pip-mDCOBa-record/install-record.txt >> --single-version-externally-managed failed with error code 1 in >> /tmp/pip-build-root/meld3 >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Storing >> complete log in /home/ubuntu/.pip/pip.log >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> STDERR: >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> ---- >> End output of pip install --upgrade supervisor ---- >> ec2-54-245-36-62.us-west-2.compute.amazonaws.com >> Ran pip >> install --upgrade supervisor returned 1 > > -- > Liam Kirsher > PGP: http://liam.numenet.com/pgp/ > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > -- Liam Kirsher PGP: http://liam.numenet.com/pgp/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Jul 19 14:58:03 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 08:58:03 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 5:32 AM, Robert Collins wrote: > On 19 July 2013 21:24, Vinay Sajip wrote: >> Robert Collins robertcollins.net> writes: >> >>> So my question here would be - can we make it faster? We have just >>> been diagnosing a performance problem in nova due to rootwrap being a >>> pkg_resources scripts entry point : just getting to the first line of >>> main() takes 200ms, and we make dozens of subprocess calls (has to be, >>> we're escalating privileges) to the script in question : that time is >>> nearly entirely doing introspection of metadata from disk. >> >> Is there more detailed information about where the time is being spent? e.g. >> os.stat(), file I/O, parsing of the actual metadata files, load_entry_point() >> etc. > > Not sure. Joe? > > -Rob You should at least time it against the simpler "import sys, x.main; sys.exit(main())" style wrapper. As a pkg_resources optimization it might be worthwhile to try using https://github.com/benhoyt/scandir/ or in Python 3, the undocumented cache used by the importer system, to try to speed things up. 
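To be concrete, that comparison is between a generated wrapper that goes through pkg_resources and one that just imports the target directly -- roughly the following two scripts (project and module names are made up for illustration):

    # wrapper A: the pkg_resources style that setuptools/easy_install
    # generates; the metadata scan happens before main() is reached.
    import sys
    from pkg_resources import load_entry_point

    if __name__ == '__main__':
        sys.exit(load_entry_point('myproj', 'console_scripts', 'mycmd')())

    # wrapper B (a separate file): a plain import of the callable,
    # with no metadata lookup at startup.
    import sys
    from myproj.cli import main  # made-up module path

    if __name__ == '__main__':
        sys.exit(main())

Running both under python -m cProfile would also help answer Vinay's question about whether the time goes into os.stat()/file I/O or into parsing the metadata.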
It is a bit tricky to profile pkg_resources since it does a lot of work at import. From dholth at gmail.com Fri Jul 19 15:04:50 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 09:04:50 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Fri, Jul 19, 2013 at 5:29 AM, Paul Moore wrote: > On 19 July 2013 05:23, Nick Coghlan wrote: >> >> On 19 July 2013 09:37, Vinay Sajip wrote: >> >> I think the point is that people might be dependent on this >> >> functionality and >> > >> >> changing it out from underneath them could break their world. >> > >> > >> > I got the point that Daniel made, and my question was about *how* their >> > world would break, and whether we really need to support multiple versions >> > of something installed side-by-side, with on-the-fly sys.path manipulation. >> > If that is a real requirement which should be supported, shouldn't there be >> > a PEP for it, if it's coming into Python? It's not supported by distutils, >> > and it has been a point of contention. >> >> It's a real requirement - Linux distros need it to work around >> parallel installation of backwards incompatible libraries in the >> system Python. Yes, it's an implementation defined feature of >> pkg_resources (not setuptools per se), but it's one that works well >> enough even if the error message can be opaque and the configuration >> can get a little arcane :) > > > Just to be absolutely clear on my interest in this: > > 1. I believe (but cannot prove, so I'll accept others stating that I'm > wrong) that many people using setuptools for the console-script entry point > functionality, have no specific interest in or requirement for > multi-version. As an example, take pip itself. So while it is true that > functionality will be lost, I do not believe that users will actually be > affected in the majority of cases. That's not to say that just removing the > functionality without asking is valid. > > 2. Projects typically do not declare a runtime dependency on setuptools just > because they use script wrappers. Maybe they should, but they don't. Again, > pip is an example. So wheel-based installs of such projects can break on > systems without setuptools (pkg_resources). This is going to be a bigger > problem in future, as pip install from wheels does not need setuptools to be > installed on the target (and if we vendor setuptools in pip, nor does > install from sdist). Of course, after the first time you hit this, you > install setuptools and it's never a problem again. But it's a bad user > experience. > > 3. It's an issue for pip itself, as we explicitly do not want a dependency > on a system installed setuptools. So we have to hack or replace the > setuptools-generated wrappers. > > Paul. pip should just add pkg_resources as a dependency for any package that has console_scripts entry points. 
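(Detecting that case is cheap, by the way -- the entry points end up in the distribution's entry_points.txt, so an installer could do something like the sketch below. Paths and layout are simplified; this is not meant to be actual pip code.)

    import os

    def needs_pkg_resources(egg_info_dir):
        """Return True if the built distribution declares console_scripts
        (or gui_scripts) entry points, i.e. the setuptools-style wrappers
        will import pkg_resources at run time."""
        path = os.path.join(egg_info_dir, 'entry_points.txt')
        if not os.path.exists(path):
            return False
        with open(path) as f:
            # entry_points.txt is INI-style; collect its section headers.
            sections = {line.strip() for line in f if line.startswith('[')}
        return bool(sections & {'[console_scripts]', '[gui_scripts]'})

    # e.g. needs_pkg_resources('mypkg.egg-info')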
From ncoghlan at gmail.com Fri Jul 19 15:10:51 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 19 Jul 2013 23:10:51 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 19 July 2013 19:29, Paul Moore wrote: > On 19 July 2013 05:23, Nick Coghlan wrote: >> >> On 19 July 2013 09:37, Vinay Sajip wrote: >> >> I think the point is that people might be dependent on this >> >> functionality and >> > >> >> changing it out from underneath them could break their world. >> > >> > >> > I got the point that Daniel made, and my question was about *how* their >> > world would break, and whether we really need to support multiple versions >> > of something installed side-by-side, with on-the-fly sys.path manipulation. >> > If that is a real requirement which should be supported, shouldn't there be >> > a PEP for it, if it's coming into Python? It's not supported by distutils, >> > and it has been a point of contention. >> >> It's a real requirement - Linux distros need it to work around >> parallel installation of backwards incompatible libraries in the >> system Python. Yes, it's an implementation defined feature of >> pkg_resources (not setuptools per se), but it's one that works well >> enough even if the error message can be opaque and the configuration >> can get a little arcane :) > > > Just to be absolutely clear on my interest in this: > > 1. I believe (but cannot prove, so I'll accept others stating that I'm > wrong) that many people using setuptools for the console-script entry point > functionality, have no specific interest in or requirement for > multi-version. As an example, take pip itself. So while it is true that > functionality will be lost, I do not believe that users will actually be > affected in the majority of cases. That's not to say that just removing the > functionality without asking is valid. I was going to say it would affect Linux distro packagers (since the multi-version support is necessary for us to hack together something vaguely resembling parallel install support for Python libraries that make backwards incompatible changes), but then I remembered that at least Fedora & RHEL SRPMs generally call setup.py directly in the build phase (with setuptools as a build dependency). This means that what pip chooses when installing from source or a wheel won't actually affect distro packaging (since I assume other distros are doing something at least vaguely similar to what we do). With our widely deployed (but still highly specialised) use case out of the picture, I think you're probably right. > 2. Projects typically do not declare a runtime dependency on setuptools just > because they use script wrappers. Maybe they should, but they don't. Again, > pip is an example. So wheel-based installs of such projects can break on > systems without setuptools (pkg_resources). This is going to be a bigger > problem in future, as pip install from wheels does not need setuptools to be > installed on the target (and if we vendor setuptools in pip, nor does > install from sdist). Of course, after the first time you hit this, you > install setuptools and it's never a problem again. But it's a bad user > experience. > > 3. 
It's an issue for pip itself, as we explicitly do not want a dependency > on a system installed setuptools. So we have to hack or replace the > setuptools-generated wrappers. Right, I think the reasonable near term solutions are for pip to either: 1. generate zc.buildout style wrappers with absolute paths to avoid the implied runtime dependency 2. interpret use of script entry points as an implied dependency on setuptools and install it even if not otherwise requested Either way, pip would need to do something about its *own* command line script, which heavily favours option 1 Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Fri Jul 19 16:50:28 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 10:50:28 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: On Fri, Jul 5, 2013 at 4:25 AM, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> The basic problem with the list form is that allowing two representations >> for the same metadata makes for extra complexity we don't really want. It >> means we have to decide if the decomposed version (3 separate entries >> with one item in each install list) is still legal. > > I'm not sure how prescriptive we need to be. For example, posit metadata like: > > { > "install": ["a", "b", "c"], > "extra": "foo" > }, > { > "install": ["d", "e", "f"], > "extra": "foo" > }, > { > "install": ["g"], > "extra": "foo" > } > > Even though there's no particular rationale for structuring it like this, > the intention is clear: "a" .. "g" are dependencies when extra "foo" is > specified. As long as the method by which these entries are processed is > clear in the PEP, then it's not clear what's to be gained by being overly > constraining. > > There are numerous ways in which dependency information can be represented > which are not worth the effort to canonicalise. For example, the order in > which extras or version constraints are declared in a dependency specifier: > > dist-name [foo,bar] (>= 1.0, < 2.0) > > and > > dist-name [bar,foo] (< 2.0, >= 1.0) > > are equivalent, but in any simplistic handling this would slip past e.g. > database uniqueness constraints. More sophisticated handling (by modelling > below the Dependency level) is possible, but whether it's worth it is debatable. > > Regards, > > Vinay Sajip I would really like to see one more level of nesting: requires : { run : [ ... ], test : [ ... ] } The parser and the specification will be simplified by putting all of the the requirements categories inside a uniform dict instead of having magic _-separated top level key names that have to be mapped to the "run", "meta", "test" category names. That way the top-level parser can just check: if metadata['requires'].keys() contains only the allowed values: parse_requirements(metadata['requires']) Then parse_requirements() works the same no matter how many requirements categories there are. From mal at egenix.com Fri Jul 19 17:00:12 2013 From: mal at egenix.com (M.-A. 
Lemburg) Date: Fri, 19 Jul 2013 17:00:12 +0200 Subject: [Distutils] API for registering/managing URLs for a package In-Reply-To: <64C8442E-6D77-4753-9749-5632B43A1127@stufft.io> References: <51E7F759.1060707@egenix.com> <7B228AA9-038F-440D-BF6D-17FCAFD5542F@coderanger.net> <64C8442E-6D77-4753-9749-5632B43A1127@stufft.io> Message-ID: <51E9547C.1030803@egenix.com> On 18.07.2013 21:00, Donald Stufft wrote: > Noah, > External urls are still supported (Although discouraged). > > Marc-Andre, > There is documentation in the PEP, however I have another PEP > coming up for a more streamlined upload process that also contains > a much nicer method of sending external urls as well. So you might > want to wait for that. Thanks for the update, Donald. I found this section: http://www.python.org/dev/peps/pep-0438/#api-for-submitting-external-distribution-urls I'll play around with that API a bit and then migrate to your new API. Thanks, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jul 19 2013) >>> Python Projects, Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope/Plone.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From brett at python.org Fri Jul 19 17:20:46 2013 From: brett at python.org (Brett Cannon) Date: Fri, 19 Jul 2013 11:20:46 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On Thu, Jul 18, 2013 at 8:33 PM, Daniel Holth wrote: > On Thu, Jul 18, 2013 at 8:15 PM, Donald Stufft wrote: > > > > On Jul 18, 2013, at 7:37 PM, Vinay Sajip > wrote: > > > >>> I think the point is that people might be dependent on this > functionality and > >> > >>> changing it out from underneath them could break their world. > >> > >> > >> I got the point that Daniel made, and my question was about *how* their > world would break, and whether we really need to support multiple versions > of something installed side-by-side, with on-the-fly sys.path manipulation. > If that is a real requirement which should be supported, shouldn't there be > a PEP for it, if it's coming into Python? It's not supported by distutils, > and it has been a point of contention. > >> > >> A PEP would allow standardisation of the multiple-versions feature it > it's considered desirable, rather than definition by implementation (which > I understand you're not in favour of, in general). > >> > >> If it's not considered desirable and doesn't need support, then we only > need to consider if it's undeclared setuptools dependencies that we're > concerned with, or some other failure mode not yet identified - hence, my > questions. I like to get into specifics :-) > > > > Yes I'm against implementation defined features. However this is already > the status quo for this particular implementation. 
Basically I'm worried we > are trying to fix too much at once. > > > > One of the major reasons for distutils/packaging failing was it tried to > fix the world in one fell swoop. I see this same pattern starting to > happen here. The problem is each solution has a bunch of corner cases and > gotchas and the more things we try to fix at once the less eyes we'll have > on each individual one and the more rushed the entire toolchain is going to > be. > > > > I think it's *really* important we limit the scope of what we fix at any > one time. Right now we have PEP426, PEP440, PEP439, PEP427, Nick is talking > about an Sdist 2.0 PEP, Daniel just posted another PEP I haven't looked at > yet, this is going to be another PEP. On top of that we have a number of > issues related to those PEPs but not specifically part of those PEPs. > > > > A lot of things is being done right now and I personally have having > trouble keeping up and keeping things straight. I know i'm not the only one > because I've had a number of participants of these discussions privately > tell me that they aren't sure how I'm keeping up (and i'm struggling to do > so). I really don't want us to ship a bunch of half baked / not entirely > thought through solutions. > > > > So can we please limit our scope? Let's start by fixing the stuff we > have now, punting on fixing some other problems by using the existing > tooling and then let's come back to the things we've punted once we've > closed the loop on some of these other outstanding things and fix them > better. > > I feel your pain. > > We might as well allow happy setuptools users to continue using > setuptools. I don't care about making a pkg_resources console_scripts > handler that does the same thing because we can just use the existing > one. The more important contribution is to provide an alternative for > people who are not happy setuptools users. Which is an argument, in my mind, to vendor setuptools over bundling (assuming people are using "bundling" as in "install setuptools next to pip or at least install a .pth file to access the vendored version"). Including pip with Python installers is blessing it as the installer, but if we include setuptools as well that would also be blessing setuptools as *the* building tool as well. If people's preference for virtualenv over venv simply because they didn't want to install pip manually has shown us anything is that the lazy path is the used path. If the long-term plan is to bless setuptools then go for the bundling, but if that decision has not been made yet then bundling may be premature if the bundling of pip with Python moves forward. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Jul 19 17:23:41 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 16:23:41 +0100 (BST) Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> Message-ID: <1374247421.59057.YahooMailNeo@web171405.mail.ir2.yahoo.com> > > I would really like to see one more level of nesting: > > requires : { run : [ ... ], test : [ ... ] } > I've already changed distlib's code several times as the spec has evolved, and would like not to see any more changes so that I can concentrate on some real work ;-) Seriously, what's currently there now works OK, and the code is fairly simple. 
I had suggested a variant with even less nesting - one single "requires" list with each entry as it is currently, but having an additional "kind" key with value ":run:", ":test:" etc. This has the merit that you can add additional kinds without major changes, while processing code can filter the list according to its needs at the time. This was shot down by Donald on the basis that it would make things too complicated, or something. Seems a simpler organisation, to me; any argument about additional time to process is unlikely to be a problem in practice, and there are no numbers to point to any performance problems. Currently, with pip, you have to download whole archives while doing dependency resolution, which takes of the order of *seconds* - *minutes* if you're working with Zope/Plone. Doing it in tens/hundreds of milliseconds is sheer luxury :-) Let's not keep on chopping and changing parts of the JSON schema unless there are actual progress stoppers or missing functional areas, as we recently identified with exports/scripts. It looks as if you and I are the only ones actually implementing this PEP at present, so let's work on interoperability between our implementations so that we can e.g. each build wheels that the other can install, and so on. Interoperability will help confirm that we haven't missed anything. AFAIK distlib tip is up to date with PEP 426/440 as they are today - someone please tell me if they find a counter-example to this assertion. Regards, Vinay Sajip From brett at python.org Fri Jul 19 17:33:32 2013 From: brett at python.org (Brett Cannon) Date: Fri, 19 Jul 2013 11:33:32 -0400 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 4:31 AM, Nick Coghlan wrote: > On 19 July 2013 18:17, Paul Moore wrote: > > On 19 July 2013 05:06, Nick Coghlan wrote: > >> > >> * (hopefully) add support for indirect imports (see > >> http://mail.python.org/pipermail/import-sig/2013-July/000645.html for > >> the draft PEP - thanks Eric for taking this from a rough idea in email > >> to a concrete proposal!) > > > > > > Looking at the import-sig archives, it looks like the list is essentially > > dead (I obviously unsubscribed some time ago, as I hadn't received the > > referenced email). > > There hasn't been a lot to talk about since the namespace package > discussions :) > > > Might be worth discussing this on python-dev? > > Given the functionality it aims to replace, here would probably be a > better place for preliminary discussions to thrash out a detailed > proposal. Eric and I were actually wondering about whether or not to > cross-post it. I agree it should start there (I have already replied to Eric's draft PEP). All of the people with any form of intimate knowledge of how import works are on that list so I think it's still best to hash out any sticking points there before taking to python-dev for decision-making. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vinay_sajip at yahoo.co.uk Fri Jul 19 17:41:50 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 16:41:50 +0100 (BST) Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: <1374248510.90438.YahooMailNeo@web171405.mail.ir2.yahoo.com> > >If the long-term plan is to bless setuptools then go for the bundling, but if that decision has not been made yet then bundling may be premature if the bundling of pip with Python moves forward. > Well, Nick has said that he thinks that "distlib is the future" (or, I assume, something like it - something that is based on PEPs and standardisation rather than a de facto implementation which a sizeable minority have problems with, though it's pragmatically acceptable for the majority). If distlib or something like it (standards-based) is to be the future, we have to be very careful. As I've said to Nick in an off-list mail, that?sort of future is only going to fly if sufficient safeguards are in place such that we don't have to have compatibility shims for setuptools, pkg_resources and pip Python packages/APIs. Based on the actual work I did to replace pkg_resources with distlib in pip, it's not a thing I really want to do more of (or that anyone else should have to do). So, ISTM that pkg_resources and setuptools would need to be subsumed into pip so that they weren't externally visible - perhaps they would move to the pip.vendor package. Otherwise, we might was well accept pkg_resources and setuptools into the stdlib - no matter how many ifs and buts we put in the fine print, that's what we'd essentially have - with apologies to Robert Frost and to borrow from what Brett said, "the lazy road is ?the one most travelled, and that makes all the difference". Plus, there would need to be sufficient health warnings to indicate to people tempted to use these subsumed APIs or any pip API that they would be completely on their own as regards future-proofing. In my view this can't just be left up to the pip maintainers to decide on - it needs to be a condition set by python-dev, to apply if pip is shipped with Python. Otherwise, backward compatibility will tie our hands for ever (or at least, a very long time). Regards, Vinay Sajip From pje at telecommunity.com Fri Jul 19 17:47:04 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 19 Jul 2013 11:47:04 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan wrote: > Right, I think the reasonable near term solutions are for pip to either: > > 1. generate zc.buildout style wrappers with absolute paths to avoid > the implied runtime dependency > 2. interpret use of script entry points as an implied dependency on > setuptools and install it even if not otherwise requested > > Either way, pip would need to do something about its *own* command > line script, which heavily favours option 1 Option 1 also would address some or all of the startup performance complaint. 
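For anyone who hasn't seen one, a wrapper of that style looks roughly like the following; the project name, path and entry point here are invented, and real zc.buildout output differs in its details:

    #!/usr/bin/python
    # Option-1 style wrapper (sketch): absolute paths are baked in at install
    # time and the target callable is imported directly, so neither setuptools
    # nor pkg_resources has to be imported when the script runs.
    import sys

    sys.path[0:0] = [
        '/srv/app/lib/python2.7/site-packages',  # invented install location
    ]

    from example_project.cli import main  # invented entry point

    if __name__ == '__main__':
        sys.exit(main())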
It occurs to me that it might actually be a good idea *not* to put the script wrappers in the standard entry points file, even if that's what setuptools does right now: if lots of packages use that approach, it'll slow down the effective indexing for code that's scanning multiple packages for something like a sqlalchemy adapter. (Alternately, we could use something like 'exports-some.group.name.json' so that each export group is a separate file; this would keep scripts separate from everything else, and optimize plugin searches falling in a particular group. In fact, the files needn't have any contents; it'd be okay to just parse the main .json for any distribution that has exports in the group you're looking for. i.e., the real purpose of the separation of entry points was always just to avoid loading metadata for distributions that don't have the kind of exports you're looking for. In the old world, few distributions exported anything, so just identifying whether a distribution had exports was sufficient. In the new world, more and more distributions over time will have some kind of export, so knowing *which* exports they have will become more important.) From vinay_sajip at yahoo.co.uk Fri Jul 19 18:13:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 16:13:43 +0000 (UTC) Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: Oscar Benjamin gmail.com> writes: > Python 2.x uses GetCommandLineA and 3.x uses GetCommandLineW. A > wrapper to launch 2.x should use GetCommandLineA and CreateProcessA to > ensure that the 8-bit argument strings are passed through unaltered. > To launch 3.x it should use the W versions. If not then the MSVC > runtime (or the OS?) will convert between the 8-bit and 16-bit > encodings using its own lossy routines. There is a standalone launcher available (referenced in PEP 397) which was the reference implementation, but it uses Unicode throughout. I would have assumed that whatever provided the decoding behind GetCommandLineW would use the same encoding in CreateProcessW(..., unicode_command_line, ...). Do you have any specific examples showing potential failure modes? Regards, Vinay Sajip From qwcode at gmail.com Fri Jul 19 18:31:37 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 19 Jul 2013 09:31:37 -0700 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: > * decide on a bundling or explicit bootstrapping scheme for pip > (this still needs a PEP to help clarify the pros and cons of the > various alternatives) > if we improve things enough so that the get-pip.py experience is reliable and robust (and handles setuptools if not bundled), then might that be enough for now? (see the options here: https://github.com/pypa/pip/issues/1049) i.e. improve pip's installer experience, and then come back around to bundling/bootstrap with python. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Fri Jul 19 18:34:30 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 19 Jul 2013 12:34:30 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On Jul 19, 2013, at 11:20 AM, Brett Cannon wrote: > Which is an argument, in my mind, to vendor setuptools over bundling (assuming people are using "bundling" as in "install setuptools next to pip or at least install a .pth file to access the vendored version"). Including pip with Python installers is blessing it as the installer, but if we include setuptools as well that would also be blessing setuptools as *the* building tool as well. If people's preference for virtualenv over venv simply because they didn't want to install pip manually has shown us anything is that the lazy path is the used path. I don't believe we want to bless setuptools in the long run hence why I want to vendor setuptools under pip.vendor.*. I believe pkg_resources should be split out and pip should just dynamically add it to the dependencies for anything that uses entry points. For its own uses it should not generate scripts that depend on anything that isn't included with pip itself. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Fri Jul 19 18:58:21 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 19 Jul 2013 17:58:21 +0100 (BST) Subject: [Distutils] distlib Wheel.install API - proposed changes - distlib users, please comment Message-ID: <1374253101.40092.YahooMailNeo@web171401.mail.ir2.yahoo.com> The API for installing wheels in distlib [1] will need changes as a result of recent discussions around the need to have better control over how scripts are generated at installation time. Currently, the API looks like this:

def install(self, paths, dry_run=False, executable=None, warner=None,
            lib_only=False):
    """
    Install a wheel to the specified paths. If ``executable`` is specified,
    it should be the Unicode absolute path to the executable written into
    the shebang lines of any scripts installed. If ``warner`` is specified, it
    should be a callable, which will be called with two tuples indicating the
    wheel version of this software and the wheel version in the file, if there
    is a discrepancy in the versions. This can be used to issue any warnings or
    raise any exceptions. If ``lib_only`` is True, only the purelib/platlib
    files are installed, and the headers, scripts, data and dist-info metadata
    are not written.  The return value is a :class:`InstalledDistribution`
    instance unless ``lib_only`` is True, in which case the return value is
    ``None``.
    """

Internally, this method constructs a ScriptMaker [2] instance and uses it to write the scripts, any variants of the type fooX and foo-X.Y, and any native executable wrappers, to the installation target directory.
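As a minimal usage sketch of the API quoted above (the wheel filename, target paths and warner are invented for illustration; the distlib documentation remains the authoritative reference):

    from distlib.wheel import Wheel

    paths = {
        'purelib': '/path/to/venv/lib/python2.7/site-packages',  # invented
        'scripts': '/path/to/venv/bin',                          # invented
        # other keys (platlib, headers, data, ...) omitted for brevity
    }

    def warner(software_version, file_version):
        # only called if the implementation and the wheel file disagree
        print('wheel version mismatch: %r vs %r' % (software_version, file_version))

    dist = Wheel('example_dist-0.1-py27-none-any.whl').install(paths, warner=warner)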
Currently, scripts defined in exports (the equivalent of setuptools' "entry points") are written into the wheel at build time, and just copied to the target installation directory in this method. This is likely to change to require script generation as well as copying to happen at installation time. This means that we need to pass additional information to the install method, but the API already has quite a few keyword arguments, and it seems unwise to add any more. The "executable" argument is just passed to the ScriptMaker, and not otherwise used. So I propose to stream the API to be as follows: def install(self, paths, maker, options): The dry_run will be taken from the maker instance. The caller will be responsible for instantiating the maker and configuring it (e.g. setting a custom executable for shebangs, enabling fooX/foo-X.Y variants, etc). The warner and lib_only will be obtained from the options argument, which is just a holder for values in attributes, using warner = getattr(options, 'warner', None) lib_only = getattr(options, 'lib_only', False) Although this approach hides the individual options from the signature, which I don't like, it will avoid adding more arguments to the signature, which I do like. With PEP 426 still somewhat fluid, better controllability might require more options to be added. Does anyone have any views about this, or any suggestions for improvement? In the absence of any feedback, I'll make these changes soon. They will of course break any code you have which uses this API, hence the call for comments. Regards, Vinay Sajip [1] https://bitbucket.org/pypa/distlib/src/tip/distlib/wheel.py?at=default#cl-378 [2] https://bitbucket.org/pypa/distlib/src/tip/distlib/scripts.py?at=default#cl-65 From dholth at gmail.com Fri Jul 19 19:18:29 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 13:18:29 -0400 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: <1374247421.59057.YahooMailNeo@web171405.mail.ir2.yahoo.com> References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <1374247421.59057.YahooMailNeo@web171405.mail.ir2.yahoo.com> Message-ID: On Fri, Jul 19, 2013 at 11:23 AM, Vinay Sajip wrote: > > >> >> I would really like to see one more level of nesting: >> >> requires : { run : [ ... ], test : [ ... ] } >> > > > I've already changed distlib's code several times as the spec has evolved, and would like not to see any more changes so that I can concentrate on some real work ;-) > > Seriously, what's currently there now works OK, and the code is fairly simple. I had suggested a variant with even less nesting - one single "requires" list with each entry as it is currently, but having an additional "kind" key with value ":run:", ":test:" etc. This has the merit that you can add additional kinds without major changes, while processing code can filter the list according to its needs at the time. This was shot down by Donald on the basis that it would make things too complicated, or something. Seems a simpler organisation, to me; any argument about additional time to process is unlikely to be a problem in practice, and there are no numbers to point to any performance problems. Currently, with pip, you have to download whole archives while doing dependency resolution, which takes of the order of *seconds* - *minutes* if you're working with Zope/Plone. Doing it in tens/hundreds of milliseconds is sheer luxury :-) Either your proposal or mine would work out to be about the same. 
The advantage is that it helps people to conceptualize them as four instances of the same thing instead of four different kinds of things and it makes it easier to write a forwards-compatible implementation without looking for keys ending in _requires. It would also make the documentation significantly shorter. From joe.gordon0 at gmail.com Fri Jul 19 20:09:24 2013 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 19 Jul 2013 11:09:24 -0700 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: I have gone ahead and gathered some information using our standard development environment, devstack, I ran cProfile on our application, with the contents of it mocked out, http://paste.openstack.org/show/40948/ When I try importing pkg_resources in our development environment it is very slow: vagrant at precise64:/opt/stack/nova$ time python -c "from pkg_resources import load_entry_point" real 0m0.185s user 0m0.136s sys 0m0.044s vagrant at precise64:/opt/stack/nova$ time python -c "print 'hi'" hi real 0m0.047s user 0m0.036s sys 0m0.008s I also ran cProfile on just the import line: http://paste.openstack.org/show/40949/ and $ python -vvvvvv -c "import pkg_resources" http://paste.openstack.org/show/40952/ As for python 3, we have to maintain python 2.6 and 2.7 compatibility so a Python 3 only fix isn't acceptable On Fri, Jul 19, 2013 at 5:58 AM, Daniel Holth wrote: > On Fri, Jul 19, 2013 at 5:32 AM, Robert Collins > wrote: > > On 19 July 2013 21:24, Vinay Sajip wrote: > >> Robert Collins robertcollins.net> writes: > >> > >>> So my question here would be - can we make it faster? We have just > >>> been diagnosing a performance problem in nova due to rootwrap being a > >>> pkg_resources scripts entry point : just getting to the first line of > >>> main() takes 200ms, and we make dozens of subprocess calls (has to be, > >>> we're escalating privileges) to the script in question : that time is > >>> nearly entirely doing introspection of metadata from disk. > >> > >> Is there more detailed information about where the time is being spent? > e.g. > >> os.stat(), file I/O, parsing of the actual metadata files, > load_entry_point() > >> etc. > > > > Not sure. Joe? > > > > -Rob > > You should at least time it against the simpler "import sys, x.main; > sys.exit(main())" style wrapper. > > As a pkg_resources optimization it might be worthwhile to try using > https://github.com/benhoyt/scandir/ or in Python 3, the undocumented > cache used by the importer system, to try to speed things up. > > It is a bit tricky to profile pkg_resources since it does a lot of > work at import. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Fri Jul 19 21:48:03 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Fri, 19 Jul 2013 19:48:03 +0000 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> Message-ID: <44eaf14bd946464b860356ee913d0106@BLUPR03MB199.namprd03.prod.outlook.com> > From: Oscar Benjamin > I don't know whether or not you intend to have wrappers also work for > Python 2.7 (in a third-party package perhaps) but there is a slightly > subtle point to watch out for when non-ASCII characters in sys.argv > come into play. > > Python 2.x uses GetCommandLineA and 3.x uses GetCommandLineW. 
A > wrapper to launch 2.x should use GetCommandLineA and CreateProcessA to > ensure that the 8-bit argument strings are passed through unaltered. > To launch 3.x it should use the W versions. If not then the MSVC > runtime (or the OS?) will convert between the 8-bit and 16-bit > encodings using its own lossy routines. The launcher should always use GetCommandLineW, because the command line is already stored in a 16-bit encoding. GetCommandLineA will decode to an 8-bit encoding using some code page/settings (I can probably find out exactly which ones, but I don't know/care off the top of my head), and CreateProcessA will convert back using (hopefully) the same code page. There is never any point passing data between *A APIs in Windows, because they are just doing the conversion in the background. All you gain is that the launcher will corrupt the command line before python.exe gets a chance to. Cheers, Steve From pje at telecommunity.com Fri Jul 19 22:42:29 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 19 Jul 2013 16:42:29 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 2:09 PM, Joe Gordon wrote: > When I try importing pkg_resources in our development environment it is very > slow: Use zc.buildout to install the application you're invoking, and then it won't need to import pkg_resources. (Unless the actual app uses it.) From dholth at gmail.com Sat Jul 20 00:23:21 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 18:23:21 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 6:10 PM, Joe Gordon wrote: > > > > On Fri, Jul 19, 2013 at 1:42 PM, PJ Eby wrote: >> >> On Fri, Jul 19, 2013 at 2:09 PM, Joe Gordon wrote: >> > When I try importing pkg_resources in our development environment it is >> > very >> > slow: >> >> Use zc.buildout to install the application you're invoking, and then >> it won't need to import pkg_resources. (Unless the actual app uses >> it.) > > > It looks like zc.buildout is not an option as we are already heavily > invested in using pip and virtualenv. > Here's where the magic happens: https://bitbucket.org/pypa/setuptools/src/9dc434ac0308749d564d721a19ee412c2e79754f/setuptools/command/install_scripts.py?at=default#cl-37 And: https://bitbucket.org/pypa/setuptools/src/9dc434ac0308749d564d721a19ee412c2e79754f/setuptools/command/easy_install.py?at=default#cl-1840 From joe.gordon0 at gmail.com Sat Jul 20 00:10:36 2013 From: joe.gordon0 at gmail.com (Joe Gordon) Date: Fri, 19 Jul 2013 15:10:36 -0700 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On Fri, Jul 19, 2013 at 1:42 PM, PJ Eby wrote: > On Fri, Jul 19, 2013 at 2:09 PM, Joe Gordon wrote: > > When I try importing pkg_resources in our development environment it is > very > > slow: > > Use zc.buildout to install the application you're invoking, and then > it won't need to import pkg_resources. (Unless the actual app uses > it.) > It looks like zc.buildout is not an option as we are already heavily invested in using pip and virtualenv. -------------- next part -------------- An HTML attachment was scrubbed... URL: From monty.taylor at gmail.com Sat Jul 20 00:22:39 2013 From: monty.taylor at gmail.com (Monty Taylor) Date: Fri, 19 Jul 2013 15:22:39 -0700 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: Yeah. Not moving to zc.buildout for anything. 
I believe it will be a better option to just write by-hand scripts that get installed that just do: from nova.rootwrap import cmd return cmd.main(sys.argv) or something. Basically, a tiny boiler-plate script that does the same thing as a console_scripts entry point thing without loading the module in question via pkg_resources. On Fri, Jul 19, 2013 at 3:10 PM, Joe Gordon wrote: > > > > On Fri, Jul 19, 2013 at 1:42 PM, PJ Eby wrote: > >> On Fri, Jul 19, 2013 at 2:09 PM, Joe Gordon >> wrote: >> > When I try importing pkg_resources in our development environment it is >> very >> > slow: >> >> Use zc.buildout to install the application you're invoking, and then >> it won't need to import pkg_resources. (Unless the actual app uses >> it.) >> > > It looks like zc.buildout is not an option as we are already heavily > invested in using pip and virtualenv. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sat Jul 20 01:14:20 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 19 Jul 2013 19:14:20 -0400 Subject: [Distutils] wheel without console_scripts in 0.20.0 Message-ID: Hi Paul. https://bitbucket.org/dholth/wheel/commits/db913ecc2c0bb7fec14ba9eae68f99d9a9743bf6 Wheel 0.20.0 won't put entry_points console_scripts and gui_scripts in the wheel itself. The new "python -m wheel install-scripts wheel pip" command for example would install those scripts for wheel and pip into the default scripts directory. It requires setuptools, not just pkg_resources. Here's what I did to accomplish that using easy_install: dist = 'pip' pkg_resources_dist = pkg_resources.get_distribution(dist) install = wheel.paths.get_install_command(dist) command = easy_install.easy_install(install.distribution) command.args = ['wheel'] # dummy argument command.finalize_options() command.install_egg_scripts(pkg_resources_dist) It would be fun to experiment with alternative implementations like the "just import and run it" wrapper, and it might be nice to update the RECORD of installed files as part of the operation. It should also be possible to use less easy_install. From ncoghlan at gmail.com Sat Jul 20 05:27:24 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 13:27:24 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 20 July 2013 02:31, Marcus Smith wrote: > >> * decide on a bundling or explicit bootstrapping scheme for pip >> (this still needs a PEP to help clarify the pros and cons of the >> various alternatives) > > > if we improve things enough so that the get-pip.py experience is reliable > and robust (and handles setuptools if not bundled), then might that be > enough for now? > (see the options here: https://github.com/pypa/pip/issues/1049) > i.e. improve pip's installer experience, and then come back around to > bundling/bootstrap with python. If we can figure out a consistent download-and-run experience that doesn't assume the availability of curl or wget, I think so. Alternatively (as I think Donald suggested?) a simple download-and-run Windows executable or installer would be friendlier for people that may not be used to configuring the Windows command line (even if it still relied on get-pip.py to do the heavy lifting in terms of actually getting pip onto the system). 
It doesn't help that the Microsoft provided UI for configuring environment variables hasn't seen any serious improvements in the better part of two decades :P Also, if we'd like to do cert verification in the bootstrap script, keep in mind the fact that Python 2.6+ supports executable zip archives, so long as they have a __main__.py file at the top level. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sat Jul 20 05:33:07 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 19 Jul 2013 23:33:07 -0400 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On Jul 19, 2013, at 11:27 PM, Nick Coghlan wrote: > On 20 July 2013 02:31, Marcus Smith wrote: >> >>> * decide on a bundling or explicit bootstrapping scheme for pip >>> (this still needs a PEP to help clarify the pros and cons of the >>> various alternatives) >> >> >> if we improve things enough so that the get-pip.py experience is reliable >> and robust (and handles setuptools if not bundled), then might that be >> enough for now? >> (see the options here: https://github.com/pypa/pip/issues/1049) >> i.e. improve pip's installer experience, and then come back around to >> bundling/bootstrap with python. > > If we can figure out a consistent download-and-run experience that > doesn't assume the availability of curl or wget, I think so. > Alternatively (as I think Donald suggested?) a simple download-and-run > Windows executable or installer would be friendlier for people that > may not be used to configuring the Windows command line (even if it > still relied on get-pip.py to do the heavy lifting in terms of > actually getting pip onto the system). It doesn't help that the > Microsoft provided UI for configuring environment variables hasn't > seen any serious improvements in the better part of two decades :P > > Also, if we'd like to do cert verification in the bootstrap script, > keep in mind the fact that Python 2.6+ supports executable zip > archives, so long as they have a __main__.py file at the top level. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Folks are aware that get-pip.py includes an entire pip installation griped inside of it right? So if we ship get-pip.py we're shipping pip, which we'll then use to install pip over the network ? instead of just shipping pip. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sat Jul 20 05:42:57 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 13:42:57 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 14:06, Nick Coghlan wrote: > Independent activities & miscellaneous suggestions > > * maybe suggest "pip install distlib" over pip gaining its own > programmatic API? 
> * PEP 8 cleanup (including clarification of what constitutes an > internal API) > * improved PyPI upload API (Donald's working on this) > * getting Warehouse to a point where it can be brought online as > "pypi-next.python.org" > * TUF-for-PyPI exploration (the TUF folks seems to have this well in hand) > * improved local PyPI hosting (especially devpi) A significant one I left out here: * Getting the "Python Packaging User's Guide" up to scratch, and deferring to that in the docs on python.org This is hard to really get going *right now*, since it's tricky to provide clear guidelines while we're still trying to figure out exactly what the recommended approach *is*. However, I'm hopeful that as the core approach stabilises over the next few months we'll be able to use the guide as a place to capture those instructions, to the point where we're prepared to start pointing more people to it as the "one obvious place" to get info about the Python packaging ecosystem. The reason I consider it independent of the CPython release cycle is that we're pretty flexible with doc updates, and the online docs for all branches are refreshed daily. So whenever we deem the packaging user guide ready for broad consumption, we can adjust the material on docs.python.org to defer to it. For offline docs, this documented deprecation of the bundled docs will then get captured in the next CPython maintenance release (those that need to deal with isolated networks and working with an entirely private index server installation are such a special case that we can leave them to deal with their own documentation, especially since they have the option of making a private fork of the upstream packaging guide). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Sat Jul 20 05:52:03 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 19 Jul 2013 20:52:03 -0700 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: > > Folks are aware that get-pip.py includes an entire pip installation griped > inside of it right? > > So if we ship get-pip.py we're shipping pip, which we'll then use to > install pip over the network > ? instead of just shipping pip. > yes, it has pip in it. I mentioned it as something to improve for the present, not ship. that would be odd -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Jul 20 06:21:24 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 14:21:24 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 19 July 2013 14:06, Nick Coghlan wrote: > We have a lot of initiatives going every which way at the moment, so I > figured it would be a good idea to get a common perception of what we > consider to be the important near term goals and a realistic timeline > for improving the packaging ecosystem (in particular, the timing > relative to the CPython 3.4 release cycle). What do people think of the idea of my writing something up as an actual "Python packaging road map" and adding it to the user guide? 
(It would probably replace https://python-packaging-user-guide.readthedocs.org/en/latest/future.html) Note that I wouldn't actually *do* this until after I get back from Flock to Fedora (the conference is mid-August, I get home late August since I'm taking some time off for a holiday), so we have plenty of time to discuss the general approach before it gets written up anywhere more "official". Just wanted to give people a change to think about the idea. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From qwcode at gmail.com Sat Jul 20 06:26:12 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 19 Jul 2013 21:26:12 -0700 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: > What do people think of the idea of my writing something up as an > actual "Python packaging road map" and adding it to the user guide? > (It would probably replace > https://python-packaging-user-guide.readthedocs.org/en/latest/future.html) > > Note that I wouldn't actually *do* this until after I get back from > Flock to Fedora (the conference is mid-August, I get home late August > since I'm taking some time off for a holiday), so we have plenty of > time to discuss the general approach before it gets written up > anywhere more "official". Just wanted to give people a change to think > about the idea. > I think it would be great. I was hoping for something like that. Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Jul 20 06:26:51 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 14:26:51 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 20 July 2013 13:52, Marcus Smith wrote: > >> >> Folks are aware that get-pip.py includes an entire pip installation griped >> inside of it right? >> >> So if we ship get-pip.py we're shipping pip, which we'll then use to >> install pip over the network >> ? instead of just shipping pip. > > > yes, it has pip in it. I mentioned it as something to improve for the > present, not ship. that would be odd Actually, the main thing that made me realised that bundling pip with the installer for something else was a potentially flawed notion was that it would mean we'd be inflicting on *ourselves* exactly the same problem that already exists on Linux: two different file management tools (PEP 376 installers and the system package manager) fighting over the same Python installation. It wouldn't be as severe (since it would only affect one project), but it's still a potentially ugly mess. By contrast, the explicit bootstrap (whether using the current get-pip.py structure or a zip archive with a __main__.py file and whether executed at install time or later) ensures that the site-packages for that particular Python installation remains under the control of the PEP 376 installers (including pip). Cheers, Nick. 
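To illustrate the zip-archive form of the explicit bootstrap (file names and contents invented; this is a sketch of the packaging mechanics, not a proposal for what the bootstrap itself should do):

    # Python 2.6+ will execute a zip archive directly when it contains a
    # top-level __main__.py, so the bootstrap can ship as a single file.
    import zipfile

    zf = zipfile.ZipFile('pip-bootstrap.zip', 'w')
    zf.writestr('__main__.py', 'import bootstrap\nbootstrap.main()\n')
    zf.write('bootstrap.py')  # the download, cert-verification and install logic
    zf.close()

    # The archive is then run as:  python pip-bootstrap.zip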
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Jul 20 07:26:12 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 15:26:12 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> <4B02D4AD-55FF-4508-93FF-1064207733CA@stufft.io> Message-ID: On 20 July 2013 01:20, Brett Cannon wrote: > If the long-term plan is to bless setuptools then go for the bundling, but > if that decision has not been made yet then bundling may be premature if the > bundling of pip with Python moves forward. PEP 426 is currently looking at blessing a subset of *setup.py* commands as an interim build system, without blessing any particular tool. At the moment, I don't list any required arguments for the individual commands, but I'm starting to think that needs to change. It's probably worth looking at the common subset currently supported by setuptools and d2to1, and figuring out which can be left out as "you need to know which build system the project is using and invoke them appropriately" and which we want to standardise. Something else I see as potentially getting blessed is "assume setuptools" as a fallback option for projects that don't publish 2.0+ metadata (part of which will include providing a pre-generated dist-info directory in the sdist, as well as a way to indicate how to generate the metadata in a raw source tarball or VCS checkout) That's why I'm OK with the idea of the pip team *only* supporting installing from wheels if setuptools isn't installed, and treating setuptools as an implicit install_requires dependency if it is necessary to install from a source distribution. Resolving all of this formally is a ways down the todo list though, and the problem of source-based (rather than wheel-based) integration is one of the big reasons I see nailing down the metadata 2.0 spec as a process that still has several months left to run rather than being "almost finished". At the moment I *don't* see a good projects-can-use-any-build-system-they-like story for the path from a Python project tarball to a built and published Fedora or RHEL RPM, and that concerns me (since making it practical to almost fully automate that chain is one of my goals). If you had asked me a couple of months ago, I would have said I thought we could get away with deferring the answers to these questions (and PEP 426 is currently written that way), but I now think we're better of continuing with the setuptools-compatible metadata approach for the time being, and taking the time to get metadata 2.0 *right* for both binary and source distribution, rather than having to follow it up with a metadata 2.1 to fix the source distribution side of things. Getting PEP 427 (wheel 1.0) approved reasonable quickly was necessary to provide a successor to eggs that pip was willing to adopt, but I no longer think there's the same urgency for the metadata 2.0 standard in PEP 426 (ever since Daniel realised that wheels could work just as a well with setuptools compatible metadata as they could with a new metadata standard). Cheers, Nick. 
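To give a feel for what "blessing a subset of setup.py commands" means for a tool that has to drive an arbitrary build system, here is a rough sketch; the exact command set and arguments are precisely what still needs to be standardised, so none of this should be read as settled:

    import subprocess
    import sys

    def build_sdist(source_dir, dist_dir):
        # Invoke the project's own setup.py; whether that file imports
        # setuptools, d2to1 or something else is the project's business.
        subprocess.check_call(
            [sys.executable, 'setup.py', 'sdist', '--dist-dir', dist_dir],
            cwd=source_dir)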
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Jul 20 08:10:13 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 16:10:13 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 20 July 2013 01:47, PJ Eby wrote: > On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan wrote: >> Right, I think the reasonable near term solutions are for pip to either: >> >> 1. generate zc.buildout style wrappers with absolute paths to avoid >> the implied runtime dependency >> 2. interpret use of script entry points as an implied dependency on >> setuptools and install it even if not otherwise requested >> >> Either way, pip would need to do something about its *own* command >> line script, which heavily favours option 1 > > Option 1 also would address some or all of the startup performance complaint. > > It occurs to me that it might actually be a good idea *not* to put the > script wrappers in the standard entry points file, even if that's what > setuptools does right now: if lots of packages use that approach, > it'll slow down the effective indexing for code that's scanning > multiple packages for something like a sqlalchemy adapter. > > (Alternately, we could use something like > 'exports-some.group.name.json' so that each export group is a separate > file; this would keep scripts separate from everything else, and > optimize plugin searches falling in a particular group. In fact, the > files needn't have any contents; it'd be okay to just parse the main > .json for any distribution that has exports in the group you're > looking for. i.e., the real purpose of the separation of entry points > was always just to avoid loading metadata for distributions that don't > have the kind of exports you're looking for. In the old world, few > distributions exported anything, so just identifying whether a > distribution had exports was sufficient. In the new world, more and > more distributions over time will have some kind of export, so knowing > *which* exports they have will become more important.) A not-so-quick sketch of my current thinking: Two new fields in PEP 426: commands and exports Like the core dependency metadata, both get generated files: pydist-commands.json and pydist-exports.json (As far as the performance concern goes, I think longer term we'll probably move to a richer installation database format that includes an SQLite cache file managed by the installers. But near term, I like the idea of being able to check "has commands or not" and "has exports or not" with a single stat call for the appropriate file) Rather than using the "module.name:qualified.name" format (as the PEP currently does for the install_hooks), "export specifiers" would be defined as a mapping with the following subfields: * module * qualname (as per PEP 3155) * extra Both qualname and extra would be optional. "extra" indicates that the export is only present if that extra is installed. The top level commands field would have three subfields: "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and wrap_gui subfields would both be maps of command names to export specifiers (i.e. 
requests for an installer to generate the appropriate wrappers), while prebuilt would be a mapping of command names to paths relative to the scripts directory (as strings). Note that given that Python 2.7+ and 3.2+ can execute packages with a __main__ submodule, the export specifier for a command entry *may* just be the module component and it should still work. The exports field is just a rebranded and slightly rearranged entry_points structure: the top level keys in the hash map are "export groups" (defined in the same way as metadata extensions are defined) and the individual entries in each export group are arbitrary keys (meaning determined by the export group) mapping to export specifiers. With this change, I may even move the current top level "install_hooks" field inside the "exports" field. Even if it stay at the top level, the values will become export specifiers rather than using the entry points string format. Not sure when I'll get that tidied up and incorporated into a new draft of PEP 426, but I think it covers everything. For those wondering about my dividing line between "custom string format" and "structured data": the custom string formats in PEP 426 should be limited to things that are likely to be passed as command line arguments (like requirement specifiers and their assorted components), or those where using structured data would be extraordinarily verbose (like environment markers). If I have any custom string formats still in there that don't fit either of those categories, then let me know and I'll see if I can replace them with structured data. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Jul 20 08:18:15 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 16:18:15 +1000 Subject: [Distutils] Upcoming changes to PEP 426/440 In-Reply-To: References: <6DB07C69-0A0B-48C7-A9A6-80BB1B76D2B5@stufft.io> <1374247421.59057.YahooMailNeo@web171405.mail.ir2.yahoo.com> Message-ID: On 20 July 2013 03:18, Daniel Holth wrote: > On Fri, Jul 19, 2013 at 11:23 AM, Vinay Sajip wrote: >> >> >>> >>> I would really like to see one more level of nesting: >>> >>> requires : { run : [ ... ], test : [ ... ] } >>> >> >> >> I've already changed distlib's code several times as the spec has evolved, and would like not to see any more changes so that I can concentrate on some real work ;-) >> >> Seriously, what's currently there now works OK, and the code is fairly simple. I had suggested a variant with even less nesting - one single "requires" list with each entry as it is currently, but having an additional "kind" key with value ":run:", ":test:" etc. This has the merit that you can add additional kinds without major changes, while processing code can filter the list according to its needs at the time. This was shot down by Donald on the basis that it would make things too complicated, or something. Seems a simpler organisation, to me; any argument about additional time to process is unlikely to be a problem in practice, and there are no numbers to point to any performance problems. Currently, with pip, you have to download whole archives while doing dependency resolution, which takes of the order of *seconds* - *minutes* if you're working with Zope/Plone. Doing it in tens/hundreds of milliseconds is sheer luxury :-) > > Either your proposal or mine would work out to be about the same. 
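A sketch of what the two generated files might contain under this proposal; the distribution, command and callable names are invented, and the exact serialisation is still open:

    pydist-commands.json:

    {
      "wrap_console": {
        "example-tool": {"module": "example.cli", "qualname": "main"}
      },
      "wrap_gui": {
        "example-gui": {"module": "example.gui", "qualname": "App.run", "extra": "gui"}
      },
      "prebuilt": {
        "example-helper": "example-helper.sh"
      }
    }

    pydist-exports.json:

    {
      "example.subcommands": {
        "frobnicate": {"module": "example.ext", "qualname": "frobnicate"},
        "defrobnicate": {"module": "example.ext", "qualname": "defrobnicate", "extra": "extras"}
      }
    }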
The > advantage is that it helps people to conceptualize them as four > instances of the same thing instead of four different kinds of things > and it makes it easier to write a forwards-compatible implementation > without looking for keys ending in _requires. It would also make the > documentation significantly shorter. Yeah, I'm mostly interested in being able to *explain* the new metadata easily. Previously, merging the requirements wasn't especially practical, due to the requires/may_require split. Now, though, it's possible to merge them and have the keys match exactly with the names used in ":run:", etc. However, I don't think it's enough of a win to duplicate the "requires" key at two different levels (inside the dependency specifiers and as a top level field), so I'm happy with sticking to "Flat is better than nested" on this one :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Sat Jul 20 09:25:24 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 20 Jul 2013 08:25:24 +0100 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: On 19 July 2013 23:22, Monty Taylor wrote: > Yeah. Not moving to zc.buildout for anything. I believe it will be a > better option to just write by-hand scripts that get installed that just do: > > from nova.rootwrap import cmd > > return cmd.main(sys.argv) > > or something. Basically, a tiny boiler-plate script that does the same > thing as a console_scripts entry point thing without loading the module in > question via pkg_resources. Do you care about Windows compatibility for your app? If not, this is probably your best option. If you do, you'll need to add exe wrappers for the scripts on Windows - if you need that, then using distlib's script generator might be worth investigating. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Jul 20 09:34:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 20 Jul 2013 08:34:38 +0100 Subject: [Distutils] wheel without console_scripts in 0.20.0 In-Reply-To: References: Message-ID: On 20 July 2013 00:14, Daniel Holth wrote: > it might be nice to update > the RECORD of installed files as part of the operation. > I would argue that keeping RECORD up to date is essential, as not doing so breaks uninstall. It would also not be in line with PEP 376 It's actually not entirely clear that PEP 376 allows for a second tool to update an installation like this anyway (what goes into the INSTALLER file in that case?) Actually, a more general question - to what extent is PEP 376 still relevant in the light of Metadata 2.0? Something needs to be updated to ensure that the format and management of the RECORD file remains standardised. There is a reasonable amount of information that is *only* specified in PEP 376, so it's not really possible just to deprecate it wholesale... Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Sat Jul 20 09:40:14 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 20 Jul 2013 08:40:14 +0100 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 20 July 2013 04:27, Nick Coghlan wrote: > It doesn't help that the > Microsoft provided UI for configuring environment variables hasn't > seen any serious improvements in the better part of two decades :P > AFAIK, for setting environment variables, the Unix UI hasn't changed in a lot longer than that... (Sorry, knee-jerk reaction to digs at Windows there :-)) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Jul 20 10:29:11 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 18:29:11 +1000 Subject: [Distutils] wheel without console_scripts in 0.20.0 In-Reply-To: References: Message-ID: On 20 July 2013 17:34, Paul Moore wrote: > On 20 July 2013 00:14, Daniel Holth wrote: >> >> it might be nice to update >> the RECORD of installed files as part of the operation. > > > I would argue that keeping RECORD up to date is essential, as not doing so > breaks uninstall. It would also not be in line with PEP 376 It's actually > not entirely clear that PEP 376 allows for a second tool to update an > installation like this anyway (what goes into the INSTALLER file in that > case?) Perhaps define a solution along the lines of an UPDATES subdirectory with date/time based file names to avoid conflicts? For example: 1. Create a tracking directory as UPDATES/YYYYMMDD_hhmmss 2. Write an INSTALLER file to the tracking directory 3. Copy RECORD to RECORD.prev in the tracking directory 4. Update the main RECORD file with any changes I don't think that's necessary though. INSTALLER is supposed to be about tracking *ownership*, and registering a few extra files in RECORD doesn't change which installer is ultimately responsible for that distribution being present on the system. (Does pip actually do the INSTALLER consistency check to try to avoid getting into arguments with system package management tools?) > Actually, a more general question - to what extent is PEP 376 still relevant > in the light of Metadata 2.0? Something needs to be updated to ensure that > the format and management of the RECORD file remains standardised. There is > a reasonable amount of information that is *only* specified in PEP 376, so > it's not really possible just to deprecate it wholesale... I don't expect any significant changes to the installation database format for metadata 2.0, except to deprecate METADATA in favour of something like "the contents of the distribution's .dist-info directory, including pydist.json as defined in PEP 426 (or later versions of the metadata standard)". In particular, I don't see any reason for RECORD to change as CSV is a good format for that data. If wheel and/or pip adopt a modification tracking system like the one I suggest above, then the updated PEP would standardise that, too (I think such a scheme would be overkill, though). We *might* introduce an optional SQLite based caching mechanism somewhere along the line, but I think that's more appropriately handled on the consumer side of things than it is on the installer side. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sat Jul 20 10:34:11 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 20 Jul 2013 08:34:11 +0000 (UTC) Subject: [Distutils] =?utf-8?q?wheel_without_console=5Fscripts_in_0=2E20?= =?utf-8?q?=2E0?= References: Message-ID: Paul Moore gmail.com> writes: > I would argue that keeping RECORD up to date is essential, as not doing so breaks uninstall. It would also not be in line with PEP 376 ?It's actually not entirely clear that PEP 376 allows for a second tool to update an installation like this anyway (what goes into the INSTALLER file in that case?) I don't know if Daniel's post was replying to some other post that I've missed, so I'm not sure if Daniel is just trying things out or advancing his implementation more seriously. I think whatever does the installation (creates the .dist-info file) should be the INSTALLER. I don't know if it's a good idea for tools to subsequently change the contents of a .dist-info - do we have well established use cases for this? Unless I've misunderstood something, it's better for pip/wheel integration to be closer so that the .dist-info RECORD file is written in a single step. In distlib, that's done by Wheel.install, which is why I'm changing its API to accommodate the script generation requirements which have emerged. > Actually, a more general question - to what extent is PEP 376 still relevant in the light of Metadata 2.0? Something needs to be updated to ensure that the format and management of the RECORD file remains standardised. There is a reasonable amount of information that is *only* specified in PEP 376, so it's not really possible just to deprecate it wholesale... PEP 376 and Metadata 2.0 are orthogonal to each other, in my view. The new metadata format simply replaces the key-value METADATA file with pydist.json and siblings(for commands, exports etc.) Regards, Vinay Sajip From ncoghlan at gmail.com Sat Jul 20 10:53:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 20 Jul 2013 18:53:53 +1000 Subject: [Distutils] Specific packaging goals and a tentative timeline In-Reply-To: References: Message-ID: On 20 July 2013 17:40, Paul Moore wrote: > On 20 July 2013 04:27, Nick Coghlan wrote: >> >> It doesn't help that the >> Microsoft provided UI for configuring environment variables hasn't >> seen any serious improvements in the better part of two decades :P > > > AFAIK, for setting environment variables, the Unix UI hasn't changed in a > lot longer than that... > > (Sorry, knee-jerk reaction to digs at Windows there :-)) Hey, I come by my hatred of that panel honestly! When a flat text file with an arcane name is a UI upgrade... :) Cheers, Nick. P.S. For those that don't know (which is probably most people), I was a professional C/C++/Python programmer on Windows (starting with NT4!) from late 2000 through to early 2011. 
I just haven't used it for *personal* software development since 2004 or so, when I decided learning Linux was easier than trying to get CPython to build on Windows with the very limited free tools that existed in the days prior to Visual Studio Express :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Sat Jul 20 15:23:05 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 20 Jul 2013 09:23:05 -0400 Subject: [Distutils] wheel without console_scripts in 0.20.0 In-Reply-To: References: Message-ID: On Sat, Jul 20, 2013 at 4:34 AM, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> I would argue that keeping RECORD up to date is essential, as not doing so > breaks uninstall. It would also not be in line with PEP 376 It's actually > not entirely clear that PEP 376 allows for a second tool to update an > installation like this anyway (what goes into the INSTALLER file in that case?) > > I don't know if Daniel's post was replying to some other post that I've > missed, so I'm not sure if Daniel is just trying things out or advancing his > implementation more seriously. I think whatever does the installation > (creates the .dist-info file) should be the INSTALLER. I don't know if it's > a good idea for tools to subsequently change the contents of a .dist-info - > do we have well established use cases for this? Unless I've misunderstood > something, it's better for pip/wheel integration to be closer so that the > .dist-info RECORD file is written in a single step. In distlib, that's done > by Wheel.install, which is why I'm changing its API to accommodate the > script generation requirements which have emerged. I was a little surprised easy_install didn't already have a "redo the console scripts" command. Eggs omit the console scripts and put the normal scripts in .egg-info/scripts/. If the egg is installed instead of simply added to sys.path and used then easy_install iterates over the console_scripts and gui_scripts and writes them out. Obviously this operation should happen at or near the same time as the install but it will be quite harmless to append a few lines to RECORD. Shouldn't be any worse than the #! rewriting that already happens. I've decided to delay this version of wheel until I can come up with a patch to pip. And if you're in a hurry and deploying a web backend, why bother to generate the scripts at all? (not the default) >> Actually, a more general question - to what extent is PEP 376 still > relevant in the light of Metadata 2.0? Something needs to be updated to > ensure that the format and management of the RECORD file remains > standardised. There is a reasonable amount of information that is *only* > specified in PEP 376, so it's not really possible just to deprecate it > wholesale... > > PEP 376 and Metadata 2.0 are orthogonal to each other, in my view. The new > metadata format simply replaces the key-value METADATA file with pydist.json > and siblings(for commands, exports etc.) It's been a useful way to structure things. From doug.hellmann at gmail.com Sat Jul 20 19:54:51 2013 From: doug.hellmann at gmail.com (Doug Hellmann) Date: Sat, 20 Jul 2013 13:54:51 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: Message-ID: <590C320F-9573-4F5D-A862-BDEC9115ED10@gmail.com> On Jul 19, 2013, at 6:22 PM, Monty Taylor wrote: > Yeah. Not moving to zc.buildout for anything. 
I believe it will be a better option to just write by-hand scripts that get installed that just do: > > from nova.rootwrap import cmd > > return cmd.main(sys.argv) > > or something. Basically, a tiny boiler-plate script that does the same thing as a console_scripts entry point thing without loading the module in question via pkg_resources. We could also have pbr do this when it builds the sdist. Doug > > > On Fri, Jul 19, 2013 at 3:10 PM, Joe Gordon wrote: > > > > On Fri, Jul 19, 2013 at 1:42 PM, PJ Eby wrote: > On Fri, Jul 19, 2013 at 2:09 PM, Joe Gordon wrote: > > When I try importing pkg_resources in our development environment it is very > > slow: > > Use zc.buildout to install the application you're invoking, and then > it won't need to import pkg_resources. (Unless the actual app uses > it.) > > It looks like zc.buildout is not an option as we are already heavily invested in using pip and virtualenv. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From monty.taylor at gmail.com Sat Jul 20 19:57:46 2013 From: monty.taylor at gmail.com (Monty Taylor) Date: Sat, 20 Jul 2013 10:57:46 -0700 Subject: [Distutils] entry points PEP In-Reply-To: <590C320F-9573-4F5D-A862-BDEC9115ED10@gmail.com> References: <590C320F-9573-4F5D-A862-BDEC9115ED10@gmail.com> Message-ID: https://review.openstack.org/#/c/38000/ On not-Windows, install a non-pkg_resources based script content. On windows, defer to underlying setuptools functionality. (the test failures showing on the patch were build farm issues which we just sorted, I'll run-check the patch once the farm is good) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sat Jul 20 20:43:03 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 20 Jul 2013 14:43:03 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Sat, Jul 20, 2013 at 2:10 AM, Nick Coghlan wrote: > On 20 July 2013 01:47, PJ Eby wrote: >> On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan wrote: >>> Right, I think the reasonable near term solutions are for pip to either: >>> >>> 1. generate zc.buildout style wrappers with absolute paths to avoid >>> the implied runtime dependency >>> 2. interpret use of script entry points as an implied dependency on >>> setuptools and install it even if not otherwise requested >>> >>> Either way, pip would need to do something about its *own* command >>> line script, which heavily favours option 1 >> >> Option 1 also would address some or all of the startup performance complaint. >> >> It occurs to me that it might actually be a good idea *not* to put the >> script wrappers in the standard entry points file, even if that's what >> setuptools does right now: if lots of packages use that approach, >> it'll slow down the effective indexing for code that's scanning >> multiple packages for something like a sqlalchemy adapter. 
>> >> (Alternately, we could use something like >> 'exports-some.group.name.json' so that each export group is a separate >> file; this would keep scripts separate from everything else, and >> optimize plugin searches falling in a particular group. In fact, the >> files needn't have any contents; it'd be okay to just parse the main >> .json for any distribution that has exports in the group you're >> looking for. i.e., the real purpose of the separation of entry points >> was always just to avoid loading metadata for distributions that don't >> have the kind of exports you're looking for. In the old world, few >> distributions exported anything, so just identifying whether a >> distribution had exports was sufficient. In the new world, more and >> more distributions over time will have some kind of export, so knowing >> *which* exports they have will become more important.) > > A not-so-quick sketch of my current thinking: > > Two new fields in PEP 426: commands and exports > > Like the core dependency metadata, both get generated files: > pydist-commands.json and pydist-exports.json > > (As far as the performance concern goes, I think longer term we'll > probably move to a richer installation database format that includes > an SQLite cache file managed by the installers. But near term, I like > the idea of being able to check "has commands or not" and "has exports > or not" with a single stat call for the appropriate file) > > Rather than using the "module.name:qualified.name" format (as the PEP > currently does for the install_hooks), "export specifiers" would be > defined as a mapping with the following subfields: > > * module > * qualname (as per PEP 3155) > * extra > > Both qualname and extra would be optional. "extra" indicates that the > export is only present if that extra is installed. > > The top level commands field would have three subfields: > "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and > wrap_gui subfields would both be maps of command names to export > specifiers (i.e. requests for an installer to generate the appropriate > wrappers), while prebuilt would be a mapping of command names to paths > relative to the scripts directory (as strings). > > Note that given that Python 2.7+ and 3.2+ can execute packages with a > __main__ submodule, the export specifier for a command entry *may* > just be the module component and it should still work. > > The exports field is just a rebranded and slightly rearranged > entry_points structure: the top level keys in the hash map are "export > groups" (defined in the same way as metadata extensions are defined) > and the individual entries in each export group are arbitrary keys > (meaning determined by the export group) mapping to export specifiers. > > With this change, I may even move the current top level > "install_hooks" field inside the "exports" field. Even if it stay at > the top level, the values will become export specifiers rather than > using the entry points string format. > > Not sure when I'll get that tidied up and incorporated into a new > draft of PEP 426, but I think it covers everything. > > For those wondering about my dividing line between "custom string > format" and "structured data": the custom string formats in PEP 426 > should be limited to things that are likely to be passed as command > line arguments (like requirement specifiers and their assorted > components), or those where using structured data would be > extraordinarily verbose (like environment markers). 
If I have any > custom string formats still in there that don't fit either of those > categories, then let me know and I'll see if I can replace them with > structured data. > > Cheers, > Nick. It may be worth mentioning that I am not aware of any package that uses the "entry point requires extra" feature. IIUC pkg_resources doesn't just check whether something's installed but attempts to add the requirements of the entry point's distribution and any requested extras to sys.path as part of resolution. From dholth at gmail.com Sat Jul 20 23:25:21 2013 From: dholth at gmail.com (Daniel Holth) Date: Sat, 20 Jul 2013 17:25:21 -0400 Subject: [Distutils] entry points PEP In-Reply-To: References: <590C320F-9573-4F5D-A862-BDEC9115ED10@gmail.com> Message-ID: On Sat, Jul 20, 2013 at 1:57 PM, Monty Taylor wrote: > https://review.openstack.org/#/c/38000/ > > On not-Windows, install a non-pkg_resources based script content. On > windows, defer to underlying setuptools functionality. (the test failures > showing on the patch were build farm issues which we just sorted, I'll > run-check the patch once the farm is good) Another way to make your scripts run faster is to use on-demand imports. Mercurial does "from mercurial import demandimport; demandimport.enable()". apipkg is another implementation. The drawback is that some Python programs run incorrectly if the import order changes. From ncoghlan at gmail.com Sun Jul 21 02:08:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 21 Jul 2013 10:08:45 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 21 Jul 2013 04:43, "Daniel Holth" wrote: > > On Sat, Jul 20, 2013 at 2:10 AM, Nick Coghlan wrote: > > On 20 July 2013 01:47, PJ Eby wrote: > >> On Fri, Jul 19, 2013 at 9:10 AM, Nick Coghlan wrote: > >>> Right, I think the reasonable near term solutions are for pip to either: > >>> > >>> 1. generate zc.buildout style wrappers with absolute paths to avoid > >>> the implied runtime dependency > >>> 2. interpret use of script entry points as an implied dependency on > >>> setuptools and install it even if not otherwise requested > >>> > >>> Either way, pip would need to do something about its *own* command > >>> line script, which heavily favours option 1 > >> > >> Option 1 also would address some or all of the startup performance complaint. > >> > >> It occurs to me that it might actually be a good idea *not* to put the > >> script wrappers in the standard entry points file, even if that's what > >> setuptools does right now: if lots of packages use that approach, > >> it'll slow down the effective indexing for code that's scanning > >> multiple packages for something like a sqlalchemy adapter. > >> > >> (Alternately, we could use something like > >> 'exports-some.group.name.json' so that each export group is a separate > >> file; this would keep scripts separate from everything else, and > >> optimize plugin searches falling in a particular group. In fact, the > >> files needn't have any contents; it'd be okay to just parse the main > >> .json for any distribution that has exports in the group you're > >> looking for. 
i.e., the real purpose of the separation of entry points > >> was always just to avoid loading metadata for distributions that don't > >> have the kind of exports you're looking for. In the old world, few > >> distributions exported anything, so just identifying whether a > >> distribution had exports was sufficient. In the new world, more and > >> more distributions over time will have some kind of export, so knowing > >> *which* exports they have will become more important.) > > > > A not-so-quick sketch of my current thinking: > > > > Two new fields in PEP 426: commands and exports > > > > Like the core dependency metadata, both get generated files: > > pydist-commands.json and pydist-exports.json > > > > (As far as the performance concern goes, I think longer term we'll > > probably move to a richer installation database format that includes > > an SQLite cache file managed by the installers. But near term, I like > > the idea of being able to check "has commands or not" and "has exports > > or not" with a single stat call for the appropriate file) > > > > Rather than using the "module.name:qualified.name" format (as the PEP > > currently does for the install_hooks), "export specifiers" would be > > defined as a mapping with the following subfields: > > > > * module > > * qualname (as per PEP 3155) > > * extra > > > > Both qualname and extra would be optional. "extra" indicates that the > > export is only present if that extra is installed. > > > > The top level commands field would have three subfields: > > "wrap_console", "wrap_gui" and "prebuilt". The wrap_console and > > wrap_gui subfields would both be maps of command names to export > > specifiers (i.e. requests for an installer to generate the appropriate > > wrappers), while prebuilt would be a mapping of command names to paths > > relative to the scripts directory (as strings). > > > > Note that given that Python 2.7+ and 3.2+ can execute packages with a > > __main__ submodule, the export specifier for a command entry *may* > > just be the module component and it should still work. > > > > The exports field is just a rebranded and slightly rearranged > > entry_points structure: the top level keys in the hash map are "export > > groups" (defined in the same way as metadata extensions are defined) > > and the individual entries in each export group are arbitrary keys > > (meaning determined by the export group) mapping to export specifiers. > > > > With this change, I may even move the current top level > > "install_hooks" field inside the "exports" field. Even if it stay at > > the top level, the values will become export specifiers rather than > > using the entry points string format. > > > > Not sure when I'll get that tidied up and incorporated into a new > > draft of PEP 426, but I think it covers everything. > > > > For those wondering about my dividing line between "custom string > > format" and "structured data": the custom string formats in PEP 426 > > should be limited to things that are likely to be passed as command > > line arguments (like requirement specifiers and their assorted > > components), or those where using structured data would be > > extraordinarily verbose (like environment markers). If I have any > > custom string formats still in there that don't fit either of those > > categories, then let me know and I'll see if I can replace them with > > structured data. > > > > Cheers, > > Nick. 
> > It may be worth mentioning that I am not aware of any package that > uses the "entry point requires extra" feature. > > IIUC pkg_resources doesn't just check whether something's installed > but attempts to add the requirements of the entry point's distribution > and any requested extras to sys.path as part of resolution. I see it as more useful for making an executable optional by defining a "cli" extra. If your project just gets installed as a dependency, no wrapper would get generated. Only if you went "pip install myproject[cli]" (or another project specifically depended on the cli extra) would it be installed. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sun Jul 21 03:53:47 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 20 Jul 2013 21:53:47 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Sat, Jul 20, 2013 at 8:08 PM, Nick Coghlan wrote: > I see it as more useful for making an executable optional by defining a > "cli" extra. If your project just gets installed as a dependency, no wrapper > would get generated. > > Only if you went "pip install myproject[cli]" (or another project > specifically depended on the cli extra) would it be installed. Why stop there... how about environment markers for exports, too? ;-) And throw in an environment marker syntax for whether something was installed as a dependency or explicitly... ;-) (Btw, the above is a change from setuptools semantics, but I don't really see it as a problem; ISTM unlikely that anybody has used extras on a script wrapper. Extras on *other* entry points, however, *do* exist, at least IIRC. I'm pretty sure there was at least one concrete use case for them involving Chandler plugins when I originally implemented the feature. The possibility of having extras on a script is just a side effect, though, not an actually-intended feature; if you have the need, it actually makes more sense to just bundle the script in another package and require that pacakge from the extra, rather than putting it in the original package.) From ncoghlan at gmail.com Sun Jul 21 04:54:47 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 21 Jul 2013 12:54:47 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 21 July 2013 11:53, PJ Eby wrote: > On Sat, Jul 20, 2013 at 8:08 PM, Nick Coghlan wrote: >> I see it as more useful for making an executable optional by defining a >> "cli" extra. If your project just gets installed as a dependency, no wrapper >> would get generated. >> >> Only if you went "pip install myproject[cli]" (or another project >> specifically depended on the cli extra) would it be installed. > > Why stop there... how about environment markers for exports, too? > ;-) And throw in an environment marker syntax for whether something > was installed as a dependency or explicitly... 
;-) I actually did think about various ideas along those lines (when pondering how build dependencies would work in practice), but realised that install time checks for that kind of thing would be problematic (since the dependencies for an extra might be present anyway, so why require that you explicitly request the extra *as well*?). > (Btw, the above is a change from setuptools semantics, but I don't > really see it as a problem; ISTM unlikely that anybody has used extras > on a script wrapper. Extras on *other* entry points, however, *do* > exist, at least IIRC. I'm pretty sure there was at least one concrete > use case for them involving Chandler plugins when I originally > implemented the feature. The possibility of having extras on a script > is just a side effect, though, not an actually-intended feature; if > you have the need, it actually makes more sense to just bundle the > script in another package and require that pacakge from the extra, > rather than putting it in the original package.) Ah, interesting! And thinking about it further, I believe any kind of "partial installation" of the *package itself* is a bad idea. Extras should just be a way to ask "are these optional dependencies present on this system?", without needing to worry about how they got there. For now, I'll switch export specifiers back to the concise "modulename:qualname" entry point format and add "Do we need to support the exported-only-if-extra-is-available feature?" as an open question. My current thinking is that the point you made about script wrappers (putting the wrapper in separate distribution and depending on that from an extra) applies to other plugins as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From Steve.Dower at microsoft.com Sun Jul 21 17:07:35 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sun, 21 Jul 2013 15:07:35 +0000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <407ACD37-78F5-496E-A596-434C821B0773@stufft.io> <7F243DD6-812C-4CEB-859C-3DD8E6CC2D12@stufft.io> Message-ID: <552ba34c1e7b4127ad5a979e7a248a07@BLUPR03MB199.namprd03.prod.outlook.com> From: Paul Moore > On 18 July 2013 08:57, Nick Coghlan wrote: >> Shipping an msi installer for pip (perhaps bundling with setuptools) >> would also be an acceptable alternative. > > -1. > > I would suggest that this approach, if it were considered seriously, should be > reviewed carefully by someone who understands MSI installers (not me!). > Specifically, if I install pip via an MSI, then use "python -m pip install -U > pip", will the "Add/Remove Programs" entry created by the MSI still uninstall > cleanly? Broken uninstall options and incomplete package removals are a > perennial problem on Windows, usually caused by messing with installed files > outside control of the installer. > Paul Also -1, and I've spent quite a lot of time writing MSIs recently... It could be solved, but wheels are a better fix for the problems that people solve with MSIs. MSIs are also useless when virtualenvs are involved, since there's basically a guarantee that its metadata will get out of sync with reality as soon as someone deletes the virtualenv. IMHO bundling pip (and all dependencies) with the installer is best. Any bootstrap script hitting the internet will need to pin the version, so you may as well include a zip of the files and extract them on install. 
That way you'll always get a pip that can upgrade itself, and if you do a repair install you'll get a working pip back. Steve From pje at telecommunity.com Sun Jul 21 17:46:07 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 21 Jul 2013 11:46:07 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Sat, Jul 20, 2013 at 10:54 PM, Nick Coghlan wrote: > Extras should just be a way to ask "are these optional dependencies present > on this system?", without needing to worry about how they got there. Technically, they are a way to ask "can you get this for me?", since pkg_resources' API allows you to specify an installer callback when you ask to load an entry point. This means that an installer tool can dynamically obtain any extras it needs, not just check for their installation. To put it another way, it's not "exported only if extra is available", it's "exported, but make sure you have this first." A subtle difference, but important to the original use cases (see below). > For now, I'll switch export specifiers back to the concise > "modulename:qualname" entry point format and add "Do we need to > support the exported-only-if-extra-is-available feature?" as an open > question. My current thinking is that the point you made about script > wrappers (putting the wrapper in separate distribution and depending > on that from an extra) applies to other plugins as well. Now that I'm thinking about it some more, one of the motivating use cases for extras in entry points was startup performance in plugin-heavy GUI applications like Chandler. The use of extras allows for late-loading of additions to sys.path. IOW, it's intended more for a situation where not only are the entry points imported late, but you also want as few plugins as possible on sys.path to start with, in order to have fast startup. The other use case is similar, in that a plugin-heavy environment with self-upgrading abilities can defer *installation* of parts of a plug-in until it is actually used. (Which is why EntryPoint objects have a .require() method separate from .load() - you can loop over a relevant set of entry points to pre-test or pre-ensure that they're all available and dependencies are installed before importing any of them, even though .load() will also do that for a single entry point.) For the specific case of the meta build system itself, these use cases may be moot. For the overall use of exports, however, the use cases are still valuable for plugin-heavy apps. (Specifically, applications that use lots of plugins written by different people, and don't want to have to import everything at startup.) Indeed, this is the original use case for exports in the first place: it's a plugin system that doesn't require importing any plugins until you actually need a particular plugin's functionality. Extras just expand that slightly to "don't require installing things or putting them on sys.path until you need their functionality". Heck, if pip itself were split into two distributions, one of which were a command line script declared with an extra, pointing into the second distribution, it'd have dynamic bootstrapping. (Were it not for the part where it would need pip available to *do* the bootstrapping, of course. 
;-) ) From p.f.moore at gmail.com Sun Jul 21 18:10:28 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 21 Jul 2013 17:10:28 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 21 July 2013 16:46, PJ Eby wrote: > Now that I'm thinking about it some more, one of the motivating use > cases for extras in entry points was startup performance in > plugin-heavy GUI applications like Chandler. The use of extras allows > for late-loading of additions to sys.path. IOW, it's intended more > for a situation where not only are the entry points imported late, but > you also want as few plugins as possible on sys.path to start with, in > order to have fast startup. > This type of complexity is completely outside of my experience. So I'm going to have to defer to people who understand the relevant scenarios to assess any proposed solutions. But could I make a general plea for an element of "keep the simple cases simple" in both the PEP and the implementations, here? I think it's critical that we make sure that the 99% of users[1] who want to do nothing more than bundle up an app with a few dependencies can both understand the mechanisms for doing so, and can use them straight out of the box. Paul [1] Yes, that number is made up - but to put it into context, I don't believe I've ever used a distribution from PyPI with entry points depending on extras. In fact, the only case I know of where I've seen extras in *any* context is in wheel, and I've never used them even there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Sun Jul 21 19:52:11 2013 From: dholth at gmail.com (Daniel Holth) Date: Sun, 21 Jul 2013 13:52:11 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Sun, Jul 21, 2013 at 12:10 PM, Paul Moore wrote: > On 21 July 2013 16:46, PJ Eby wrote: >> >> Now that I'm thinking about it some more, one of the motivating use >> cases for extras in entry points was startup performance in >> plugin-heavy GUI applications like Chandler. The use of extras allows >> for late-loading of additions to sys.path. IOW, it's intended more >> for a situation where not only are the entry points imported late, but >> you also want as few plugins as possible on sys.path to start with, in >> order to have fast startup. > > > This type of complexity is completely outside of my experience. So I'm going > to have to defer to people who understand the relevant scenarios to assess > any proposed solutions. > > But could I make a general plea for an element of "keep the simple cases > simple" in both the PEP and the implementations, here? I think it's critical > that we make sure that the 99% of users[1] who want to do nothing more than > bundle up an app with a few dependencies can both understand the mechanisms > for doing so, and can use them straight out of the box. 
> > Paul > > [1] Yes, that number is made up - but to put it into context, I don't > believe I've ever used a distribution from PyPI with entry points depending > on extras. In fact, the only case I know of where I've seen extras in *any* > context is in wheel, and I've never used them even there. The extras system is simple and more importantly ubiquitous with tens of thousands of releases taking advantage of it. The proposed system of also having separate kinds of build and test dependencies with their own extras hasn't been demonstrated. Entry points having extras is an extension of entry points having simple distribution-level dependencies "entry point depends on beaglevote" -> "entry point depends on beaglevote[doghouse]". I can see how perhaps in the setuptools case it may have been more straightforward to include extras rather than to exclude them. Someone else may want to check again but the last time I checked all pypi-hosted distributions for "entry points depending on extras" I found none. It would be pretty safe to leave this particular feature out. From ncoghlan at gmail.com Mon Jul 22 00:44:28 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Jul 2013 08:44:28 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 22 Jul 2013 01:46, "PJ Eby" wrote: > > On Sat, Jul 20, 2013 at 10:54 PM, Nick Coghlan wrote: > > Extras should just be a way to ask "are these optional dependencies present > > on this system?", without needing to worry about how they got there. > > Technically, they are a way to ask "can you get this for me?", since > pkg_resources' API allows you to specify an installer callback when > you ask to load an entry point. This means that an installer tool can > dynamically obtain any extras it needs, not just check for their > installation. > > To put it another way, it's not "exported only if extra is available", > it's "exported, but make sure you have this first." A subtle > difference, but important to the original use cases (see below). Ah, yes, I see the distinction (and it does make this notion conceptually simpler). > > > > For now, I'll switch export specifiers back to the concise > > "modulename:qualname" entry point format and add "Do we need to > > support the exported-only-if-extra-is-available feature?" as an open > > question. My current thinking is that the point you made about script > > wrappers (putting the wrapper in separate distribution and depending > > on that from an extra) applies to other plugins as well. > > Now that I'm thinking about it some more, one of the motivating use > cases for extras in entry points was startup performance in > plugin-heavy GUI applications like Chandler. The use of extras allows > for late-loading of additions to sys.path. IOW, it's intended more > for a situation where not only are the entry points imported late, but > you also want as few plugins as possible on sys.path to start with, in > order to have fast startup. I'm working with Eric Snow on a scheme that I hope will allow module-specific path entries that aren't processed at interpreter startup and never get added to sys.path at all (even if you import the module). 
Assuming we can get it to work the way I hope (which is still a "maybe" at this point in time), it should then be possible to backport it to earlier versions as a metaimporter. > The other use case is similar, in that a plugin-heavy environment with > self-upgrading abilities can defer *installation* of parts of a > plug-in until it is actually used. (Which is why EntryPoint objects > have a .require() method separate from .load() - you can loop over a > relevant set of entry points to pre-test or pre-ensure that they're > all available and dependencies are installed before importing any of > them, even though .load() will also do that for a single entry point.) OK, so as Daniel suggested, it's more like an export/entry-point specific "requires" field, but limited to the extras of the current distribution. > For the specific case of the meta build system itself, these use cases > may be moot. For the overall use of exports, however, the use cases > are still valuable for plugin-heavy apps. (Specifically, applications > that use lots of plugins written by different people, and don't want > to have to import everything at startup.) > > Indeed, this is the original use case for exports in the first place: > it's a plugin system that doesn't require importing any plugins until > you actually need a particular plugin's functionality. Extras just > expand that slightly to "don't require installing things or putting > them on sys.path until you need their functionality". OK, I understand the use case now. If I can come up with a relatively simple way to explain it, I'll keep it in the proposed metadata, otherwise I'll leave it to metadata extensions to handle the more sophisticated version where an export depends on an extra. Cheers, Nick. > > Heck, if pip itself were split into two distributions, one of which > were a command line script declared with an extra, pointing into the > second distribution, it'd have dynamic bootstrapping. (Were it not > for the part where it would need pip available to *do* the > bootstrapping, of course. ;-) ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Jul 22 05:25:36 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 21 Jul 2013 23:25:36 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Sun, Jul 21, 2013 at 6:44 PM, Nick Coghlan wrote: > > On 22 Jul 2013 01:46, "PJ Eby" wrote: >> >> Now that I'm thinking about it some more, one of the motivating use >> cases for extras in entry points was startup performance in >> plugin-heavy GUI applications like Chandler. The use of extras allows >> for late-loading of additions to sys.path. IOW, it's intended more >> for a situation where not only are the entry points imported late, but >> you also want as few plugins as possible on sys.path to start with, in >> order to have fast startup. > > I'm working with Eric Snow on a scheme that I hope will allow > module-specific path entries that aren't processed at interpreter startup > and never get added to sys.path at all (even if you import the module). 
> Assuming we can get it to work the way I hope (which is still a "maybe" at > this point in time), it should then be possible to backport it to earlier > versions as a metaimporter. I haven't had a chance to look at that proposal at more than surface depth, but my immediate concern with it is that it seems to be at the wrong level of abstraction for the packaging system, i.e., just because you can import a module, doesn't mean you can get at its project metadata (e.g., how would you find its exports, or even know what distribution it belonged to?). (Also, I don't actually see how it would be useful or relevant to the use case we're talking about; it seems maybe orthogonal at best.) > OK, so as Daniel suggested, it's more like an export/entry-point specific > "requires" field, but limited to the extras of the current distribution. Correct: at the time, it seemed a lot simpler to me than supporting arbitrary requirements, and allows for more DRY, since entry points might share some requirements. From ncoghlan at gmail.com Mon Jul 22 10:32:37 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Jul 2013 18:32:37 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 22 Jul 2013 13:26, "PJ Eby" wrote: > > On Sun, Jul 21, 2013 at 6:44 PM, Nick Coghlan wrote: > > > > On 22 Jul 2013 01:46, "PJ Eby" wrote: > >> > >> Now that I'm thinking about it some more, one of the motivating use > >> cases for extras in entry points was startup performance in > >> plugin-heavy GUI applications like Chandler. The use of extras allows > >> for late-loading of additions to sys.path. IOW, it's intended more > >> for a situation where not only are the entry points imported late, but > >> you also want as few plugins as possible on sys.path to start with, in > >> order to have fast startup. > > > > I'm working with Eric Snow on a scheme that I hope will allow > > module-specific path entries that aren't processed at interpreter startup > > and never get added to sys.path at all (even if you import the module). > > Assuming we can get it to work the way I hope (which is still a "maybe" at > > this point in time), it should then be possible to backport it to earlier > > versions as a metaimporter. > > I haven't had a chance to look at that proposal at more than surface > depth, but my immediate concern with it is that it seems to be at the > wrong level of abstraction for the packaging system, i.e., just > because you can import a module, doesn't mean you can get at its > project metadata (e.g., how would you find its exports, or even know > what distribution it belonged to?). > > (Also, I don't actually see how it would be useful or relevant to the > use case we're talking about; it seems maybe orthogonal at best.) The file format involved in that proposal was deliberately designed so you could also use it to look for PEP 376 dist-info directories. However, you're right, I forgot about the distribution-name-may-not-equal-package-name problem, so that aspect is completely broken in the current proto-PEP :( Cheers, Nick. > > > > OK, so as Daniel suggested, it's more like an export/entry-point specific > > "requires" field, but limited to the extras of the current distribution. 
> > Correct: at the time, it seemed a lot simpler to me than supporting > arbitrary requirements, and allows for more DRY, since entry points > might share some requirements. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Mon Jul 22 11:11:13 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 22 Jul 2013 09:11:13 +0000 (UTC) Subject: [Distutils] Q about best practices now (or near future) References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Nick Coghlan gmail.com> writes: > On 22 Jul 2013 01:46, "PJ Eby" telecommunity.com> wrote: > > To put it another way, it's not "exported only if extra is available", > > it's "exported, but make sure you have this first." ?A subtle > > difference, but important to the original use cases (see below). > Ah, yes, I see the distinction (and it does make this notion conceptually simpler). Does "make sure you have this first" mean "install this if it's not present" or "raise an exception if it's not present"? AFAICT PEP 376 does not consider extras at all, and so does not have any standard way to store which extras a distribution was installed with. So what's the standard way of testing if "extra is available"? Regards, Vinay Sajip From ncoghlan at gmail.com Mon Jul 22 13:22:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 22 Jul 2013 21:22:53 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 22 Jul 2013 19:12, "Vinay Sajip" wrote: > > Nick Coghlan gmail.com> writes: > > > On 22 Jul 2013 01:46, "PJ Eby" telecommunity.com> wrote: > > > > To put it another way, it's not "exported only if extra is available", > > > it's "exported, but make sure you have this first." A subtle > > > difference, but important to the original use cases (see below). > > Ah, yes, I see the distinction (and it does make this notion conceptually > simpler). > > Does "make sure you have this first" mean "install this if it's not present" > or "raise an exception if it's not present"? AFAICT PEP 376 does not > consider extras at all, and so does not have any standard way to store which > extras a distribution was installed with. So what's the standard way of > testing if "extra is available"? Check if all the dependencies associated with that extra are present. That was my observation earlier: since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies) allowing script wrapper generation to depend on an extra is likely a bad idea, since it may lead to a partially installed package. Since the check in pkg_resources is callback based, it's really up to the application looking for entry points to decide what unmet dependencies mean. Cheers, Nick. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Mon Jul 22 13:29:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 22 Jul 2013 12:29:39 +0100 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 22 July 2013 12:22, Nick Coghlan wrote: > since extras aren't really a thing in their own right (they're just a > shorthand for referring to an additional set of dependencies) I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project? Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Mon Jul 22 14:25:32 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Mon, 22 Jul 2013 13:25:32 +0100 Subject: [Distutils] Wheels and console script entry point wrappers (Was: Replacing pip.exe with a Python script) In-Reply-To: <44eaf14bd946464b860356ee913d0106@BLUPR03MB199.namprd03.prod.outlook.com> References: <5891FE61-E0BB-44A1-BA9C-B3AFE9AE2800@mac.com> <44eaf14bd946464b860356ee913d0106@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: On 19 July 2013 20:48, Steve Dower wrote: >> From: Oscar Benjamin >> I don't know whether or not you intend to have wrappers also work for >> Python 2.7 (in a third-party package perhaps) but there is a slightly >> subtle point to watch out for when non-ASCII characters in sys.argv >> come into play. >> >> Python 2.x uses GetCommandLineA and 3.x uses GetCommandLineW. A >> wrapper to launch 2.x should use GetCommandLineA and CreateProcessA to >> ensure that the 8-bit argument strings are passed through unaltered. >> To launch 3.x it should use the W versions. If not then the MSVC >> runtime (or the OS?) will convert between the 8-bit and 16-bit >> encodings using its own lossy routines. > > The launcher should always use GetCommandLineW, because the command line is already stored in a 16-bit encoding. GetCommandLineA will decode to an 8-bit encoding using some code page/settings (I can probably find out exactly which ones, but I don't know/care off the top of my head), and CreateProcessA will convert back using (hopefully) the same code page. > > There is never any point passing data between *A APIs in Windows, because they are just doing the conversion in the background. All you gain is that the launcher will corrupt the command line before python.exe gets a chance to. Okay, thanks for the correction. The issue that made me think this was to do with calling Python 2.x as a subprocess of 3.x and vice-versa. When I looked back at it now I saw that the problem was to do with explicitly encoding with sys.getfilesystemencoding() in Python and using the mbcs codec (which previously had no error handling apart from 'replace'). 
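For example, the lossy step looks roughly like this on a Python 2 / Windows setup (assuming an ANSI code page such as cp1252 that has no mapping for the second character):

    import sys
    print(sys.getfilesystemencoding())  # 'mbcs' on Windows under Python 2

    arg = u'caf\xe9 \u2603'             # "cafe" with an accent, plus a snowman
    print(repr(arg.encode('mbcs')))     # -> 'caf\xe9 ?': the unmappable character
                                        #    is silently replaced instead of raising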
Oscar From dholth at gmail.com Mon Jul 22 14:31:35 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 22 Jul 2013 08:31:35 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Mon, Jul 22, 2013 at 7:29 AM, Paul Moore wrote: > On 22 July 2013 12:22, Nick Coghlan wrote: >> >> since extras aren't really a thing in their own right (they're just a >> shorthand for referring to an additional set of dependencies) > > > I'm still trying to be clear in my mind about what extras are, and how they > should work. From this description, it occurs to me to ask, what is the > difference between an extra and a (metadata only, empty) second distribution > that depends on the base project as well as the "additional set of > dependencies"? Is it just the admin overhead of registering a second > project? > > Looking at extras this way gives a possible way of generating scripts only > when the extras are present - just add the scripts to the dummy "extra" > distribution. Yes, extras are *only* a way to create aliases for a set of dependencies. They are not recorded as installed. It should make no difference whether you install ipython[notebook], look up the dependencies for the ipython notebook and install them manually, or happen to have the ipython[notebook] dependencies installed and then later install ipython itself. What you get is a convenient way to install a distribution's optional dependencies without having to worry about whether the feature's dependencies change. It is a bad idea to make it too easy to install broken versions of distributions having missing scripts. From ncoghlan at gmail.com Mon Jul 22 16:07:10 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Jul 2013 00:07:10 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 22 Jul 2013 21:29, "Paul Moore" wrote: > > On 22 July 2013 12:22, Nick Coghlan wrote: >> >> since extras aren't really a thing in their own right (they're just a shorthand for referring to an additional set of dependencies) > > > I'm still trying to be clear in my mind about what extras are, and how they should work. From this description, it occurs to me to ask, what is the difference between an extra and a (metadata only, empty) second distribution that depends on the base project as well as the "additional set of dependencies"? Is it just the admin overhead of registering a second project? Sort of. The idea of an extra is "We have installed all the code for this, but it won't work due runtime failures if these dependencies aren't available". With an actual separate distribution, you can't easily tell that the other distribution contains no code of its own, and naming and versioning gets more complicated. You also can't do the trick 426 adds where "*" means "all optional dependencies". 
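For comparison, this is roughly how such an extra is declared with setuptools today - the project, extra and dependency names below are invented for the example:

    # setup.py
    from setuptools import setup

    setup(
        name='myproject',
        version='1.0',
        packages=['myproject'],
        install_requires=['requests'],      # always installed
        extras_require={
            # "pip install myproject[cli]" also pulls in these;
            # a plain "pip install myproject" does not.
            'cli': ['click'],
        },
    )

The code behind the command line interface still ships in every copy of myproject; the extra only controls whether its additional dependencies get installed.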
For other package systems like RPM that don't have the notion of extras, then yes, an extra would probably be mapped to a virtual package (in the specific case of yum, it copes fairly well with version locked virtual packages like that). > Looking at extras this way gives a possible way of generating scripts only when the extras are present - just add the scripts to the dummy "extra" distribution. Partial installs are problematic, since checking for optional dependencies is supposed to be a runtime thing, so it doesn't matter *how* those dependencies got there. Optional functionality like that would be better handled through a script that accepted subcommands, some of which would report an error if dependencies were missing. For a truly optional script, then it needs to be a genuinely separate package. Cheers, Nick. > > Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Jul 22 16:15:35 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 22 Jul 2013 10:15:35 -0400 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: <55F321B5-794B-43DA-8B83-1463BB51B3D2@stufft.io> <1374133849.87160.YahooMailNeo@web171404.mail.ir2.yahoo.com> <1E1D27C0-734E-412A-B292-A2BD1F4A4E6E@stufft.io> <1374190653.795.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Mon, Jul 22, 2013 at 7:29 AM, Paul Moore wrote: > I'm still trying to be clear in my mind about what extras are, and how they > should work. From this description, it occurs to me to ask, what is the > difference between an extra and a (metadata only, empty) second distribution > that depends on the base project as well as the "additional set of > dependencies"? Is it just the admin overhead of registering a second > project? That's one way of looking at it. But it's not implemented that way; it's more like environment markers -- i.e., conditional dependencies -- based on whether you want support for certain features that are, well, "extra". ;-) > Looking at extras this way gives a possible way of generating scripts only > when the extras are present - just add the scripts to the dummy "extra" > distribution. Setuptools doesn't actually *have* a dummy distribution (just conditional requirements in the base), but I don't see a problem with only installing a script if you asked to install the extras that script needs. It probably would've been sensible to implement easy_install that way. From carl at oddbird.net Mon Jul 22 21:14:07 2013 From: carl at oddbird.net (Carl Meyer) Date: Mon, 22 Jul 2013 13:14:07 -0600 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: References: Message-ID: <51ED847F.2060909@oddbird.net> On 07/22/2013 06:31 AM, Daniel Holth wrote: > Yes, extras are *only* a way to create aliases for a set of > dependencies. They are not recorded as installed. It should make no > difference whether you install ipython[notebook], look up the > dependencies for the ipython notebook and install them manually, or > happen to have the ipython[notebook] dependencies installed and then > later install ipython itself. In the broad view I don't think this is true, when you consider uninstall. If I install ipython[notebook] and later uninstall ipython, it would be reasonable for the uninstaller to prompt me to uninstall all the ipython notebook dependencies by default, whereas it should not do so if I had installed them separately and directly. 
That said, the REQUESTED flag in PEP 376 is probably sufficient for this, so it may still be true that there's no need to store which extras were installed with a package. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Tue Jul 23 02:24:11 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 23 Jul 2013 10:24:11 +1000 Subject: [Distutils] Q about best practices now (or near future) In-Reply-To: <51ED847F.2060909@oddbird.net> References: <51ED847F.2060909@oddbird.net> Message-ID: On 23 Jul 2013 05:53, "Carl Meyer" wrote: > > On 07/22/2013 06:31 AM, Daniel Holth wrote: > > Yes, extras are *only* a way to create aliases for a set of > > dependencies. They are not recorded as installed. It should make no > > difference whether you install ipython[notebook], look up the > > dependencies for the ipython notebook and install them manually, or > > happen to have the ipython[notebook] dependencies installed and then > > later install ipython itself. > > In the broad view I don't think this is true, when you consider > uninstall. If I install ipython[notebook] and later uninstall ipython, > it would be reasonable for the uninstaller to prompt me to uninstall all > the ipython notebook dependencies by default, whereas it should not do > so if I had installed them separately and directly. > > That said, the REQUESTED flag in PEP 376 is probably sufficient for > this, so it may still be true that there's no need to store which extras > were installed with a package. The safest logic for that kind of garbage collection feature is currently: * was it explicitly requested? * if not, does anything else (extra or not) still depend on it? Tracking extra requests directly currently falls into YAGNI territory in my opinion - if people want that level of control, then they really have a separate distribution rather than an extra. Cheers, Nick. > > Carl > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From setuptools at bugs.python.org Tue Jul 23 10:11:27 2013 From: setuptools at bugs.python.org (Chris Jones) Date: Tue, 23 Jul 2013 08:11:27 +0000 Subject: [Distutils] [issue156] SSL errors when using https proxy Message-ID: <1374567087.75.0.161207394076.issue156@psf.upfronthosting.co.za> New submission from Chris Jones: >From an image building script from diskimage-builder (part of OpenStack): + echo 'http_proxy: http://10.0.88.68:3128/' http_proxy: http://10.0.88.68:3128/ + echo 'https_proxy: http://10.0.88.68:3128/' https_proxy: http://10.0.88.68:3128/ + bash root at stonker:/# easy_install os-apply-config Searching for os-apply-config Reading https://pypi.python.org/simple/os-apply-config/ Download error on https://pypi.python.org/simple/os-apply-config/: [Errno 1] _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some packages may not be found! Couldn't find index page for 'os-apply-config' (maybe misspelled?) Scanning index of all packages (this may take a while) Reading https://pypi.python.org/simple/ Download error on https://pypi.python.org/simple/: [Errno 1] _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol -- Some packages may not be found! 
No local packages or download links found for os-apply-config error: Could not find suitable distribution for Requirement.parse('os-apply-config') root at stonker:/# (the proxy URL is a very close to stock squid3 configuration on another machine on my LAN, which is used elsewhere in the building script to download OS images, etc, so is not believed to be the issue). Reading through the setuptools code, I wondered if this is because the VerifyingHTTPSHandler inserted into the urllib2 opener chain, is trying to do direct socket connections. At the point it does that, I inserted a call to has_proxy() on the Request object and it returned False, which confused me as I would expect ProxyHandler to still be in the opener chain. ---------- messages: 741 nosy: cmsj priority: bug status: unread title: SSL errors when using https proxy _______________________________________________ Setuptools tracker _______________________________________________ From alexjeffburke at gmail.com Mon Jul 22 15:59:31 2013 From: alexjeffburke at gmail.com (Alex Burke) Date: Mon, 22 Jul 2013 15:59:31 +0200 Subject: [Distutils] get-pip.py user installation issue Message-ID: Hey, I was recently trying to do an all user installation of the packaging tools and (though I know this may change), was unable to use get-pip.py to install pip user locally as per PEP 370. Currently, instead of doing the ideal: # install pip $ curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python - --user I instead had to download the pip tarball, unpack it, and use setup.py to manually pass in --user as follows: # install pip $ curl -O https://pypi.python.org/packages/source/p/pip/pip-1.3.1.tar.gz $ tar xvfz pip-1.3.1.tar.gz $ cd pip-1.3.1 $ python setup.py install --user I think this happens because there is no code to pass down command line options into the pip bootstrap() called on de-serialisation. By contract this is possible to install setuptools locally: # install stuptools curl https://bitbucket.org/pypa/setuptools/raw/0.7.8/ez_setup.py | python - --user Am I right in thinking this is a problem? If so, but in case patching this is a non starter as it will be replaced by a bootstrap script installing a wheel, perhaps that may need to respect the --user argument? Thanks, Alex J Burke. From jannis at leidel.info Tue Jul 23 20:26:06 2013 From: jannis at leidel.info (Jannis Leidel) Date: Tue, 23 Jul 2013 20:26:06 +0200 Subject: [Distutils] get-pip.py user installation issue In-Reply-To: References: Message-ID: <24901FCF-CB60-4BB1-8461-7333F6A9E163@leidel.info> On 22.07.2013, at 15:59, Alex Burke wrote: > Hey, > > I was recently trying to do an all user installation of the packaging > tools and (though I know this may change), was unable to use > get-pip.py to install pip user locally as per PEP 370. > > Currently, instead of doing the ideal: > > # install pip > $ curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | > python - --user > > I instead had to download the pip tarball, unpack it, and use setup.py > to manually pass in --user as follows: > > # install pip > $ curl -O https://pypi.python.org/packages/source/p/pip/pip-1.3.1.tar.gz > $ tar xvfz pip-1.3.1.tar.gz > $ cd pip-1.3.1 > $ python setup.py install --user > > I think this happens because there is no code to pass down command > line options into the pip bootstrap() called on de-serialisation. 
By > contract this is possible to install setuptools locally: > > # install stuptools > curl https://bitbucket.org/pypa/setuptools/raw/0.7.8/ez_setup.py | > python - --user > > Am I right in thinking this is a problem? If so, but in case patching > this is a non starter as it will be replaced by a bootstrap script > installing a wheel, perhaps that may need to respect the --user > argument? Sure, feel free to open a ticket in the pip issue tracker: https://github.com/pypa/pip/issues Jannis From jannis at leidel.info Tue Jul 23 21:42:25 2013 From: jannis at leidel.info (Jannis Leidel) Date: Tue, 23 Jul 2013 21:42:25 +0200 Subject: [Distutils] get-pip.py user installation issue In-Reply-To: References: Message-ID: <177BA9E3-DD8F-4865-B455-D53F50E40378@leidel.info> On 22.07.2013, at 15:59, Alex Burke wrote: > Hey, > > I was recently trying to do an all user installation of the packaging > tools and (though I know this may change), was unable to use > get-pip.py to install pip user locally as per PEP 370. > > Currently, instead of doing the ideal: > > # install pip > $ curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | > python - --user > > I instead had to download the pip tarball, unpack it, and use setup.py > to manually pass in --user as follows: > > # install pip > $ curl -O https://pypi.python.org/packages/source/p/pip/pip-1.3.1.tar.gz > $ tar xvfz pip-1.3.1.tar.gz > $ cd pip-1.3.1 > $ python setup.py install --user > > I think this happens because there is no code to pass down command > line options into the pip bootstrap() called on de-serialisation. By > contract this is possible to install setuptools locally: > > # install stuptools > curl https://bitbucket.org/pypa/setuptools/raw/0.7.8/ez_setup.py | > python - --user > > Am I right in thinking this is a problem? If so, but in case patching > this is a non starter as it will be replaced by a bootstrap script > installing a wheel, perhaps that may need to respect the --user > argument? Actually, forget what I wrote earlier, it was fixed and merged a couple of months ago: https://github.com/pypa/pip/pull/895 Jannis From p at google-groups-2013.dobrogost.net Sun Jul 21 20:38:24 2013 From: p at google-groups-2013.dobrogost.net (Piotr Dobrogost) Date: Sun, 21 Jul 2013 11:38:24 -0700 (PDT) Subject: [Distutils] pip - installation of the latest stable version of package Message-ID: Hi! Does pip support installation of the latest _stable_ version of package (for all versions which adhere to PEP386)? Related - "Do not upload dev releases at PyPI" by Tarek Ziad?at http://ziade.org/2011/02/15/do-not-upload-dev-releases-at-pypi/ Regards, Piotr Dobrogost -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at oddbird.net Wed Jul 24 19:10:45 2013 From: carl at oddbird.net (Carl Meyer) Date: Wed, 24 Jul 2013 12:10:45 -0500 Subject: [Distutils] pip - installation of the latest stable version of package In-Reply-To: References: Message-ID: <51F00A95.1060004@oddbird.net> On 07/21/2013 01:38 PM, Piotr Dobrogost wrote: > Does pip support installation of the latest _stable_ version of package > (for all versions which adhere to PEP386)? > Related - "Do not upload dev releases at PyPI" by Tarek Ziad? > at > http://ziade.org/2011/02/15/do-not-upload-dev-releases-at-pypi/ > Yes. It's a new feature in the just-released pip 1.4; in fact it is now the default behavior. 
See the top entry in the pip 1.4 changelog: http://www.pip-installer.org/en/latest/news.html#changelog Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Wed Jul 24 19:30:00 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 24 Jul 2013 13:30:00 -0400 Subject: [Distutils] pip - installation of the latest stable version of package In-Reply-To: <51F00A95.1060004@oddbird.net> References: <51F00A95.1060004@oddbird.net> Message-ID: <1EEC2690-C5CD-4EF9-8BAE-3C5D2BAEB7BA@stufft.io> On Jul 24, 2013, at 1:10 PM, Carl Meyer wrote: > On 07/21/2013 01:38 PM, Piotr Dobrogost wrote: >> Does pip support installation of the latest _stable_ version of package >> (for all versions which adhere to PEP386)? >> Related - "Do not upload dev releases at PyPI" by Tarek Ziad? >> at >> http://ziade.org/2011/02/15/do-not-upload-dev-releases-at-pypi/ >> > > Yes. It's a new feature in the just-released pip 1.4; in fact it is now > the default behavior. See the top entry in the pip 1.4 changelog: > http://www.pip-installer.org/en/latest/news.html#changelog > > Carl > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Technically it uses PEP440 but I believe they have the same rules as far as stable or unstable. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Jul 25 02:00:29 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 25 Jul 2013 10:00:29 +1000 Subject: [Distutils] pip - installation of the latest stable version of package In-Reply-To: <1EEC2690-C5CD-4EF9-8BAE-3C5D2BAEB7BA@stufft.io> References: <51F00A95.1060004@oddbird.net> <1EEC2690-C5CD-4EF9-8BAE-3C5D2BAEB7BA@stufft.io> Message-ID: On 25 Jul 2013 03:30, "Donald Stufft" wrote: > > > On Jul 24, 2013, at 1:10 PM, Carl Meyer wrote: > > > On 07/21/2013 01:38 PM, Piotr Dobrogost wrote: > >> Does pip support installation of the latest _stable_ version of package > >> (for all versions which adhere to PEP386)? > >> Related - "Do not upload dev releases at PyPI" by Tarek Ziad? > >> at > >> http://ziade.org/2011/02/15/do-not-upload-dev-releases-at-pypi/ > >> > > > > Yes. It's a new feature in the just-released pip 1.4; in fact it is now > > the default behavior. See the top entry in the pip 1.4 changelog: > > http://www.pip-installer.org/en/latest/news.html#changelog > > > > Carl > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > Technically it uses PEP440 but I believe they have the same rules as far as stable or unstable. Yeah, major items like release ordering and whether or not a version is a pre-release stay the same. The only ordering differences between 386 and 440 relate to some relatively obscure edge cases where the spec in 386 differed from the behaviour of existing tools without adequate justification. PEP 440 has the details. Cheers, Nick. 
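To illustrate the behaviour being described, with a placeholder package name and versions (a project whose latest final release is 1.2 and which has also published a 2.0b1 pre-release):

$ pip install SomePackage          # 1.2 is chosen; the 2.0b1 pre-release is skipped
$ pip install --pre SomePackage    # pre-releases become eligible, so 2.0b1 is chosen
$ pip install SomePackage==2.0b1   # explicitly pinning a pre-release also selects it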
> > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Thu Jul 25 07:38:26 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Thu, 25 Jul 2013 15:38:26 +1000 Subject: [Distutils] What to do about the PyPI mirrors Message-ID: Hi all, I've just been contacted by someone who's set up a new public mirror of PyPI and would like it integrated into the mirror ecosystem. I think it's probably time we thought about how to demote the mirrors: - they cause problems with security (being under the python.org domain causes various issues including inability to use HTTPS and cookie issues) - they're no longer necessary thanks to the CDN work So, things to do: - links and information on PyPI itself can be removed - tools that use mirrors still need to be able to but mention of using public mirrors is probably something to demote These are just rough thoughts that occurred to me just now. Richard From noah at coderanger.net Thu Jul 25 08:14:12 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 24 Jul 2013 23:14:12 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: <63D400E7-F5B7-4F0B-A409-B96EBB51DA2C@coderanger.net> On Jul 24, 2013, at 10:38 PM, Richard Jones wrote: > Hi all, > > I've just been contacted by someone who's set up a new public mirror > of PyPI and would like it integrated into the mirror ecosystem. > > I think it's probably time we thought about how to demote the mirrors: > > - they cause problems with security (being under the python.org domain > causes various issues including inability to use HTTPS and cookie > issues) > - they're no longer necessary thanks to the CDN work > > So, things to do: > > - links and information on PyPI itself can be removed > - tools that use mirrors still need to be able to but mention of using > public mirrors is probably something to demote > > These are just rough thoughts that occurred to me just now. +1, as envoy of infrastructure team we would like to formally retire the [a-z].pypi.python.org names. Anyone with an existing mirror should be encouraged to continue maintaining it, but it will be for their own use (or the use of their company/internal network). --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu Jul 25 08:30:48 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 25 Jul 2013 02:30:48 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <63D400E7-F5B7-4F0B-A409-B96EBB51DA2C@coderanger.net> References: <63D400E7-F5B7-4F0B-A409-B96EBB51DA2C@coderanger.net> Message-ID: <0FB9E022-1314-4DF8-A372-AAD5886685EA@stufft.io> On Jul 25, 2013, at 2:14 AM, Noah Kantrowitz wrote: > > On Jul 24, 2013, at 10:38 PM, Richard Jones wrote: > >> Hi all, >> >> I've just been contacted by someone who's set up a new public mirror >> of PyPI and would like it integrated into the mirror ecosystem. 
>> >> I think it's probably time we thought about how to demote the mirrors: >> >> - they cause problems with security (being under the python.org domain >> causes various issues including inability to use HTTPS and cookie >> issues) >> - they're no longer necessary thanks to the CDN work >> >> So, things to do: >> >> - links and information on PyPI itself can be removed >> - tools that use mirrors still need to be able to but mention of using >> public mirrors is probably something to demote >> >> These are just rough thoughts that occurred to me just now. > > +1, as envoy of infrastructure team we would like to formally retire the [a-z].pypi.python.org names. Anyone with an existing mirror should be encouraged to continue maintaining it, but it will be for their own use (or the use of their company/internal network). > > --Noah > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig +1 as well. 2/6 of the mirrors are gone already and I don't think anyone actually implemented the mirror authenticity protocol so afaik installing from mirrors is completely insecure since it's all via HTTP and moving to HTTPS would require getting SSL certs for each specific one which isn't likely to be something that we can do. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu Jul 25 08:33:17 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 25 Jul 2013 02:33:17 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: On Jul 25, 2013, at 1:38 AM, Richard Jones wrote: > Hi all, > > I've just been contacted by someone who's set up a new public mirror > of PyPI and would like it integrated into the mirror ecosystem. > > I think it's probably time we thought about how to demote the mirrors: > > - they cause problems with security (being under the python.org domain > causes various issues including inability to use HTTPS and cookie > issues) > - they're no longer necessary thanks to the CDN work > > So, things to do: > > - links and information on PyPI itself can be removed > - tools that use mirrors still need to be able to but mention of using > public mirrors is probably something to demote > > These are just rough thoughts that occurred to me just now. > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Just a quick note though to be clear. I still plan on supporting (and even improving) the ability to mirror PyPI. I think it's an important ability to have especially for companies, or projects like OpenStack who use a mirror for their massive CI infrastructure. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From a.badger at gmail.com Thu Jul 25 18:04:46 2013 From: a.badger at gmail.com (Toshio Kuratomi) Date: Thu, 25 Jul 2013 09:04:46 -0700 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 Message-ID: <20130725160446.GL11402@unaka.lan> Over on python-dev we're talking about Linux Distributions switching from python2 to python3, what steps they need to take and in what order. One of the things that's come up [1]_ is that a very early step in the process is making sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in PEP394 [2]_. Faced with the prospect of patching a whole bunch of scripts in the distribution, I'm wondering what distutils, distlib, setuptools, etc do with shebang lines. * Do they rewrite shebang lines? * If so, do they use #!/usr/bin/python2 or do they use #!/usr/bin/python ? * If the latter, is there hope that we could change that to match with PEP-394's recommendations? (setuptools seems to be moving relatively quickly these days so that seems reasonably easy.... distutils is tied to the release schedule of core python-2.7.x although if the change is accepted into the CPython tree we might consider backporting it to the current distribution package early. .. [1]_: http://mail.python.org/pipermail/python-dev/2013-July/127565.html .. [2]_: http://www.python.org/dev/peps/pep-0394/#recommendation Thanks, Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From pjenvey at underboss.org Fri Jul 26 00:37:47 2013 From: pjenvey at underboss.org (Philip Jenvey) Date: Thu, 25 Jul 2013 15:37:47 -0700 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: <20130725160446.GL11402@unaka.lan> References: <20130725160446.GL11402@unaka.lan> Message-ID: <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 On Jul 25, 2013, at 9:04 AM, Toshio Kuratomi wrote: > Over on python-dev we're talking about Linux Distributions switching from > python2 to python3, what steps they need to take and in what order. One of > the things that's come up [1]_ is that a very early step in the process is making > sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in > PEP394 [2]_. Faced with the prospect of patching a whole bunch of scripts > in the distribution, I'm wondering what distutils, distlib, setuptools, etc > do with shebang lines. > * Do they rewrite shebang lines? distutils, distlib and setuptools all do. > * If so, do they use #!/usr/bin/python2 or do they use #!/usr/bin/python ? I believe they are actually all using sys.executable > * If the latter, is there hope that we could change that to match with PEP-394's > recommendations? (setuptools seems to be moving relatively quickly these > days so that seems reasonably easy.... distutils is tied to the release > schedule of core python-2.7.x although if the change is accepted into the > CPython tree we might consider backporting it to the current distribution > package early. What's the value of sys.executable in the brave new world of distributions that follow PEP 394? 
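For concreteness, the rewriting being discussed amounts to something like the following sketch; this is not the actual distutils/distlib/setuptools code, just the general shape, with sys.executable as the interpreter that performed the installation:

# Sketch only: roughly what the build tools do when installing a script.
import os
import sys

def copy_script_with_shebang(src, dst, executable=sys.executable):
    with open(src) as f:
        lines = f.readlines()
    # In this sketch, any python shebang on the first line is replaced
    # with the interpreter that is running the installation.
    if lines and lines[0].startswith("#!") and "python" in lines[0]:
        lines[0] = "#!%s\n" % executable
    with open(dst, "w") as f:
        f.writelines(lines)
    os.chmod(dst, 0o755)   # installed scripts need to stay executable

Because sys.executable is whatever interpreter ran the installation, a distro build that invokes setup.py via a versioned python2/python3 ends up with correspondingly versioned shebangs in the installed scripts.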
--
Philip Jenvey

From barry at python.org Fri Jul 26 02:44:15 2013
From: barry at python.org (Barry Warsaw)
Date: Thu, 25 Jul 2013 20:44:15 -0400
Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394
References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org>
Message-ID: <20130725204415.211d6d74@anarchist>

On Jul 25, 2013, at 03:37 PM, Philip Jenvey wrote:

>What's the value of sys.executable in the brave new world of distributions
>that follow PEP 394?

One use case is locating other scripts/executables that might get installed
next to sys.executable in a virtualenv.

-Barry

From vinay_sajip at yahoo.co.uk Fri Jul 26 04:11:33 2013
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Fri, 26 Jul 2013 02:11:33 +0000 (UTC)
Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394
References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org>
Message-ID: 

Philip Jenvey underboss.org> writes:
> 
> I believe they are actually all using sys.executable
> 

On distlib that's the default, though it's relatively easy to set a custom
value for the executable written into shebang lines.

Regards,

Vinay Sajip

From ncoghlan at gmail.com Fri Jul 26 06:22:06 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Fri, 26 Jul 2013 14:22:06 +1000
Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394
In-Reply-To: <20130725204415.211d6d74@anarchist>
References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> <20130725204415.211d6d74@anarchist>
Message-ID: 

On 26 Jul 2013 10:45, "Barry Warsaw" wrote:
>
> On Jul 25, 2013, at 03:37 PM, Philip Jenvey wrote:
>
> >What's the value of sys.executable in the brave new world of distributions
> >that follow PEP 394?
>
> One use case is locating other scripts/executables that might get installed
> next to sys.executable in a virtualenv.

I believe Philip's question was literal rather than philosophical :)

Cheers,
Nick.
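Barry's use case boils down to something like the following sketch; the "pip" name is only an example of a sibling tool that might live next to the interpreter:

# Illustrative sketch: find a tool installed into the same bin/ (or
# Scripts/) directory as the running interpreter, e.g. inside a virtualenv.
import os
import sys

def sibling_executable(name):
    candidate = os.path.join(os.path.dirname(sys.executable), name)
    return candidate if os.path.exists(candidate) else None

print(sibling_executable("pip"))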
> > - -Barry > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v2.0.20 (GNU/Linux) > > iQIcBAEBCAAGBQJR8cZfAAoJEBJutWOnSwa/lnMQALE6VLkSNElQfJnIgBraS/PR > /C7m2kCNbeBdHfTXjRJYeHoecFPO6z5mSurO2ikbFvwvxmWi1MjxX6/gKR6jaNJ6 > PJcyoO1KaJJ9CKP0hlAcuLYGyj86Nhmw1cK20yeejV+YJzqpFjqXxHyH9GlaGQKc > aDKG/xN/SX139V4/KfgaAcXKhbsCkhL6cAugHFGl8qTytdvaOOzRs2/jHsTcPDp3 > Dfb2nz69PteQAYSPHi7Na9bAe2+lGy9nz93zfw7T6FLB4ya3trATnbxD1Ojmg2mw > lnuvTBg6GiRv/BZgDB6Fl8OTMj6BvyfKKG9Nb6rCCCYyPR8Q8t1TkxODCAkHi6kY > omwMOR+Fc4Btm/rtRcY4rXKPY6FXVgnzysOT/z4GcpjWotp2E6ZiSruwsm7okwM3 > Pbl9BEAD4WdgotSJDrb17BmcLl48qqenODOmrHKvfIu67iagSAr8D9Qp2B0G1V0B > GRtmCVrcvGheMKw+5VAYAmfyVnNQmi1MVY1fNN9pya9Wi38mYiWFxXhD/wIhU6Ih > /vs38dt+PSu6AMfomvf7tzOuZKE+JKkS//e7+OlEwwaDl4am5awXh0Ci/6z8cJHg > ywH003hYZ4SWzuMHWerkIxydXW3D8GUvRDcpR3OZaoknEKQYkGoMVY0KimKG2ykZ > KcGciNDFFXpxaq6+4cc1 > =Dy/O > -----END PGP SIGNATURE----- > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexjeffburke at gmail.com Fri Jul 26 13:31:54 2013 From: alexjeffburke at gmail.com (Alex Burke) Date: Fri, 26 Jul 2013 13:31:54 +0200 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> Message-ID: On 26 July 2013 00:37, Philip Jenvey wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA512 > > > On Jul 25, 2013, at 9:04 AM, Toshio Kuratomi wrote: > >> Over on python-dev we're talking about Linux Distributions switching from >> python2 to python3, what steps they need to take and in what order. One of >> the things that's come up [1]_ is that a very early step in the process is making >> sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in >> PEP394 [2]_. Faced with the prospect of patching a whole bunch of scripts >> in the distribution, I'm wondering what distutils, distlib, setuptools, etc >> do with shebang lines. >> * Do they rewrite shebang lines? > > distutils, distlib and setuptools all do. > Hi, It was interesting that discussion came up on python-dev but I admit to being surprised by the suggestion the shebang lines may need to be rewritten in end user code. This may be a callous over-simplification but if #!python is rewritten by the python packaging infrastructure, would it not be changed for python2/python3 as appropriate at installation time? Thus a python 2 package (whatever it is named) would be generated by calling a python2 executable + setuptools while the same is true for v3 except using python3. The result is then packaged by rpm/dpkg. Keen to understand why it can't work this way if that's the case. Thanks, Alex J Burke. 
From ncoghlan at gmail.com Fri Jul 26 13:58:56 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 26 Jul 2013 21:58:56 +1000 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> Message-ID: On 26 July 2013 21:31, Alex Burke wrote: > On 26 July 2013 00:37, Philip Jenvey wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA512 >> >> >> On Jul 25, 2013, at 9:04 AM, Toshio Kuratomi wrote: >> >>> Over on python-dev we're talking about Linux Distributions switching from >>> python2 to python3, what steps they need to take and in what order. One of >>> the things that's come up [1]_ is that a very early step in the process is making >>> sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in >>> PEP394 [2]_. Faced with the prospect of patching a whole bunch of scripts >>> in the distribution, I'm wondering what distutils, distlib, setuptools, etc >>> do with shebang lines. >>> * Do they rewrite shebang lines? >> >> distutils, distlib and setuptools all do. > > Hi, > > It was interesting that discussion came up on python-dev but I admit > to being surprised by the suggestion the shebang lines may need to be > rewritten in end user code. > > This may be a callous over-simplification but if #!python is rewritten > by the python packaging infrastructure, would it not be changed for > python2/python3 as appropriate at installation time? Thus a python 2 > package (whatever it is named) would be generated by calling a python2 > executable + setuptools while the same is true for v3 except using > python3. The result is then packaged by rpm/dpkg. > > Keen to understand why it can't work this way if that's the case. It actually occurs to me that the following example (showing how symlinks affect "sys.executable") illustrates both the problem and the solution for cases where users are relying on generated script wrappers or the shebang line rewriting in wheel: $ ln -s /usr/bin/python foo $ ./foo -c "import sys; print sys.executable" /home/ncoghlan/foo So long as the distro build systems are updated to invoke setup.py through an appropriately versioned symlink (rather than through /usr/bin/python), setuptools et al should all automatically do the right thing when generating script wrappers. Not everybody uses generated script wrappers, though - if there is a hardcoded "/usr/bin/env python" or "/usr/bin/python" in a shebang line, the Python build tools won't touch it. There's also a whole lot of software that isn't packaged at all - it's sitting around in user and admin home directories, or maybe in a shared directory on a file server or even in a private source control repo. Published software is actually the vanishingly small tip of a very large iceberg, especially for languages like Python that handle scripting use cases fairly well and are thus popular for personal automation tasks amongst developers and system administrators. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Jul 26 18:25:36 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 26 Jul 2013 12:25:36 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 Message-ID: PyPI has historically used MD5 in order to verify the downloads. However MD5 is severely broken and is generally regarded as something that should be migrated away from ASAP. 
From speaking with a number of cryptographers, they've more or less said that the major reason they believe MD5 hasn't had a published pre-image attack is just because it's so broken that most researchers have moved on to newer hashes.

Since version 1.2 pip has supported md5, sha1, and any of the sha2 family. Additionally it has only supported SSL verification since 1.3. This means there is no version of pip which both verifies SSL and only allows MD5.

Since version 0.9 setuptools has supported md5, sha1, and any of the sha2 family, and it has only supported SSL verification since 0.7.

I propose we switch PyPI from using MD5 to using SHA256. There is no security lost from using a hash that pip prior to version 1.2 doesn't understand, as those versions didn't verify SSL, so an attacker could simply modify the hashes if they wanted. Additionally there is no security lost from setuptools versions earlier than 0.7. However setuptools versions 0.7-0.8 will lose their hashes. I believe this is an OK thing to happen because the uptake of 0.7-0.8 is fairly low. Most people will use the setuptools bundled with virtualenv, which has only ever bundled 0.6 or 0.9.

So essentially moving from MD5 to SHA256 will only negatively affect the security of a handful of users while positively impacting the security of everyone else.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From pje at telecommunity.com Fri Jul 26 20:33:07 2013
From: pje at telecommunity.com (PJ Eby)
Date: Fri, 26 Jul 2013 14:33:07 -0400
Subject: [Distutils] Migrating Hashes from MD5 to SHA256
In-Reply-To: 
References: 
Message-ID: 

On Fri, Jul 26, 2013 at 12:25 PM, Donald Stufft wrote:
> Additionally there is no security lost from setuptools versions earlier than 0.7.

Not true, actually. Setuptools 0.6 dev releases have supported SSL verification since mid-May, but don't support any hashes besides MD5. Anybody who updated their setuptools between then and the release of 0.7 would have that version. Unfortunately, it's hard to tell how many people that is, though I could try and dig through my server logs to find out.

There's also another issue with jumping to SHA256: Python prior to 2.5 didn't support it.

Which brings up another point: the setuptools 0.6 series is the only setuptools available for Python 2.3. That's one of the reasons it's still available for download. If you want SSL verification on 2.3, it's the only thing available. (Meanwhile, a lot of people are still downloading 0.6c11; probably I should package up an 0.6c12 so those folks pick it up instead of 0.6c11.)

Anyway, this is all somewhat moot since the hashes only matter when the download is hosted somewhere besides PyPI, since SSL verification is available for the PyPI part. Even so, I'd suggest that moving to SHA1 might be a good intermediate step: it's available on Python 2.3, so I could backport the relevant support to the 0.6 branch. (IIUC, Python 2.3 is still the default version for many Linux distros that have not reached end-of-life support.)
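For context, the hashes in question are the "#md5=..." style fragments PyPI appends to download links. Below is a rough sketch of the check an installer performs against such a fragment, written with hashlib; this is not pip's or setuptools' actual code:

import hashlib

SUPPORTED = ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")

def verify_download(data, fragment):
    # fragment is the part after '#' in the link, e.g. "sha256=9f86d08..."
    algo, _, expected = fragment.partition("=")
    algo = algo.lower()
    if algo not in SUPPORTED:
        raise ValueError("unsupported hash algorithm: %r" % algo)
    if hashlib.new(algo, data).hexdigest() != expected.lower():
        raise ValueError("hash mismatch, refusing the download")

# e.g. verify_download(open("pkg-1.0.tar.gz", "rb").read(),
#                      "sha256=<hex digest taken from the index link>")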
From alex.gaynor at gmail.com Fri Jul 26 20:42:11 2013 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Fri, 26 Jul 2013 18:42:11 +0000 (UTC) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 References: Message-ID: PJ Eby telecommunity.com> writes: > > There's also another issue with jumping to SHA256: Python prior to 2.5 > didn't support it. I'm not sure this is a particularly relevant concern. Python's prior to 2.5 are no longer supported by the people who wrote them, or almost any major packages (SQLAlchemy, Django, pip, virtualenv, the list goes on and on). I'm not sure why we should expend any effort to support them either. Alex From donald at stufft.io Fri Jul 26 21:14:17 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 26 Jul 2013 15:14:17 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: <9EB510F2-AFFF-4AB2-A66D-C188D628AA69@stufft.io> On Jul 26, 2013, at 2:33 PM, PJ Eby wrote: > On Fri, Jul 26, 2013 at 12:25 PM, Donald Stufft wrote: >> Additionally there is no security list from setuptools versions earlier than 0.7. > > Not true, actually. Setuptools 0.6 dev releases supported SSL > verification since mid-May, but don't support any hashes besides MD5. > Anybody who updated their setuptools between then and the release of > 0.7 would have that version. Unfortunately, it's hard to tell how > many people that is, though I could try and dig through my server logs > to find out. > > There's also another issue with jumping to SHA256: Python prior to 2.5 > didn't support it. > > Which brings up another point: the setuptools 0.6 series is the only > setuptools available for Python 2.3. That's one of the reasons it's > still available for download. If you want SSL verification on 2.3, > it's the only thing available. (Meanwhile, a lot of people are still > downloading 0.6c11; probably I should package up an 0.6c12 so those > folks pick it up instead of 0.6c11.) > > Anyway, this is all somewhat moot since the hashes only matter when > the download is hosted somewhere besides PyPI, since SSL verification > is available for the PyPI part. Even so, I'd suggest that moving to > SHA1 might be a good intermediate step: it's available on Python 2.3, > so I could backport the relevant support to the 0.6 branch. (IIUC, > Python 2.3 is still the default version for many Linux distros that > have not reached end-of-life support.) I don't have a Python 2.3 available to attempt to test. To be honest I've never even used Python 2.3. Does the hashlib backport I added to setuptools 0.9 for Python 2.4 work on 2.3? It's a pure python implementation of hashlib. Sha1 is better but it already has weaknesses so if at all possible it would be much preferred and significantly better to switch to sha256. Setuptools doesn't appear to include the python version in it's user agent so I can't get any sort of information about Python 2.3 usage. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From christian at python.org Fri Jul 26 21:24:59 2013 From: christian at python.org (Christian Heimes) Date: Fri, 26 Jul 2013 21:24:59 +0200 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: <51F2CD0B.4000903@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Am 26.07.2013 18:25, schrieb Donald Stufft: > PyPI has historically used MD5 in order to verify the downloads. > However MD5 is severely broken and is generally regarded as > something that should be migrated away from ASAP. From speaking > with a number of cryptographers they've more or less said that the > major reason they believe that MD5 hasn't had a published pre-image > attack is just because it's so broken that most researchers have > moved on to newer hashes. > > Since versions 1.2 pip has supported md5, sha1, and any of the sha2 > family. Additionally it has only supported SSL verification since > 1.3. This means there is no version of pip which both verifies SSL > and only allows MD5. > > Since version 0.9 setuptools has supported md5, sha1, and any of > the sha2 family and it has only supported SSL verification since > 0.7. > > I propose we switch PyPI from using MD5 to using SHA256. There is > no security lost from using a hash that pip prior to version 1.2 > doesn't understand as it didn't verify SSL so an attacker could > simply modify the hashes if they wanted. Additionally there is no > security list from setuptools versions earlier than 0.7. A couple of months ago I suggested a schema that includes MD5, SHA-2 and file size: file.tar.gz#MD5=1234&SHA-256=abcd&filesize=5023 That should work for old versions of setuptool and can easily be supported in new versions of pip and setuptools. A new hash sum scheme must include the possibility to add multiple and new hash algorithms. A download tool shall check the hash sum for all supported algorithms, too. I also like to see the file size in the scheme. It's useful to know the file size in preparation of the download. The file size validation mitigates some attack possibilities. 
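A rough sketch of how a client could consume the proposed fragment, checking every hash it understands plus the size; this is illustrative only, not a finished spec or any tool's real code:

import hashlib
try:
    from urllib.parse import parse_qs    # Python 3
except ImportError:
    from urlparse import parse_qs        # Python 2

def check_fragment(data, fragment):
    # fragment like "MD5=1234...&SHA-256=abcd...&filesize=5023"
    fields = dict((k.lower(), v[0]) for k, v in parse_qs(fragment).items())
    size = fields.pop("filesize", None)
    if size is not None and len(data) != int(size):
        raise ValueError("file size mismatch")
    verified = 0
    for name, expected in fields.items():
        algo = name.replace("-", "")      # "sha-256" -> "sha256"
        try:
            actual = hashlib.new(algo, data).hexdigest()
        except ValueError:
            continue                      # algorithm unknown to this Python
        if actual != expected.lower():
            raise ValueError("%s mismatch" % name)
        verified += 1
    if not verified:
        raise ValueError("no hash in the fragment could be checked")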
Christian -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iQIcBAEBCAAGBQJR8s0HAAoJEMeIxMHUVQ1FO90P+gM6Lx3WZA94Tg8bEco6ckLI m00Yt+Gwn1HgnF+wSAaxIsThy2C/yltTPqfJiwZGErlzt0tAIdFZhYMbkhzO1//Z N6i1O+seH5eqMSgUd7K1mgiRIAsKpXH6SEt/U3VzPNm/qVvIV0FFIUTEIx9xXpkD HYKbup7dkcBkIkhreUpIG4TGEK22/Vcs+G4NjR8UllcqRS4iWrkiKzXuLfnto++t 9fnYfz0uxh2nG3doFGr2gzypLtctrRzAqy28AtlbgGEKaK5E2/hoGrRE8VIBZg+f SEWKLctTLoOcXHVTaxoAcp+3XzwKZPpGoJzjyLtPFDrH55kZFLA2a25vB51xteLA A7Kz60eccHe7Io76incJiL+RmorcpUTTp6FRoTCdqDUW2rSmTM1z8tUtenJNAQYG UnuyRrRbTeQ1JlImdakqXA1X5/qsYLy7kcaf4Xb9SXxIdlEk//0o3tiB4B92oIgF If5yx65KoKPUCg1pXA/ZawTuH/d1aJQWOjz0eP7Wn+GnEnHxoKIwYMP65xVyNCXU 0afS5lRs7gxtOKlXofWoXO1u7H7EHJQzFbbgdJkSl65mz+hOVMu1w7RQwPb7LzeO 16gnUtvIpXFaab/NCM4UmXuHWLx07jWB4ZJ45zsXuyXa3m4kdt9aMS0oVaSYgA/a Zq84rJiWc17eItR9iyU5 =ZRue -----END PGP SIGNATURE----- From donald at stufft.io Fri Jul 26 21:27:21 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 26 Jul 2013 15:27:21 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: <51F2CD0B.4000903@python.org> References: <51F2CD0B.4000903@python.org> Message-ID: On Jul 26, 2013, at 3:24 PM, Christian Heimes wrote: > A couple of months ago I suggested a schema that includes MD5, SHA-2 > and file size: > > file.tar.gz#MD5=1234&SHA-256=abcd&filesize=5023 > > That should work for old versions of setuptool and can easily be > supported in new versions of pip and setuptools. It won't work for old versions, it explicitly includes the end of line terminator and the #. > > A new hash sum scheme must include the possibility to add multiple and > new hash algorithms. A download tool shall check the hash sum for all > supported algorithms, too. I also like to see the file size in the > scheme. It's useful to know the file size in preparation of the > download. The file size validation mitigates some attack possibilities. Right now that would break too much. I agree this is where we need to get too but It'll likely need to wait for the new API in Warehouse. > > Christian > > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From barry at python.org Fri Jul 26 21:32:00 2013 From: barry at python.org (Barry Warsaw) Date: Fri, 26 Jul 2013 15:32:00 -0400 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> Message-ID: <20130726153200.02097c3e@anarchist> On Jul 26, 2013, at 09:58 PM, Nick Coghlan wrote: >Not everybody uses generated script wrappers, though - if there is a >hardcoded "/usr/bin/env python" or "/usr/bin/python" in a shebang >line, the Python build tools won't touch it. There's also a whole lot >of software that isn't packaged at all - it's sitting around in user >and admin home directories, or maybe in a shared directory on a file >server or even in a private source control repo. I actually think installing a script via setuptools should rewrite shebangs even in the case of /usr/bin/env. I love `#!/usr/bin/env python` *for development* but I really think its a bad thing to have for installed scripts. Certainly, for distro installed scripts, it's (usually) terrible. 
I think virtualenv installs are generally in the same boat though - if you're installing a script into a virtualenv, it's better to rewrite the shebang to use the executable that was used to install it. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From p.f.moore at gmail.com Fri Jul 26 21:38:51 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 26 Jul 2013 20:38:51 +0100 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: <20130726153200.02097c3e@anarchist> References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> <20130726153200.02097c3e@anarchist> Message-ID: On 26 July 2013 20:32, Barry Warsaw wrote: > I love `#!/usr/bin/env python` *for development* but I really think its a > bad > thing to have for installed scripts. Certainly, for distro installed > scripts, > it's (usually) terrible. I think virtualenv installs are generally in the > same boat though - if you're installing a script into a virtualenv, it's > better to rewrite the shebang to use the executable that was used to > install > it. > There are cases where it's useful and appropriate - the best example "in the wild" is distil, which uses #!/usr/bin/env python to allow it to run with "the current Python" specifically, so that it can be used to install packages whatever Python you're using and without needing multiple copies of the script installed. I've written similar types of scripts which benefit in the same way from a /user/bin/env shebang line. In fact, support for /usr/bin/env was added to the Windows launcher just recently to provide precisely this functionality for scripts on Windows. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Fri Jul 26 22:27:32 2013 From: barry at python.org (Barry Warsaw) Date: Fri, 26 Jul 2013 16:27:32 -0400 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> <20130726153200.02097c3e@anarchist> Message-ID: <20130726162732.5f6fb607@anarchist> On Jul 26, 2013, at 08:38 PM, Paul Moore wrote: >There are cases where it's useful and appropriate Sure, I don't disagree. Just that I think the general rule should be: * Use /usr/bin/env in your source tree * Use /usr/bin/$python when installed I think those rules cover the majority of cases. Any automatic shebang rewriting must be selectable. I'd argue for default-rewrite with an option to disable. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From donald at stufft.io Fri Jul 26 22:45:41 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 26 Jul 2013 16:45:41 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: On Jul 26, 2013, at 2:33 PM, PJ Eby wrote: > Anyway, this is all somewhat moot since the hashes only matter when > the download is hosted somewhere besides PyPI, since SSL verification > is available for the PyPI part. Even so, I'd suggest that moving to > SHA1 might be a good intermediate step: it's available on Python 2.3, > so I could backport the relevant support to the 0.6 branch. 
(IIUC, > Python 2.3 is still the default version for many Linux distros that > have not reached end-of-life support.) I think RHEL is the one that will support things the longest. As far as I can tell Python 2.3 was default in RHEL4 and RHEL5 used Python 2.4 ELS support for RHEL4 ends Feb of 2015, so that's roughly a year and a half till not even Red Hat supports Python 2.3 anymore that I can tell? It also appears that support for new installations ended roughly a year and a half ago. Many (most?) projects no longer support Python 2.3 on PyPI, and I seriously doubt that there is a significant number of people who both want the stability provided by RHEL and is willing to continue using a release that is this close to being EOL'd while simultaneously wanting to download things from PyPI. CPython hasn't supported Python 2.3 in years. Basically If RHEL customers want the security updates they should bother Red Hat for them, that's part of why they pay for RHEL instead of going with a free system. I don't think it's appropriate to allow a handful of people who might still be on a version of python first released 10 years ago and last released 8 years ago to negatively impact everyone else. Note: I don't use RHEL so my understanding of it's life cycle is from reading their support page. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tseaver at palladion.com Fri Jul 26 22:47:09 2013 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 26 Jul 2013 16:47:09 -0400 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: <20130726153200.02097c3e@anarchist> References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> <20130726153200.02097c3e@anarchist> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/26/2013 03:32 PM, Barry Warsaw wrote: > If you're installing a script into a virtualenv, it's better to > rewrite the shebang to use the executable that was used to install > it. Exactly -- the script likely won't run at all outside the environment where its dependencies are installed. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlHy4E0ACgkQ+gerLs4ltQ57XgCg1JsuCk2zxOpJyWNoHv1V/0h/ Y1EAoND2clmyfdHjqVY/3p+PafWXM0Lp =uzfB -----END PGP SIGNATURE----- From pje at telecommunity.com Fri Jul 26 22:59:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 26 Jul 2013 16:59:08 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: <9EB510F2-AFFF-4AB2-A66D-C188D628AA69@stufft.io> References: <9EB510F2-AFFF-4AB2-A66D-C188D628AA69@stufft.io> Message-ID: On Fri, Jul 26, 2013 at 3:14 PM, Donald Stufft wrote: > Does the hashlib backport I added to > setuptools 0.9 for Python 2.4 work on 2.3? It's a pure python > implementation of hashlib. Ah, didn't know about that! I can't imagine what problems there would be; not much changed in 2.4 that can't be emulated in 2.3. Anyway, I'll have a look. Thanks! > I don't have a Python 2.3 available to attempt to test. 
To be honest I've > never even used Python 2.3. Heh. Noob. ;-) (j/k) 2.3 is basically 2.4 minus decorators, generator expressions, various builtins and stdlib features. Unless you used set types, reversed(), or various other odds and ends, I should be able to backport it. > [stuff about RHEL support] If there's a 2.4 hashlib backport, that addresses my concerns just fine. If I need to, I'll backport it to setuptools 0.6. Thanks. From donald at stufft.io Fri Jul 26 23:06:36 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 26 Jul 2013 17:06:36 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: <9EB510F2-AFFF-4AB2-A66D-C188D628AA69@stufft.io> Message-ID: <54958F0F-A686-464C-9B6C-D22B44754C0D@stufft.io> On Jul 26, 2013, at 4:59 PM, PJ Eby wrote: > On Fri, Jul 26, 2013 at 3:14 PM, Donald Stufft wrote: >> Does the hashlib backport I added to >> setuptools 0.9 for Python 2.4 work on 2.3? It's a pure python >> implementation of hashlib. > > Ah, didn't know about that! I can't imagine what problems there would > be; not much changed in 2.4 that can't be emulated in 2.3. > > Anyway, I'll have a look. Thanks! Here's the relevant commits in the new setuptools stuff: https://bitbucket.org/pypa/setuptools/commits/330b628f38c9380c95a818e65fd56812cbc854c4 https://bitbucket.org/pypa/setuptools/commits/b1d4e48beebdcc3cf7cb06fae4c4005a85dfc9bc https://bitbucket.org/pypa/setuptools/commits/12dd4b89148a225856a060cbee1137fc4cf79736 The implementations are taken from PyPy so they are made to work on Python 2.7, but they worked just fine on 2.4 after removing a single b"". > > >> I don't have a Python 2.3 available to attempt to test. To be honest I've >> never even used Python 2.3. > > Heh. Noob. ;-) (j/k) :) I was still in high school when the last 2.3 was released :/ > > 2.3 is basically 2.4 minus decorators, generator expressions, various > builtins and stdlib features. Unless you used set types, reversed(), > or various other odds and ends, I should be able to backport it. Great! > > >> [stuff about RHEL support] > > If there's a 2.4 hashlib backport, that addresses my concerns just > fine. If I need to, I'll backport it to setuptools 0.6. Thanks. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From zooko at zooko.com Sat Jul 27 02:55:37 2013 From: zooko at zooko.com (zooko) Date: Sat, 27 Jul 2013 04:55:37 +0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: <20130727005536.GA15510@zooko.com> On Fri, Jul 26, 2013 at 12:25:36PM -0400, Donald Stufft wrote: > PyPI has historically used MD5 in order to verify the downloads. However MD5 is severely broken and is generally regarded as something that should be migrated away from ASAP. From speaking with a number of cryptographers they've more or less said that the major reason they believe that MD5 hasn't had a published pre-image attack is just because it's so broken that most researchers have moved on to newer hashes. Who said that? That contradicts my beliefs. Thanks! 
Regards,

Zooko

From donald at stufft.io Sat Jul 27 03:29:06 2013
From: donald at stufft.io (Donald Stufft)
Date: Fri, 26 Jul 2013 21:29:06 -0400
Subject: [Distutils] Migrating Hashes from MD5 to SHA256
In-Reply-To: <20130727005536.GA15510@zooko.com>
References: <20130727005536.GA15510@zooko.com>
Message-ID: <8F97F5E9-0DA6-4128-99FF-3B824637F528@stufft.io>

On Jul 26, 2013, at 8:55 PM, zooko wrote:

> On Fri, Jul 26, 2013 at 12:25:36PM -0400, Donald Stufft wrote:
>> PyPI has historically used MD5 in order to verify the downloads. However
>> MD5 is severely broken and is generally regarded as something that should
>> be migrated away from ASAP. From speaking with a number of cryptographers
>> they've more or less said that the major reason they believe that MD5
>> hasn't had a published pre-image attack is just because it's so broken
>> that most researchers have moved on to newer hashes.
>
> Who said that? That contradicts my beliefs.
>

It's possible I misunderstood the exact implications of what they were saying. I am not a cryptographer and it was a month or two ago we spoke. It was stressed to me that PyPI should be moving off of MD5.

I do believe however that we don't know for sure whether MD5 is going to have a practical pre-image attack tomorrow, or whether it will last another 10 years. All security systems are fallible, so they are generally designed with margins of security to leave time to migrate. The safety margins on MD5 have long since gone, so by continuing to use it we are ignoring prudence (especially at a fairly ideal time, where we are transitioning from unverified HTTPS/HTTP to HTTPS, so we do not need to regard backwards compatibility as highly).

As far as I am aware these attacks tend to come all of a sudden and without warning. I would much rather have already migrated to something that still has its safety margins than be caught with our proverbial pants down and need to scramble *if* an attack is discovered.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 841 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From ncoghlan at gmail.com Sat Jul 27 06:58:06 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 27 Jul 2013 14:58:06 +1000
Subject: [Distutils] Migrating Hashes from MD5 to SHA256
In-Reply-To: 
References: <9EB510F2-AFFF-4AB2-A66D-C188D628AA69@stufft.io>
Message-ID: 

On 27 July 2013 06:59, PJ Eby wrote:
> On Fri, Jul 26, 2013 at 3:14 PM, Donald Stufft wrote:
>> Does the hashlib backport I added to
>> setuptools 0.9 for Python 2.4 work on 2.3? It's a pure python
>> implementation of hashlib.
>
> Ah, didn't know about that! I can't imagine what problems there would
> be; not much changed in 2.4 that can't be emulated in 2.3.
>
> Anyway, I'll have a look. Thanks!

FWIW, I expect the intersection of "running RHEL 4" and "downloading software directly from PyPI" to be a vanishingly small subset of humanity - anybody conservative enough to be running a version of RHEL that old is going to have *very* strict rules about how software gets onto their production servers and is also likely to be using something far more recent to talk to the outside world.

Cheers,
Nick.
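The 2.3/2.4 compatibility question a few messages up comes down to the usual fallback-import pattern; here is a sketch of the general shape, not the actual setuptools backport:

try:
    import hashlib                 # Python 2.5 and later
except ImportError:
    hashlib = None
    import md5 as _md5             # the only stdlib digests before 2.5
    import sha as _sha

def new_hash(name="sha256"):
    if hashlib is not None:
        return hashlib.new(name)
    if name == "md5":
        return _md5.new()
    if name == "sha1":
        return _sha.new()
    # Anything stronger needs a bundled pure-Python implementation,
    # which is what the backport being discussed provides.
    raise ValueError("no %s support on this Python" % name)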
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From erik.bernoth at gmail.com Sat Jul 27 10:12:30 2013 From: erik.bernoth at gmail.com (Erik Bernoth) Date: Sat, 27 Jul 2013 10:12:30 +0200 Subject: [Distutils] Ubuntu says Virtualenv, Pip and Setuptools "untrusted" Message-ID: Hi everybody, did you know that Ubuntu 13.10 (or maybe Debian?) declares those packages as untrusted and asks you twice, if you really want to install them? Is there anything that can be done about that? Best Erik From solipsis at pitrou.net Sat Jul 27 13:12:03 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 27 Jul 2013 11:12:03 +0000 (UTC) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 References: Message-ID: Donald Stufft stufft.io> writes: > Most people will use the setuptools bundled with virtualenv which has > only ever bundled 0.6 or 0.9. Why do you think that? According to PyPI: - virtualenv: 4664 downloads in the last day 34512 downloads in the last week - setuptools: 23901 downloads in the last day 197351 downloads in the last week Certainly setuptools use seems much more widespread than virtualenv use. Regards Antoine. From donald at stufft.io Sat Jul 27 18:43:53 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 27 Jul 2013 12:43:53 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: <53E5C0E3-D60A-4980-824C-BA6B54E3F7B6@stufft.io> On Jul 27, 2013, at 7:12 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >> Most people will use the setuptools bundled with virtualenv which has >> only ever bundled 0.6 or 0.9. > > Why do you think that? According to PyPI: > - virtualenv: > 4664 downloads in the last day > 34512 downloads in the last week > - setuptools: > 23901 downloads in the last day > 197351 downloads in the last week > > Certainly setuptools use seems much more widespread than virtualenv use. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Comparing download numbers isn't generally useful. People almost never install virtualenv more than once per machine. There are a myriad of reasons why they might directly install setuptools. It's impossible to know for sure how it'll be gotten but my gut is that most people use whatever is default inside of virtualenv because that's how almost everyone i've seen who uses virtualenv does it unless they have special needs. However the 0.9 line have almost as many downloads alone as the 0.7 and 0.8 lines combined. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From alexjeffburke at gmail.com Sat Jul 27 18:21:21 2013 From: alexjeffburke at gmail.com (Alex Burke) Date: Sat, 27 Jul 2013 18:21:21 +0200 Subject: [Distutils] Shebang lines, /usr/bin/python, and PEP394 In-Reply-To: References: <20130725160446.GL11402@unaka.lan> <0632878B-A8E4-41FC-8E15-6B425E6EA500@underboss.org> Message-ID: On 26 July 2013 13:58, Nick Coghlan wrote: > On 26 July 2013 21:31, Alex Burke wrote: >> On 26 July 2013 00:37, Philip Jenvey wrote: >>> -----BEGIN PGP SIGNED MESSAGE----- >>> Hash: SHA512 >>> >>> >>> On Jul 25, 2013, at 9:04 AM, Toshio Kuratomi wrote: >>> >>>> Over on python-dev we're talking about Linux Distributions switching from >>>> python2 to python3, what steps they need to take and in what order. One of >>>> the things that's come up [1]_ is that a very early step in the process is making >>>> sure that shebang lines use /usr/bin/python2 or /usr/bin/python3 as noted in >>>> PEP394 [2]_. Faced with the prospect of patching a whole bunch of scripts >>>> in the distribution, I'm wondering what distutils, distlib, setuptools, etc >>>> do with shebang lines. >>>> * Do they rewrite shebang lines? >>> >>> distutils, distlib and setuptools all do. >> >> Hi, >> >> It was interesting that discussion came up on python-dev but I admit >> to being surprised by the suggestion the shebang lines may need to be >> rewritten in end user code. >> >> This may be a callous over-simplification but if #!python is rewritten >> by the python packaging infrastructure, would it not be changed for >> python2/python3 as appropriate at installation time? Thus a python 2 >> package (whatever it is named) would be generated by calling a python2 >> executable + setuptools while the same is true for v3 except using >> python3. The result is then packaged by rpm/dpkg. >> >> Keen to understand why it can't work this way if that's the case. > > It actually occurs to me that the following example (showing how > symlinks affect "sys.executable") illustrates both the problem and the > solution for cases where users are relying on generated script > wrappers or the shebang line rewriting in wheel: > > $ ln -s /usr/bin/python foo > $ ./foo -c "import sys; print sys.executable" > /home/ncoghlan/foo > > So long as the distro build systems are updated to invoke setup.py > through an appropriately versioned symlink (rather than through > /usr/bin/python), setuptools et al should all automatically do the > right thing when generating script wrappers. That's pretty much exactly the mechanism I had in mind. > Not everybody uses generated script wrappers, though - if there is a > hardcoded "/usr/bin/env python" or "/usr/bin/python" in a shebang > line, the Python build tools won't touch it. There's also a whole lot > of software that isn't packaged at all - it's sitting around in user > and admin home directories, or maybe in a shared directory on a file > server or even in a private source control repo. > > Published software is actually the vanishingly small tip of a very > large iceberg, especially for languages like Python that handle > scripting use cases fairly well and are thus popular for personal > automation tasks amongst developers and system administrators. Hmm, that's a very good point. I guess I'd been considering packaged software or at least things that installed through a distribution's package manager. 
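A rough sketch of the kind of shebang rewriting being discussed, i.e. baking a concrete interpreter path into an installed script's first line; this illustrates the idea only and is not the actual distutils/setuptools implementation:

    import sys

    def rewrite_shebang(script_path, interpreter=sys.executable):
        """Point a script's '#!...python...' line at a specific interpreter."""
        with open(script_path) as f:
            lines = f.readlines()
        if lines and lines[0].startswith('#!') and 'python' in lines[0]:
            lines[0] = '#!%s\n' % interpreter
            with open(script_path, 'w') as f:
                f.writelines(lines)

    # A distro build run through a versioned symlink would bake that interpreter
    # in, e.g. rewrite_shebang('build/scripts/foo', '/usr/bin/python2').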
That being said, if this is a sound approach I reckon the advice issued to packagers could be always use the shebang line if updating software that gets packages, and otherwise any other arbitrary 'env python' just defers to the top level 'python' symlink which perhaps is best decided by the system administrator themselves. Re the python-dev discussion does this actually act in favour of a python2/python3 one of which is chosen to be active? > Cheers, > Nick. Btw one more thought sprang to mind - may be entirely unfeasible but the un-packaged software case made me think it could be interesting to have a pip 'installscript' or python -m setuptools.installscipt (the second example for illustration purposes only!) that you could point at an arbitrary script; it places it in the bin directory and does a shebang swap. Thanks, Alex. From solipsis at pitrou.net Sat Jul 27 19:25:51 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 27 Jul 2013 17:25:51 +0000 (UTC) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 References: <53E5C0E3-D60A-4980-824C-BA6B54E3F7B6@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > On Jul 27, 2013, at 7:12 AM, Antoine Pitrou pitrou.net> wrote: > > > Donald Stufft stufft.io> writes: > >> Most people will use the setuptools bundled with virtualenv which has > >> only ever bundled 0.6 or 0.9. > > > > Why do you think that? According to PyPI: > > - virtualenv: > > 4664 downloads in the last day > > 34512 downloads in the last week > > - setuptools: > > 23901 downloads in the last day > > 197351 downloads in the last week > > > > Certainly setuptools use seems much more widespread than virtualenv use. > > > > Regards > > > > Antoine. > > > > Comparing download numbers isn't generally useful. People > almost never install virtualenv more than once per machine. There > are a myriad of reasons why they might directly install setuptools. If your assertion were true ("Most people will use the setuptools bundled with virtualenv"), then the setuptools download numbers would be minuscule. The actual numbers show it to be untrue. Whether or not they are directly comparable isn't important: the orders are magnitude are sufficiently eloquent. > It's impossible to know for sure how it'll be gotten but my gut is > that most people use whatever is default inside of virtualenv > because that's how almost everyone i've seen who uses virtualenv > does it unless they have special needs. Perhaps you don't realize that many people don't use virtualenv at all, so they simply cannot use virtualenv's setuptools, either. Which is perfectly compatible with those download numbers, unlike your original assertion. Regards Antoine. From donald at stufft.io Sat Jul 27 19:32:25 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 27 Jul 2013 13:32:25 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: <53E5C0E3-D60A-4980-824C-BA6B54E3F7B6@stufft.io> Message-ID: <91B7A23E-6E0E-4F64-9FE3-CE2E071E04D8@stufft.io> On Jul 27, 2013, at 1:25 PM, Antoine Pitrou wrote: > If your assertion were true ("Most people will use the setuptools > bundled with virtualenv"), then the setuptools download numbers > would be minuscule. The actual numbers show it to be untrue. > Whether or not they are directly comparable isn't important: the > orders are magnitude are sufficiently eloquent. 
> >> It's impossible to know for sure how it'll be gotten but my gut is >> that most people use whatever is default inside of virtualenv >> because that's how almost everyone i've seen who uses virtualenv >> does it unless they have special needs. > > Perhaps you don't realize that many people don't use virtualenv at all, > so they simply cannot use virtualenv's setuptools, either. Which is > perfectly compatible with those download numbers, unlike your original > assertion. > I don't think any claim can be made about the relative use between the two tools by looking at the download counts because their typical use is generally very different. But sure you're right whatever does that make you feel better? Are you trying to claim we shouldn't move to a stronger hash? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Sat Jul 27 19:47:52 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 27 Jul 2013 17:47:52 +0000 (UTC) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 References: <53E5C0E3-D60A-4980-824C-BA6B54E3F7B6@stufft.io> <91B7A23E-6E0E-4F64-9FE3-CE2E071E04D8@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > I don't think any claim can be made about the relative use between the > two tools by looking at the download counts because their typical use is > generally very different. I'll try to phrase it more clearly then: I am not *comparing* their relative use. I am simply pointing out that an extremely large number of people install setuptools separately. Whether or not they also use virtualenv is completely irrelevent (but, of course, chances are they don't: otherwise, as you say, they'd use the bundled versions). > But sure you're right whatever does that make > you feel better? Now, please calm down... > Are you trying to claim we shouldn't move to a stronger hash? No, I'm just saying the possibility of regressions isn't as small as you think based on a misinterpretation of how people actually get setuptools installed (many of them get it directly from PyPI). But, yes, we should of course move to something better than md5, and ideally make the format flexible enough to avoid further breakage when switching hashes again. Regards Antoine. From donald at stufft.io Sun Jul 28 12:55:20 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 28 Jul 2013 06:55:20 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: Message-ID: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> Ok so given that: - There's a readably available solution for Python 2.4+ with the likelihood being that most users are either using it or using an older version which doesn't support SSL. - The number of folks likely to be on Python 2.3 and wanting to install things from PyPI is likely to be very small. - There's possibly a future solution for Python 2.3 - The safety margins for MD5 are gone and cryptographers heavily suggest moving away from it. - A revised scheme will break backwards compatibility with the versions of the tooling that do support a stronger hash. I'm going to go ahead and make this change unless someone comes out and contests moving PyPI to SHA256. 
I'll give it a bit to make sure no one does have an issue with the move. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Jul 28 14:23:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 28 Jul 2013 22:23:58 +1000 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> Message-ID: On 28 July 2013 20:55, Donald Stufft wrote: > Ok so given that: > > - There's a readably available solution for Python 2.4+ with the likelihood > being that most users are either using it or using an older version which > doesn't support SSL. > - The number of folks likely to be on Python 2.3 and wanting to install things > from PyPI is likely to be very small. > - There's possibly a future solution for Python 2.3 > - The safety margins for MD5 are gone and cryptographers heavily suggest > moving away from it. > - A revised scheme will break backwards compatibility with the versions of > the tooling that do support a stronger hash. > > I'm going to go ahead and make this change unless someone comes out and > contests moving PyPI to SHA256. I'll give it a bit to make sure no one does > have an issue with the move. +1, this sounds like a good way forward for the existing PyPI interfaces. We can do something better once the focus shifts from "make the status quo not broken" to making the next generation interfaces a reality (PEP 426 et al). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sun Jul 28 14:31:55 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 28 Jul 2013 12:31:55 +0000 (UTC) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I'm going to go ahead and make this change unless someone comes out and > contests moving PyPI to SHA256. I'll give it a bit to make sure no one does > have an issue with the move. Your proposal is a little light on specification, unless I've missed it. For example: * How exactly will download URLs change? One would assume they'd have a fragment of 'sha256=...', where they currently have 'md5=...', but can you confirm this? * PyPI's XML-RPC API provides MD5 hashes in result dictionaries using a key 'md5_digest'. How will these result dictionaries change under your proposal? * PyPI's web interface has actions such as 'show_md5', will these stop working? (By actions, I mean query strings such as ':action=show_md5'.) Will new actions be added? I'm not familiar with the change process for PyPI - what is the workflow? For example, are patches posted for review? 
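For reference, the '#md5=...' fragments Vinay refers to are consumed on the client side by logic roughly like the sketch below (an illustration of the general approach, not pip's or setuptools' actual code); it accepts any '#hashname=hexdigest' fragment whose algorithm hashlib recognises:

    import hashlib
    import urlparse   # urllib.parse on Python 3

    def matches_fragment(url, data):
        """Check downloaded bytes against a '#hashname=hexdigest' URL fragment."""
        fragment = urlparse.urlparse(url).fragment
        if not fragment or '=' not in fragment:
            return True                     # nothing to verify against
        name, expected = fragment.split('=', 1)
        try:
            digest = hashlib.new(name, data).hexdigest()
        except ValueError:
            return True                     # unrecognised algorithm: skipped, not fatal
        return digest == expected.lower()

Note that the unrecognised-algorithm branch fails open, which is the failure mode described later in the thread: an installer that does not know the hash simply skips verification rather than refusing to install.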
Regards, Vinay Sajip From donald at stufft.io Sun Jul 28 19:30:45 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 28 Jul 2013 13:30:45 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> Message-ID: <5970AF4F-25B3-4DF2-BEA9-345CA759B1FD@stufft.io> On Jul 28, 2013, at 8:31 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I'm going to go ahead and make this change unless someone comes out and >> contests moving PyPI to SHA256. I'll give it a bit to make sure no one does >> have an issue with the move. > > Your proposal is a little light on specification, unless I've missed it. For > example: > > * How exactly will download URLs change? One would assume they'd have a > fragment of 'sha256=...', where they currently have 'md5=...', but can you > confirm this? Yes they will change to have #sha256=?. instead of #md5=... > > * PyPI's XML-RPC API provides MD5 hashes in result dictionaries using a key > 'md5_digest'. How will these result dictionaries change under your > proposal? Here we are a little more flexible. I can leave the md5_digest key there and simply add a sha256_digest key. > > * PyPI's web interface has actions such as 'show_md5', will these stop > working? (By actions, I mean query strings such as ':action=show_md5'.) > Will new actions be added? Again more flexible. I can simply add a show_sha256 action. > > I'm not familiar with the change process for PyPI - what is the workflow? > For example, are patches posted for review? Typically it's left up to us. We often just work and deploy changes without any review process but we can (and I have) get reviews before hand. The biggest problem with Reviews is PyPI is a very messy codebase with very few people who understand it so the pool of developers qualified to review the code is very small. On the warehouse side of things I don't develop directly on master everything comes through pull requests and while there's no formal review process A number of folks have been checking my PR's and making comments as they deemed fit. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Sun Jul 28 21:13:30 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 28 Jul 2013 20:13:30 +0100 (BST) Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: <5970AF4F-25B3-4DF2-BEA9-345CA759B1FD@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <5970AF4F-25B3-4DF2-BEA9-345CA759B1FD@stufft.io> Message-ID: <1375038810.59932.YahooMailNeo@web171401.mail.ir2.yahoo.com> > From: Donald Stufft > Yes they will change to have #sha256=?. instead of #md5=... [snip] > Here we are a little more flexible. I can leave the md5_digest key there and > simply add a sha256_digest key. [snip] > Again more flexible. I can simply add a show_sha256 action. [snip] > Typically it's left up to us. We often just work and deploy changes without [snip] Thanks for the update, it'll help me to update distlib when the time comes. 
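On the XML-RPC side, a client reading those keys might look like the following sketch, assuming the standard endpoint; 'md5_digest' is what release_urls returns today, while 'sha256_digest' is only the additional key proposed above and 'example-project' is a placeholder:

    import xmlrpclib   # xmlrpc.client on Python 3

    client = xmlrpclib.ServerProxy('https://pypi.python.org/pypi')
    digests = {}
    for info in client.release_urls('example-project', '1.0'):
        # Prefer the proposed sha256_digest key if the server ever adds it,
        # falling back to the md5_digest key that exists today.
        digests[info['url']] = info.get('sha256_digest') or info.get('md5_digest')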
Regards, Vinay Sajip From donald at stufft.io Sun Jul 28 21:16:21 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 28 Jul 2013 15:16:21 -0400 Subject: [Distutils] Migrating Hashes from MD5 to SHA256 In-Reply-To: <1375038810.59932.YahooMailNeo@web171401.mail.ir2.yahoo.com> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <5970AF4F-25B3-4DF2-BEA9-345CA759B1FD@stufft.io> <1375038810.59932.YahooMailNeo@web171401.mail.ir2.yahoo.com> Message-ID: <95BD2BA7-4B1E-4273-AD2C-45A15115A2E2@stufft.io> On Jul 28, 2013, at 3:13 PM, Vinay Sajip wrote: >> From: Donald Stufft > >> Yes they will change to have #sha256=?. instead of #md5=... > [snip] >> Here we are a little more flexible. I can leave the md5_digest key there and >> simply add a sha256_digest key. > [snip] >> Again more flexible. I can simply add a show_sha256 action. > [snip] >> Typically it's left up to us. We often just work and deploy changes without > [snip] > > Thanks for the update, it'll help me to update distlib when the time comes. > > Regards, > > Vinay Sajip > I sugggest distlib take the same approach as pip and setuptools. Allow any #hashname=hash so that it will function with whatever hash people use. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Mon Jul 29 11:38:32 2013 From: holger at merlinux.eu (holger krekel) Date: Mon, 29 Jul 2013 09:38:32 +0000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> Message-ID: <20130729093832.GI32284@merlinux.eu> Hi Nick, Donald, all, On Sun, Jul 28, 2013 at 22:23 +1000, Nick Coghlan wrote: > On 28 July 2013 20:55, Donald Stufft wrote: > > Ok so given that: > > > > - There's a readably available solution for Python 2.4+ with the likelihood > > being that most users are either using it or using an older version which > > doesn't support SSL. > > - The number of folks likely to be on Python 2.3 and wanting to install things > > from PyPI is likely to be very small. > > - There's possibly a future solution for Python 2.3 > > - The safety margins for MD5 are gone and cryptographers heavily suggest > > moving away from it. Please detail the actual attack scenario wrt PyPI/installer processes. > > - A revised scheme will break backwards compatibility with the versions of > > the tooling that do support a stronger hash. > > > > > I'm going to go ahead and make this change unless someone comes out and > > contests moving PyPI to SHA256. I'll give it a bit to make sure no one does > > have an issue with the move. Actually, i strongly object further backward-incompatible changes. Please (generally) find a way to introduce improvements without breaking existing installation processes at the same time. For example, in this case pip/easy_install could indicate to PYPI what kind of hashes it accepts (through a header or query param or whatever) and PyPI could serve it but we'd default to MD5 for now if nothing else was requested. Please also consider the PEP438 vetted registration of externals+hashses in this context. Once things and tools are working nicely we can switch to serving a non-MD5 hash as default after a sufficient grace period. 
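To make the negotiation idea concrete, the client half of such a scheme might look like the sketch below. This is entirely hypothetical: the header name is invented for illustration, PyPI implements no such behaviour, and the idea is argued against later in the thread on CDN-caching grounds.

    import urllib2   # urllib.request on Python 3

    def fetch_simple_index(project, accepted_hashes=('sha256', 'md5')):
        """Fetch a /simple/ page while advertising which hashes we can verify."""
        request = urllib2.Request('https://pypi.python.org/simple/%s/' % project)
        # Hypothetical header: a server implementing this proposal could use it
        # to decide whether to emit #sha256=... or #md5=... link fragments.
        request.add_header('X-PyPI-Accept-Hashes', ','.join(accepted_hashes))
        return urllib2.urlopen(request).read()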
> +1, this sounds like a good way forward for the existing PyPI interfaces. > > We can do something better once the focus shifts from "make the status > quo not broken" to making the next generation interfaces a reality > (PEP 426 et al). Just switching the hashes is likely to break things. Do you want to take a bet on that? Just switching to SSL broke things. Just switching pip-1.4 broke things. Just switching to newer setuptools broke things. For me the fall-out of all the well-intentioned changes has been frustrating lately. For example, one of my projects documented how to generate a supervisor-controled deployment and suddenly "pip install supervisor" does not work anymore because pip-1.4 doesn't consider any existing supervisor distributions as non-dev. You can argue that's supervisor's fault but in the end it doesn't matter: What used to work and was documented is now broken, users complaining or simply turning away, with a re-inforced view of "python packaging sucks" or "this tool sucks, the docs are wrong". There are >1000 times more people using python packaging tools than there are ones following the recent distutils-sig/Donald/Nick verdicts or tool release notes. And heck, even for me being quite involved in all this stuff, having written a related PEP, it's really hard to see what's coming and to prevent breakage for my tools's/projects's users. Therefore, to be honest, at this point i am almost afraid of what is released/deployed next in PyPI/Installer land. I'd rather like to get into a "can't wait to get the next release" mode. best, holger From ncoghlan at gmail.com Mon Jul 29 13:58:07 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 29 Jul 2013 21:58:07 +1000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: <20130729093832.GI32284@merlinux.eu> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> Message-ID: On 29 July 2013 19:38, holger krekel wrote: > Hi Nick, Donald, all, > > On Sun, Jul 28, 2013 at 22:23 +1000, Nick Coghlan wrote: >> On 28 July 2013 20:55, Donald Stufft wrote: >> > Ok so given that: >> > >> > - There's a readably available solution for Python 2.4+ with the likelihood >> > being that most users are either using it or using an older version which >> > doesn't support SSL. >> > - The number of folks likely to be on Python 2.3 and wanting to install things >> > from PyPI is likely to be very small. >> > - There's possibly a future solution for Python 2.3 >> > - The safety margins for MD5 are gone and cryptographers heavily suggest >> > moving away from it. > > Please detail the actual attack scenario wrt PyPI/installer processes. > >> > - A revised scheme will break backwards compatibility with the versions of >> > the tooling that do support a stronger hash. >> >> > >> > I'm going to go ahead and make this change unless someone comes out and >> > contests moving PyPI to SHA256. I'll give it a bit to make sure no one does >> > have an issue with the move. > > Actually, i strongly object further backward-incompatible changes. > > Please (generally) find a way to introduce improvements without breaking > existing installation processes at the same time. > > For example, in this case pip/easy_install could indicate to PYPI what > kind of hashes it accepts (through a header or query param or whatever) > and PyPI could serve it but we'd default to MD5 for now if nothing else > was requested. 
Please also consider the PEP438 vetted registration of > externals+hashses in this context. Once things and tools are working > nicely we can switch to serving a non-MD5 hash as default after a > sufficient grace period. Having the improved hashes be opt-in (by the client) strikes me as a reasonable request. Yes, this means nothing will actually happen until easy_install/pip are updated to request those improved hashes and those versions see significant uptake, but as Holger says, we need to ensure we put sufficient effort into smoothing out the roller coaster ride that has been the recent experience of packaging system users. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Mon Jul 29 16:30:18 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 10:30:18 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> Message-ID: <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> On Jul 29, 2013, at 7:58 AM, Nick Coghlan wrote: >> >> Actually, i strongly object further backward-incompatible changes. >> >> Please (generally) find a way to introduce improvements without breaking >> existing installation processes at the same time. >> >> For example, in this case pip/easy_install could indicate to PYPI what >> kind of hashes it accepts (through a header or query param or whatever) >> and PyPI could serve it but we'd default to MD5 for now if nothing else >> was requested. Please also consider the PEP438 vetted registration of >> externals+hashses in this context. Once things and tools are working >> nicely we can switch to serving a non-MD5 hash as default after a >> sufficient grace period. > > Having the improved hashes be opt-in (by the client) strikes me as a > reasonable request. > > Yes, this means nothing will actually happen until easy_install/pip > are updated to request those improved hashes and those versions see > significant uptake, but as Holger says, we need to ensure we put > sufficient effort into smoothing out the roller coaster ride that has > been the recent experience of packaging system users. There's basically zero way for this to fail closed in any of the current installers. The failure mode is unverified packages not uninstallable packages. I am not aware of a single installer that mandates the use of a hash. Crate.io has never used md5 hashes and has always used sha256 and I've never received a single report of an installer being unable to install because of it, which is exactly what I expect. Indicating via Header or query param pretty much destroys the effectiveness of the CDN's cache in order to fix a problem with a theoretical (as far as I am aware) installer that requires a md5 hash (and thus has never worked for any of the externally hosted packages. Additionally it doesn't account for external urls which need to be registered *with* a hash. As far as available attacks, *today* an author could upload a package that has been created so as to have a sister package with malicious code that has the same hash allowing them to have a malicious package they can substitute at will without the hashes changing at all. 
In the future it's possible that a pre-image attack on MD5 will be found, and then we'll be dealing with this problem when we've lost all verification on external urls, instead of now when we have time to get external urls to switch. So by all means I will not migrate us if that's what you want. Old versions of the installation clients stick around far too long for the opt-in mechanism to be much use. The point of switching was to cover the existing clients as well, to narrow the gap until a new API is developed. Hopefully no one is relying on these hashes to prevent an author from maliciously injecting a sister package, and hopefully the strength of MD5 holds and no new research is found that blows its pre-image attack resistance to pieces. As far as not breaking things goes, backwards compatibility has been an important concern; however, progress forward *requires* breakage. It is required because there is a vast array of available ways to have your package and/or hosting configured, many of them horrible practices which need to be killed. Killing them requires breaking backwards compatibility. You cite SSL. Yes, SSL has caused a number of errors for people, mostly related to older versions of OpenSSL being unable to use an SSL certificate, but downloading code you're going to execute over plaintext isn't just bad, it's downright negligent on the part of the toolchain. So that was a required breakage. You also mention pip 1.4 *not* installing pre-releases by default. Yes, that broke a handful of packages, Supervisor and pytz being the major ones that I've seen anyone complain about. It was also known ahead of time that this was a backwards incompatible change (and it was noted as such in the release notes). It wasn't a surprising outcome. The pip developers "drew a line in the sand", to quote Paul Moore, and I expect pip 1.5, where PEP438 becomes the default, to break even more packages from people who just haven't bothered to change their practices until it's forced on them. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From donald at stufft.io Mon Jul 29 16:51:42 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 10:51:42 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: <20130729093832.GI32284@merlinux.eu> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> Message-ID: <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> On Jul 29, 2013, at 5:38 AM, holger krekel wrote: > There are >1000 times more people using python packaging tools than > there are ones following the recent distutils-sig/Donald/Nick verdicts > or tool release notes. And heck, even for me being quite involved in all > this stuff, having written a related PEP, it's really hard to see what's > coming and to prevent breakage for my tools's/projects's users. > Therefore, to be honest, at this point i am almost afraid of what is > released/deployed next in PyPI/Installer land. I'd rather like to > get into a "can't wait to get the next release" mode. I'd also like to point out this: I think you got bitten by one particular backward incompatibility and are "Once bitten twice shy".
I had a number of people asking me nearly every day how long until pip 1.4 was coming out, because they were excited for the new changes, including the improved pre-release handling. I've personally gotten or seen more complaints over the naming of a variable in the config file than I have over any changes we've made. The runner-up to that is the fallout from switching to requiring verified SSL. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From barry at python.org Mon Jul 29 17:38:02 2013 From: barry at python.org (Barry Warsaw) Date: Mon, 29 Jul 2013 11:38:02 -0400 Subject: [Distutils] Ubuntu says Virtualenv, Pip and Setuptools "untrusted" References: Message-ID: <20130729113802.2c7376f1@anarchist> On Jul 27, 2013, at 10:12 AM, Erik Bernoth wrote: >did you know that Ubuntu 13.10 (or maybe Debian?) declares those >packages as untrusted and asks you twice, if you really want to >install them? Is there anything that can be done about that? Um, what? Please provide details. What commands are you running and what do you see? -Barry From tseaver at palladion.com Mon Jul 29 19:01:21 2013 From: tseaver at palladion.com (Tres Seaver) Date: Mon, 29 Jul 2013 13:01:21 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 07/29/2013 10:51 AM, Donald Stufft wrote: > I've personally gotten or seen more complaints over the naming of a > variable in the config file than I have over any changes we've made. > The runner-up to that is the fallout from switching to requiring > verified SSL. The past few months have generated a *lot* of teeth-gnashing / hair-pulling, especially among "downstream" developers (those unlikely to be reading this SIG): - - HTTPS-only PyPI - - Distribute / setuptools merge, e.g. cratering folks who use a distro-managed 'python-distribute' package. - - Pip's new backward-incompatible "final releases by default". I think we are going to be in a much better place for all that, but let's not deny the pain involved for *everybody* in getting there. Tres.
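For what it's worth, the pre-release change behind Tres's third bullet does have an escape hatch: pip 1.4 added a --pre flag, so a project whose only uploads are alphas or betas (as was reportedly the case for supervisor at the time) can still be installed explicitly with:

    pip install --pre supervisor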
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlH2n+EACgkQ+gerLs4ltQ6megCeN8V8IN4OT6rWfZg1tw1GtUaO 2jwAoLRXzLyjjMbiLrcfqLG0Ge1NQnMq =7ufm -----END PGP SIGNATURE----- From p.f.moore at gmail.com Mon Jul 29 19:18:00 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 29 Jul 2013 18:18:00 +0100 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On 29 July 2013 18:01, Tres Seaver wrote: > I think we are going to be in a much better place for all that, but let's > not deny the pain involved for *everybody* in getting there. > Agreed. I think the goal is valid, and the approach is fine. But we need to do a better job in managing people's expectations. I'd like to see a roadmap of the various changes planned, as well as some sort of explanation of how each of the changes contributes towards the end goal. Personally, none of the changes have detrimentally affected me, so my opinion is largely theoretical. But even I am getting a little frustrated by the constant claims that "what we have now is insecure and broken, and must be fixed ASAP". The reality is that everything's more or less OK - there's a risk, certainly, and it could be severe, but many, many people are routinely using PyPI all the time without issues. And telling them that they are wrong to do so, or that they are being extremely naive over security, isn't helping. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Mon Jul 29 19:28:51 2013 From: holger at merlinux.eu (holger krekel) Date: Mon, 29 Jul 2013 17:28:51 +0000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> Message-ID: <20130729172851.GJ32284@merlinux.eu> On Mon, Jul 29, 2013 at 10:30 -0400, Donald Stufft wrote: > On Jul 29, 2013, at 7:58 AM, Nick Coghlan wrote: > >> > >> Actually, i strongly object further backward-incompatible changes. > >> > >> Please (generally) find a way to introduce improvements without breaking > >> existing installation processes at the same time. > >> > >> For example, in this case pip/easy_install could indicate to PYPI what > >> kind of hashes it accepts (through a header or query param or whatever) > >> and PyPI could serve it but we'd default to MD5 for now if nothing else > >> was requested. Please also consider the PEP438 vetted registration of > >> externals+hashses in this context. Once things and tools are working > >> nicely we can switch to serving a non-MD5 hash as default after a > >> sufficient grace period. > > > > Having the improved hashes be opt-in (by the client) strikes me as a > > reasonable request. 
> > > > Yes, this means nothing will actually happen until easy_install/pip > > are updated to request those improved hashes and those versions see > > significant uptake, but as Holger says, we need to ensure we put > > sufficient effort into smoothing out the roller coaster ride that has > > been the recent experience of packaging system users. > > There's basically zero way for this to fail closed in any of the > current installers. The failure mode is unverified packages not > uninstallable packages. I am not aware of a single installer that > mandates the use of a hash. Crate.io has never used md5 hashes and has > always used sha256 and I've never received a single report of an > installer being unable to install because of it, which is exactly what > I expect. So you think the worst case for forcing SHA256 hashes is that installers who don't yet support sha256 hashes would just ignore it (and thus wouldn't do hash verification)? > Indicating via Header or query param pretty much destroys the > effectiveness of the CDN's cache in order to fix a problem with a > theoretical (as far as I am aware) installer that requires a md5 hash > (and thus has never worked for any of the externally hosted packages. > Additionally it doesn't account for external urls which need to be > registered *with* a hash. Currently there is no hash-type enforcement on registered externals, is there? > As far as available attacks, *today* an author could upload a package > that has been created so as to have a sister package with malicious > code that has the same hash allowing them to have a malicious package > they can substitute at will without the hashes changing at all. In the > future it's possible that a pre-image attack on MD5 will be found and > then we'll be dealing with this problem then when we've lost all > verification on external urls instead of now when we have time to get > external urls to switch. So the attack is a malicious author or someone else modifying an external release file (either directly on the server or via MITM) while maintaining the pre-registered MD5 hash, right? I am currently merely trying to understand more exactly what you are worried about. best, holger > So by all means I will not migrate us if that's what you want. Old > versions of the installation clients stick around far to long for the > opt in mechanism to be much use. The point of switching was to cover > the existing clients as well to narrow the gap until a new API is > developed. > > Hopefully no one is relying on these hashes to prevent an > author from maliciously injecting a sister package and hopefully the > strength of MD5 holds and no new research is found that blows it's > pre-image attack residence to pieces. > > As far as not breaking things goes backwards compatibility has been an > important concern however progress forward *requires* breakage. It is > required because there is a vast array of available ways to have your > package and/or hosting configured many of them horrible practices > which need to be killed. Killing them requires breaking backwards > compatibility. You cite SSL, yes SSL has caused a number of errors for > people mostly related to older versions of OpenSSL being unable to use > a SSL certificate but downloading code you're going to execute over > plaintext isn't just bad, it's downright negligent on the part of the > toolchain. So that was a required breakage. > > You also mention the pip 1.4 *not* installing pre-releases by default. 
> Yes that broke a handful of packages Supervisor and pytz being the > major ones that I've seen anyone complain about. It was also known > ahead of time that this was a backwards incompatible change (and it > was noted as such in the release notes). It wasn't a surprising > outcome. The pip developers "drew a line in the sand" to quote Paul > Moore and I expect pip 1.5 where PEP438 becomes default to break even > more packages from people who just haven't bothered to change their > practices until it's forced on them. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From qwcode at gmail.com Mon Jul 29 19:50:07 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 29 Jul 2013 10:50:07 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Mon, Jul 29, 2013 at 10:18 AM, Paul Moore wrote: > On 29 July 2013 18:01, Tres Seaver wrote: >> I think we are going to be in a much better place for all that, but let's >> not deny the pain involved for *everybody* in getting there. >> > Agreed. I think the goal is valid, and the approach is fine. But we need > to do a better job in managing people's expectations. I'd like to see a > roadmap of the various changes planned, as well as some sort of explanation > of how each of the changes contributes towards the end goal. > Nick said he's planning a roadmap on the new User Guide (in August I think after vacation), so that's something. From donald at stufft.io Mon Jul 29 20:15:02 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 14:15:02 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 29, 2013, at 1:18 PM, Paul Moore wrote: > But even I am getting a little frustrated by the constant claims that "what we have now is insecure and broken, and must be fixed ASAP". The reality is that everything's more or less OK - there's a risk, certainly, and it could be severe, but many, many people are routinely using PyPI all the time without issues. And telling them that they are wrong to do so, or that they are being extremely naive over security, isn't helping. This shows a fundamental misunderstanding of how security issues present themselves. Of course things just work for people, because security issues are not like regular bugs. They don't negatively affect you until someone attempts to use them to attack you. Keep your house's front door unlocked and your valuables will remain inside _until_ someone decides to try and rob you. If you wait until people are affected by a security vulnerability then the horse has already fled the pasture and you're just attempting to close the gate after the fact. I'm pushing hard on doing what we can to secure the infrastructure because this shit matters. Everything is more or less OK only because no one has decided that people installing from PyPI are a valuable enough target to go after. Prior to this push, that was basically the only thing preventing someone from attacking people: that they had never decided to bother to.
We are better, it's somewhat harder now, but in many areas that's still the only thing keeping people safe. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From donald at stufft.io Mon Jul 29 20:17:59 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 14:17:59 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 29, 2013, at 1:01 PM, Tres Seaver wrote: > Signed PGP part > On 07/29/2013 10:51 AM, Donald Stufft wrote: > > I've personally gotten or seen more complaints over the naming of a > > variable in the config file than I have over any changes we've made. > > The runner-up to that is the fallout from switching to requiring > > verified SSL. > > The past few months have generated a *lot* of teeth-gnashing / > hair-pulling, especially among "downstream" developers (those unlikely to > be reading this SIG): > > - - HTTPS-only PyPI I've seen very few people complain about this. It's probably the largest issue in either of the projects I'm involved with. > > > - - Distribute / setuptools merge, e.g. cratering folks who use a > distro-managed 'python-distribute' package. This is the biggest issue. I wasn't involved, and it could have been handled better, sure. > > > - - Pip's new backward-incompatible "final releases by default". Again this negatively affects very few packages, and it positively affects a number of people who were bitten by the fact that pip does this. SQLAlchemy was one of the ones that were, offhand. People don't realize the tools *didn't* do this and oftentimes inadvertently caused pain for the downstream developers. > > > I think we are going to be in a much better place for all that, but let's > not deny the pain involved for *everybody* in getting there. > > > Tres. > - -- > =================================================================== > Tres Seaver +1 540-429-0999 tseaver at palladion.com > Palladion Software "Excellence by Design" http://palladion.com > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jim at zope.com Mon Jul 29 20:21:03 2013 From: jim at zope.com (Jim Fulton) Date: Mon, 29 Jul 2013 14:21:03 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Mon, Jul 29, 2013 at 2:15 PM, Donald Stufft wrote: > >> On Jul 29, 2013, at 1:18 PM, Paul Moore wrote: >> >> But even I am getting a little frustrated by the constant claims that "what >> we have now is insecure and broken, and must be fixed ASAP". The reality is >> that everything's more or less OK - there's a risk, certainly, and it could >> be severe, but many, many people are routinely using PyPI all the time >> without issues. And telling them that they are wrong to do so, or that they >> are being extremely naive over security, isn't helping. > This shows a fundamental misunderstanding of how security issues present > themselves. Of course things just work for people because security issues > are not like regular bugs. They don't negatively affect you until someone > attempts to use them to attack you. Keep your front door unlocked on your > house and your valuables will remain inside _until_ someone decides to try > and rob you. If you wait until people are affected by a security > vulnerability then the horse has already fled the pasture and you're just > attempting to close the gate after the fact. > > I'm pushing hard on doing what we can to secure the infrastructure because > this shit matters. Everything is more or less OK, only because no one has > decided that people installing from PyPI are not a valuable enough target to > go after. Prior to this push that was basically the only thing prevent > someone from attacking people, that they had never decided to bother too. We > are better, it's somewhat harder now, but in many areas that's still the > only thing keeping people safe. Well said. Security is a pain, but I'm really glad and appreciate that you and others are paying attention to it. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From donald at stufft.io Mon Jul 29 20:30:42 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 14:30:42 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: <20130729172851.GJ32284@merlinux.eu> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> <20130729172851.GJ32284@merlinux.eu> Message-ID: On Jul 29, 2013, at 1:28 PM, holger krekel wrote: > On Mon, Jul 29, 2013 at 10:30 -0400, Donald Stufft wrote: >> On Jul 29, 2013, at 7:58 AM, Nick Coghlan wrote: >>>> >>>> Actually, i strongly object further backward-incompatible changes. >>>> >>>> Please (generally) find a way to introduce improvements without breaking >>>> existing installation processes at the same time. >>>> >>>> For example, in this case pip/easy_install could indicate to PYPI what >>>> kind of hashes it accepts (through a header or query param or whatever) >>>> and PyPI could serve it but we'd default to MD5 for now if nothing else >>>> was requested. Please also consider the PEP438 vetted registration of >>>> externals+hashses in this context. 
Once things and tools are working >>>> nicely we can switch to serving a non-MD5 hash as default after a >>>> sufficient grace period. >>> >>> Having the improved hashes be opt-in (by the client) strikes me as a >>> reasonable request. >>> >>> Yes, this means nothing will actually happen until easy_install/pip >>> are updated to request those improved hashes and those versions see >>> significant uptake, but as Holger says, we need to ensure we put >>> sufficient effort into smoothing out the roller coaster ride that has >>> been the recent experience of packaging system users. >> >> There's basically zero way for this to fail closed in any of the >> current installers. The failure mode is unverified packages not >> uninstallable packages. I am not aware of a single installer that >> mandates the use of a hash. Crate.io has never used md5 hashes and has >> always used sha256 and I've never received a single report of an >> installer being unable to install because of it, which is exactly what >> I expect. > > So you think the worst case for forcing SHA256 hashes is that installers > who don't yet support sha256 hashes would just ignore it (and thus wouldn't > do hash verification)? Yes. I've been using sha256 on simple.crate.io for over a year and zero people have ever stated it didn't work for them. This also fits in with my knowledge of how setuptools and pip works. I know zc.buildout less well but to my knowledge they simple allow setuptools to handle the downloading. > >> Indicating via Header or query param pretty much destroys the >> effectiveness of the CDN's cache in order to fix a problem with a >> theoretical (as far as I am aware) installer that requires a md5 hash >> (and thus has never worked for any of the externally hosted packages. >> Additionally it doesn't account for external urls which need to be >> registered *with* a hash. > > Currently there is no hash-type enforcement on registered externals, is there? Registered externals must register with a md5 hash, scraped links and download urls etc do not require it because they are indirectly added. There is no verification by PyPI that the given hash matches the package at the end of the url. > >> As far as available attacks, *today* an author could upload a package >> that has been created so as to have a sister package with malicious >> code that has the same hash allowing them to have a malicious package >> they can substitute at will without the hashes changing at all. In the >> future it's possible that a pre-image attack on MD5 will be found and >> then we'll be dealing with this problem then when we've lost all >> verification on external urls instead of now when we have time to get >> external urls to switch. > > So the attack is a malicious author or someone else modifying an external > release file (either directly on the server or via MITM) while maintaining > the pre-registered MD5 hash, right? > > I am currently merely trying to understand more exactly what > you are worried about. For any hash function there are two major types of attacks you worry about. The first is a collision attack which is the ability to generate two arbitrary inputs that hash to the same thing. The second is a pre-image attack (either first or second pre-image) which essentially means given an already existing input generate another input that hashes to the same thing. So basically the difference between the two attacks are wether you have a hash you're trying to match or if you just need two inputs that hash to the same thing. 
MD5 is currently broken for collision resistance. This means that an author can generate two packages that hash to the same thing. Once package might be benign and one might be malicious. Given those two packages people using the md5 hashes will not be able to differentiate between the benign and the malicous package. MD5 is currently *not* broken for pre image resistance. This means that as of right now someone can not take an already existing package on PyPI and generate a second package that hashes to the same thing (besides via brute forcing). So right now, collision attacks possible == yes, pre image attacks possible == no. However designing secure systems is a practice of building in safety margins. If someone, for instance, can break 5 rounds of a function you use 15 rounds. With cryptographic hashes collision attacks are easier than pre-image attacks, so if you have two functions, one that has a collision attack and one that doesn't you can generally assume that the one without a collision attack is stronger and has a longer shelf life. So the problem with MD5 (ignoring for a second the fact that a collision attack can be bad on it's own) is that there are no more safety nets. If it gets broken for a pre-image then there's not likely to be any warning (we've already *had* the warning). It will just be broken and we will be scrambling to update things then (and hopefully nobody gets attacked in the meantime). And I do say *if* because as zooko pointed out, it's not a guarantee that MD5 will ever lose it's pre-image resistance (which just means that brute forcing is the quickest way to generate a hash). > > best, > holger > > >> So by all means I will not migrate us if that's what you want. Old >> versions of the installation clients stick around far to long for the >> opt in mechanism to be much use. The point of switching was to cover >> the existing clients as well to narrow the gap until a new API is >> developed. >> >> Hopefully no one is relying on these hashes to prevent an >> author from maliciously injecting a sister package and hopefully the >> strength of MD5 holds and no new research is found that blows it's >> pre-image attack residence to pieces. >> >> As far as not breaking things goes backwards compatibility has been an >> important concern however progress forward *requires* breakage. It is >> required because there is a vast array of available ways to have your >> package and/or hosting configured many of them horrible practices >> which need to be killed. Killing them requires breaking backwards >> compatibility. You cite SSL, yes SSL has caused a number of errors for >> people mostly related to older versions of OpenSSL being unable to use >> a SSL certificate but downloading code you're going to execute over >> plaintext isn't just bad, it's downright negligent on the part of the >> toolchain. So that was a required breakage. >> >> You also mention the pip 1.4 *not* installing pre-releases by default. >> Yes that broke a handful of packages Supervisor and pytz being the >> major ones that I've seen anyone complain about. It was also known >> ahead of time that this was a backwards incompatible change (and it >> was noted as such in the release notes). It wasn't a surprising >> outcome. The pip developers "drew a line in the sand" to quote Paul >> Moore and I expect pip 1.5 where PEP438 becomes default to break even >> more packages from people who just haven't bothered to change their >> practices until it's forced on them. 
>> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From zooko at zooko.com Mon Jul 29 20:57:18 2013 From: zooko at zooko.com (zooko) Date: Mon, 29 Jul 2013 22:57:18 +0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: <20130729185717.GC15510@zooko.com> Folks: dstufft already correctly explained that relying on MD5 allows for "doppelganger" packages -- two (or more) packages which are engineered at birth to have the same hash as each other. It isn't clear to me that this can be used for evil, but it isn't obvious that it *can't* be used for evil, either. So it would certainly be helpful to upgrade the hash function so that we don't have to think about that anymore, but in my opinion it is not an emergency. I'd like to push back on the other risk, that someone might figure out how to make MD5 second-pre-images. I don't think this is a risk that we need to urgently address, and I've written a short note explaining why. This note is incomplete, badly edited, has not been peer-reviewed, and is not ready for publication, but I thought it might help folks evaluate how urgent it is to upgrade from MD5, so here it is. Regards, Zooko -------------- next part -------------- ??? ======================================================================================== the historical success of collision attacks does not imply a danger of pre-image attacks ======================================================================================== by Zooko Wilcox-O'Hearn, LeastAuthority.com, 2013-07-29 Summary ======= Most of the secure hash functions designed before 2007 have turned out to be vulnerable to collision attacks. This includes the widely-used secure hash functions MD5 and SHA-1. Newer hash functions, including those designed since the SHA-3 project began in 2007, do not appear to be vulnerable to collision attacks, but since they are newer, there has also been less time for cryptanalysts to find flaws in them. A widely cited web page shows a graphical representation of the history of various hash functions being broken: http://valerieaurora.org/hash.html The advice on that web page is that if you are relying on your hash function for collision-resistance, then you should be prepared to migrate to a new hash function every few years. .. insert table based on the main table below, but showing only vulnerability to collisions instead of pre-images What about pre-image attacks or second pre-image attacks? Some systems require their secure hash function to resist pre-image and 2nd-pre-image attacks, but do not require their secure hash function to resist collision attacks. For such systems, an interesting question is whether many or few of the secure hash functions published in the last 23 years???since the advent of modern cryptography???have turned out to be vulnerable to pre-image or 2nd-pre-image attacks. 
The answer is that with a single exception, no secure hash function has ever been shown to be vulnerable to (2nd-)pre-image attacks. That single exception is the second-oldest hash function ever designed, Snefru, which was designed in 1989 and 1990, and which turned out to be vulnerable to differential cryptanalysis. Differential cryptanalysis was discovered (by the open research community) in 1990. Preliminaries ============= The input to a secure hash function is called the *pre-image* and the output is called the *image*. A hash function *collision* is two different inputs (pre-images) which result in the same output. A hash function is *collision-resistant* if an adversary can't find any collision. A hash function is *pre-image resistant* if, given an output (image), an adversary can't find any input (pre-image) which results in that output. A hash function is *second pre-image resistant* if, given a pre-image, an adversary can't find any *other* pre-image which results in the same image. Motivation ========== My motivation for caring about pre-image resistance and 2nd-pre-image resistance is that it is possible to build digital signatures from secure hash functions. Some of these *hash-based digital signatures* have been proven to be secure (resistant to forgery) as long as the hash function they are built out of has second-pre-image resistance, e.g. Buchmann-2011_. Such a hash-based digital signature would fail if its underlying hash function failed at second-pre-image resistance, but this is the *only* way that it could be broken???any attack which was able to forge digital signatures against such a scheme would *have* to violate the second-pre-image resistance of the underlying hash function. One reason that hash-based digital signatures might be useful is that if an attacker has a sufficiently large quantum computer, they could forge digital signatures that rely on factorization or discrete log, such as RSA, DSA, ECDSA, or Ed25519. There is not any reason to think that such a quantum computer would enable them to break secure hash functions, however. Survey of attacks on hash functions =================================== We know that practical, widely-deployed secure hash functions have turned out to be vulnerable to collision attacks. MD5 is the most widely used secure hash function which turns out to admit collisions, but many other secure hash functions have likewise been found vulnerable to collisions. For some uses, such as the hash-based digital signatures mentioned above, it would be harmless to be able to generate collisions, but harmful to be able to generate pre-images or 2nd-pre-images [*]. For such systems, the relevant question is not whether hash function designs have historically been revealed to be vulnerable to collisions but instead whether they've been revealed to be vulnerable to (2nd-)pre-images. Here are the results of my search for pre-image or 2nd-pre-image attacks on widely-studied hash functions. *The bottom line is that no widely-studied hash function has ever succumbed to a (2nd-)pre-image attack except for Snefru.* Snefru was designed by Merkle in 1989 and proved vulnerable to differential cryptanalysis by Biham and Shamir in 1992. (Differential cryptanalysis had just been discovered in the open world by Biham and Shamir in 1990.) No other widely-studied hash function has been shown to be vulnerable to a practical (2nd-)pre-image attack. 
Furthermore, no other widely-studied hash function has been shown to be vulnerable to a (2nd-)pre-image attack that is more efficient than brute force, even if we were to count attacks too expensive for anyone to actually implement! The history of (2nd-)pre-image attacks is therefore quite different from the history of collision attacks. Most hash functions have been proven vulnerable to collision attacks more efficient than brute force, and even to collision attacks that could be implemented in practice. Here I omit papers with attacks which are less efficient than other published attacks (of course). I omit attacks on reduced-round variants of hash functions (there are a lot of those). I omit attacks which have unrealistic requirements, like attacks which require which require 2??????? precomputation or which require the messages to be 2?????? blocks long. "---" in a row means that I haven't found any published attack that fit these criteria. ??? *bit*: the number of bits of output ??? *cpb*: cycles per byte on ebash's amd64-sandy0_ for 4096 bytes, worst quartile ??? *comp*: approximate computation required for the attack ??? *mem*: approximate memory required for the attack ??? *??? brute*: does the attack comp * the attack mem cost less than brute force search (see Bernstein-2005_) ??? *possible?*: could the attack actually be implemented +-------------+------+-----+-----+--------------------------------------------------------+--------------------------------------------------------+ | hash | year | bit | cpb | (2nd-)preimage attacks | collision attacks | | | | | +------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | | | | | comp | mem | ??? brute | possible? | reference | comp | mem | ??? brute | possible? | reference | +=============+======+=====+=====+======+=====+=========+============+====================+======+=====+=========+============+====================+ | MD2 | 1989 | 128 | 638 | 2????? | 2????? | no | --- | Knudsen-2007_ | 2????? | 2????? | no | --- | Knudsen-2007_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | Snefru_ -2 | 1990 | 128 | ? | 2????? | 2??? | yes | yes | Biham-2008_ | 2???? | 2??? | yes | yes | Biham-2008_ | +-------------+ | +-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+ | | -3 | | | ? | 2?????? | 2??? | yes | yes | | 2????? | 2??? | yes | yes | | +-------------+ | +-----+------+-----+---------+------------+ +------+-----+---------+------------+ | | -4 | | | ? | ???2?????? | 2??? | yes | no | | ???2?????? | 2??? | yes | yes | | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | MD4 | 1990 | 128 | 4 | 2?????? | 2????? | no | --- | Zhong-2010_ | 2?? | 2??? | yes | yes | Naito-2006_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | RIPEMD | 1990 | 128 | ? | --- | --- | --- | --- | --- | 2????? | 2??? | yes | yes | Wang-2005a_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | MD5 | 1991 | 128 | 6 | 2?????? | 2?????? | no | --- | Sasaki-2009_ | 2????? | 2??? 
| yes | yes | Stevens-2007_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | HAVAL-256-3 | 1992 | 256 | ? | 2??????? | 2?????? | no | --- | Sasaki-2008_ | 2????? | 2??? | yes | yes | `Van_Rompay-2003`_ | +-------------+ + +-----+------+-----+---------+------------+ +------+-----+---------+------------+--------------------+ | -4 | | | ? | 2???????? | 2?????? | no | --- | | 2????? | 2??? | yes | yes | Yu-2006_ | +-------------+ + +-----+------+-----+---------+------------+ +------+-----+---------+------------+ + | -5 | | | ? | 2???????? | 2?????? | no | --- | | 2?????? | 2??? | yes | no | | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | SHA-0 | 1993 | 160 | ? | 2???????? | 2??? | no | --- | --- | 2????? | 2??? | yes | yes | Manuel-2008_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | GOST | 1994 | 256 | ? | 2??????? | 2?????? | no | --- | Mendel-2008_ | 2???????? | 2??? | yes | no | Mendel-2008_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | SHA-1 | 1995 | 160 | 8 | --- | --- | --- | --- | | 2?????? | 2??? | yes | yes | Wang-2005b_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | RIPEMD-160 | 1996 | 160 | 16 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | Tiger | 1996 | 192 | 7 | 2???????? | 2??? | no | --- | Guo-2010_ | --- | --- | --- | --- | --- | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | Whirlpool | 2000 | 512 | 25 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | Panama | 2000 | 512 | ? | --- | --- | --- | --- | --- | 2??? | 2??? | yes | yes | Daemen-2007_ | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | SHA-256 | 2002 | 256 | 18 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ | RadioGat??n | 2006 | 256 | ? | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +-------------+------+-----+-----+------+-----+---------+------------+--------------------+------+-----+---------+------------+--------------------+ .. _Daemen-2007: http://radiogatun.noekeon.org/panama/PanamaAttack.pdf Then we get to the SHA-3 competition, which began in 2007. None of the 14 second-round candidates for SHA-3 have been shown to be vulnerable to any attack better than brute force, either to find collisions or to find (2nd-)pre-images [SHA-3-Zoo_ ]. Here are rows for the five SHA-3 finalists. 
The cpb is potentially relevant because a hash function which used *too* few cycles per byte would fail at its goals, including its goal of (2nd-)pre-image-resistance. Of course we don't know how few is too few. The designers of these hash functions were probably choosing a number of cycles per byte to make their function competitive with SHA-256, and with other SHA-3 candidates, while not accidentally losing collision-resistance like so many of their predecessors had. +------------+------+-----+-----+---------------------------------------------------+---------------------------------------------------+ | hash | year | bit | cpb | (2nd-)preimage attacks | collision attacks | | | | | +------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ | | | | | comp | mem | ??? brute | possible? | reference | comp | mem | ??? brute | possible? | reference | +============+======+=====+=====+======+=====+=========+============+===============+======+=====+=========+============+===============+ | Skein | 2010 | 256 | 7 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +------------+------+-----+-----+------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ | Blake | 2010 | 256 | 8 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +------------+------+-----+-----+------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ | Gr??stl | 2010 | 256 | 10 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +------------+------+-----+-----+------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ | Keccak | 2010 | 256 | 12 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +------------+------+-----+-----+------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ | JH | 2010 | 256 | 14 | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | +------------+------+-----+-----+------+-----+---------+------------+---------------+------+-----+---------+------------+---------------+ If you are aware of any other papers which fit these criteria, or if you spot an error in this document, please write to me: zooko at LeastAuthority.com. [*] Be very careful about this???the ability to generate collisions can be surprisingly harmful to some systems. This is one of those subtleties of cryptographic engineering which frequently trip up engineers who are not cryptography experts. The famous "Internet Root Cert" attack [Sotirov-2009_ ] is an example of engineers incorrectly thinking that their system was not threatened by collisions absent 2nd-pre-images. Thanks to Daira Hopwood for comments on this note. .. _Snefru: http://www.springerlink.com/content/t10683l407363633/ .. _Van_Rompay-2003: http://academic.research.microsoft.com/Publication/676305/cryptanalysis-of-3pass-haval .. _Bernstein-2005: http://cr.yp.to/papers.html#bruteforce .. _Wang-2005a: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.4759 .. _Wang-2005b: http://people.csail.mit.edu/yiqun/SHA1AttackProceedingVersion.pdf .. _Yu-2006: http://www.springerlink.com/content/0n9018738x721090/ .. _Naito-2006: http://www.springerlink.com/content/v6526284mu858v37/ .. _Stevens-2007: http://marc-stevens.nl/research/papers/MTh%20Marc%20Stevens%20-%20On%20Collisions%20for%20MD5.pdf .. _Knudsen-2007: http://www.springerlink.com/content/qn746388035614r1/ .. 
_Biham-2008: http://www.springerlink.com/content/208q118x13181g32/ .. _Sasaki-2008: http://www.springerlink.com/content/d382324nl16251pp/ .. _Mendel-2008: http://www.cosic.esat.kuleuven.be/publications/article-2091.pdf .. _Manuel-2008: http://www.springerlink.com/content/3810jp9730369045/ .. _Leurent-2008: http://www.di.ens.fr/~leurent/files/MD4_FSE08.pdf .. _Sasaki-2009: http://www.springerlink.com/content/d7pm142n58853467/ .. _Sotirov-2009: http://www.win.tue.nl/hashclash/rogue-ca/ .. _Zhong-2010: http://eprint.iacr.org/2010/583 .. _Guo-2010: http://eprint.iacr.org/2010/016 .. _Buchmann-2011: http://eprint.iacr.org/2011/484 .. _SHA-3-Zoo: http://ehash.iaik.tugraz.at/wiki/The_SHA-3_Zoo .. _amd64-sandy0: http://bench.cr.yp.to/results-hash.html#amd64-sandy0 From donald at stufft.io Mon Jul 29 21:14:56 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 15:14:56 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <20130729185717.GC15510@zooko.com> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <20130729185717.GC15510@zooko.com> Message-ID: <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> On Jul 29, 2013, at 2:57 PM, zooko wrote: > I'd like to push back on the other risk, that someone might figure out how to > make MD5 second-pre-images. I don't think this is a risk that we need to > urgently address, and I've written a short note explaining why. This note is > incomplete, badly edited, has not been peer-reviewed, and is not ready for > publication, but I thought it might help folks evaluate how urgent it is to > upgrade from MD5, so here it is. I don't think it's urgent to fix it, but I think it's a good security hardening effort with very little downside and very little chance of regression. However, as I said if Holger, or anyone else, has a concern about the affects of adding this bit of security hardening to give us a safety net again then I simply won't do it in the simple API. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Mon Jul 29 21:15:59 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 29 Jul 2013 12:15:59 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: > > - - Distribute / setuptools merge, e.g. cratering folks who use a > distro-managed 'python-distribute' package. > > > This is the biggest issue. I wasn't involved and it could have been > handled better sure. > what issue are we talking about exactly? I'm aware of the "Import setuptools" problem during upgrades from distribute to setuptools: https://github.com/pypa/pip/issues/1064 pip-1.4 does prevent that now (released last week) but you specifically mentioned the linux distro name, "python-distribute"? 
I'm guessing maybe you're concerned with the scenario of how upgrading some project (that depends on setuptools/distribute) forces an unintended upgrade of distribute on a system where distribute is system-managed. Is there a link to a tracker issue for that somewhere? 3 solutions to ponder: 1) unreleasing the distribute-0.7.3 wrapper, and making any upgrades from distribute to setuptools manual. 2) changing pip's -U logic to not recursively upgrade satisfied dependencies, but that's a *big* change (it's a long story, but there are tentative plans in 1.5 for changing it) 3) in a patch release, include a special case for distribute to *not* upgrade if satisfied (when distribute is a dependency in the requirement set). Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Mon Jul 29 21:28:20 2013 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 29 Jul 2013 12:28:20 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: > >> - - Distribute / setuptools merge, e.g. cratering folks who use a >> distro-managed 'python-distribute' package. >> >> >> This is the biggest issue. I wasn't involved and it could have been >> handled better sure. >> > > what issue are we talking about exactly? > > I'm aware of the "Import setuptools" problem during upgrades from > distribute to setuptools: https://github.com/pypa/pip/issues/1064 > pip-1.4 does prevent that now (released last week) > > but you specifically mentioned the linux distro name, "python-distribute"? > I'm guessing maybe you're concerned with the scenario of how upgrading > some project (that depends on setuptools/distribute) forces an unintended > upgrade of distribute on a system where distribute is system-managed. > Is there a link to a tracker issue for that somewhere? > but on 2nd thought, this is not a new problem. this would always been happening, just not involving a transition to setuptools. so I imagine, your concern was just the "Import" error? -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Jul 29 22:33:11 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 29 Jul 2013 16:33:11 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <20130729185717.GC15510@zooko.com> <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> Message-ID: <819BBFC8-D6D8-4C1B-B2E3-1C2B35F32A94@stufft.io> On Jul 29, 2013, at 3:14 PM, Donald Stufft wrote: > > On Jul 29, 2013, at 2:57 PM, zooko wrote: > >> I'd like to push back on the other risk, that someone might figure out how to >> make MD5 second-pre-images. I don't think this is a risk that we need to >> urgently address, and I've written a short note explaining why. This note is >> incomplete, badly edited, has not been peer-reviewed, and is not ready for >> publication, but I thought it might help folks evaluate how urgent it is to >> upgrade from MD5, so here it is. > > I don't think it's urgent to fix it, but I think it's a good security hardening effort > with very little downside and very little chance of regression. 
However, as I > said if Holger, or anyone else, has a concern about the affects of adding this > bit of security hardening to give us a safety net again then I simply won't do > it in the simple API. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Somewhat relevant to the question at hand: http://valerieaurora.org/hash.html (Yes it lists sha-2 as weakened, which it is. However sha-3 isn't widespread enough for us :( ) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 841 bytes Desc: Message signed with OpenPGP using GPGMail URL: From zooko at zooko.com Mon Jul 29 23:04:48 2013 From: zooko at zooko.com (zooko) Date: Tue, 30 Jul 2013 01:04:48 +0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <819BBFC8-D6D8-4C1B-B2E3-1C2B35F32A94@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <20130729185717.GC15510@zooko.com> <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> <819BBFC8-D6D8-4C1B-B2E3-1C2B35F32A94@stufft.io> Message-ID: <20130729210448.GD15510@zooko.com> On Mon, Jul 29, 2013 at 04:33:11PM -0400, Donald Stufft wrote: > > Somewhat relevant to the question at hand: http://valerieaurora.org/hash.html Heh heh. That page is cited in my note. My note is kind of a response to that page, showing that the history of pre-image attacks is completely different than the history of collision attacks. > (Yes it lists sha-2 as weakened, which it is. However sha-3 isn't widespread > enough for us :( ) There's no reason to worry about SHA-2. In my opinion, there's no particular reason to think that it will be made vulnerable to collisions within the next decade! By the way, I'm a co-author of a secure hash function -- BLAKE2: https://blake2.net/ The intent of BLAKE2 is to be as secure as SHA-3 but as fast as MD5. Not only is it as fast as MD5, but it also has an optional parallel mode that can go 4 or 8 times as fast as MD5 by using 4 or 8 CPU cores! It is currently being adopted for uses like data deduplication, archiving, and distributed filesystems, where the data can be large (terabytes or more), and the performance of the hash function is a bottleneck. I don't think Python packaging has such needs, and BLAKE2 is not a standard like SHA-2 and SHA-3, so I'm not pushing to add support for it. 
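For readers who want to try it today, BLAKE2 is easy to reach from Python: hashlib has exposed blake2b and blake2s since Python 3.6 (at the time of this thread a third-party module would have been needed instead). A minimal sketch, hashing a file the same way an installer would for verification; the file name is a placeholder:

    import hashlib

    def blake2_digest(path):
        # BLAKE2b with a 32-byte (256-bit) digest; built into hashlib on Python 3.6+.
        h = hashlib.blake2b(digest_size=32)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical usage:
    # print(blake2_digest("example-1.0.tar.gz"))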
Regards, Zooko From ronaldoussoren at mac.com Mon Jul 29 23:32:35 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 29 Jul 2013 23:32:35 +0200 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <819BBFC8-D6D8-4C1B-B2E3-1C2B35F32A94@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <20130729185717.GC15510@zooko.com> <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> <819BBFC8-D6D8-4C1B-B2E3-1C2B35F32A94@stufft.io> Message-ID: <0C1ABD3B-106E-4F2C-9834-9D7B925EF247@mac.com> On 29 Jul, 2013, at 22:33, Donald Stufft wrote: > > On Jul 29, 2013, at 3:14 PM, Donald Stufft wrote: > >> >> On Jul 29, 2013, at 2:57 PM, zooko wrote: >> >>> I'd like to push back on the other risk, that someone might figure out how to >>> make MD5 second-pre-images. I don't think this is a risk that we need to >>> urgently address, and I've written a short note explaining why. This note is >>> incomplete, badly edited, has not been peer-reviewed, and is not ready for >>> publication, but I thought it might help folks evaluate how urgent it is to >>> upgrade from MD5, so here it is. >> >> I don't think it's urgent to fix it, but I think it's a good security hardening effort >> with very little downside and very little chance of regression. However, as I >> said if Holger, or anyone else, has a concern about the affects of adding this >> bit of security hardening to give us a safety net again then I simply won't do >> it in the simple API. >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > Somewhat relevant to the question at hand: http://valerieaurora.org/hash.html > > (Yes it lists sha-2 as weakened, which it is. However sha-3 isn't widespread enough for us :( ) That SHA-3 isn't widespread yet is not a surprise, AFAIK it isn't even a standard yet :-). According to the standard will be finalized in Q2 2014. BTW. I agree that the MD5 checksums on PyPI will have to go some time, and it would be nice if the replacement scheme had a way to use multiple hashes to make it easier to switch to a hash in future. I know to little of the setuptools and pip implementations to have anything useful to add to the discussion about the timing for this. Ronald > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Tue Jul 30 01:43:44 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 30 Jul 2013 09:43:44 +1000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <20130729185717.GC15510@zooko.com> <8C5D67EF-7562-49D7-92B4-BE04A148F11C@stufft.io> Message-ID: On 30 Jul 2013 05:15, "Donald Stufft" wrote: > > > On Jul 29, 2013, at 2:57 PM, zooko wrote: > >> I'd like to push back on the other risk, that someone might figure out how to >> make MD5 second-pre-images. 
I don't think this is a risk that we need to >> urgently address, and I've written a short note explaining why. This note is >> incomplete, badly edited, has not been peer-reviewed, and is not ready for >> publication, but I thought it might help folks evaluate how urgent it is to >> upgrade from MD5, so here it is. > > > I don't think it's urgent to fix it, but I think it's a good security hardening effort > with very little downside and very little chance of regression. However, as I > said if Holger, or anyone else, has a concern about the affects of adding this > bit of security hardening to give us a safety net again then I simply won't do > it in the simple API. I'm thinking that may be the way to go - treat verified SSL as our final stop-gap for the simple API and focus on hardening the next generation APIs. This is more for social reasons than strictly technical ones. I think you're right this particular change is unlikely to break anything, but there are also enough genuinely essential changes needed that we should avoid unnecessary flux in other areas. In this case, I think the need for a pre-image attack that still produces a working download and an old installer that isn't using verified SSL but can check SHA256 hashes reduces the attack window to a point where I'm prepared to live with the use of MD5 as a known risk. Cheers, Nick. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Jul 30 07:41:19 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 05:41:19 +0000 (UTC) Subject: [Distutils] a plea for backward-compatibility / smooth transitions References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: Paul Moore gmail.com> writes: > > Personally, none of the changes have detrimentally affected me, so my > opinion is largely theoretical. But even I am getting a little frustrated > by the constant claims that "what we have now is insecure and broken, and > must be fixed ASAP". FWIW, +1. You may be paranoid, but not everyone has to be (or suffer the consequences of it). Security issues should be fixed without breaking things in a hassle (which is the policy we followed e.g. for the ssl module, or hash randomization). The whole python.org infrastructure is built on an OS kernel written by someone who thinks security issues are normal bugs. AFAIK there is no plan to switch to OpenBSD. Regards Antoine. From noah at coderanger.net Tue Jul 30 08:02:28 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 29 Jul 2013 23:02:28 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 29, 2013, at 10:41 PM, Antoine Pitrou wrote: > Paul Moore gmail.com> writes: >> >> Personally, none of the changes have detrimentally affected me, so my >> opinion is largely theoretical. But even I am getting a little frustrated >> by the constant claims that "what we have now is insecure and broken, and >> must be fixed ASAP". > > FWIW, +1. 
You may be paranoid, but not everyone has to be (or suffer the > consequences of it). Security issues should be fixed without breaking things > in a hassle (which is the policy we followed e.g. for the ssl module, or hash > randomization). You missed a key word "... when possible". If there is a problem we will fix it, when we can do that in a way that minimizes breakages we will do that. It's all just about cost-benefit, and when you are talking about "executing code downloaded from the internet" it becomes quite easy to see benefits outweighing costs even with pretty major UX changes. Not something we do lightly, but status quo does not win here, sorry. > > The whole python.org infrastructure is built on an OS kernel written by someone > who thinks security issues are normal bugs. AFAIK there is no plan to switch to > OpenBSD. This is news to me, we specifically run Ubuntu LTS because Canonical's security response team has a proven track record of handling issues. If you mean that Linus doesn't handle security issues well, then it is fortunate indeed that we don't actually use his software. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 30 08:08:36 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 02:08:36 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 30, 2013, at 1:41 AM, Antoine Pitrou wrote: > Paul Moore gmail.com> writes: >> >> Personally, none of the changes have detrimentally affected me, so my >> opinion is largely theoretical. But even I am getting a little frustrated >> by the constant claims that "what we have now is insecure and broken, and >> must be fixed ASAP". > > FWIW, +1. You may be paranoid, but not everyone has to be (or suffer the > consequences of it). Security issues should be fixed without breaking things > in a hassle (which is the policy we followed e.g. for the ssl module, or hash > randomization).
There's actually pretty strong evidence that shows the process of classifying bugs as security bugs is a harmful process and that all updates should be treated the same because it's often times not immediately obvious what the security implications are, even to security experts[1]. I'm sure your dig at the OS is supposed to be some sort of masterstroke about how we're not being as secure as possible anyways however I would contest that OpenBSD is actually more secure. It's major claim to fame is that they haven't had a vulnerably in the OpenBSD base system in "a heck of a long time". The problem is the OpenBSD base system is terribly small and that claim cannot be made once you include their packages. Further more at the last I checked OpenBSD does not provide (although this may have changed) and abilities to do MAC which means you're relying entirely on an attackers ability to *not* get in versus providing fail safes to contain an attack once it's happened. Infrastructure is not using MAC currently but I would love to get us to that point as well. [1] citeseerx.ist.psu.edu/viewdoc/download;jsessionid=7B6E224144709E99B7FAEBFC497621A1?doi=10.1.1.148.9757&rep=rep1&type=pdf ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 30 08:10:59 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 02:10:59 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: <07FDD041-F9FC-47DD-80AE-64A0FE0A553F@stufft.io> On Jul 30, 2013, at 2:02 AM, Noah Kantrowitz wrote: > > On Jul 29, 2013, at 10:41 PM, Antoine Pitrou wrote: > >> Paul Moore gmail.com> writes: >>> >>> Personally, none of the changes have detrimentally affected me, so my >>> opinion is largely theoretical. But even I am getting a little frustrated >>> by the constant claims that "what we have now is insecure and broken, and >>> must be fixed ASAP". >> >> FWIW, +1. You may be paranoid, but not everyone has to be (or suffer the >> consequences of it). Security issues should be fixed without breaking things >> in a hassle (which is the policy we followed e.g. for the ssl module, or hash >> randomization). > > You missed a key word "? when possible". If there is a problem we will fix it, when we can do that in a way that minimizes breakages we will do that. Its all just about cost-benefit, and when you are talking about "executing code downloaded from the internet" it becomes quite easy to see benefits outweighing costs even with pretty major UX changes. Not something we do lightly, but status quo does not win here, sorry. Basically said it better than I could :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 08:19:11 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 06:19:11 +0000 (UTC) Subject: [Distutils] a plea for backward-compatibility / smooth transitions References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: Noah Kantrowitz coderanger.net> writes: > > The whole python.org infrastructure is built on an OS kernel written by someone > > who thinks security issues are normal bugs. AFAIK there is no plan to switch to > > OpenBSD. > > This is news to me, we specifically run Ubuntu LTS because Canonical's security response team has a proven > track record of handling issues. If you mean that Linus doesn't handle security issues well, then it is > fortunate indeed that we don't actually use his software. Did you already forget what the discussion is about? Security/bugfix Ubuntu LTS updates don't break compatibility for the sake of hardening things, which is the whole point. (As for the idea that using "Canonical's kernel" amounts to not using "Linus' software", that's a rather unorthodox notion of authorship. It's very likely Canonical doesn't change more than 1% LOC in the kernel, so you're still bound to Linus' decisions for at least 99% of the code - and even probably for the remaining 1%, since Canonical's version won't be massively divergent.) Regards Antoine. From noah at coderanger.net Tue Jul 30 08:24:27 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 29 Jul 2013 23:24:27 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: <965577D9-C8EC-4D1D-ABB8-1414F2DE9A86@coderanger.net> On Jul 29, 2013, at 11:19 PM, Antoine Pitrou wrote: > Noah Kantrowitz coderanger.net> writes: >>> The whole python.org infrastructure is built on an OS kernel written by > someone >>> who thinks security issues are normal bugs. AFAIK there is no plan to > switch to >>> OpenBSD. >> >> This is news to me, we specifically run Ubuntu LTS because Canonical's > security response team has a proven >> track record of handling issues. If you mean that Linus doesn't handle > security issues well, then it is >> fortunate indeed that we don't actually use his software. > > Did you already forget what the discussion is about? > Security/bugfix Ubuntu LTS updates don't break compatibility for the sake of > hardening > things, which is the whole point. Again, speaking as the guy that has to clean up the mess when they do break compat, I promise you they do. Same deal, they only break compat when keeping compat would present a threat to users, which is quite often the case with security bugs. They are fortunately a bit further ahead of us on the long tail of finding problems, so this is far less frequent than it was in years past. We will get there too, but like I said, status quo is not a defense here, just strap in and hang on. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 08:28:27 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 06:28:27 +0000 (UTC) Subject: [Distutils] a plea for backward-compatibility / smooth transitions References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > I *will* advocate and push for breaking things where security is concerned because > regardless of if you care or not, a lot of people *do* care and the nature of the > beast is that you're only as strong as the weakest link. That's nice, but you're not alone here, so whatever you want to "push for" needn't always happen. > There's actually pretty strong evidence that > shows the process of classifying bugs as security bugs is a harmful process and that > all updates should be treated the same because it's often times not immediately > obvious what the security implications are, even to security experts[1]. Doesn't it contradict your own stance on the subject? ("This shows a fundamental misunderstanding of how security issues present themselves. Of course things just work for people because security issues are not like regular bugs" - which is a flawed argument btw. Many bugs have random or rare occurrences - not just security issues) > I'm sure your dig at the OS is supposed to be some sort of masterstroke about how > we're not being as secure as possible anyways however I would contest that > OpenBSD is actually more secure. WTF are you talking about? No it's not. I'm simply pointing out that, for some strange reason, you decided to trust an OS whose author has very different views on how to fix security issues than you have. Regards Antoine. From donald at stufft.io Tue Jul 30 08:38:32 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 02:38:32 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 30, 2013, at 2:19 AM, Antoine Pitrou wrote: > Noah Kantrowitz coderanger.net> writes: >>> The whole python.org infrastructure is built on an OS kernel written by > someone >>> who thinks security issues are normal bugs. AFAIK there is no plan to > switch to >>> OpenBSD. >> >> This is news to me, we specifically run Ubuntu LTS because Canonical's > security response team has a proven >> track record of handling issues. If you mean that Linus doesn't handle > security issues well, then it is >> fortunate indeed that we don't actually use his software. > > Did you already forget what the discussion is about? > Security/bugfix Ubuntu LTS updates don't break compatibility for the sake of > hardening > things, which is the whole point. Well for one PyPI doesn't have releases so there is no LTS or not LTS, it's just what's being served so there's no simple place to break backwards compatibility. As far as forgetting what's being discussed here then it sounds like you've apparently missed the fact I already conceded the change to MD5 and further more this thread was explicitly split off from the MD5 request because, as far as I can tell, Holger wanted to discuss the broader topic of compatibility in general and not just specific to this particular issue. 
> > (As for the idea that using "Canonical's kernel" amounts to not using "Linus' > software", that's a rather unorthodox notion of authorship. It's very likely > Canonical > doesn't change more than 1% LOC in the kernel, so you're still bound to Linus' > decisions for at least 99% of the code - and even probably for the remaining 1%, > since Canonical's version won't be massively divergent.) > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 30 08:43:32 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 02:43:32 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> On Jul 30, 2013, at 2:28 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >> >> I *will* advocate and push for breaking things where security is concerned > because >> regardless of if you care or not, a lot of people *do* care and the nature > of the >> beast is that you're only as strong as the weakest link. > > That's nice, but you're not alone here, so whatever you want to "push for" > needn't > always happen. I have zero qualms about releasing a full disclosure along with working exploits into the wild for a security vulnerability that people block me on. If I'm unable to rectify the problem I will make sure that everyone *knows* about the problem. > >> There's actually pretty strong evidence that >> shows the process of classifying bugs as security bugs is a harmful > process and that >> all updates should be treated the same because it's often times not > immediately >> obvious what the security implications are, even to security experts[1]. > > Doesn't it contradict your own stance on the subject? > > ("This shows a fundamental misunderstanding of how security issues present > themselves. Of course things just work for people because security issues > are not > like regular bugs" - which is a flawed argument btw. Many bugs have random or > rare occurrences - not just security issues) No? How you treat a bug and how they present themselves are not the same thing? Even a random occurrence will break for some percentage of people using the software some percentage of the time. If it didn't then it's unlikely anyone would notice it. Security vulnerabilities typically won't break until someone actively tries to break them. > >> I'm sure your dig at the OS is supposed to be some sort of masterstroke > about how >> we're not being as secure as possible anyways however I would contest that >> OpenBSD is actually more secure. > > WTF are you talking about? No it's not. I'm simply pointing out that, for > some strange reason, you decided to trust an OS whose author has very > different views on how to fix > security issues than you have. Well the Kernel isn't the OS, it's part of the OS and we run an OS whose authors actually care a whole lot about security. > > Regards > > Antoine. 
> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 09:01:20 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 07:01:20 +0000 (UTC) Subject: [Distutils] a plea for backward-compatibility / smooth transitions References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > I have zero qualms about releasing a full disclosure along with working exploits > into the wild for a security vulnerability that people block me on. If I'm unable > to rectify the problem I will make sure that everyone *knows* about the problem. I don't know what I'm supposed to infer from such a statement, except that I probably don't want to trust you. You might think that "publish[ing] working exploits into the wild" is some kind of heroic, altruistic act, but I think few people would agree. > Even a random occurrence will break for some percentage of people using > the software some percentage of the time. If it didn't then it's unlikely anyone > would notice it. Security vulnerabilities typically won't break until someone actively > tries to break them. You're mistaken. Bugs can sometimes be fixed preemptively, even before they're noticed in the wild (by means of perusing the code and noticing an issue, for example). Which also includes, of course, security issues (which often get fixed before they ever get exploited). Regards Antoine. From noah at coderanger.net Tue Jul 30 09:05:43 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 30 Jul 2013 00:05:43 -0700 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> Message-ID: On Jul 30, 2013, at 12:01 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >> >> I have zero qualms about releasing a full disclosure along with working > exploits >> into the wild for a security vulnerability that people block me on. If I'm > unable >> to rectify the problem I will make sure that everyone *knows* about the > problem. > > I don't know what I'm supposed to infer from such a statement, except that I > probably don't want to trust you. You might think that "publish[ing] working > exploits into the wild" is some kind of heroic, altruistic act, but I think few > people would agree. No, this is the standard for security researchers. If the vendor ignores the reported exploit for long enough, they go public and try to make sure users understand the risks and how to mitigate them in the time it takes the vendor to fix it. --Noah -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 30 09:07:07 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 03:07:07 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> Message-ID: <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> On Jul 30, 2013, at 3:01 AM, Antoine Pitrou wrote: > I don't know what I'm supposed to infer from such a statement, except that I > probably don't want to trust you. You might think that "publish[ing] working > exploits into the wild" is some kind of heroic, altruistic act, but I think few > people would agree. Full Disclosure is a common practice amongst security professionals when the upstream project is unwilling to rectify the problem. So yes I do think the practice of Full Disclosure is an altruistic act and often times the only thing that gets people who don't care to pull their head out of the sand and actually care. If you don't believe my words on it here's an essay by Bruce Schneier one of the foremost experts on security and a well respected and well trusted member of the security community. https://www.schneier.com/essay-146.html ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Jul 30 09:33:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Jul 2013 08:33:38 +0100 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On 30 July 2013 07:08, Donald Stufft wrote: > On Jul 30, 2013, at 1:41 AM, Antoine Pitrou wrote: > > > Paul Moore gmail.com> writes: > >> > >> Personally, none of the changes have detrimentally affected me, so my > >> opinion is largely theoretical. But even I am getting a little > frustrated > >> by the constant claims that "what we have now is insecure and broken, > and > >> must be fixed ASAP". > > > > FWIW, +1. You may be paranoid, but not everyone has to be (or suffer the > > consequences of it). Security issues should be fixed without breaking > things > > in a hassle (which is the policy we followed e.g. for the ssl module, or > hash > > randomization). > > People are generally not paranoid until they've been successfully > attacked. I > *will* advocate and push for breaking things where security is concerned > because > regardless of if you care or not, a lot of people *do* care and the nature > of the > beast is that you're only as strong as the weakest link. This particular > change > wasn't an immediate vulnerability that I felt was urgent, hence why I've > backed > off on it when people were concerned about the backwards compat > implications. I > will not back off when it comes to issues that *do* have an immediate or > near > term issue, regardless of if some people don't care or not. 
And in case it's not obvious, I think this is important. We need to have this sort of debate, certainly, but it won't happen without someone advocating (and implementing!) the changes, so many thanks for being that person and putting up with the flak. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Tue Jul 30 09:56:38 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 30 Jul 2013 07:56:38 +0000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> <20130729172851.GJ32284@merlinux.eu> Message-ID: <20130730075638.GK32284@merlinux.eu> On Mon, Jul 29, 2013 at 14:30 -0400, Donald Stufft wrote: > On Jul 29, 2013, at 1:28 PM, holger krekel wrote: > > > On Mon, Jul 29, 2013 at 10:30 -0400, Donald Stufft wrote: > >> On Jul 29, 2013, at 7:58 AM, Nick Coghlan wrote: > >>>> > >>>> Actually, i strongly object further backward-incompatible changes. > >>>> > >>>> Please (generally) find a way to introduce improvements without breaking > >>>> existing installation processes at the same time. > >>>> > >>>> For example, in this case pip/easy_install could indicate to PYPI what > >>>> kind of hashes it accepts (through a header or query param or whatever) > >>>> and PyPI could serve it but we'd default to MD5 for now if nothing else > >>>> was requested. Please also consider the PEP438 vetted registration of > >>>> externals+hashses in this context. Once things and tools are working > >>>> nicely we can switch to serving a non-MD5 hash as default after a > >>>> sufficient grace period. > >>> > >>> Having the improved hashes be opt-in (by the client) strikes me as a > >>> reasonable request. > >>> > >>> Yes, this means nothing will actually happen until easy_install/pip > >>> are updated to request those improved hashes and those versions see > >>> significant uptake, but as Holger says, we need to ensure we put > >>> sufficient effort into smoothing out the roller coaster ride that has > >>> been the recent experience of packaging system users. > >> > >> There's basically zero way for this to fail closed in any of the > >> current installers. The failure mode is unverified packages not > >> uninstallable packages. I am not aware of a single installer that > >> mandates the use of a hash. Crate.io has never used md5 hashes and has > >> always used sha256 and I've never received a single report of an > >> installer being unable to install because of it, which is exactly what > >> I expect. > > > > So you think the worst case for forcing SHA256 hashes is that installers > > who don't yet support sha256 hashes would just ignore it (and thus wouldn't > > do hash verification)? > > Yes. I've been using sha256 on simple.crate.io for over a year and > zero people have ever stated it didn't work for them. This also fits > in with my knowledge of how setuptools and pip works. I know > zc.buildout less well but to my knowledge they simple allow setuptools > to handle the downloading. Sounds good. Maybe "secondary" tools could get problems, though. I know for sure that devpi-server might stumble but i can fix that. Also i remember there were tools that memorized MD5 hashes in requirements files etc. 
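To make the failure mode discussed just above concrete, here is a minimal sketch of how an installer of that era typically treats the hash fragment on a simple-index link. It is an illustration only, not pip or setuptools code; the URLs, hash values, and helper names are made up. The point is that a client which only recognises md5 skips verification when it meets a sha256 fragment rather than refusing to install, so the switch "fails open" to no verification instead of breaking installs.

# Illustrative sketch only (not pip or setuptools code): why serving a
# "#sha256=..." fragment to a client that only understands "#md5=..." leads to
# skipped verification rather than a failed install.
import hashlib
from urllib.parse import urlparse

KNOWN_HASHES = {"md5"}  # pretend this client predates sha256 support


def parse_hash_fragment(url):
    """Return (hash_name, hex_digest) from a simple-index style link, or None."""
    fragment = urlparse(url).fragment  # e.g. "sha256=9f86d08..."
    if "=" not in fragment:
        return None
    name, _, digest = fragment.partition("=")
    return name.lower(), digest


def verify_download(url, payload):
    parsed = parse_hash_fragment(url)
    if parsed is None:
        return "no hash on the link, nothing to verify"
    name, expected = parsed
    if name not in KNOWN_HASHES:
        # Old clients simply ignore hash types they do not recognise, so the
        # archive is installed unverified: the change "fails open".
        return "unknown hash type %r, verification skipped" % name
    actual = hashlib.new(name, payload).hexdigest()
    return "ok" if actual == expected else "HASH MISMATCH"


if __name__ == "__main__":
    data = b"fake sdist contents"
    md5_url = "https://example.invalid/pkg-1.0.tar.gz#md5=" + hashlib.md5(data).hexdigest()
    sha_url = "https://example.invalid/pkg-1.0.tar.gz#sha256=" + hashlib.sha256(data).hexdigest()
    print(verify_download(md5_url, data))  # verified with md5
    print(verify_download(sha_url, data))  # skipped: fragment not understood

The same sketch also suggests why tools that memorized #md5= fragments in requirements files keep working: the hash name travels with the link and the check is done client-side, so nothing breaks until a tool insists on one particular hash type.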
> >> Indicating via Header or query param pretty much destroys the > >> effectiveness of the CDN's cache in order to fix a problem with a > >> theoretical (as far as I am aware) installer that requires a md5 hash > >> (and thus has never worked for any of the externally hosted packages. > >> Additionally it doesn't account for external urls which need to be > >> registered *with* a hash. > > > > Currently there is no hash-type enforcement on registered externals, is there? > > Registered externals must register with a md5 hash, scraped links and > download urls etc do not require it because they are indirectly added. > There is no verification by PyPI that the given hash matches the > package at the end of the url. Hum, can we allow submitting multiple hashes? Are there tools already that help with registering externals? > >> As far as available attacks, *today* an author could upload a package > >> that has been created so as to have a sister package with malicious > >> code that has the same hash allowing them to have a malicious package > >> they can substitute at will without the hashes changing at all. In the > >> future it's possible that a pre-image attack on MD5 will be found and > >> then we'll be dealing with this problem then when we've lost all > >> verification on external urls instead of now when we have time to get > >> external urls to switch. > > > > So the attack is a malicious author or someone else modifying an external > > release file (either directly on the server or via MITM) while maintaining > > the pre-registered MD5 hash, right? > > > > I am currently merely trying to understand more exactly what > > you are worried about. > > (...) > > MD5 is currently broken for collision resistance. This means that an > author can generate two packages that hash to the same thing. Once > package might be benign and one might be malicious. Given those two > packages people using the md5 hashes will not be able to differentiate > between the benign and the malicous package. I think we should not pretend that PyPI has (by itself) any safety belts against malicious authors. There are numerous ways for malicious authors to do evil if they choose to. The potential ability to fake a package using a collision attack merely adds another way. Do you know, btw, if TUF is going to help with any of what we are discussing here? (I am again a bit lost as to the roadmap wrt to TUF - is there something?) cheers, holger > > MD5 is currently *not* broken for pre image resistance. This means that as of > right now someone can not take an already existing package on PyPI and generate > a second package that hashes to the same thing (besides via brute forcing). > > So right now, collision attacks possible == yes, pre image attacks possible == no. > > However designing secure systems is a practice of building in safety margins. If > someone, for instance, can break 5 rounds of a function you use 15 rounds. With > cryptographic hashes collision attacks are easier than pre-image attacks, so if you > have two functions, one that has a collision attack and one that doesn't you can > generally assume that the one without a collision attack is stronger and has a > longer shelf life. > > So the problem with MD5 (ignoring for a second the fact that a collision attack can > be bad on it's own) is that there are no more safety nets. If it gets broken for a pre-image > then there's not likely to be any warning (we've already *had* the warning). 
It will > just be broken and we will be scrambling to update things then (and hopefully nobody > gets attacked in the meantime). > > And I do say *if* because as zooko pointed out, it's not a guarantee that MD5 will > ever lose it's pre-image resistance (which just means that brute forcing is the quickest > way to generate a hash). > > > > > best, > > holger > > > > > >> So by all means I will not migrate us if that's what you want. Old > >> versions of the installation clients stick around far to long for the > >> opt in mechanism to be much use. The point of switching was to cover > >> the existing clients as well to narrow the gap until a new API is > >> developed. > >> > >> Hopefully no one is relying on these hashes to prevent an > >> author from maliciously injecting a sister package and hopefully the > >> strength of MD5 holds and no new research is found that blows it's > >> pre-image attack residence to pieces. > >> > >> As far as not breaking things goes backwards compatibility has been an > >> important concern however progress forward *requires* breakage. It is > >> required because there is a vast array of available ways to have your > >> package and/or hosting configured many of them horrible practices > >> which need to be killed. Killing them requires breaking backwards > >> compatibility. You cite SSL, yes SSL has caused a number of errors for > >> people mostly related to older versions of OpenSSL being unable to use > >> a SSL certificate but downloading code you're going to execute over > >> plaintext isn't just bad, it's downright negligent on the part of the > >> toolchain. So that was a required breakage. > >> > >> You also mention the pip 1.4 *not* installing pre-releases by default. > >> Yes that broke a handful of packages Supervisor and pytz being the > >> major ones that I've seen anyone complain about. It was also known > >> ahead of time that this was a backwards incompatible change (and it > >> was noted as such in the release notes). It wasn't a surprising > >> outcome. The pip developers "drew a line in the sand" to quote Paul > >> Moore and I expect pip 1.5 where PEP438 becomes default to break even > >> more packages from people who just haven't bothered to change their > >> practices until it's forced on them. > >> > >> ----------------- > >> Donald Stufft > >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > >> > > > > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From donald at stufft.io Tue Jul 30 10:01:08 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 04:01:08 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions (was: Re: Migrating Hashes from MD5 to SHA256) In-Reply-To: <20130730075638.GK32284@merlinux.eu> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <1E4A1F4C-05B9-415E-A0EA-11CC0C4A8609@stufft.io> <20130729172851.GJ32284@merlinux.eu> <20130730075638.GK32284@merlinux.eu> Message-ID: <34A4C5E9-1C59-43B6-A4DE-DC95DD2D6865@stufft.io> On Jul 30, 2013, at 3:56 AM, holger krekel wrote: >> Yes. I've been using sha256 on simple.crate.io for over a year and >> zero people have ever stated it didn't work for them. This also fits >> in with my knowledge of how setuptools and pip works. I know >> zc.buildout less well but to my knowledge they simple allow setuptools >> to handle the downloading. > > Sounds good. 
Maybe "secondary" tools could get problems, though. > I know for sure that devpi-server might stumble but i can fix that. > Also i remember there were tools that memorized MD5 hashes in requirements > files etc. The change was nixed but it wasn't about removing the ability for pip and such to use MD5s. Merely what PyPI serves. >> Registered externals must register with a md5 hash, scraped links and >> download urls etc do not require it because they are indirectly added. >> There is no verification by PyPI that the given hash matches the >> package at the end of the url. > > Hum, can we allow submitting multiple hashes? Are there tools already > that help with registering externals? Not easily and in a backwards compatible way. >> MD5 is currently broken for collision resistance. This means that an >> author can generate two packages that hash to the same thing. Once >> package might be benign and one might be malicious. Given those two >> packages people using the md5 hashes will not be able to differentiate >> between the benign and the malicous package. > > I think we should not pretend that PyPI has (by itself) any safety belts > against malicious authors. There are numerous ways for malicious authors > to do evil if they choose to. The potential ability to fake a package > using a collision attack merely adds another way. Correct, which is why I tried to be very specific about the types of attacks :) > > Do you know, btw, if TUF is going to help with any of what we are discussing > here? (I am again a bit lost as to the roadmap wrt to TUF - is there > something?) TUF would provide protections against a pre-image attack yes. However it has it's own problems and is still likely a ways out (if we use it). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Jul 30 10:10:44 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 30 Jul 2013 18:10:44 +1000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On 30 July 2013 17:33, Paul Moore wrote: > And in case it's not obvious, I think this is important. We need to have > this sort of debate, certainly, but it won't happen without someone > advocating (and implementing!) the changes, so many thanks for being that > person and putting up with the flak. Part of the idea of having myself and Richard in the approval chain for packaging ecosystem and PyPI changes is to encourage people to get angry at *us* for saying "Yes, let's do that" when things break, rather than getting mad at the people actually volunteering to do the work needed to improve the Python community's software distribution infrastructure. That's the plan, anyway, even if it isn't always all that effective in practice :( Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Tue Jul 30 10:14:39 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 04:14:39 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Jul 30, 2013, at 4:10 AM, Nick Coghlan wrote: > On 30 July 2013 17:33, Paul Moore wrote: >> And in case it's not obvious, I think this is important. We need to have >> this sort of debate, certainly, but it won't happen without someone >> advocating (and implementing!) the changes, so many thanks for being that >> person and putting up with the flak. > > Part of the idea of having myself and Richard in the approval chain > for packaging ecosystem and PyPI changes is to encourage people to get > angry at *us* for saying "Yes, let's do that" when things break, > rather than getting mad at the people actually volunteering to do the > work needed to improve the Python community's software distribution > infrastructure. > > That's the plan, anyway, even if it isn't always all that effective in > practice :( Heh, I'm pretty good at getting yelled at :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jannis at leidel.info Tue Jul 30 11:19:04 2013 From: jannis at leidel.info (Jannis Leidel) Date: Tue, 30 Jul 2013 11:19:04 +0200 Subject: [Distutils] Ubuntu says Virtualenv, Pip and Setuptools "untrusted" In-Reply-To: References: Message-ID: <8D25792F-1CD5-4B67-B755-B883FE133578@leidel.info> On 27.07.2013, at 10:12, Erik Bernoth wrote: > Hi everybody, > > did you know that Ubuntu 13.10 (or maybe Debian?) declares those > packages as untrusted and asks you twice, if you really want to > install them? Is there anything that can be done about that? Hi Erik, Many thanks for the report, would you be so kind and provide some details to the pypa-dev mailing list, which is more appropriate for dealing with pip and virtualenv development? https://groups.google.com/group/pypa-dev Jannis -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 11:57:50 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 09:57:50 +0000 (UTC) Subject: [Distutils] =?utf-8?q?a_plea_for_backward-compatibility_/_smooth?= =?utf-8?q?=09transitions?= References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > On Jul 30, 2013, at 3:01 AM, Antoine Pitrou pitrou.net> wrote: > > I don't know what I'm supposed to infer from such a statement, except that Iprobably don't want to trust you. You might think that "publish[ing] workingexploits into the wild" is some kind of heroic, altruistic act, but I think fewpeople would agree. 
> > > Full Disclosure is a common practice amongst security professionals > whenthe upstream project is unwilling to rectify the problem. So yes I do think > the practice of Full Disclosure is an altruistic act and often times the only > thing that gets people who don't care to pull their head out of the sand > and actually care. You don't happen to be a random security professional, you are actually part of that upstream project and you have access to non-public (possibly confidential) data about its infrastructure, which gives you responsibilities towards your peers. I don't think I would be the only one to be angry if an infrastructure member starting publishing working exploits for unfixed vulnerabilities in the pdo infrastructure. It is a completely irresponsible way to act when you are part of a project or community. Regards Antoine. From regebro at gmail.com Tue Jul 30 12:38:16 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 30 Jul 2013 12:38:16 +0200 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Mon, Jul 29, 2013 at 8:17 PM, Donald Stufft wrote: >> - - Distribute / setuptools merge, e.g. cratering folks who use a >> distro-managed 'python-distribute' package. > > > This is the biggest issue. I wasn't involved and it could have been handled > better sure. I'm not convinced. The use-cases and ways these things can be installed in conjunction with things like system packages, virtualenv and buildout in various configurations and various python versions are mindboggling. Some of them were not initially tested, probably mostly because nobody realized the finer details of what would happen in these sometimes fairly obscure cases. The problems have, afaik, now been fixed, and this was done reasonably quickly IMO. OK, of course it *could* have been handled better, but I don't think it was handled badly. The current situation is a huge mess. Untangling it *will* be tricky. //Lennart From donald at stufft.io Tue Jul 30 12:46:08 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 06:46:08 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> Message-ID: <52AD69CC-EE31-4E54-AD0D-B07A34409614@stufft.io> On Jul 30, 2013, at 5:57 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >> >> On Jul 30, 2013, at 3:01 AM, Antoine Pitrou pitrou.net> wrote: >> >> I don't know what I'm supposed to infer from such a statement, except that > Iprobably don't want to trust you. You might think that "publish[ing] > workingexploits into the wild" is some kind of heroic, altruistic act, but I > think fewpeople would agree. >> >> >> Full Disclosure is a common practice amongst security professionals >> whenthe upstream project is unwilling to rectify the problem. So yes I do > think >> the practice of Full Disclosure is an altruistic act and often times the only >> thing that gets people who don't care to pull their head out of the sand >> and actually care. 
> > You don't happen to be a random security professional, you are actually part > of that upstream project and you have access to non-public (possibly > confidential) > data about its infrastructure, which gives you responsibilities towards your > peers. > > I don't think I would be the only one to be angry if an infrastructure member > starting publishing working exploits for unfixed vulnerabilities in the pdo > infrastructure. It is a completely irresponsible way to act when you are part > of a project or community. I don't really care if you'd be angry. The point of Full Disclosure (and it's cousin Responsible Disclosure) is to A) Inform everyone involved that they are taking a huge risk by using a particular thing and B) Provide incentive to people to fix their shit. If others are preventing fixes from landing then both reasons still apply wether the reporter is involved with the project or not. If I can find a vulnerability then so can someone else. Someone who won't inform people and will use it to maliciously attack people. If you feel I'd be overstepping my bounds then complain to my superiors, Richard/Nick on the packaging side of things and Noah on the Infrastructure team side of things. I'll continue to do what I feel best serves the community for as long as I have the ability to do so. Which I believe is work on improving these issues, fight and advocate for the important ones, accept defeat on less important ones, and, if necessary, issue a Full Disclosure. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 13:40:06 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 11:40:06 +0000 (UTC) Subject: [Distutils] =?utf-8?q?a_plea_for_backward-compatibility_/=09smoot?= =?utf-8?q?h=09transitions?= References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> <52AD69CC-EE31-4E54-AD0D-B07A34409614@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > You don't happen to be a random security professional, you are actually part > > of that upstream project and you have access to non-public (possibly > > confidential) > > data about its infrastructure, which gives you responsibilities towards your > > peers. > > > > I don't think I would be the only one to be angry if an infrastructure member > > starting publishing working exploits for unfixed vulnerabilities in the pdo > > infrastructure. It is a completely irresponsible way to act when you are part > > of a project or community. > > I don't really care if you'd be angry. Great to hear. This mindset is typical of many "security specialists": you're ready to tell everyone to go f*** themselves (I don't know how to voice this differently) if you think you have a higher mission to denounce some vulnerability. > The point of Full Disclosure (and it's cousin > Responsible Disclosure) is to A) Inform everyone involved that they are taking > a huge risk by using a particular thing and B) Provide incentive to people to > fix their shit. This does not necessarily involve publishing working exploits. 
By giving out code that can immediately attack and bring down the pdo infrastructure, you would be doing something else than merely "informing the public". (neither Bruce Schneier nor Wikipedia states that Full Disclosure implies publishing working exploits, btw. I suppose it means there is at the minimum some contention, rather than consensus, over the issue.) > If I can find a vulnerability then so can someone else. You may (and probably do) have domain knowledge and inside knowledge that others don't. > If you feel I'd be > overstepping my bounds then complain to my superiors, Richard/Nick on the > packaging side of things and Noah on the Infrastructure team side of things. "Superiors"? Are you making things up, or do you have an org chart to back that up? :-) (regardless, I would be surprised if any of those ordered *you* to publish an exploit, rather than take the responsibility of doing it themselves - or, rather, not doing it at all) Regards Antoine. From regebro at gmail.com Tue Jul 30 13:49:34 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 30 Jul 2013 13:49:34 +0200 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> Message-ID: On Tue, Jul 30, 2013 at 8:43 AM, Donald Stufft wrote: > I have zero qualms about releasing a full disclosure along with working exploits > into the wild for a security vulnerability that people block me on. This is a moot point, as nobody is going to block anyone. The discussion is silly. //Lennart From donald at stufft.io Tue Jul 30 13:53:33 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 07:53:33 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> <52AD69CC-EE31-4E54-AD0D-B07A34409614@stufft.io> Message-ID: On Jul 30, 2013, at 7:40 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >>> You don't happen to be a random security professional, you are actually part >>> of that upstream project and you have access to non-public (possibly >>> confidential) >>> data about its infrastructure, which gives you responsibilities towards your >>> peers. >>> >>> I don't think I would be the only one to be angry if an infrastructure > member >>> starting publishing working exploits for unfixed vulnerabilities in the pdo >>> infrastructure. It is a completely irresponsible way to act when you are > part >>> of a project or community. >> >> I don't really care if you'd be angry. > > Great to hear. This mindset is typical of many "security specialists": > you're ready to tell everyone to go f*** themselves (I don't know how to > voice this differently) if you think you have a higher mission to > denounce some vulnerability. Keeping an issue secret doesn't make people more secure, it only prevents them from making an informed decision about the things they use. 
> >> The point of Full Disclosure (and it's cousin >> Responsible Disclosure) is to A) Inform everyone involved that they are taking >> a huge risk by using a particular thing and B) Provide incentive to people to >> fix their shit. > > This does not necessarily involve publishing working exploits. By giving out > code that can immediately attack and bring down the pdo infrastructure, you > would be doing something else than merely "informing the public". > > (neither Bruce Schneier nor Wikipedia states that Full Disclosure implies > publishing working exploits, btw. I suppose it means there is at the > minimum some contention, rather than consensus, over the issue.) Partial disclosure is typically publishing enough information to allow people to make an informed choice but (hopefully) not enough information to allow others to attack it. Full Disclosure implies laying out the full details of the vulnerability. It doesn't necessarily mean wrapping it up in ready to run code but they almost always provide enough information so that anyone with a cursory understanding of programming (or the area it affects depending on how hard it is to replicate) can implement the attack and reproduce it. The problem with Partial Disclosure is that often times vendors/upstream/whatever will simply call the problem theoretical and still attempt to not fix things. > >> If I can find a vulnerability then so can someone else. > > You may (and probably do) have domain knowledge and inside knowledge that > others don't. There's not really any inside knowledge. Everything is OSS. Domain knowledge? Sure, but I really hope we aren't relying on people not knowing enough about how the tooling works to exploit it. > >> If you feel I'd be >> overstepping my bounds then complain to my superiors, Richard/Nick on the >> packaging side of things and Noah on the Infrastructure team side of things. > > "Superiors"? Are you making things up, or do you have an org chart to back that > up? :-) > (regardless, I would be surprised if any of those ordered *you* to publish an > exploit, rather than take the responsibility of doing it themselves - or, > rather, not doing it at all) Nick is the BDFL delegate for packaging tooling and Richard is the same for PyPI and I generally consider my involvement as a PyPI admin to be under him. Noah is in charge of the Infra team of which I am a member. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Jul 30 14:02:57 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 08:02:57 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> Message-ID: On Jul 30, 2013, at 7:49 AM, Lennart Regebro wrote: > On Tue, Jul 30, 2013 at 8:43 AM, Donald Stufft wrote: >> I have zero qualms about releasing a full disclosure along with working exploits >> into the wild for a security vulnerability that people block me on. > > This is a moot point, as nobody is going to block anyone. The > discussion is silly. > > //Lennart *Shrug* If someone says people who don't care shouldn't have to suffer the consequences i assume that means they are advocating for no breakages in API (else the hypothetical person would be suffering). If you can't break fundamentally insecure things then that's a blockade on fixing the system. If that's not what the comment was proposing then I have no idea what it means. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Jul 30 14:06:55 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 30 Jul 2013 22:06:55 +1000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> <52AD69CC-EE31-4E54-AD0D-B07A34409614@stufft.io> Message-ID: On 30 Jul 2013 21:41, "Antoine Pitrou" wrote: > > Donald Stufft stufft.io> writes: > > > You don't happen to be a random security professional, you are actually part > > > of that upstream project and you have access to non-public (possibly > > > confidential) > > > data about its infrastructure, which gives you responsibilities towards your > > > peers. > > > > > > I don't think I would be the only one to be angry if an infrastructure > member > > > starting publishing working exploits for unfixed vulnerabilities in the pdo > > > infrastructure. It is a completely irresponsible way to act when you are > part > > > of a project or community. > > > > I don't really care if you'd be angry. > > Great to hear. This mindset is typical of many "security specialists": > you're ready to tell everyone to go f*** themselves (I don't know how to > voice this differently) if you think you have a higher mission to > denounce some vulnerability. > > > The point of Full Disclosure (and it's cousin > > Responsible Disclosure) is to A) Inform everyone involved that they are taking > > a huge risk by using a particular thing and B) Provide incentive to people to > > fix their shit. > > This does not necessarily involve publishing working exploits. By giving out > code that can immediately attack and bring down the pdo infrastructure, you > would be doing something else than merely "informing the public". 
> > (neither Bruce Schneier nor Wikipedia states that Full Disclosure implies > publishing working exploits, btw. I suppose it means there is at the > minimum some contention, rather than consensus, over the issue.) > > > If I can find a vulnerability then so can someone else. > > You may (and probably do) have domain knowledge and inside knowledge that > others don't. > > > If you feel I'd be > > overstepping my bounds then complain to my superiors, Richard/Nick on the > > packaging side of things and Noah on the Infrastructure team side of things. > > "Superiors"? Are you making things up, or do you have an org chart to back that > up? :-) Effectively he does, yes. Richard is responsible for approving PyPI API changes (including PyPI specific PEPs), I'm BDFL delegate for other packaging PEPs and Noah has final say over the operation of the python.orginfrastructure. One or more of us are the ones that need to say yes on potentially controversial changes, so the responsibility for any mistakes ultimately lies with us, rather than Donald (and I'm greatly appreciative of the huge amount of work he is putting into improving the PyPI security story). > (regardless, I would be surprised if any of those ordered *you* to publish an > exploit, rather than take the responsibility of doing it themselves - or, > rather, not doing it at all) If Donald informed us of a vulnerability and we refused to allow him (or anyone else) to take the necessary steps to close it, then he would be *completely* justified in publishing full details of the vulnerability, up to and including working exploit code. It won't come to that though, because we're taking this seriously and closing security holes as quickly as is feasible while still ensuring a reasonable level of backwards compatibility :) Cheers, Nick. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Jul 30 14:23:45 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 08:23:45 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <0E1DDEC4-B480-4004-BFB2-7770F62626F2@stufft.io> <4106645A-1510-4635-9EFF-63E86C7CCFD8@stufft.io> <52AD69CC-EE31-4E54-AD0D-B07A34409614@stufft.io> Message-ID: <741CF1BB-4EC8-4CD0-950F-A7DB1C153605@stufft.io> On Jul 30, 2013, at 8:06 AM, Nick Coghlan wrote: > If Donald informed us of a vulnerability and we refused to allow him (or anyone else) to take the necessary steps to close it, then he would be *completely* justified in publishing full details of the vulnerability, up to and including working exploit code. > > It won't come to that though, because we're taking this seriously and closing security holes as quickly as is feasible while still ensuring a reasonable level of backwards compatibility :) > This basically. Maybe I'm not being clear because I have a headache and I'm reading too much into things because I'm sensitive to being shutdown on efforts to fix these things*. I don't expect with Nick, Richard, and Noah to ever need to do a Full Disclosure. 
I was only trying to be clear about what I consider my escalation path to be if a current, or near future vulnerability is forced to remain open. * I started trying to push for this ~2 years ago and got repeatedly shut down, for one reason or another. Which lead to to create Crate.io. It's only been relatively recently that I've been given permission to actually fix things. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Tue Jul 30 19:13:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 30 Jul 2013 13:13:08 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: > Heh, I'm pretty good at getting yelled at :) Nick is also pretty good at making people feel like he both knows and *cares* about their breakage, and isn't just dismissing their concerns as trivial or unimportant. Breakage isn't trivial or unimportant to the person who's yelling, so this is an important community-maintenance skill. It builds trust, and reduces the total amount of yelling. From donald at stufft.io Tue Jul 30 20:04:53 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 14:04:53 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> Message-ID: <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> On Jul 30, 2013, at 1:13 PM, PJ Eby wrote: > On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: >> Heh, I'm pretty good at getting yelled at :) > > Nick is also pretty good at making people feel like he both knows and > *cares* about their breakage, and isn't just dismissing their concerns > as trivial or unimportant. Breakage isn't trivial or unimportant to > the person who's yelling, so this is an important > community-maintenance skill. It builds trust, and reduces the total > amount of yelling. *shrug*, If I didn't care I would have made this change as soon as Nick said it was ok. Instead I declared I was going to and waited to make sure nobody else had any concerns. And once Holger said he did I said ok I won't do it. Maybe my mannerisms give the impression I don't but that's actually pretty far from the truth. For this particular change I originally created the pip commit that allowed it, and then again I created the setuptools commit, backporting hashlib into setuptools to support Python 2.4. I put a decent amount of effort into trying to make sure that nothing broke but in the end there were still concerns :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Jul 30 20:44:14 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 30 Jul 2013 18:44:14 +0000 (UTC) Subject: [Distutils] =?utf-8?q?a_plea_for_backward-compatibility_/_smooth?= =?utf-8?q?=09transitions?= References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > *shrug*, If I didn't care I would have made this change as soon as Nick > said it was ok. Instead I declared I was going to and waited to make sure > nobody else had any concerns. And once Holger said he did I said > ok I won't do it. Maybe my mannerisms give the impression I don't but > that's actually pretty far from the truth. For this particular change I originally > created the pip commit that allowed it, and then again I created the setuptools > commit, backporting hashlib into setuptools to support Python 2.4. I put > a decent amount of effort into trying to make sure that nothing broke > but in the end there were still concerns :) You're right. I've been a bit harsh, but you're definitely doing some very useful things here. Regards Antoine. From pje at telecommunity.com Tue Jul 30 22:01:11 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 30 Jul 2013 16:01:11 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: On Tue, Jul 30, 2013 at 2:04 PM, Donald Stufft wrote: > > On Jul 30, 2013, at 1:13 PM, PJ Eby wrote: > >> On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: >>> Heh, I'm pretty good at getting yelled at :) >> >> Nick is also pretty good at making people feel like he both knows and >> *cares* about their breakage, and isn't just dismissing their concerns >> as trivial or unimportant. Breakage isn't trivial or unimportant to >> the person who's yelling, so this is an important >> community-maintenance skill. It builds trust, and reduces the total >> amount of yelling. > > *shrug*, If I didn't care I would have made this change as soon as Nick > said it was ok. Instead I declared I was going to and waited to make sure > nobody else had any concerns. And once Holger said he did I said > ok I won't do it. Maybe my mannerisms give the impression I don't but > that's actually pretty far from the truth. I did say "feel like". ;-) Nick usually gives more of an impression that he's thought about concerns raised before rejecting them; your responses often sound like, "Who cares about that?" Asking for suggestions, for example, would be good. Nick also rarely seems irritated by people's concerns or problems, whereas you sometimes seem in a big hurry to fix something today or this week that's been broken for years, without giving folks a while to get used to the idea. Often your proposals seem less like proposals than, "I've decided to do this, so deal with it". I'm not saying all this because I want to complain or yell at you; I'm saying it because I think you do care enough to know how you're coming across, at least to me. 
Our discussions have gotten heated in the past because my impression of your reaction to the concerns I raise often seems like, "I don't care about supporting [group of people affected], so neither should you." Perhaps the issue is just one of confusion. When I raise an issue about, say, Python 2.3 users (who are still downloading setuptools 0.6 releases, and presumably also using them), it's not because I expect *you* to change your plans to support them, but because I need to know how *I* can, if the issue arises. So I don't actually expect you to care about Python 2.3 users (again, as an example), but I do expect you to care about *me* supporting them. In the most recent situation, you did in fact point me to your awesome hashlib port, so I do know you *do* care to at least that extent. But the rhetoric that you sent both before and after the helpful bit seemed on the inflammatory side, as though I were crazy to be thinking of Python 2.3. Whether or not this is true ;-) -- it's not especially *helpful* to the discussion. If I may offer a suggestion, asking questions in response to objections is generally more useful than immediately arguing the relevance of the objection. First, it tells the objector that you're interested in what they have to say, and second, it may well help the objector understand that there isn't actually any real problem, and gives them an easier path to backing down and/or compromising, whereas a frontal assault tends to focus people on responding to you instead of reconsidering their objection. On the hashlib issue, for example, it actually occurred to me later that it's completely a non-issue because the actual hash scenario I was most concerned about *can't actually happen*. (MD5 hashes in code or dependency_links, used e.g. by setuptools itself to secure its own downloads. Changing PyPI won't affect these, duh.) It might've occurred to me sooner, though, if you'd actually asked what scenario I was worried about, instead of arguing about the relevance. This isn't to say that you're responsible for what I do or don't figure out; my point is simply that asking questions and inviting suggestions in response to people's objections will generally get you more thoughtful responses and more trust, and resolve issues sooner, with less arguing. From donald at stufft.io Tue Jul 30 22:58:14 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 16:58:14 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: On Jul 30, 2013, at 4:01 PM, PJ Eby wrote: > On Tue, Jul 30, 2013 at 2:04 PM, Donald Stufft wrote: >> >> On Jul 30, 2013, at 1:13 PM, PJ Eby wrote: >> >>> On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: >>>> Heh, I'm pretty good at getting yelled at :) >>> >>> Nick is also pretty good at making people feel like he both knows and >>> *cares* about their breakage, and isn't just dismissing their concerns >>> as trivial or unimportant. Breakage isn't trivial or unimportant to >>> the person who's yelling, so this is an important >>> community-maintenance skill. It builds trust, and reduces the total >>> amount of yelling. >> >> *shrug*, If I didn't care I would have made this change as soon as Nick >> said it was ok. Instead I declared I was going to and waited to make sure >> nobody else had any concerns. 
And once Holger said he did I said >> ok I won't do it. Maybe my mannerisms give the impression I don't but >> that's actually pretty far from the truth. > > I did say "feel like". ;-) > > Nick usually gives more of an impression that he's thought about > concerns raised before rejecting them; your responses often sound > like, "Who cares about that?" Asking for suggestions, for example, > would be good. Nick also rarely seems irritated by people's concerns > or problems, whereas you sometimes seem in a big hurry to fix > something today or this week that's been broken for years, without > giving folks a while to get used to the idea. Often your proposals > seem less like proposals than, "I've decided to do this, so deal with > it". > > I'm not saying all this because I want to complain or yell at you; I'm > saying it because I think you do care enough to know how you're coming > across, at least to me. Our discussions have gotten heated in the > past because my impression of your reaction to the concerns I raise > often seems like, "I don't care about supporting [group of people > affected], so neither should you." > > Perhaps the issue is just one of confusion. When I raise an issue > about, say, Python 2.3 users (who are still downloading setuptools 0.6 > releases, and presumably also using them), it's not because I expect > *you* to change your plans to support them, but because I need to know > how *I* can, if the issue arises. So I don't actually expect you to > care about Python 2.3 users (again, as an example), but I do expect > you to care about *me* supporting them. > > In the most recent situation, you did in fact point me to your awesome > hashlib port, so I do know you *do* care to at least that extent. But > the rhetoric that you sent both before and after the helpful bit > seemed on the inflammatory side, as though I were crazy to be thinking > of Python 2.3. Whether or not this is true ;-) -- it's not especially > *helpful* to the discussion. > > If I may offer a suggestion, asking questions in response to > objections is generally more useful than immediately arguing the > relevance of the objection. First, it tells the objector that you're > interested in what they have to say, and second, it may well help the > objector understand that there isn't actually any real problem, and > gives them an easier path to backing down and/or compromising, whereas > a frontal assault tends to focus people on responding to you instead > of reconsidering their objection. > > On the hashlib issue, for example, it actually occurred to me later > that it's completely a non-issue because the actual hash scenario I > was most concerned about *can't actually happen*. (MD5 hashes in code > or dependency_links, used e.g. by setuptools itself to secure its own > downloads. Changing PyPI won't affect these, duh.) It might've > occurred to me sooner, though, if you'd actually asked what scenario I > was worried about, instead of arguing about the relevance. > > This isn't to say that you're responsible for what I do or don't > figure out; my point is simply that asking questions and inviting > suggestions in response to people's objections will generally get you > more thoughtful responses and more trust, and resolve issues sooner, > with less arguing. Hrm. 
So I hear what you're saying. Part of the problem is likely the history: when I tried to get a change through and felt like all I was getting was stop energy from people wanting to keep the status quo, which ultimately ended up preventing changes, it led me to view distutils-sig in a more adversarial light than is probably appropriate for the distutils-sig of 2013 (versus the distutils-sig of 2011/2012). This is probably reflected in my tone, and it likely leads others, such as yourself, to respond similarly, pushing us further down that path. My thought process has become "Ok, here's something that needs to happen, now how do I get distutils-sig not to prevent it from happening". This was again reflected in the Python 2.3 discussion: my immediate reaction and impression was that you were attempting to block the move away from MD5 because of Python 2.3, and I didn't feel 2.3 was worth blocking enhancements to PyPI over. The "snark" in my statements primarily came from that feeling that someone was trying to "shut down" an enhancement. I'd have to look to see when I first started trying to advocate for change on catalog-sig, but it was sometime in 2011, because I had finally gotten frustrated enough to "take my ball and go home" and start a competitor to PyPI with Crate.io on Dec 25th 2011. So there was a good 1-2 years there where the various mailing lists *were* an adversary to getting fixes put into place. Now that I am able to get changes put into place I'm trying to go back and solve the various issues that I've found, and I do have a feeling of urgency: I get concerned that this willingness to move forward and fix things is temporary, so I want to fit in as many fixes as I can, as quickly as I can, before my window of opportunity closes. I've gone so far as to begin sleeping less (not that I ever kept anything one could call a normal sleep schedule) in order to buy more hours to work on these issues and get them fixed faster. So I do get frustrated and/or irritated, and a lot of that comes from the history there, because prior to Nick/Richard becoming BDFL Delegates there was never anyone willing to say "Yes, we can do that". I'm also generally a person who gets angry quickly (but gets un-angry quickly as well), which makes it harder for me not to snark or go on the offensive. There is a reason my personal domain is https://caremad.io/ ;) As far as how to fix it I don't have a particularly magic answer. I will try to be more mindful of my tone and that distutils-sig is likely not my adversary anymore as well as try to ask questions instead of arguing the relevance immediately. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Jul 30 23:17:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 30 Jul 2013 22:17:53 +0100 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: On 30 July 2013 21:58, Donald Stufft wrote: > As far as how to fix it I don't have a particularly magic answer. I will
I will > try to be more > mindful of my tone and that distutils-sig is likely not my adversary > anymore as well > as try to ask questions instead of arguing the relevance immediately. > Thank you - both for the thoughtful response and the explanation of what is going on in your thinking. I certainly tend to think about possible issues rather than benefits, maybe more than I should as I don't personally have any particular problems with a rapid pace of change (except in some specific areas where I'm in so much of a minority that I don't expect my concerns to be a big driver in the grand scheme of things). I think I'm doing this to make sure people haven't missed potential issues I'm aware of, but I can easily imagine that this comes across as negative "stop energy". I'll try to stick to issues where I have genuine concerns, and not theoretical ones in future. And yes, it does feel like distutils-sig of 2013 is a much nicer place than it used to be. Thanks to everyone for working to keep that the case. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Jul 31 00:19:18 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 30 Jul 2013 22:19:18 +0000 (UTC) Subject: [Distutils] =?utf-8?q?a_plea_for_backward-compatibility_/_smooth?= =?utf-8?q?=09transitions?= References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > to PyPI. The "snark" in my statements primarily came from that feeling of > someone was trying to "shut down" an enhancement. To be fair, the tone you've taken (which can rub people up the wrong way) hasn't just been over long-standing security issues. I've also felt uncomfortable with some of your responses relating to the PEP 426 discussion around the JSON schema for requirements, where there is not the same level of urgency, nor valid reason to believe that you're being thwarted somehow. If someone's approach comes off as unduly dismissive (even if not meant that way), it makes it harder to engage in constructive discussion. I recognise that it's hard communicating with people remotely, many of whom one might never have met. It's easy to miss nuance and misconstrue turns of phrase. It's best to tread gingerly, else too much time is spent smoothing ruffled feathers at the expense of getting real work done. Regards, Vinay Sajip From donald at stufft.io Wed Jul 31 00:43:06 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 18:43:06 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: On Jul 30, 2013, at 5:17 PM, Paul Moore wrote: > On 30 July 2013 21:58, Donald Stufft wrote: > As far as how to fix it I don't have a particularly magic answer. I will try to be more > mindful of my tone and that distutils-sig is likely not my adversary anymore as well > as try to ask questions instead of arguing the relevance immediately. > > Thank you - both for the thoughtful response and the explanation of what is going on in your thinking. 
> > I certainly tend to think about possible issues rather than benefits, maybe more than I should as I don't personally have any particular problems with a rapid pace of change (except in some specific areas where I'm in so much of a minority that I don't expect my concerns to be a big driver in the grand scheme of things). I think I'm doing this to make sure people haven't missed potential issues I'm aware of, but I can easily imagine that this comes across as negative "stop energy". I'll try to stick to issues where I have genuine concerns, and not theoretical ones in future. I don't actually mind even theoretical problems nor do I want people to feel like they need to coddle me because of my history with distutils-sig. That history and how It affects me is *my* problem. I think as long as we, including myself, approach theoretical problems as "Let's figure out if this theoretical problem is actually a problem, and if it is do we care about it" and not "here's a bunch of possible problems, we can't do this" then there will be no issues. It's true that theoretical problems can make me feel more like someones applying stop energy than concrete problems, but that's not a problem of the person bringing up that problem and more just it triggering old feelings from a time where I couldn't even get a SSL certificate trusted by most major browsers by default deployed ;). Not allowing those feelings to poison current efforts is on me. > > And yes, it does feel like distutils-sig of 2013 is a much nicer place than it used to be. Thanks to everyone for working to keep that the case. > > Paul ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Wed Jul 31 00:47:40 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 30 Jul 2013 22:47:40 +0000 (UTC) Subject: [Distutils] PEP 426 installation hooks and conventions Message-ID: I'm looking at PEP 426 specifications for installation hooks and wondering whether we need to tighten up the specification a little. My concern stems from the fact that hook code needs to be installed along with other code - at least, the code for preuninstall hooks needs to be in the installed code. As it's "only" hook code, the naming of modules which contain it may not be done as carefully as the substantive modules and packages in a distribution. However, if multiple distributions were to put their hooks in a "hooks.py" module, which might seem the simplest thing to do, that could lead to problems: if these hook modules get written to site- packages, the hooks.py from a later installed distribution would override that from an earlier installed distribution. Possible solutions: 1. Constrain the specification so that each distribution must put hook code in a subpackage of one of their main packages. This will affect any distribution that consists only of modules and which has no packages, as the authors will have to create a package just for the hook code. 2. Constrain the specification so that hook code is segregated to a single module, perhaps specified by a "hook_module" key in the "install_hooks" dict, which is written to the .dist-info directory. 
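Purely to make option 2 concrete, here is a sketch of what the metadata might look like. The "hook_module" key is only the proposal being floated here (it is not in the current PEP 426 draft), the distribution and module names are invented, and the postinstall/preuninstall entries follow the draft's module:callable export notation:

    {
        "name": "example-dist",
        "version": "1.0",
        "install_hooks": {
            "hook_module": "example_hooks",
            "postinstall": "example_hooks:post_install",
            "preuninstall": "example_hooks:pre_uninstall"
        }
    }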
An installer could add the .dist-info to sys.path before resolving/importing. The code in the hooks module could invoke any code it needed from the main body of code in the distribution. Is my concern a valid one? If so, can I please have comments/suggestions about how to address it? Regards, Vinay Sajip From donald at stufft.io Wed Jul 31 00:51:25 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 30 Jul 2013 18:51:25 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: <80670955-903D-44C0-956A-1AB1188241EF@stufft.io> On Jul 30, 2013, at 6:19 PM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> to PyPI. The "snark" in my statements primarily came from that feeling of >> someone was trying to "shut down" an enhancement. > > To be fair, the tone you've taken (which can rub people up the wrong way) > hasn't just been over long-standing security issues. I've also felt > uncomfortable with some of your responses relating to the PEP 426 > discussion around the JSON schema for requirements, where there is not the > same level of urgency, nor valid reason to believe that you're being thwarted > somehow. I never meant to claim it was simply over security issues. I'm certainly more "energetic" about advocating for security issues and they frame the backdrop for a lot of my feelings with distutils-sig as an adversary rather than an ally . The simple fact is that every disagreement, or even simple suggestion, is framed against that feeling leading towards the Me vs Them tone. > > If someone's approach comes off as unduly dismissive (even if not meant that > way), it makes it harder to engage in constructive discussion. I recognise > that it's hard communicating with people remotely, many of whom one might > never have met. It's easy to miss nuance and misconstrue turns of phrase. It's > best to tread gingerly, else too much time is spent smoothing ruffled feathers > at the expense of getting real work done. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Jul 31 01:05:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 31 Jul 2013 09:05:09 +1000 Subject: [Distutils] PEP 426 installation hooks and conventions In-Reply-To: References: Message-ID: On 31 Jul 2013 08:49, "Vinay Sajip" wrote: > > I'm looking at PEP 426 specifications for installation hooks and wondering > whether we need to tighten up the specification a little. > > My concern stems from the fact that hook code needs to be installed along > with other code - at least, the code for preuninstall hooks needs to be in > the installed code. As it's "only" hook code, the naming of modules which > contain it may not be done as carefully as the substantive modules and > packages in a distribution. It's installed along with everything else, why would anyone assume they can ignore naming conflicts? 
If they do, then it's either a bug or a mutual incompatibility between affected distributions (just like other accidental name conflicts). > However, if multiple distributions were to put > their hooks in a "hooks.py" module, which might seem the simplest thing to > do, that could lead to problems: if these hook modules get written to site-packages, > the hooks.py from a later installed distribution would override > that from an earlier installed distribution. > > Possible solutions: > > 1. Constrain the specification so that each distribution must put hook code > in a subpackage of one of their main packages. This will affect any > distribution that consists only of modules and which has no packages, as the > authors will have to create a package just for the hook code. Why? If it's a single module that needs install hooks for some reason, the hook implementations can just go in there along with everything else (assuming they're not invoking a generic hook from a dependency). The names can be prefixed with an underscore to indicate they're not part of the regular API. > 2. Constrain the specification so that hook code is segregated to a single > module, perhaps specified by a "hook_module" key in the "install_hooks" > dict, which is written to the .dist-info directory. An installer could add > the .dist-info to sys.path before resolving/importing. The code in the hooks > module could invoke any code it needed from the main body of code in the > distribution. > > Is my concern a valid one? If so, can I please have comments/suggestions > about how to address it? Hook code is just normal code in the installed distribution (or a dependency). That's why there are deliberately no preinstall or postuninstall hooks - I don't want to come up with a new scheme for running code that hasn't been installed. Cheers, Nick. > > Regards, > > Vinay Sajip > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Jul 31 06:15:25 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 31 Jul 2013 00:15:25 -0400 Subject: [Distutils] Warehouse Migration Plan Message-ID: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> So, in the spirit of not treating distutils-sig like an adversary, here's the main thing I've been working on lately with regards to PyPI. None of this is set in stone but this is the general gist of the plan for moving things to be developed in a modern framework, as well as cleaning up the code and getting repeatable deployments. Warehouse Migration Plan ------------------------------------ Warehouse currently consists primarily of the modeling for user accounts. It will be deployed alongside pypi-legacy at next.pypi.python.org in the near future. Initially it will have zero user-facing value. As time goes on, the database tables will be migrated from being "owned" by pypi-legacy to being "owned" by Warehouse. This primarily means that the schema definition and migration of those tables will be controlled by Warehouse. As tables are migrated to Warehouse ownership, the PyPI code will be updated to reflect any changes in schema (without modification to what end users see). Once all tables that are going to be kept have been migrated, we will have a shared database that is accessible from both pypi-legacy and Warehouse.
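To make the idea of a table being "owned" by Warehouse a bit more concrete, here is a minimal sketch of what such a schema change might look like, assuming an Alembic-style migration tool; the table, column, and revision names are invented for illustration, and none of this is taken from the actual Warehouse code:

    # Hypothetical Alembic-style migration: Warehouse "takes ownership" of a
    # table's schema. Table, column, and revision names are invented.
    from alembic import op
    import sqlalchemy as sa

    revision = "0001_take_ownership_of_users"
    down_revision = None

    def upgrade():
        # Future schema changes to this table happen here, in Warehouse;
        # pypi-legacy is then updated to cope with the new shape.
        op.add_column(
            "accounts_user",
            sa.Column("is_active", sa.Boolean(), nullable=False,
                      server_default=sa.text("true")),
        )

    def downgrade():
        op.drop_column("accounts_user", "is_active")

pypi-legacy would then be adjusted to work against the new shape of the table, as described above, without end users seeing any difference.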
From this point Warehouse will begin evolving "legacy" views such as the simple and other pieces of API. The UX itself will continue to be ignored and focus will be on getting feature parity for automated tooling. Changes in behavior on these legacy views should be minimal and discussed on distutils-sig. Once the legacy views are finished, people will be encouraged to test their real-world workloads against those reimplemented legacy APIs. As changes in behavior, missing functionality, or bugs are found, they will be rectified or declared unsupported. During this time, work on the UI of Warehouse will begin, focusing on maintaining feature parity but with no promises of no changes to the URL structure or UX. Once Warehouse gains feature parity with PyPI and has gotten enough testing against its APIs, then pypi-legacy will be retired and Warehouse will move from next.pypi.python.org to pypi.python.org. From there it will evolve on its own without needing to keep pypi-legacy in sync. Specification & Acceptance Testing ------------------------------------------------ I do not want a packaging index server to be specified by implementation, so as the legacy API is ported over to Warehouse, a specification will be drafted. This spec will represent the "promise" that PyPI makes about the API it presents to be consumed by machines. During the migration, any behavior not codified inside of the spec is considered implementation-defined behavior and backwards compatibility for that behavior will not be promised. In addition to a defined specification, a repository of acceptance tests will be developed. These tests will be part of the test framework for any future changes to Warehouse but will be maintained separately alongside the specification. They will also allow any alternative implementation (such as DevPI) to test themselves against the spec. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Wed Jul 31 07:38:36 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 31 Jul 2013 05:38:36 +0000 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> References: <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: <20130731053836.GM32284@merlinux.eu> Hi Donald, On Tue, Jul 30, 2013 at 14:04 -0400, Donald Stufft wrote: > On Jul 30, 2013, at 1:13 PM, PJ Eby wrote: > > > On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: > >> Heh, I'm pretty good at getting yelled at :) > > > > Nick is also pretty good at making people feel like he both knows and > > *cares* about their breakage, and isn't just dismissing their concerns > > as trivial or unimportant. Breakage isn't trivial or unimportant to > > the person who's yelling, so this is an important > > community-maintenance skill. It builds trust, and reduces the total > > amount of yelling. > > *shrug*, If I didn't care I would have made this change as soon as > Nick said it was ok. Instead I declared I was going to and waited to > make sure nobody else had any concerns. And once Holger said he did I > said ok I won't do it. Maybe my mannerisms give the impression I don't > but that's actually pretty far from the truth.
For this particular > change I originally created the pip commit that allowed it, and then > again I created the setuptools commit, backporting hashlib into > setuptools to support Python 2.4. I put a decent amount of effort into > trying to make sure that nothing broke but in the end there were still > concerns :) For the record, i am all for putting generic hash support into the installers and maybe prepare for an eventual change to make PyPI serve sha256 hashes. However, to me it's not clear if such a move may become obsolete through the potential advent of TUF. My original objection reason was tied to generally pushing for more focus on backward-compatibility. I am grateful that several people including you, Nick and Jannis acknowledged the point. best, holger > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Wed Jul 31 08:23:46 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 31 Jul 2013 02:23:46 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: <20130731053836.GM32284@merlinux.eu> References: <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> <20130731053836.GM32284@merlinux.eu> Message-ID: On Jul 31, 2013, at 1:38 AM, holger krekel wrote: > Hi Donald, > > On Tue, Jul 30, 2013 at 14:04 -0400, Donald Stufft wrote: >> On Jul 30, 2013, at 1:13 PM, PJ Eby wrote: >> >>> On Tue, Jul 30, 2013 at 4:14 AM, Donald Stufft wrote: >>>> Heh, I'm pretty good at getting yelled at :) >>> >>> Nick is also pretty good at making people feel like he both knows and >>> *cares* about their breakage, and isn't just dismissing their concerns >>> as trivial or unimportant. Breakage isn't trivial or unimportant to >>> the person who's yelling, so this is an important >>> community-maintenance skill. It builds trust, and reduces the total >>> amount of yelling. >> >> *shrug*, If I didn't care I would have made this change as soon as >> Nick said it was ok. Instead I declared I was going to and waited to >> make sure nobody else had any concerns. And once Holger said he did I >> said ok I won't do it. Maybe my mannerisms give the impression I don't >> but that's actually pretty far from the truth. For this particular >> change I originally created the pip commit that allowed it, and then >> again I created the setuptools commit, backporting hashlib into >> setuptools to support Python 2.4. I put a decent amount of effort into >> trying to make sure that nothing broke but in the end there were still >> concerns :) > > For the record, i am all for putting generic hash support into the > installers and maybe prepare for an eventual change to make PyPI serve > sha256 hashes. However, to me it's not clear if such a move may become > obsolete through the potential advent of TUF. pip has had generic hash support since 1.2, setuptools since 0.9, and since I believe zc.buildout just lets setuptools do the downloads so it'd be when using 0.9 for it. The idea was this was an easy win security hardening. Something like TUF would more or less obsolete it but TUF isn't on the immediate radar. > > My original objection reason was tied to generally pushing for more focus > on backward-compatibility. 
I am grateful that several people including you, > Nick and Jannis acknowledged the point. > > best, > holger > >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > > >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Jul 31 10:49:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 31 Jul 2013 09:49:46 +0100 Subject: [Distutils] Warehouse Migration Plan In-Reply-To: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> References: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> Message-ID: On 31 July 2013 05:15, Donald Stufft wrote: [...] +1. This sounds like a good plan I do not want a packaging index server to be specified by implementation, > so as > the legacy API is ported over to Warehouse a specification will be > drafted. This > spec will represent the "promise" that PyPI makes about the API it > presents to be > consumed by machines. During the migration any behavior not codified > inside of > the spec is considered implementation defined behavior and backwards > compatibility > for that behavior will not be promised. > And this is something I wholeheartedly support - having a properly specified PyPI API will not only help protect against future changes, it will also ensure that people writing their own index servers have a well-defined spec to work to. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Jul 31 11:34:32 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 31 Jul 2013 09:34:32 +0000 (UTC) Subject: [Distutils] SSL problem with PyPI? Message-ID: I've just started getting PyPI failures because the SSL certificate being presented by pypi.python.org is only valid for addvocate.com. Can anyone shed some light on this? Regards, Vinay Sajip From donald at stufft.io Wed Jul 31 11:36:01 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 31 Jul 2013 05:36:01 -0400 Subject: [Distutils] SSL problem with PyPI? In-Reply-To: References: Message-ID: <75E9499A-1517-4E91-B05B-4F8CFEE43B2F@stufft.io> On Jul 31, 2013, at 5:34 AM, Vinay Sajip wrote: > I've just started getting PyPI failures because the SSL certificate being > presented by pypi.python.org is only valid for addvocate.com. Can anyone shed > some light on this? > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig http://status.python.org/incidents/jj8d7xn41hr5 ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Wed Jul 31 12:47:54 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 31 Jul 2013 06:47:54 -0400 Subject: [Distutils] SSL problem with PyPI? 
In-Reply-To: References: Message-ID: <9B712FD6-B320-471A-926C-68D84DEC20CF@stufft.io> On Jul 31, 2013, at 5:34 AM, Vinay Sajip wrote: > I've just started getting PyPI failures because the SSL certificate being > presented by pypi.python.org is only valid for addvocate.com. Can anyone shed > some light on this? > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Should be resolved now. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tk47 at students.poly.edu Wed Jul 31 13:27:20 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 31 Jul 2013 07:27:20 -0400 Subject: [Distutils] Status report on PyPI+pip+TUF Message-ID: <51F8F498.2070403@students.poly.edu> Hello Nick and the PyPI community, This is a brief status report on the integration of PyPI and pip with TUF. (A quick reminder: TUF is a general "plug-n-play" update framework designed to introduce usable security to community software repositories such as PyPI. If you think of PyPI as HTTP, then TUF is like adding SSL, and more, to HTTP. More information may be found at [https://www.updateframework.com/].) Firstly, thanks to the generous funding of the National Science Foundation, we are pleased to introduce the addition of a full-time developer, Vladimir Diaz, to our team. Vladimir has been instrumental to the development of TUF, and we are excited to have him join us full-time. (Now we do not just have one PhD student who works on TUF when he is not busy working on other projects!) We are also happy to have a few interns --- Zane Fisher, Tian Tian, John Ward, and Yuyu Zheng --- on board for the summer. Since the security attacks on the Python wiki infrastructure earlier this year, we have been closely following Distutils-SIG to see what we could do to help secure PyPI. We use Python heavily in all of our projects, and would love to help in any way we can. Here is what we have done: ========================== 1. At PyCon 2013, we showed that pip needs very little modification to work with a TUF-enabled PyPI mirror. 2. Soon after (during the spring break), we wrote automation to build a TUF-secured PyPI mirror (which is indistinguishable from any other PyPI mirror except that it has signed metadata about all of the files on PyPI). 3. At the same time, thanks to efforts of Konstantin Andrianov, we also wrote a lot of unit and integration tests to show the attacks that are possible without TUF and impossible with TUF. 4. After that, we started investigating the most efficient way to build TUF metadata for PyPI. We found that requiring a separate key for every package on PyPI may sound like a good idea, but besides generating too much metadata, this scheme also makes key management difficult. Here is what we are doing now: ============================== We are designing a usable key management scheme, coupled with efficient generation and download of metadata, which we think should make for a smooth integration of PyPI with TUF. We are actively working on this and think that we are almost there. As a conservative estimate, we do not believe that this should take longer than two weeks. 
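To give a rough idea of what the "signed metadata about all of the files on PyPI" mentioned above looks like in practice, here is a greatly simplified sketch of a TUF targets record; the exact field names and layout are defined by the TUF specification and may differ from this, and the path, hash, key id, and signature values are placeholders:

    {
        "signed": {
            "_type": "targets",
            "version": 1,
            "expires": "2013-09-01T00:00:00Z",
            "targets": {
                "packages/source/e/example/example-1.0.tar.gz": {
                    "length": 12345,
                    "hashes": {"sha256": "..."}
                }
            }
        },
        "signatures": [
            {"keyid": "...", "method": "ed25519", "sig": "..."}
        ]
    }

A client that already trusts the corresponding public keys can use a record like this to detect tampering with either the package file or the metadata itself.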
Here is what we are going to do next: ===================================== In about a month, we will present to you a demonstration of a PyPI mirror and a pip client which are robust against entire classes of security attacks. We welcome you then to try our demo, be really critical of it and tell us what you think about what we could do better. Our goal with TUF is to provide a framework that works with as many software community repositories as possible and that secures as many users as possible. More details on our development are available at our mailing list: https://groups.google.com/forum/#!forum/theupdateframework We hope this gives you a good idea of the current status of integrating TUF with PyPI and pip. Let us know if you have questions. Thanks, The TUF team From holger at merlinux.eu Wed Jul 31 14:11:11 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 31 Jul 2013 12:11:11 +0000 Subject: [Distutils] Warehouse Migration Plan In-Reply-To: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> References: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> Message-ID: <20130731121111.GF3987@merlinux.eu> On Wed, Jul 31, 2013 at 00:15 -0400, Donald Stufft wrote: > So, in the spirit of not treating distutils-sig like an adversary, here's > the main thing I've been working on lately with regards to PyPI. None > of this is set in stone but this is the general gist of the plan for moving > things to be developed in a modern framework, as well as cleaning > up the code and getting repeatable deployments. Is warehouse a re-implementation or did it start from the current code base? > Warehouse Migration Plan > ------------------------------------ > > Warehouse is currently primarily besides modeling for user accounts. It > will be deployed alongside pypi-legacy at next.pypi.python.org in the near > future. Initially it will have zero user facing value. > > As time goes on the database tables will be migrated from being "owned" > by pypi-legacy to being "owned" by Warehouse. This primarily means that > the schema definition and migration of those tables will be controlled by > Warehouse. As tables are migrated to Warehouse ownership the PyPI code > will be updated to reflect any changes in schema (without modification to > what end users see). Am a bit skeptical if sharing databases is a good approach. Certainly has potential for disrupting pypi.python.org and making it harder for next. > Once all tables that are going to be kept have been migrated, we will have > a shared database that is accessible from both pypi-legacy and Warehouse. > From this point Warehouse will begin evolving "legacy" views such as the > simple and other pieces of API. The UX itself will continue to be ignored and > focus will be on getting feature parity for automated tooling. > > Changes in behavior on these legacy views should be minimal and > discussed on distutils-sig. Having a doc/spec of those interactions would indeed help and contribute to "not defined by implementation" as you state below. > Once the legacy views are finished people will be encouraged to test their > real world workloads against those reimplemented legacy APIs. As changes > in behaviors, missing functionality, or bugs are found they will be rectified or > declared unsupported. > > During this time work on the UI of Warehouse will begin focusing on maintaing > feature parity but with no promises of no changes to the url structure or UX. 
> > Once Warehouse gains feature parity with PyPI and has gotten enough testing > against it's APIs then pypi-legacy will be retired and Warehouse will move from > next.pypi.python.org to pypi.python.org. From there it will evolve on it's own without > needing to keep pypi-legacy in sync. > > Specification & Acceptance Testing > ------------------------------------------------ > > I do not want a packaging index server to be specified by implementation, so as > the legacy API is ported over to Warehouse a specification will be drafted. This > spec will represent the "promise" that PyPI makes about the API it presents to be > consumed by machines. During the migration any behavior not codified inside of > the spec is considered implementation defined behavior and backwards compatibility > for that behavior will not be promised. > > In addition to a defined specification A repository of acceptance tests will be developed. > These tests will be part of the test framework for any future changes to Warehouse > but will be maintained separately alongside the specification. They will also allow > any alternative implementation (such as DevPI) to test themselves against the spec. I'd be happy to discuss if we can collaborate or even merge some of our efforts. cheers, holger > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From holger at merlinux.eu Wed Jul 31 14:13:52 2013 From: holger at merlinux.eu (holger krekel) Date: Wed, 31 Jul 2013 12:13:52 +0000 Subject: [Distutils] Status report on PyPI+pip+TUF In-Reply-To: <51F8F498.2070403@students.poly.edu> References: <51F8F498.2070403@students.poly.edu> Message-ID: <20130731121352.GG3987@merlinux.eu> Hi Trishank, thanks for the high level overview. Do you have a current web page with more detailed technical info with respect to PyPI/TUF? best, holger On Wed, Jul 31, 2013 at 07:27 -0400, Trishank Karthik Kuppusamy wrote: > Hello Nick and the PyPI community, > > This is a brief status report on the integration of PyPI and pip with TUF. > > (A quick reminder: TUF is a general "plug-n-play" update framework > designed to introduce usable security to community software > repositories such as PyPI. If you think of PyPI as HTTP, then TUF is > like adding SSL, and more, to HTTP. More information may be found at > [https://www.updateframework.com/].) > > Firstly, thanks to the generous funding of the National Science > Foundation, we are pleased to introduce the addition of a full-time > developer, Vladimir Diaz, to our team. Vladimir has been > instrumental to the development of TUF, and we are excited to have > him join us full-time. (Now we do not just have one PhD student who > works on TUF when he is not busy working on other projects!) We are > also happy to have a few interns --- Zane Fisher, Tian Tian, John > Ward, and Yuyu Zheng --- on board for the summer. > > Since the security attacks on the Python wiki infrastructure earlier > this year, we have been closely following Distutils-SIG to see what > we could do to help secure PyPI. We use Python heavily in all of our > projects, and would love to help in any way we can. 
> > Here is what we have done: > ========================== > > 1. At PyCon 2013, we showed that pip needs very little modification > to work with a TUF-enabled PyPI mirror. > > 2. Soon after (during the spring break), we wrote automation to > build a TUF-secured PyPI mirror (which is indistinguishable from any > other PyPI mirror except that it has signed metadata about all of > the files on PyPI). > > 3. At the same time, thanks to efforts of Konstantin Andrianov, we > also wrote a lot of unit and integration tests to show the attacks > that are possible without TUF and impossible with TUF. > > 4. After that, we started investigating the most efficient way to > build TUF metadata for PyPI. We found that requiring a separate key > for every package on PyPI may sound like a good idea, but besides > generating too much metadata, this scheme also makes key management > difficult. > > Here is what we are doing now: > ============================== > > We are designing a usable key management scheme, coupled with > efficient generation and download of metadata, which we think should > make for a smooth integration of PyPI with TUF. We are actively > working on this and think that we are almost there. As a > conservative estimate, we do not believe that this should take > longer than two weeks. > > Here is what we are going to do next: > ===================================== > > In about a month, we will present to you a demonstration of a PyPI > mirror and a pip client which are robust against entire classes of > security attacks. We welcome you then to try our demo, be really > critical of it and tell us what you think about what we could do > better. Our goal with TUF is to provide a framework that works with > as many software community repositories as possible and that secures > as many users as possible. > > More details on our development are available at our mailing list: > https://groups.google.com/forum/#!forum/theupdateframework > > We hope this gives you a good idea of the current status of > integrating TUF with PyPI and pip. Let us know if you have > questions. > > Thanks, > The TUF team > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From tk47 at students.poly.edu Wed Jul 31 16:02:48 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 31 Jul 2013 10:02:48 -0400 Subject: [Distutils] Status report on PyPI+pip+TUF In-Reply-To: <20130731121352.GG3987@merlinux.eu> References: <51F8F498.2070403@students.poly.edu> <20130731121352.GG3987@merlinux.eu> Message-ID: <51F91908.60304@students.poly.edu> Hello Holger, On 07/31/2013 08:13 AM, holger krekel wrote: > thanks for the high level overview. Do you have a current web page with > more detailed technical info with respect to PyPI/TUF? Good question! I think it is a good idea to put up a "PyPI+pip+TUF current status" page on our web site, but in the meantime, here are a few links which should point you in the right direction: 1. pip+TUF: we use the interposition technique [https://github.com/theupdateframework/tuf/tree/master/tuf/interposition] to minimally modify pip [https://github.com/theupdateframework/pip/compare/tuf] to talk to a TUF-secured PyPI mirror. 2. 
PyPI+TUF: we use automation to build a testbed for investigating different key management and metadata schemes to secure PyPI [https://github.com/theupdateframework/pypi.updateframework.com]. (Note: at the time of writing, the automation is slightly out-of-date with our work-in-progress.) 3. These two links should give you a good picture, but they will not give you a complete one. We will formally write up what we mean by our upcoming key management scheme, as well as our metadata generation and download scheme. Let me start a document and get back to you on that. Thanks, Trishank From alexis at notmyidea.org Wed Jul 31 17:04:24 2013 From: alexis at notmyidea.org (Alexis Métaireau) Date: Wed, 31 Jul 2013 17:04:24 +0200 Subject: [Distutils] Request to add a trove classifier for pelican plugins In-Reply-To: <51E422EE.8060106@notmyidea.org> References: <51E422EE.8060106@notmyidea.org> Message-ID: <51F92778.30908@notmyidea.org> On 15/07/2013 18:27, Alexis Métaireau wrote: > Hi, > > I hope this is the right place to ask for this. > > I would like to have a trove classifier for pelican [0] plugins. We > plan to release them on PyPI and having a classifier to distinguish > them from all the other packages sounds like the way to go. > > We're not really a framework, but following the already established > pattern, I guess "Framework :: Pelican" makes sense. > > Thanks! > Alexis > > [0] http://getpelican.com Hi, Just a quick follow-up on this request since I didn't get any answer. Don't hesitate to point me to the right person if you know more than I do. Cheers, Alexis From donald at stufft.io Wed Jul 31 18:25:20 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 31 Jul 2013 12:25:20 -0400 Subject: [Distutils] Warehouse Migration Plan In-Reply-To: <20130731121111.GF3987@merlinux.eu> References: <36327BB3-92EA-4382-929F-6D77EE10316F@stufft.io> <20130731121111.GF3987@merlinux.eu> Message-ID: On Jul 31, 2013, at 8:11 AM, holger krekel wrote: > On Wed, Jul 31, 2013 at 00:15 -0400, Donald Stufft wrote: >> So, in the spirit of not treating distutils-sig like an adversary, here's >> the main thing I've been working on lately with regards to PyPI. None >> of this is set in stone but this is the general gist of the plan for moving >> things to be developed in a modern framework, as well as cleaning >> up the code and getting repeatable deployments. > > Is warehouse a re-implementation or did it start from the current code base? Reimplementation. > >> Warehouse Migration Plan >> ------------------------------------ >> >> Warehouse is currently primarily besides modeling for user accounts. It >> will be deployed alongside pypi-legacy at next.pypi.python.org in the near >> future. Initially it will have zero user facing value. Just an update here, the name was changed to preview.pypi.python.org to follow suit with preview.python.org and to prevent any sort of confusion with last.pypi.python.org from the mirroring protocol. >> >> As time goes on the database tables will be migrated from being "owned" >> by pypi-legacy to being "owned" by Warehouse. This primarily means that >> the schema definition and migration of those tables will be controlled by >> Warehouse. As tables are migrated to Warehouse ownership the PyPI code >> will be updated to reflect any changes in schema (without modification to >> what end users see). > > Am a bit skeptical if sharing databases is a good approach. > Certainly has potential for disrupting pypi.python.org and making > it harder for next.
Sharing the database has two major purposes. First, it allows Warehouse to be phased in, having people test it with real live data while PyPI itself is largely unaffected. Second, it prevents the need to have a big switch point, which would largely be a one-way switch with no easy way to revert it. Essentially it's designed to strangle pypi-legacy slowly over time in smaller bite-sized chunks instead of needing to do everything at once. > >> Once all tables that are going to be kept have been migrated, we will have >> a shared database that is accessible from both pypi-legacy and Warehouse. >> From this point Warehouse will begin evolving "legacy" views such as the >> simple and other pieces of API. The UX itself will continue to be ignored and >> focus will be on getting feature parity for automated tooling. >> >> Changes in behavior on these legacy views should be minimal and >> discussed on distutils-sig. > > Having a doc/spec of those interactions would indeed help and contribute > to "not defined by implementation" as you state below. > >> Once the legacy views are finished people will be encouraged to test their >> real world workloads against those reimplemented legacy APIs. As changes >> in behaviors, missing functionality, or bugs are found they will be rectified or >> declared unsupported. >> >> During this time work on the UI of Warehouse will begin focusing on maintaing >> feature parity but with no promises of no changes to the url structure or UX. >> >> Once Warehouse gains feature parity with PyPI and has gotten enough testing >> against it's APIs then pypi-legacy will be retired and Warehouse will move from >> next.pypi.python.org to pypi.python.org. From there it will evolve on it's own without >> needing to keep pypi-legacy in sync. >> >> Specification & Acceptance Testing >> ------------------------------------------------ >> >> I do not want a packaging index server to be specified by implementation, so as >> the legacy API is ported over to Warehouse a specification will be drafted. This >> spec will represent the "promise" that PyPI makes about the API it presents to be >> consumed by machines. During the migration any behavior not codified inside of >> the spec is considered implementation defined behavior and backwards compatibility >> for that behavior will not be promised. >> >> In addition to a defined specification A repository of acceptance tests will be developed. >> These tests will be part of the test framework for any future changes to Warehouse >> but will be maintained separately alongside the specification. They will also allow >> any alternative implementation (such as DevPI) to test themselves against the spec. > > I'd be happy to discuss if we can collaborate or even merge some of our > efforts. Sure. > > cheers, > holger > >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA >> > > > >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL:
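To make the acceptance-test idea from the Warehouse plan above a little more concrete, here is a rough sketch of what a couple of such tests against the "simple" index API might look like, using pytest and requests. It is illustrative only: the behaviours asserted are assumptions for the sake of the example rather than anything taken from a written spec, and the index URL is a placeholder.

    # Illustrative acceptance-test sketch (pytest + requests); not taken from
    # any real Warehouse or DevPI test suite.
    import requests

    # Placeholder for whichever implementation is under test.
    INDEX_URL = "https://preview.pypi.python.org"

    def test_simple_page_lists_release_files():
        # Assumption: /simple/<project>/ returns an HTML page linking to the
        # project's release files.
        resp = requests.get(INDEX_URL + "/simple/pip/")
        assert resp.status_code == 200
        assert "html" in resp.headers.get("Content-Type", "")
        assert "pip-" in resp.text

    def test_unknown_project_returns_404():
        # Assumption: an unknown project name yields a 404 rather than an
        # empty page.
        resp = requests.get(INDEX_URL + "/simple/no-such-project-hopefully/")
        assert resp.status_code == 404

The same suite, pointed at a different base URL, could then be run against any alternative implementation such as DevPI.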