From richard at python.org Thu Aug 1 01:58:14 2013 From: richard at python.org (Richard Jones) Date: Thu, 1 Aug 2013 09:58:14 +1000 Subject: [Distutils] Request to add a trove classifier for pelican plugins In-Reply-To: <51F92778.30908@notmyidea.org> References: <51E422EE.8060106@notmyidea.org> <51F92778.30908@notmyidea.org> Message-ID: Sorry Alexis! I'm a little confused as to how Pelican deserves to be considered a framework. Generally before we add a Framework classifier we need to see a number of packages in PyPI which would have that classifier applied to them. Can you point to such packages? Richard On 1 August 2013 01:04, Alexis Métaireau wrote: > On 15/07/2013 18:27, Alexis Métaireau wrote: >> Hi, >> >> I hope this is the right place to ask for this. >> >> I would like to have a trove classifier for pelican [0] plugins. We >> plan to release them on PyPI and having a classifier to distinguish >> them from all the other packages sounds like the way to go. >> >> We're not really a framework, but following the already established >> pattern, I guess "Framework :: Pelican" makes sense. >> >> Thanks! >> Alexis >> >> [0] http://getpelican.com > > Hi, Just a quick follow-up on this request since I didn't get any > answer. Don't hesitate to point me to the right person if you know more > than I do. > > Cheers, > Alexis > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From i at hexchain.org Thu Aug 1 13:27:21 2013 From: i at hexchain.org (Hexchain Tong) Date: Thu, 1 Aug 2013 19:27:21 +0800 Subject: [Distutils] New PyPI mirror in China Message-ID: Hi, We have set up a new PyPI mirror in China. Location: Wuhan, Hubei, China Bandwidth: 100Mbps connected to CERNET (AS4538) URL: http://pypi.hustunique.com/ Mirror homepage: http://mirrors.hustunique.com/ Contact email: it+pypi at hustunique.com We use bandersnatch to synchronize every 5 minutes.
Please add us to the official mirror list. Thanks! Regards, Hexchain Tong -- IT Service Team at Unique Studio, HUST From holger at merlinux.eu Thu Aug 1 23:02:45 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 1 Aug 2013 21:02:45 +0000 Subject: [Distutils] Status report on PyPI+pip+TUF In-Reply-To: <51F91908.60304@students.poly.edu> References: <51F8F498.2070403@students.poly.edu> <20130731121352.GG3987@merlinux.eu> <51F91908.60304@students.poly.edu> Message-ID: <20130801210245.GB32459@merlinux.eu> Hi Trishank, On Wed, Jul 31, 2013 at 10:02 -0400, Trishank Karthik Kuppusamy wrote: > Hello Holger, > > On 07/31/2013 08:13 AM, holger krekel wrote: > >thanks for the high level overview. Do you have a current web page with > >more detailed technical info with respect to PyPI/TUF? > > Good question! I think it is a good idea to put up a "PyPI+pip+TUF > current status" page on our web site, but in the meantime, here are > a few links which should point you in the right direction: > > 1. pip+TUF: we use the interposition technique [https://github.com/theupdateframework/tuf/tree/master/tuf/interposition] > to minimally modify pip > [https://github.com/theupdateframework/pip/compare/tuf] to talk to a > TUF-secured PyPI mirror. > > 2. PyPI+TUF: we use automation to build a testbed for investigating > different key management and metadata schemes to secure PyPI > [https://github.com/theupdateframework/pypi.updateframework.com]. > (Note: at the time of writing, the automation is slightly > out-of-date with our work-in-progress.) > > 3. These two links should give you a good picture, but they will not > give you a complete one. We will formally write about what we mean > with our upcoming key management as well as metadata generation and > download scheme. Let me start a document and get back to you on > that. thanks for the links. They contain code instructions but i am not sure i get the overall picture yet. 
Do you have a whitepaper or overview describing the approach wrt PyPI? If i understand the code correctly, you are implementing key signing, verification and revocation through calling openssl library functions. Have you considered just invoking or interfacing with "gpg"? On a minor note, for creating a pypi mirror it's better to use bandersnatch instead of pep381 (i am referring to this here: https://github.com/theupdateframework/pip/wiki/PyPI-over-TUF#mirror-pypi ) Lastly, maybe the advertisement that "TUF is like the 'S' in HTTPS" is not really a good advertisement given the several currently discussed problems with HTTPS, the most recent one being the BREACH attack: http://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/ :) cheers, holger > Thanks, > Trishank > From tk47 at students.poly.edu Fri Aug 2 02:42:04 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Thu, 1 Aug 2013 20:42:04 -0400 Subject: [Distutils] Status report on PyPI+pip+TUF In-Reply-To: <20130801210245.GB32459@merlinux.eu> References: <51F8F498.2070403@students.poly.edu> <20130731121352.GG3987@merlinux.eu> <51F91908.60304@students.poly.edu> <20130801210245.GB32459@merlinux.eu> Message-ID: <51FB005C.4000901@students.poly.edu> On 08/01/2013 05:02 PM, holger krekel wrote: > thanks for the links. They contain code instructions but i am > not sure i get the overall picture yet. Do you have a whitepaper > or overview describing the approach wrt PyPI? We do, but it is not up-to-date with our latest thoughts. We will rectify this soon enough: https://docs.google.com/document/d/1sHMhgrGXNCvBZdmjVJzuoN5uMaUAUDWBmn3jo7vxjjw/edit > If i understand the code correctly, you are implementing key > signing, verification and revocation through calling openssl library > functions. Have you considered just invoking or interfacing with "gpg"?
Yes, that is an option we could decide to implement, along with other cryptography libraries. I think we chose to start with interfacing with OpenSSL because it is generic, time-tested to be secure and available on many platforms. TUF does not need to exclusively depend on either OpenSSL, GPG or anything else: we can extend it to use what is available. > On a minor note, for creating a pypi mirror it's better to use > bandersnatch instead of pep381 (i am referring to this here: > https://github.com/theupdateframework/pip/wiki/PyPI-over-TUF#mirror-pypi ) Thanks for the tip. Indeed, we do use bandersnatch [https://github.com/theupdateframework/pypi.updateframework.com/blob/master/setup2.sh#L19]. That wiki entry points to an old set of instructions that we will remove soon. > Lastly, maybe the advertisement that "TUF is like the 'S' in HTTPS" > is not really a good advertisement given the several currently discussed > problems with HTTPS, the most recent one being the BREACH attack: > http://arstechnica.com/security/2013/08/gone-in-30-seconds-new-attack-plucks-secrets-from-https-protected-pages/ > I see what you are saying, but I do not think that it follows that TUF works like SSL :) Perhaps we can think of a better metaphor, but the idea we wanted to convey is that TUF is like a plug-in you simply drop into your software update system, and voilà, you get security for relatively little work. Let us know if you have more questions. In the meantime, we are busy designing our key management scheme for PyPI+TUF (which I think would highly interest you), so please bear with us while we hammer that out over this week.
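Trishank's "drop-in security" idea can be illustrated with a minimal, standard-library sketch. The metadata fields below (`length`, a `hashes` mapping) are hypothetical stand-ins; real TUF metadata additionally carries signatures, role delegations and expiration times, none of which are modelled here:

```python
import hashlib

def verify_download(data, trusted_meta):
    """Check downloaded bytes against trusted metadata.

    Illustrative sketch only: a real TUF client would first verify the
    signatures over this metadata before trusting its contents.
    """
    # Length check guards against truncation/endless-data attacks.
    if len(data) != trusted_meta["length"]:
        raise ValueError("length mismatch (possible truncation)")
    # Hash check guards against a tampered or substituted file.
    digest = hashlib.sha256(data).hexdigest()
    if digest != trusted_meta["hashes"]["sha256"]:
        raise ValueError("hash mismatch (file does not match metadata)")
    return True

# Hypothetical metadata for a package file:
payload = b"example sdist contents"
meta = {
    "length": len(payload),
    "hashes": {"sha256": hashlib.sha256(payload).hexdigest()},
}
verify_download(payload, meta)  # passes silently
```

The point of the metaphor is that the consumer only ever performs checks like these; key management and metadata generation happen on the repository side.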
From lukshuntim at gmail.com Fri Aug 2 14:53:39 2013 From: lukshuntim at gmail.com (lukshuntim at gmail.com) Date: Fri, 02 Aug 2013 20:53:39 +0800 Subject: [Distutils] How to disable PYTHONPATH checking when installing packages using distribute Message-ID: <51FBABD3.2050801@gmail.com> Hi, While installing a package which uses distribute (matplotlib in this case), it refuses to work with this message "running install Checking .pth file support in /usr/local/stow/matplotlib-1.3.0/lib/python2.7/site-packages/ /usr/bin/python -E -c pass TEST FAILED: /usr/local/stow/matplotlib-1.3.0/lib/python2.7/site-packages/ does NOT support .pth files error: bad install directory or PYTHONPATH ... Please make the appropriate changes for your system and try again." I install local packages using the stow approach, which installs each package under its own sub-directory and later "stowed" (https://www.gnu.org/software/stow/). Such "error" becomes a nuisance as a different PYTHONPATH has to be set for each installation of a package. How can the checking be disabled? I don't seem to be able to find anything in the documentation and would be grateful for any pointer. Or maybe it's better turned into a warning and users be reminded to add the install directory to PYTHONPATH. Regards, ST -- From ncoghlan at gmail.com Fri Aug 2 17:27:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 3 Aug 2013 01:27:46 +1000 Subject: [Distutils] Last PEP 426 update for a while Message-ID: I pushed a version of PEP 426 with an initial sketch of an entry points replacement design: http://hg.python.org/peps/rev/ea3d93e40e02 To give it a sensible home in the PEP, I ended up defining "modules" and "namespaces" fields in addition to "commands" and "exports". The overall section is called "Installed interfaces". I initially tried it with the unpacked multi-field mapping for export specifiers, but ended up reverting to something closer to the setuptools notation for readability purposes.
For the moment, "requires_extra" is included since it isn't that hard to explain. The other two major additions to the PEP are a note near the top explaining that the expected time frame for metadata 2.0 is post Python 3.4 release and a caveat on the build system description explaining that we know it isn't ready for prime-time yet. I wanted to get this part up so anyone tinkering with wrapper scripts had at least a preliminary scheme to work from, but as per the note on time frames, I don't consider the details of PEP 426 to be an urgent topic for further discussion unless/until it directly impacts next generation tools like Warehouse and distlib. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Aug 2 18:36:47 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 2 Aug 2013 12:36:47 -0400 Subject: [Distutils] Monthly Download Counts on PyPI Message-ID: Just a quick update that the monthly rolling download counts are now enabled on PyPI which means the download count rollout is now complete. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Fri Aug 2 22:01:28 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 2 Aug 2013 16:01:28 -0400 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On Fri, Aug 2, 2013 at 11:27 AM, Nick Coghlan wrote: > I pushed a version of PEP 426 with an initial sketch of an entry > points replacement design: http://hg.python.org/peps/rev/ea3d93e40e02 > > To give it a sensible home in the PEP, I ended up defining "modules" > and "namespaces" fields in addition to "commands" and "exports". The > overall section is called "Installed interfaces". 
> > I initially tried it with the unpacked multi-field mapping for export > specifiers, but ended up reverting to something closer to the > setuptools notation for readability purposes. For the moment, > "requires_extra" is included since it isn't that hard to explain. Thanks again for all the hard work in putting this together! Btw, under setuptools, entry point *group* names are restricted to valid Python module names, so this is a subset of valid distribution names. Conversely, entry *point* names are intentionally arbitrary and may contain anything that isn't an '=', as long as they don't start with a '#'. The reason for these choices is that entry point groups are used to ensure global uniqueness, but need a standard way for subdividing namespaces. (Notice that setuptools has groups like distutils.setup_arguments and distutils.setup_commands.) Conversely, individual entry point names have a free-form syntax so that they can carry additional structured information, like a converter specifying what it converts from and to, with a quality metric. The idea is to allow tools to build plugin registries from this kind of information without needing to import the modules. Basically, if you can fit it on one line, before the '=', in a reasonably human-readable way, and it saves you from having to import the module in order to figure out whether you wanted to import it in the first place, you can put it in the name. You might wish to make names a bit more restrictive than this, I suppose, but I'm not sure that all of the limitations of distribution names are actually appropriate here. In particular, restricting to alphanumerics, dots, and dashes is *way* too restrictive. Entry point names are sometimes used for human-readable command descriptions, e.g. 
this is a perfectly valid entry point definition in setuptools: wikiup: Upload documents to one or more wiki pages = some.module:entrypoint [extra1, extra2] Anyway, entry point group names are definitely *not* recommended to follow distribution names, as that makes them rather useless. Things that consume entry points will generally have more than one group, eventually, so at least one of those groups will then have to *not* be named after a distribution, unless you arbitrarily break up the project into multiple distributions so the group names match, which is kind of silly. ;-) Finally, it might be good to point out once again that extras are not so much "a set of dependencies that will be checked for at runtime" as a set of dependencies that are *needed* at runtime. This need may or may not be checked, and may or may not be satisfied dynamically at runtime; it depends on the API in use, and how/whether it is invoked. From pje at telecommunity.com Fri Aug 2 22:28:49 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 2 Aug 2013 16:28:49 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: On Tue, Jul 30, 2013 at 4:58 PM, Donald Stufft wrote: > Hrm. > > So I hear what you're saying and part of the problem is likely due to the history > of where I tried to get a change through and then felt like all I was getting was > stop energy and people wanting to keep the status quo which ultimately > ended up preventing changes has led me to view distutils-sig in more of an > adversarial light than is probably appropriate for the distutils-sig of 2013 (versus > the distutils-sig of 2011/2012).
This is probably reflected in my tone and likely > has others, such as yourself, respond similarly, pushing us further down that > path. My thought process has become "Ok here's something that needs to > happen, now how do I get distutils-sig not to prevent it from happening". Thanks for the thoughtful response. I appreciate it. I also want to just throw in one extra piece of information for you and anybody else reading this: 99% of "stop energy" doesn't happen because people actively want to prevent progress or frustrate other people. It simply happens when people notice a problem but don't have as much personal stake in your goal as they do in not experiencing the problem they will experience (or perceive they will), from the proposed change. When you look at it from this perspective, it's easier to understand that the way to fix this is with more engagement on their part, which can only be gotten by engagement on your part. When I first proposed WSGI, the initial reaction of Web-SIG was pretty negative. "Stop energy" if you will. Things only moved forward once I was able to channel the energy of objections into offering solutions. It's helpful to remember that asking, "okay, so how would you recommend I do it?" *doesn't* obligate you to actually follow all of the recommendations you get. (Especially since some of them will be mutually contradictory!) Anyway, I guess what I'm saying is that people lacking enthusiasm for your goals is not really them trying to stop you. In fact, objections are a positive thing: it means you got somebody's attention. The next step is to leverage that attention into actually getting help, or at least more constructive input. ;-) It's true that some individuals will never provide really helpful input. In the WSGI effort, there were people whose advice I never took, because their goals were directed entirely opposite to where I wanted to go.
But I remained engaged until it was mutually clear (or at least I thought it was) that our goals were different, and didn't try to persuade them to go in the same direction. Such attempts at persuasion are pretty much a waste of time, and a big energy drain. Consensus-building is something that you do with people who have at least some goals in common, so it's best to focus on finding out what you *do* agree on. > This was again reflected in the Python 2.3 discussion because my immediate > reaction and impression was that you were attempting to block the move > from MD5 due to Python 2.3, and I felt 2.3 wasn't worth blocking enhancements > to PyPI. The "snark" in my statements primarily came from that feeling that > someone was trying to "shut down" an enhancement. Right. In such a case, a question you could ask is, "Do you agree in general that we should move to a better hash at some point in the future?", because then the disagreement can be narrowed down to timeframe, migration or deprecation process, etc. The truth is, I had no intention of "blocking the move", I had concerns I wanted addressed about the impact, timing and process. (Actually, I originally just noticed a couple of errors in what you'd laid out as the current state of things, and wanted to make sure they were included in the assessment.) The point is, if somebody doesn't have *any* common ground with you, it's unlikely they're even talking to you. At the very least, if they're talking with you about PyPI, they must care about PyPI, even if they care about different things than you, or with different relative priorities. ;-) > As far as how to fix it I don't have a particularly magic answer. I will try to be more > mindful of my tone and that distutils-sig is likely not my adversary anymore as well > as try to ask questions instead of arguing the relevance immediately. Again, thank you. And hopefully, remember that probably nobody was intentionally being your adversary before, either.
As the old adage says, the best way to destroy your enemies is to make friends with them. ;-) And we do that by focusing on common ground, and inviting participation. (This is again not to say that I've been 100% Mr. Wonderful myself; I know I haven't. But the community's best consensus-building happens when somebody is doing the tough work of engaging with all parties. Sometimes this doesn't happen, alas; back when I was developing setuptools there just weren't enough people interested in the problems available on Distutils-SIG to build any sort of consensus on the solutions, so I *had* to go run with the ball myself. I'm really happy that there is now BOTH a quorum of interested parties who understand the problems, AND a few leaders able to drive the consensus-building and actual development. If only we had done this ten years ago, setuptools might not have been necessary. Well, actually, that's probably not true: without *something* existing first, that quorum of people who understand the problem wouldn't have existed back then. But you know what I mean.) From donald at stufft.io Fri Aug 2 22:45:10 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 2 Aug 2013 16:45:10 -0400 Subject: [Distutils] a plea for backward-compatibility / smooth transitions In-Reply-To: References: <472639E1-2CD5-441F-8006-0937D6B0065D@stufft.io> <20130729093832.GI32284@merlinux.eu> <08B489E3-23BE-4797-ABC8-5D4CFBF5BF51@stufft.io> <249D03FB-E50E-4EEA-AA88-D59B8B9A0E74@stufft.io> Message-ID: <8E568EE8-5C02-49EC-816B-26CE095EF33C@stufft.io> On Aug 2, 2013, at 4:28 PM, PJ Eby wrote: > I also want to just throw in one extra piece of information for you > and anybody else reading this: 99% of "stop energy" doesn't happen > because people actively want to prevent progress or frustrate other > people. 
It simply happens when people notice a problem but don't have > as much personal stake in your goal as they do in not experiencing the > problem they will experience (or perceive they will), from the > proposed change. One of the biggest things I viewed as stop energy was how impossible it seemed to get an SSL certificate that was trusted in most major browsers to be used on PyPI. There was never any reason given that we couldn't move to one, just simply "we already have a certificate" without addressing the reasons I was trying to get a different one. Beyond that it was other random proposals where it felt like people put in the minimal amount of effort in order to find a reason to say no to it. Then if I addressed that particular concern, they'd again seem to put in the minimal amount of effort to find a reason to say no. It felt like the goal posts were just constantly being moved slightly further away each time. That sort of behavior is mostly what I mean by "stop energy". It's entirely frustrating and it feels like its entire purpose is to dissuade someone from changing the status quo, whether that was the intention or not. Everything else you said has been noted :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From aclark at aclark.net Sat Aug 3 02:02:09 2013 From: aclark at aclark.net (Alex Clark) Date: Sat, 3 Aug 2013 00:02:09 +0000 (UTC) Subject: [Distutils] Monthly Download Counts on PyPI References: Message-ID: Donald Stufft stufft.io> writes: > > Just a quick update that the monthly rolling download counts are now enabled on PyPI which means the download count rollout is now complete.
\o/ Thanks From ncoghlan at gmail.com Sat Aug 3 05:03:28 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 3 Aug 2013 13:03:28 +1000 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On 3 Aug 2013 06:01, "PJ Eby" wrote: > > On Fri, Aug 2, 2013 at 11:27 AM, Nick Coghlan wrote: > > I pushed a version of PEP 426 with an initial sketch of an entry > > points replacement design: http://hg.python.org/peps/rev/ea3d93e40e02 > > > > To give it a sensible home in the PEP, I ended up defining "modules" > > and "namespaces" fields in addition to "commands" and "exports". The > > overall section is called "Installed interfaces". > > > > I initially tried it with the unpacked multi-field mapping for export > > specifiers, but ended up reverting to something closer to the > > setuptools notation for readability purposes. For the moment, > > "requires_extra" is included since it isn't that hard to explain. > > Thanks again for all the hard work in putting this together! > > Btw, under setuptools, entry point *group* names are restricted to > valid Python module names, so this is a subset of valid distribution > names. Conversely, entry *point* names are intentionally arbitrary > and may contain anything that isn't an '=', as long as they don't > start with a '#'. > > The reason for these choices is that entry point groups are used to > ensure global uniqueness, but need a standard way for subdividing > namespaces. (Notice that setuptools has groups like > distutils.setup_arguments and distutils.setup_commands.) > > Conversely, individual entry point names have a free-form syntax so > that they can carry additional structured information, like a > converter specifying what it converts from and to, with a quality > metric. The idea is to allow tools to build plugin registries from > this kind of information without needing to import the modules. 
> Basically, if you can fit it on one line, before the '=', in a > reasonably human-readable way, and it saves you from having to import > the module in order to figure out whether you wanted to import it in > the first place, you can put it in the name. > > You might wish to make names a bit more restrictive than this, I > suppose, but I'm not sure that all of the limitations of distribution > names are actually appropriate here. In particular, restricting to > alphanumerics, dots, and dashes is *way* too restrictive. Entry point > names are sometimes used for human-readable command descriptions, e.g. > this is a perfectly valid entry point definition in setuptools: > > wikiup: Upload documents to one or more wiki pages = > some.module:entrypoint [extra1, extra2] > > Anyway, entry point group names are definitely *not* recommended to > follow distribution names, as that makes them rather useless. Things > that consume entry points will generally have more than one group, > eventually, so at least one of those groups will then have to *not* be > named after a distribution, unless you arbitrarily break up the > project into multiple distributions so the group names match, which is > kind of silly. ;-) This makes sense - I was trying to cut down on the number of different naming schemes we had, but may not have divided things up well. How about we go with: Valid dotted names for: - modules - namespaces - module & name portions of export specifiers - export group names - extension names Valid distribution names for: - script wrappers Arbitrary strings for export group entries. > > Finally, it might be good to point out once again that extras are not > so much "a set of dependencies that will be checked for at runtime" as > a set of dependencies that are *needed* at runtime. This need may or > may not be checked, and may or may not be satisfied dynamically at > runtime; it depends on the API in use, and how/whether it is invoked. 
An unconditional import of an optional dependency counts as a runtime check in my view :) Agreed that could be clarified in the PEP, though. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Aug 3 15:46:10 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 3 Aug 2013 23:46:10 +1000 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On 3 August 2013 13:03, Nick Coghlan wrote: > How about we go with: > > Valid dotted names for: > - modules > - namespaces > - module & name portions of export specifiers > - export group names > - extension names > > Valid distribution names for: > - script wrappers > > Arbitrary strings for export group entries. Just pushed these changes. I'm happy to leave the PEP alone for a while now, as I now think it's a good indication of where we plan to go (so folks like Vinay, Daniel and Donald can use it as a basis for development work), while still having appropriate caveats in place to help ensure people that aren't closely tracking distutils-sig don't mistake it for a finished specification. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Sat Aug 3 19:07:06 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Aug 2013 17:07:06 +0000 (UTC) Subject: [Distutils] Last PEP 426 update for a while References: Message-ID: Nick Coghlan gmail.com> writes: > Just pushed these changes. I'm happy to leave the PEP alone for a > while now, Thanks for doing these updates. Can I suggest the following corrections? 1. In the section "Exports", there is a dangling sentence which needs to be completed: "The interpretation of the individual export keys is defined by the distribution that i" 2. In the same section, it says "Export group names SHOULD correspond to module names ..." 
and also "It is suggested that export groups be named after distributions to help avoid name conflicts." It should be one of these (I presume the former). Regards, Vinay Sajip From pje at telecommunity.com Sat Aug 3 20:14:14 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 3 Aug 2013 14:14:14 -0400 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On Sat, Aug 3, 2013 at 1:07 PM, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> Just pushed these changes. I'm happy to leave the PEP alone for a >> while now, > > Thanks for doing these updates. Can I suggest the following corrections? > > 1. In the section "Exports", there is a dangling sentence which needs to be > completed: "The interpretation of the individual export keys is defined > by the distribution that i" > > 2. In the same section, it says "Export group names SHOULD correspond to > module names ..." and also "It is suggested that export groups be named > after distributions to help avoid name conflicts." It should be one of > these (I presume the former). And not quite the former, either; the same arguments about not splitting a distribution apply to modules as well. i.e., a single module might consume exports from more than one group, so saying they should correspond is too strong; I would say instead that export group names are dotted names that should be *prefixed* with the name of a package or module provided by the relevant distribution. (Of course, it's also perfectly fine for one to use a domain name or other similarly-unique prefix; the real point is just that top-level names should be reserved for groups defined by the stdlib and/or PEPs, and everybody else should be using unique prefixes that give some indication where one would look for a spec.) 
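The prefixed-group convention PJ describes can be sketched with the setuptools runtime of the day. Here `myproject.converters` is a hypothetical group name, prefixed with a module name owned by the distribution that defines the plugin contract; the snippet only shows how a consumer might enumerate a group without importing any plugin module up front:

```python
import pkg_resources

# Iterate all entry points registered under a (hypothetical) group.
# Nothing is imported until ep.load() is called, which is what lets
# tools build plugin registries cheaply from the metadata alone.
for ep in pkg_resources.iter_entry_points('myproject.converters'):
    # ep.name may carry extra human-readable or structured information;
    # ep.module_name is available without importing the module.
    print(ep.name, '->', ep.module_name)
```

With no plugins installed the loop simply does nothing; each distribution that wants to contribute a converter registers one under this group in its own metadata.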
From vinay_sajip at yahoo.co.uk Sat Aug 3 20:42:23 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Aug 2013 18:42:23 +0000 (UTC) Subject: [Distutils] Last PEP 426 update for a while References: Message-ID: PJ Eby telecommunity.com> writes: > And not quite the former, either; the same arguments about not > splitting a distribution apply to modules as well. i.e., a single > module might consume exports from more than one group, so saying they > should correspond is too strong; I would say instead that export group > names are dotted names that should be *prefixed* with the name of a > package or module provided by the relevant distribution. You're right - I was being a little sloppy, but that was my understanding (i.e. the emphasis on prefixes). > (Of course, it's also perfectly fine for one to use a domain name or > other similarly-unique prefix; the real point is just that top-level > names should be reserved for groups defined by the stdlib and/or PEPs, > and everybody else should be using unique prefixes that give some > indication where one would look for a spec.) Right, though it's probably enough to just use a module name which is "unique" to the distribution. Of course, nothing prevents two completely unrelated distributions having a top-level module "foo", but in that case any ambiguity in export names is probably the least of the worries of someone who installs two such conflicting distributions. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Sat Aug 3 21:14:59 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 3 Aug 2013 19:14:59 +0000 (UTC) Subject: [Distutils] Last PEP 426 update for a while References: Message-ID: Nick Coghlan gmail.com> writes: > Just pushed these changes. One more problem: the updated pydist-schema.json is not a valid schema file (it's not valid JSON either - I think there are trailing commas and a mismatched brace somewhere.)
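Trailing commas and mismatched braces of the kind Vinay mentions are mechanically detectable with the standard library alone. This sketch only checks JSON well-formedness; validating the document against the pydist schema itself would additionally need a jsonschema implementation:

```python
import json

def check_json(text):
    """Return (True, None) if text is valid JSON, else (False, message).

    json.loads rejects trailing commas and unbalanced braces, so it
    catches exactly the class of errors mentioned above.
    """
    try:
        json.loads(text)
        return True, None
    except ValueError as exc:  # JSONDecodeError subclasses ValueError
        return False, str(exc)

ok, err = check_json('{"name": "example",}')  # trailing comma
print(ok)  # False
```

Running a check like this before committing a schema change would have flagged the broken file immediately.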
Regards, Vinay Sajip From donald at stufft.io Sun Aug 4 02:17:03 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 3 Aug 2013 20:17:03 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: On Jul 25, 2013, at 1:38 AM, Richard Jones wrote: > Hi all, > > I've just been contacted by someone who's set up a new public mirror > of PyPI and would like it integrated into the mirror ecosystem. > > I think it's probably time we thought about how to demote the mirrors: > > - they cause problems with security (being under the python.org domain > causes various issues including inability to use HTTPS and cookie > issues) > - they're no longer necessary thanks to the CDN work > > So, things to do: > > - links and information on PyPI itself can be removed > - tools that use mirrors still need to be able to but mention of using > public mirrors is probably something to demote > > These are just rough thoughts that occurred to me just now. > > > Richard > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Can we close the loop on this? Ideally I think any public mirrors should need to register their own domain name. We can either maintain a list of unofficial mirrors, or Ken Cochrane has been doing a good job I think of keeping a list (as well as tracking some basic stats) at http://pypi-mirrors.org/ so maybe we can just point people to that as the list of mirrors? Ideally we should get all of them off the *.python.org namespace. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Aug 4 08:41:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 4 Aug 2013 16:41:23 +1000 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On 4 August 2013 05:14, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> Just pushed these changes. > > One more problem: the updated pydist-schema.json is not a valid schema file > (it's not valid JSON either- I think there are trailing commas and a > mismatched brace somewhere. It's at least valid JSON now. I make no promises about the current draft being a valid jsonschema :) It turned out I had missed a few other naming related updates, so I did another pass over all the interface and extension related sections. Some points of note: 1. I adopted the PEP 3155 "qualified name" terminology for dotted names. This applies to both module names (where their name and their qualified name are the same thing) *and* to the names of other objects within modules. 2. I adopted the term "fully qualified name" for the "module:name" notation used by export specifiers (amongst other things). For modules, the qualified name and the fully qualified name are the same. 3. I explicitly recommend the one distribution <-> one module/package equivalence. There are valid reasons for deviating from that recommendation, but it's still a good default. 4. I expand further on the intended usage of export groups (focusing on plugin systems) 5. Both export group names and metadata extension names are now required to use the qualified name format, with a recommendation that they use a prefix that matches a module published by the defining distribution 6. Qualified names are currently restricted to Python 2 compatible identifiers. 
Even though it isn't as scary as the idea of Unicode metadata names, I believe Unicode identifier support in export metadata is still a can of worms that we don't want to open yet. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From noah at coderanger.net Sun Aug 4 09:14:30 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Sun, 4 Aug 2013 00:14:30 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: On Aug 3, 2013, at 5:17 PM, Donald Stufft wrote: > > On Jul 25, 2013, at 1:38 AM, Richard Jones wrote: > >> Hi all, >> >> I've just been contacted by someone who's set up a new public mirror >> of PyPI and would like it integrated into the mirror ecosystem. >> >> I think it's probably time we thought about how to demote the mirrors: >> >> - they cause problems with security (being under the python.org domain >> causes various issues including inability to use HTTPS and cookie >> issues) >> - they're no longer necessary thanks to the CDN work >> >> So, things to do: >> >> - links and information on PyPI itself can be removed >> - tools that use mirrors still need to be able to but mention of using >> public mirrors is probably something to demote >> >> These are just rough thoughts that occurred to me just now. >> >> >> Richard >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> http://mail.python.org/mailman/listinfo/distutils-sig > > Can we close the loop on this? Ideally I think any public mirrors > should need to register their own domain name. We can either > maintain a list of unofficial mirrors, or Ken Cochrane has been > doing a good job I think of keeping a list (as well as tracking some > basic stats) at http://pypi-mirrors.org/ so maybe we can just point > people to that as the list of mirrors? > > Ideally we should get all of them off the *.python.org namespace. 
As the one with the finger on the not-the-metaphorical button, I think we should say that two (2) months from now, on October 1st 2013, the [a-g].pypi.python.org DNS names will all be redirected to front.python.org and another two months beyond that (2013-12-01) they will all be deleted (along with last.pypi.python.org). That seems like a very generous deprecation schedule, especially given that all that needs to change is some domain registrations. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sun Aug 4 10:48:40 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 4 Aug 2013 04:48:40 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> On Aug 4, 2013, at 3:14 AM, Noah Kantrowitz wrote: > > On Aug 3, 2013, at 5:17 PM, Donald Stufft wrote: > >> >> On Jul 25, 2013, at 1:38 AM, Richard Jones wrote: >> >>> Hi all, >>> >>> I've just been contacted by someone who's set up a new public mirror >>> of PyPI and would like it integrated into the mirror ecosystem. >>> >>> I think it's probably time we thought about how to demote the mirrors: >>> >>> - they cause problems with security (being under the python.org domain >>> causes various issues including inability to use HTTPS and cookie >>> issues) >>> - they're no longer necessary thanks to the CDN work >>> >>> So, things to do: >>> >>> - links and information on PyPI itself can be removed >>> - tools that use mirrors still need to be able to but mention of using >>> public mirrors is probably something to demote >>> >>> These are just rough thoughts that occurred to me just now. 
>>> >>> Richard >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> http://mail.python.org/mailman/listinfo/distutils-sig >> >> Can we close the loop on this? Ideally I think any public mirrors >> should need to register their own domain name. We can either >> maintain a list of unofficial mirrors, or Ken Cochrane has been >> doing a good job I think of keeping a list (as well as tracking some >> basic stats) at http://pypi-mirrors.org/ so maybe we can just point >> people to that as the list of mirrors? >> >> Ideally we should get all of them off the *.python.org namespace. > > As the one with the finger on the not-the-metaphorical button, I think we should say that two (2) months from now, on October 1st 2013, the [a-g].pypi.python.org DNS names will all be redirected to front.python.org and another two months beyond that (2013-12-01) they will all be deleted (along with last.pypi.python.org). That seems like a very generous deprecation schedule, especially given that all that needs to change is some domain registrations. > > --Noah > Personally I +1 this proposal, it's been nearly 10 days with basically zero response of any kind, and no response to the negative. The only change I'd possibly make is to change the deletion period to some period of time after the pip 1.5 release. 5 days ago my branch to remove mirroring support from pip was merged into pip's develop branch. I don't see any direct support for mirroring in setuptools nor do I see any in buildout, so I think it makes sense to hold off on the final deletion until after the only one of the 3 major installers that seems to have any direct support for mirrors has released a version without it, plus a bit of lead time for people to switch. 
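For reference, once pip's mirroring support is removed, a user who wants a specific (unofficial) mirror can still configure it as a plain index. A pip.conf sketch, with a placeholder mirror URL (not a real host):

```ini
[global]
# Point pip at a single (unofficial) mirror instead of pypi.python.org.
# The URL below is a placeholder - substitute a mirror you trust.
index-url = http://pypi.example-mirror.org/simple/
```

The same effect is available per-invocation via pip's --index-url option.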
So I guess revised I'd say in roughly two months on Oct 1st the [a-g].pypi.python.org DNS names will be redirected to front.python.org and then roughly 2 months after pip has released version 1.5 with the removal of the mirroring support they will be deleted along with last.pypi.python.org. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Aug 4 11:13:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 4 Aug 2013 19:13:18 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> References: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> Message-ID: On 4 August 2013 18:48, Donald Stufft wrote: > 5 days ago my branch to remove mirroring support from pip was merged into pip's develop branch. I don't see any direct support for mirroring in setuptools nor do I see any in buildout so I think it makes sense to hold off on the final deletion until after the only of the 3 major installers that seems to have any direct support for mirrors has released a version without it plus a bit of lead time for people to switch. > > So I guess revised I'd say in roughly two months on Oct 1st the [a-g].pypi.python.org DNS names will be redirected to front.python.org and then roughly 2 months after pip has released version 1.5 with the removal of the mirroring support they will be deleted along with last.pypi.python.org. Sounds good to me. You may not love this next part though: since the mirror naming scheme was originally defined in a PEP (http://www.python.org/dev/peps/pep-0381/#mirror-listing-and-registering), I think its retirement should also be published that way. 
The PEP doesn't need to be long, it should just say that PyPI will continue to support the mirroring protocol (and perhaps even mention that the current preferred mirroring tool is bandersnatch rather than pep381client), but the CDN is considered to replace the public mirror network and those DNS names will all be retired. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Aug 4 11:18:35 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 4 Aug 2013 05:18:35 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> Message-ID: <8F2F2811-EE5A-43FD-AE56-F724548E7707@stufft.io> On Aug 4, 2013, at 5:13 AM, Nick Coghlan wrote: > On 4 August 2013 18:48, Donald Stufft wrote: >> 5 days ago my branch to remove mirroring support from pip was merged into pip's develop branch. I don't see any direct support for mirroring in setuptools nor do I see any in buildout so I think it makes sense to hold off on the final deletion until after the only of the 3 major installers that seems to have any direct support for mirrors has released a version without it plus a bit of lead time for people to switch. >> >> So I guess revised I'd say in roughly two months on Oct 1st the [a-g].pypi.python.rg DNS names will be redirected to front.python.org and then roughly 2 months after pip has released version 1.5 with the removal of the mirroring support they will be deleted along with last.pypi.python.org. > > Sounds good to me. You may not love this next part though: since the > mirror naming scheme was originally defined in a PEP > (http://www.python.org/dev/peps/pep-0381/#mirror-listing-and-registering), > I think its retirement should also be published that way. 
> > The PEP doesn't need to be long, it should just say that PyPI will > continue to support the mirroring protocol (and perhaps even mention > that the current preferred mirroring tool is bandersnatch rather than > pep381client), but the CDN is considered to replace the public mirror > network and those DNS names will all be retired. OTOH that PEP was never accepted and is currently a draft but if you really think it should be a PEP I can write one up. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Aug 4 11:45:54 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 4 Aug 2013 19:45:54 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <8F2F2811-EE5A-43FD-AE56-F724548E7707@stufft.io> References: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> <8F2F2811-EE5A-43FD-AE56-F724548E7707@stufft.io> Message-ID: On 4 August 2013 19:18, Donald Stufft wrote: > > On Aug 4, 2013, at 5:13 AM, Nick Coghlan wrote: > >> On 4 August 2013 18:48, Donald Stufft wrote: >>> 5 days ago my branch to remove mirroring support from pip was merged into pip's develop branch. I don't see any direct support for mirroring in setuptools nor do I see any in buildout so I think it makes sense to hold off on the final deletion until after the only of the 3 major installers that seems to have any direct support for mirrors has released a version without it plus a bit of lead time for people to switch. 
>>> >>> So I guess revised I'd say in roughly two months on Oct 1st the [a-g].pypi.python.rg DNS names will be redirected to front.python.org and then roughly 2 months after pip has released version 1.5 with the removal of the mirroring support they will be deleted along with last.pypi.python.org. >> >> Sounds good to me. You may not love this next part though: since the >> mirror naming scheme was originally defined in a PEP >> (http://www.python.org/dev/peps/pep-0381/#mirror-listing-and-registering), >> I think its retirement should also be published that way. >> >> The PEP doesn't need to be long, it should just say that PyPI will >> continue to support the mirroring protocol (and perhaps even mention >> that the current preferred mirroring tool is bandersnatch rather than >> pep381client), but the CDN is considered to replace the public mirror >> network and those DNS names will all be retired. > > OTOH that PEP was never accepted and is currently a draft Alas, the nominal status of the packaging PEPs isn't always a great guide to their real world usage :P > but if you really > think it should be a PEP I can write one up. I think we're at a point where overcommunicating is definitely better than the alternative :) In this case, what I suggest we do is put the new PEP to Accepted with a Replaces: header for 381, and then set 381 to Superseded. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Aug 4 11:56:25 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 4 Aug 2013 05:56:25 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <50938936-A2E6-4C3C-BEF2-44E3207437EB@stufft.io> <8F2F2811-EE5A-43FD-AE56-F724548E7707@stufft.io> Message-ID: <61BED6D3-19F2-4D93-8A86-001F965C1816@stufft.io> Okies I can write one up. 
On Aug 4, 2013, at 5:45 AM, Nick Coghlan wrote: > On 4 August 2013 19:18, Donald Stufft wrote: >> >> On Aug 4, 2013, at 5:13 AM, Nick Coghlan wrote: >> >>> On 4 August 2013 18:48, Donald Stufft wrote: >>>> 5 days ago my branch to remove mirroring support from pip was merged into pip's develop branch. I don't see any direct support for mirroring in setuptools nor do I see any in buildout so I think it makes sense to hold off on the final deletion until after the only of the 3 major installers that seems to have any direct support for mirrors has released a version without it plus a bit of lead time for people to switch. >>>> >>>> So I guess revised I'd say in roughly two months on Oct 1st the [a-g].pypi.python.rg DNS names will be redirected to front.python.org and then roughly 2 months after pip has released version 1.5 with the removal of the mirroring support they will be deleted along with last.pypi.python.org. >>> >>> Sounds good to me. You may not love this next part though: since the >>> mirror naming scheme was originally defined in a PEP >>> (http://www.python.org/dev/peps/pep-0381/#mirror-listing-and-registering), >>> I think its retirement should also be published that way. >>> >>> The PEP doesn't need to be long, it should just say that PyPI will >>> continue to support the mirroring protocol (and perhaps even mention >>> that the current preferred mirroring tool is bandersnatch rather than >>> pep381client), but the CDN is considered to replace the public mirror >>> network and those DNS names will all be retired. >> >> OTOH that PEP was never accepted and is currently a draft > > Alas, the nominal status of the packaging PEPs isn't always a great > guide to their real world usage :P > >> but if you really >> think it should be a PEP I can write one up. 
> > I think we're at a point where overcommunicating is definitely better > than the alternative :) > > In this case, what I suggest we do is put the new PEP to Accepted with > a Replaces: header for 381, and then set 381 to Superseded. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Mon Aug 5 00:25:01 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 4 Aug 2013 18:25:01 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: Message-ID: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Here's my PEP for Deprecating and Removing the Official Public Mirrors Its source is at: https://github.com/dstufft/peps/blob/master/mirror-removal.rst Abstract ======== This PEP provides a path to deprecate and ultimately remove the official public mirroring infrastructure for `PyPI`_. It does not propose the removal of mirroring support in general. Rationale ========= The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to mirror the content of PyPI used by the automatic installers. It also provides a method for autodiscovery of mirrors and a consistent naming scheme. There are a number of problems with the official public mirrors: * They give control over a \*.python.org domain name to a third party, allowing that third party to set or read cookies on the pypi.python.org and python.org domain name. * The use of a subdomain of pypi.python.org means that the mirror operators will never be able to get a certificate of their own, and giving them one for a python.org domain name is unlikely to happen. * They are often out of date, most often by several hours to a few days, but regularly several days and even months. * With the introduction of the CDN on PyPI, the public mirroring infrastructure is not as important as it once was, as the CDN is also a globally distributed network of servers which will function even if PyPI is down. 
* Although there are provisions in place for it, there is currently no known installer which uses the authenticity checks discussed in `PEP381`_, which means that any download from a mirror is subject to attack by a malicious mirror operator; furthermore, due to the lack of TLS, any download from a mirror is also subject to a MITM attack. * They have only ever been implemented by one installer (pip), and its implementation, besides being insecure, has serious issues with performance and is slated for removal with its next release (1.5). Due to the number of issues, some of them very serious, and the fact that the CDN provides much of the same benefit, this PEP proposes to first deprecate and then remove the public mirroring infrastructure. The ability to mirror and the method of mirroring will not be affected, and the existing public mirrors are encouraged to acquire their own domains to host their mirrors on if they wish to continue hosting them. Plan for Deprecation & Removal ============================== Immediately upon acceptance of this PEP, documentation on PyPI will be updated to reflect the deprecated nature of the official public mirrors and will direct users to external resources like http://www.pypi-mirrors.org/ to discover unofficial public mirrors if they wish to use one. On October 1st, 2013, roughly 2 months from the date of this PEP, the DNS names of the public mirrors ([a-g].pypi.python.org) will be changed to point back to PyPI, which will be modified to accept requests from those domains. At this point in time the public mirrors will be considered deprecated. Then, roughly 2 months after the release of the first version of pip to have mirroring support removed (currently slated for pip 1.5), the DNS entries for [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will no longer accept requests at those domains. 
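As an aside, the staleness complaint above is at least measurable: PEP 381 mirrors publish a /last-modified URL containing the time of their last sync. A sketch of an age check follows; the timestamp format is assumed from the PEP's tooling, and fetching the URL is left to the caller:

```python
from datetime import datetime

def mirror_age(last_modified_text, now=None):
    """Return how far behind a mirror is, given the body of its
    /last-modified page (PEP 381).

    Sketch only: assumes the '%Y%m%dT%H:%M:%S' UTC timestamp format
    used by the PEP 381 mirroring tools.
    """
    synced = datetime.strptime(last_modified_text.strip(), "%Y%m%dT%H:%M:%S")
    return (now or datetime.utcnow()) - synced
```

A monitoring job could flag any mirror whose reported age exceeds, say, an hour.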
Unofficial Public or Private Mirrors ==================================== The mirroring protocol will continue to exist as defined in `PEP381`_ and people are encouraged to utilize it to host unofficial public and private mirrors if they so desire. For operators of unofficial public or private mirrors, the recommended mirroring client is `Bandersnatch`_. .. _PyPI: https://pypi.python.org/ .. _PEP381: http://www.python.org/dev/peps/pep-0381/ .. _Bandersnatch: https://pypi.python.org/pypi/bandersnatch ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Mon Aug 5 08:00:09 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 5 Aug 2013 02:00:09 -0400 Subject: [Distutils] New PyPI mirror in China In-Reply-To: References: Message-ID: <319A4CE9-BD18-4A0A-B1BC-0CB513302387@stufft.io> On Aug 1, 2013, at 7:27 AM, Hexchain Tong wrote: > Hi, > > We have set up a new PyPI mirror in China. > > Location: Wuhan, Hubei, China > Bandwidth: 100Mbps connected to CERNET(AS4538) > URL: http://pypi.hustunique.com/ > Mirror homepage: http://mirrors.hustunique.com/ > Contact email: it+pypi at hustunique.com > > We use bandersnatch to synchronize every 5 minutes. > > Please add us to the official mirror list. Thanks! > > Regards, > Hexchain Tong > -- > IT Service Team at Unique Studio, HUST > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Just to give you a response. There are currently discussions going on about deprecating the official public mirroring infrastructure. 
If this happens it does not mean that you cannot operate and maintain your mirror, but it does mean that we will not be handing out any more N.pypi.python.org names (and the ones we have will eventually be removed). Users will be directed to external resources such as http://pypi-mirrors.org/ to locate mirrors if they wish to use one. I believe yours is already there! ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Mon Aug 5 08:29:15 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 5 Aug 2013 02:29:15 -0400 Subject: [Distutils] New PyPI mirror in China In-Reply-To: <319A4CE9-BD18-4A0A-B1BC-0CB513302387@stufft.io> References: <319A4CE9-BD18-4A0A-B1BC-0CB513302387@stufft.io> Message-ID: <5AD08AB4-121D-4154-8D9E-97561980962B@stufft.io> On Aug 5, 2013, at 2:00 AM, Donald Stufft wrote: > Just to give you a response. There are currently discussions going on about deprecating the official public mirroring infrastructure. Just to be more specific, the major thing we'd be deprecating is handing out names @ pypi.python.org for the mirrors (such as b.pypi.python.org) and the automatic discovery of those names via the last.pypi.python.org DNS entry. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From setuptools at bugs.python.org Mon Aug 5 10:12:05 2013 From: setuptools at bugs.python.org (Ned Deily) Date: Mon, 05 Aug 2013 08:12:05 +0000 Subject: [Distutils] [issue157] setuptools installation instructions fail on OS X: no wget Message-ID: <1375690325.74.0.629719535099.issue157@psf.upfronthosting.co.za> New submission from Ned Deily: The current installation instructions for "Unix-based Systems including Mac OS X" do not work on vanilla OS X systems because OS X does not ship with "wget". It does, however, ship with "curl". (The Distribute web page had been updated a long time ago to use "curl" instead of "wget"; I guess that never happened for the "setuptools" page.) https://pypi.python.org/pypi/setuptools/0.9.8#unix-based-systems-including-mac-os-x ---------- messages: 743 nosy: nad priority: bug status: unread title: setuptools installation instructions fail on OS X: no wget _______________________________________________ Setuptools tracker _______________________________________________ From pje at telecommunity.com Mon Aug 5 15:28:04 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 5 Aug 2013 09:28:04 -0400 Subject: [Distutils] Who administers bugs.python.org, and can we get a notice on the setuptools tracker? Message-ID: Hi. Not sure who this should go to, but it would be really good if we could get a prominent notice on the old setuptools tracker (at bugs.python.org), specifically on the issue creation screen, to inform people that this tracker is only for setuptools 0.6, and that issues for later versions should go to https://bitbucket.org/pypa/setuptools/issues instead (with a link, of course). Right now, what's happening is that, despite all the other prominent links to the correct tracker, there are still people going to the old tracker and posting issues for the newer versions of setuptools. 
This means they then have to wait for me to tell them they're in the wrong place, and then resubmit to the correct tracker. Setuptools 0.6 is still supported, so closing the tracker entirely isn't appropriate just yet, but putting up a prominent notice on the issue submission screen saying, "If you're reporting an issue for setuptools 0.7 or higher, please use...." would help. Thanks in advance! From ncoghlan at gmail.com Mon Aug 5 17:10:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 01:10:46 +1000 Subject: [Distutils] Who administers bugs.python.org, and can we get a notice on the setuptools tracker? In-Reply-To: References: Message-ID: I filed the request on the metatracker: http://psf.upfronthosting.co.za/roundup/meta/issue522 From pje at telecommunity.com Mon Aug 5 19:48:38 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 5 Aug 2013 13:48:38 -0400 Subject: [Distutils] Who administers bugs.python.org, and can we get a notice on the setuptools tracker? In-Reply-To: References: Message-ID: On Mon, Aug 5, 2013 at 11:10 AM, Nick Coghlan wrote: > I filed the request on the metatracker: > http://psf.upfronthosting.co.za/roundup/meta/issue522 Thanks! That should keep me from having to keep telling people their princess is in another castle. ;-) From mgorny at gentoo.org Sun Aug 4 13:12:27 2013 From: mgorny at gentoo.org (Michał Górny) Date: Sun, 4 Aug 2013 13:12:27 +0200 Subject: [Distutils] setuptools' install_script overwrites symlinked files rather than replacing symlinks Message-ID: <20130804131227.48a00372@gentoo.org> Hello, We've got a pretty specific setup where various Python scripts in /usr/bin are symlinked to a common wrapper. For example: /usr/bin/easy_install -> python-exec As a result, calling 'setup.py install' in a package that installs setuptools' legacy script wrappers (e.g. setuptools itself) rewrites python-exec (which -- in short -- breaks a lot) rather than replacing the 'easy_install' symlink.
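For illustration, a symlink-safe writer using the temp-file-plus-rename approach might look like the sketch below; write_script_atomic is a hypothetical helper, not actual setuptools code, and the 0o755 permission is an assumption for installed scripts:

```python
import os
import tempfile

def write_script_atomic(target, contents, mode=""):
    """Write `contents` to `target` without following an existing symlink.

    Sketch only: create a temporary file in the target's directory, then
    atomically rename it over `target`, so a symlink at that path is
    replaced rather than written through. os.replace (Python 3.3+) makes
    the rename atomic on Windows too; os.rename is the fallback.
    """
    directory = os.path.dirname(target) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w" + mode) as f:
            f.write(contents)
        os.chmod(tmp, 0o755)  # installed scripts need to be executable
        getattr(os, "replace", os.rename)(tmp, target)
    except BaseException:
        os.remove(tmp)  # clean up the temp file if anything failed
        raise
```

With this approach an easy_install symlink pointing at python-exec is itself replaced by the new script, and the wrapper it pointed at is left untouched.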
I believe that the core issue is in command/easy_install.py. There, the write_script() command writes directly onto the destination file with no precautions:

    if not self.dry_run:
        ensure_directory(target)
        f = open(target,"w"+mode)
        f.write(contents)
        f.close()

Therefore, if 'target' is a symlink, the symlink target is opened rather than the expected path. distutils itself is free of this issue since it removes the target before writing (distutils/file_util.py):

    if os.path.exists(dst):
        try:
            os.unlink(dst)
        except os.error, (errno, errstr):
            raise DistutilsFileError(
                  "could not delete '%s': %s" % (dst, errstr))

This suffers a race condition but is better than nothing. Other tools usually create a temporary file in the target directory and use rename() to atomically replace the target. I'm willing to write a patch. Please just tell me which solution you would prefer. (please keep python at g.o in CC when replying) -- Best regards, Michał Górny -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 966 bytes Desc: not available URL: From i at hexchain.org Tue Aug 6 06:24:40 2013 From: i at hexchain.org (Hexchain Tong) Date: Tue, 6 Aug 2013 12:24:40 +0800 Subject: [Distutils] New PyPI mirror in China In-Reply-To: <5AD08AB4-121D-4154-8D9E-97561980962B@stufft.io> References: <319A4CE9-BD18-4A0A-B1BC-0CB513302387@stufft.io> <5AD08AB4-121D-4154-8D9E-97561980962B@stufft.io> Message-ID: On 08/05/2013 02:29 PM, Donald Stufft wrote: > Just to give you a response. There are currently discussions going on > about deprecating the official public mirroring infrastructure. Thank you! 
I didn't even realize that :-P From ncoghlan at gmail.com Tue Aug 6 07:08:42 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 15:08:42 +1000 Subject: [Distutils] setuptools' install_script overwrites symlinked files rather than replacing symlinks In-Reply-To: <20130804131227.48a00372@gentoo.org> References: <20130804131227.48a00372@gentoo.org> Message-ID: On 4 August 2013 21:12, Michał Górny wrote: > I'm willing to write a patch. Please just tell me which solution would > you prefer. The standard library has switched to atomic replacement for writing .pyc files, which seems like the appropriate solution for script writing as well (note that os.rename isn't atomic on Windows - on 3.3+, os.replace provides atomic renaming on all supported platforms). However, a "not a regular file or symlink" sanity check similar to the one now performed by pycompile (see http://bugs.python.org/issue17222) may also be appropriate here. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Aug 6 07:10:29 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 15:10:29 +1000 Subject: [Distutils] New PyPI mirror in China In-Reply-To: References: <319A4CE9-BD18-4A0A-B1BC-0CB513302387@stufft.io> <5AD08AB4-121D-4154-8D9E-97561980962B@stufft.io> Message-ID: On 6 August 2013 14:24, Hexchain Tong wrote: > On 08/05/2013 02:29 PM, Donald Stufft wrote: >> Just to give you a response. There are currently discussions going on >> about deprecating the official public mirroring infrastructure. > > Thank you! I didn't even realize that :-P Those discussions only turned into a concrete plan in the last couple of days, so it's not surprising you hadn't heard about it :) The details are up at http://www.python.org/dev/peps/pep-0449/ Cheers, Nick. 
--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com Tue Aug 6 07:11:14 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 6 Aug 2013 15:11:14 +1000
Subject: [Distutils] What to do about the PyPI mirrors
In-Reply-To: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: 

On 5 August 2013 08:25, Donald Stufft wrote:
> Here's my PEP for Deprecating and Removing the Official Public Mirrors
>
> It's source is at: https://github.com/dstufft/peps/blob/master/mirror-removal.rst

Donald's proposal is now PEP 449: http://www.python.org/dev/peps/pep-0449/

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ct at gocept.com Tue Aug 6 08:09:20 2013
From: ct at gocept.com (Christian Theune)
Date: Tue, 6 Aug 2013 08:09:20 +0200
Subject: [Distutils] What to do about the PyPI mirrors
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: 

Hi,

looks like I'm late to the party to figure out that I'm going to be hurt
again.

I'd like to suggest explicitly considering what is going to break due to this
and how much work you are forcefully inflicting on others. My whole experience
around the packaging (distribute/setuptools) and mirroring/CDN this year puts
the cost for my company somewhere between 10k-20k EUR just for keeping up with
the breakage those changes incur. It might be that we're wonderfully stupid
(..enough to contribute) and all of this causes no headaches for anybody else.
Overall, guessing that the packaging infrastructure is used by probably
multiple thousands of companies, I'd expect that at least 100 of them might be
experiencing problems like us. Juggling arbitrary numbers, I can see that
we're inflicting around a million EURs of cost that nobody asked for.

More specific statements below.
On 2013-08-04 22:25:01 +0000, Donald Stufft said:

> Here's my PEP for Deprecating and Removing the Official Public Mirrors
>
> It's source is at:
> https://github.com/dstufft/peps/blob/master/mirror-removal.rst
>
> Abstract
> =======
> This PEP provides a path to deprecate and ultimately remove the official
> public mirroring infrastructure for `PyPI`_. It does not propose the removal
> of mirroring support in general.

-1 - maybe I don't have the right to speak up on CDN usage, but personally I
feel it's a bad idea to delegate overall PyPI availability exclusively to a
commercial third party. It's OK for me that we're using them to improve PyPI
availability, but completely putting our faith in their hands doesn't sound
right to me.

> Rationale
> ========
> The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to
> mirror the content of PyPI used by the automatic installers. It also provides
> a method for autodiscovery of mirrors and a consistent naming scheme.
>
> There are a number of problems with the official public mirrors:
>
> * They give control over a \*.python.org domain name to a third party,
> allowing that third party to set or read cookies on the pypi.python.org and
> python.org domain name.

Agreed, that's a problem.

> * The use of a sub domain of pypi.python.org means that the mirror operators
> will never be able to get a certificate of their own, and giving them
> one for a python.org domain name is unlikely to happen.

Agreed.

> * They are often out of date, most often by several hours to a few days, but
> regularly several days and even months.

That's something that the mirroring infrastructure should have been
constructed for. I completely agree that the way the mirroring was established
was way sub-optimal. I think we can do better.
> * With the introduction of the CDN on PyPI the public mirroring infrastructure
> is not as important as it once was as the CDN is also a globally distributed
> network of servers which will function even if PyPI is down.

Well, now we have one breakage point more which keeps annoying me. This
argument is not completely true. They may be getting better over time but we
have invested heavily to accommodate the breakage - that needs to be balanced
with some benefit in the near future.

> * Although there is provisions in place for it, there is currently no known
> installer which uses the authenticity checks discussed in `PEP381`_ which
> means that any download from a mirror is subject to attack by a malicious
> mirror operator, but further more due to the lack of TLS it also means that
> any download from a mirror is also subject to a MITM attack.

Again, I think that was a mistake during the introduction of the mirroring
infrastructure: too few people, too confusing PEP.

> * They have only ever been implemented by one installer (pip), and its
> implementation, besides being insecure, has serious issues with performance
> and is slated for removal with it's next release (1.5).

Only if you consider the mirror auto-discovery protocol. I'm not sure whether
using DNS was such a smart move. A simple HTTP request to find mirrors would
have been nice. I think we can still do that.

Also, not everyone wants or needs auto-detection the way that the protocol
describes it. I personally just hand-pick a mirror (my own, hah) and keep
using that.

We are also thinking about providing system-level default configuration to
hint tools like PIP and setuptools to a different default index that is closer
from a network perspective. From a customer perspective this should be "PyPI".

I'd like to avoid breakage. Again, if you don't let me choose where to spend
my time, I'd rather invest the time I need for cleaning up the breakage into
something constructive.

The indices are in active use.
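[Editorial aside: hand-picking a mirror, as described here, just means pointing the installer at a different index, either per invocation with pip's `-i`/`--index-url` option (easy_install accepts the same option) or via a configuration file. A minimal sketch; the mirror hostname below is invented for illustration:]

```ini
# ~/.pip/pip.conf (or /etc/pip.conf for a system-wide default).
# "pypi.example-mirror.org" is a hypothetical mirror, not a real one.
[global]
index-url = http://pypi.example-mirror.org/simple/
```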
f.pypi.python.org is seeing between 150-300GB of traffic per month, the
patterns widely ranging over the last month. This is traffic that is not used
internally by gocept.

> Due to the number of issues, some of them very serious, and the CDN which more
> or less provides much of the same benefits this PEP proposes to first
> deprecate and then remove the public mirroring infrastructure. The ability to
> mirror and the method of mirroring will not be affected and the existing
> public mirrors are encouraged to acquire their own domains to host their
> mirrors on if they wish to continue hosting them.

The biggest benefit of the mirroring infrastructure is that it is intended to
be de-centralized. As a community member I can step up and take over
responsibility of availability, performance, and security of a mirror.

As a community member I have to completely submit to whatever the CDN does and
contact another community member who hopefully will be with us for a long time
and stay in good contact with the CDN for us. That's centralization and I
don't like that a bit.

> Plan for Deprecation & Removal
> =============================
> Immediately upon acceptance of this PEP documentation on PyPI will be updated
> to reflect the deprecated nature of the official public mirrors and will
> direct users to external resources like http://www.pypi-mirrors.org/ to
> discover unofficial public mirrors if they wish to use one.
>
> On October 1st, 2013, roughly 2 months from the date of this PEP, the DNS names
> of the public mirrors ([a-g].pypi.python.org) will be changed to point back to
> PyPI which will be modified to accept requests from those domains. At this
> point in time the public mirrors will be considered deprecated.
>
> Then, roughly 2 months after the release of the first version of pip to have
> mirroring support removed (currently slated for pip 1.5) the DNS entries for
> [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will
> no longer accept requests at those domains.

Oh great. That means in about 4 months I have to go through *any installation
that my company maintains* and sift through whether we're still referencing
f.pypi.python.org anywhere.

Can I write a check?

> Unofficial Public or Private Mirrors
> ===================================
> The mirroring protocol will continue to exist as defined in `PEP381`_ and
> people are encouraged to utilize to host unofficial public and private mirrors
> if they so desire. For operators of unofficial public or private mirrors the
> recommended mirroring client is `Bandersnatch`_.

Thanks for the recommendation.

Instead of this dance breaking many things yet again, I'd love if we could
find a way forward keeping the infrastructure.

Some ideas:

- Take control of *.pypi.python.org back
- Record other public names of the mirrors
- Use 301 redirects to send old installations over to the new mirror names.
- Make it easier for community members to help maintain the list of mirrors.
- Make a better (faster) removal policy of mirrors if the owners are not
  responsive.
- Make it easier for other community members to set up and maintain mirrors.
  I'm happy to improve bandersnatch where needed.

Lastly, again, and I might be getting on everyone's nerves.

Why does it seem that other communities have figured this out much simpler,
with less hassle, and with no significant changes for years and we need to
keep changing stuff over and over and over and break things over and over and
over?
It's really hard for me to write this mail without cussing - the situation is
very frustrating: the community dynamics seem to "want to move forward" where
from my perspective they "wander left and right and break stuff like a drunken
elephant driving a tank through the Louvre".

Christian

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ct at gocept.com Tue Aug 6 08:11:57 2013
From: ct at gocept.com (Christian Theune)
Date: Tue, 6 Aug 2013 08:11:57 +0200
Subject: [Distutils] What to do about the PyPI mirrors
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: 

Two more things: why is the CDN not suffering from the security problems you
describe for the mirrors?

a) Fastly seems to be the one owning the certificate for pypi.python.org.
What?!?

b) What does stop Fastly from introducing incorrect/rogue code in package
downloads?

Christian

From noah at coderanger.net Tue Aug 6 08:31:08 2013
From: noah at coderanger.net (Noah Kantrowitz)
Date: Mon, 5 Aug 2013 23:31:08 -0700
Subject: [Distutils] What to do about the PyPI mirrors
In-Reply-To: 
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net>

On Aug 5, 2013, at 11:11 PM, Christian Theune wrote:

> Two more things:
>
> why is the CDN not suffering from the security problems you describe for the mirrors?
>
> a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!?

They have a delegated SAN for it, which digicert (the CA) authorizes with the
domain contact (the board in this case).

> b) What does stop Fastly from introducing incorrect/rogue code in package downloads?

Basically this one boils down to personal trust from me to the Fastly team
combined with the other companies using them being very reputable. At the end
of the day, there is not currently any cryptographic mechanism preventing
Fastly from doing bad things.
--Noah

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 235 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From donald at stufft.io Tue Aug 6 08:35:52 2013
From: donald at stufft.io (Donald Stufft)
Date: Tue, 6 Aug 2013 02:35:52 -0400
Subject: [Distutils] What to do about the PyPI mirrors
In-Reply-To: 
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: <28C725E9-A2A7-47EA-9155-24502DBF237D@stufft.io>

On Aug 6, 2013, at 2:09 AM, Christian Theune wrote:

> Hi,
>
> looks like I'm late to the party to figure out that I'm going to be hurt again.
>
> I'd like to suggest explicitly considering what is going to break due to this and how much work you are forcefully inflicting on others. My whole experience around the packaging (distribute/setuptools) and mirroring/CDN this year puts the cost for my company somewhere between 10k-20k EUR just for keeping up with the breakage those changes incur. It might be that we're wonderfully stupid (..enough to contribute) and all of this causes no headaches for anybody else. Overall, guessing that the packaging infrastructure is used by probably multiple thousands of companies, I'd expect that at least 100 of them might be experiencing problems like us. Juggling arbitrary numbers, I can see that we're inflicting around a million EURs of cost that nobody asked for.
>
> More specific statements below.
>
> On 2013-08-04 22:25:01 +0000, Donald Stufft said:
>
> Here's my PEP for Deprecating and Removing the Official Public Mirrors
>
> It's source is at: https://github.com/dstufft/peps/blob/master/mirror-removal.rst
>
> Abstract
> =======
> This PEP provides a path to deprecate and ultimately remove the official
> public mirroring infrastructure for `PyPI`_. It does not propose the removal
> of mirroring support in general.
>
> -1 - maybe I don't have the right to speak up on CDN usage, but personally I feel it's a bad idea to delegate overall PyPI availability exclusively to a commercial third party. It's OK for me that we're using them to improve PyPI availability, but completely putting our faith in their hands doesn't sound right to me.

Hm. Maybe I wasn't clear here? The mirrors don't go away, the only thing that
goes away is the *.pypi.python.org names and the DNS discovery protocol.

>
> Rationale
> ========
> The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to
> mirror the content of PyPI used by the automatic installers. It also provides
> a method for autodiscovery of mirrors and a consistent naming scheme.
>
> There are a number of problems with the official public mirrors:
>
> * They give control over a \*.python.org domain name to a third party,
> allowing that third party to set or read cookies on the pypi.python.org and
> python.org domain name.
>
> Agreed, that's a problem.
>
> * The use of a sub domain of pypi.python.org means that the mirror operators
> will never be able to get a certificate of their own, and giving them
> one for a python.org domain name is unlikely to happen.
>
> Agreed.
>
> * They are often out of date, most often by several hours to a few days, but
> regularly several days and even months.
>
> That's something that the mirroring infrastructure should have been constructed for. I completely agree that the way the mirroring was established was way sub-optimal. I think we can do better.

A better mirroring protocol is on my TODO list as well but isn't particularly
related to this PEP, except that the poor protocol certainly influences how
useful the global mirrors can be.

>
> * With the introduction of the CDN on PyPI the public mirroring infrastructure
> is not as important as it once was as the CDN is also a globally distributed
> network of servers which will function even if PyPI is down.
>
> Well, now we have one breakage point more which keeps annoying me. This argument is not completely true. They may be getting better over time but we have invested heavily to accommodate the breakage - that needs to be balanced with some benefit in the near future.

Can you expand further on what you mean here? I don't believe I understand
what you're saying.

>
> * Although there is provisions in place for it, there is currently no known
> installer which uses the authenticity checks discussed in `PEP381`_ which
> means that any download from a mirror is subject to attack by a malicious
> mirror operator, but further more due to the lack of TLS it also means that
> any download from a mirror is also subject to a MITM attack.
>
> Again, I think that was a mistake during the introduction of the mirroring infrastructure: too few people, too confusing PEP.

See above about a new protocol being a TODO item for me; it will likely be
done in Warehouse.

>
> * They have only ever been implemented by one installer (pip), and its
> implementation, besides being insecure, has serious issues with performance
> and is slated for removal with it's next release (1.5).
>
> Only if you consider the mirror auto-discovery protocol. I'm not sure whether using DNS was such a smart move. A simple HTTP request to find mirrors would have been nice. I think we can still do that.
>
> Also, not everyone wants or needs auto-detection the way that the protocol describes it. I personally just hand-pick a mirror (my own, hah) and keep using that.

The auto-detection is the main thing going away. You'll still be able to hand
pick a mirror, it will just have a domain name owned by the mirror operator
instead of one owned under *.pypi.python.org.

>
> We are also thinking about providing system-level default configuration to hint tools like PIP and setuptools to a different default index that is closer from a network perspective. From a customer perspective this should be "PyPI".
>
> I'd like to avoid breakage. Again, if you don't let me choose where to spend my time, I'd rather invest the time I need for cleaning up the breakage into something constructive.
>
> The indices are in active use. f.pypi.python.org is seeing between 150-300GB of traffic per month, the patterns widely ranging over the last month. This is traffic that is not used internally by gocept.
>
> Due to the number of issues, some of them very serious, and the CDN which more
> or less provides much of the same benefits this PEP proposes to first
> deprecate and then remove the public mirroring infrastructure. The ability to
> mirror and the method of mirroring will not be affected and the existing
> public mirrors are encouraged to acquire their own domains to host their
> mirrors on if they wish to continue hosting them.
>
> The biggest benefit of the mirroring infrastructure is that it is intended to be de-centralized.
> As a community member I can step up and take over responsibility of availability, performance, and security of a mirror.
>
> As a community member I have to completely submit to whatever the CDN does and contact another community member who hopefully will be with us for a long time and stay in good contact with the CDN for us. That's centralization and I don't like that a bit.

Just to reiterate, this doesn't remove the concept of mirroring at all, but it
moves the mirrors from living under the PSF banner to living under the banner
of the mirror operators.

>
> Plan for Deprecation & Removal
> =============================
> Immediately upon acceptance of this PEP documentation on PyPI will be updated
> to reflect the deprecated nature of the official public mirrors and will
> direct users to external resources like http://www.pypi-mirrors.org/ to
> discover unofficial public mirrors if they wish to use one.
>
> On October 1st, 2013, roughly 2 months from the date of this PEP, the DNS names
> of the public mirrors ([a-g].pypi.python.org) will be changed to point back to
> PyPI which will be modified to accept requests from those domains. At this
> point in time the public mirrors will be considered deprecated.
>
> Then, roughly 2 months after the release of the first version of pip to have
> mirroring support removed (currently slated for pip 1.5) the DNS entries for
> [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will
> no longer accept requests at those domains.
>
> Oh great. That means in about 4 months I have to go through *any installation that my company maintains* and sift through whether we're still referencing f.pypi.python.org anywhere.
>
> Can I write a check?
>
> Unofficial Public or Private Mirrors
> ===================================
> The mirroring protocol will continue to exist as defined in `PEP381`_ and
> people are encouraged to utilize to host unofficial public and private mirrors
> if they so desire. For operators of unofficial public or private mirrors the
> recommended mirroring client is `Bandersnatch`_.
>
> Thanks for the recommendation.
>
> Instead of this dance breaking many things yet again, I'd love if we could find a way forward keeping the infrastructure.
>
> Some ideas:
>
> - Take control of *.pypi.python.org back
> - Record other public names of the mirrors

It's planned that an externally maintained list (at least for the time being)
will be used for recording the public names of the mirrors. Most likely
http://pypi-mirrors.org/, which also includes some information to help you
select which mirror you'd like to use and is already recommended on the
mirroring page.

> - Use 301 redirects to send old installations over to the new mirror names.

If this is done, it'd need to be cleared with Infra as far as us serving
redirects.
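[Editorial aside: the interim redirect being discussed could look roughly like this in a mirror operator's web server configuration. A sketch only; the mirror's own domain name below is invented for illustration:]

```
# Hypothetical nginx server block for the operator of f.pypi.python.org:
# while the DNS name still resolves to the mirror operator's server,
# answer those requests with a permanent redirect to the mirror's own
# domain so old installations learn the new location.
server {
    listen 80;
    server_name f.pypi.python.org;
    return 301 http://pypi.example-mirror.org$request_uri;
}
```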
However we could lengthen the timeframe to give more time to handle the
migration, and as a mirror operator you can handle redirecting from
N.pypi.python.org to your new domain until control is taken back. At some
point though the hammer needs to come down on the N.pypi.python.org names,
because a long-term goal is requiring TLS on all of python.org.

> - Make it easier for community members to help maintain the list of mirrors.

This is part of the goal of offloading mirror listing to something like
pypi-mirrors.org (as well as any other site that wants to maintain a list).

> - Make a better (faster) removal policy of mirrors if the owners are not responsive.

This becomes up to the decentralized sites to decide what constitutes a
reasonable removal policy.

> - Make it easier for other community members to set up and maintain mirrors. I'm happy to improve bandersnatch where needed.

This is partially handled by the future TODO of a better protocol, as well as
by pushing the maintenance of lists onto the community itself.

>
> Lastly, again, and I might be getting on everyone's nerves.
>
> Why does it seem that other communities have figured this out much simpler, with less hassle, and with no significant changes for years and we need to keep changing stuff over and over and over and break things over and over and over?

Other communities had the benefit of learning from our mistakes, and a lot of
the breakage in this area has been closing security holes that still exist in
those other communities.

>
> It's really hard for me to write this mail without cussing - the situation is very frustrating: the community dynamics seem to "want to move forward" where from my perspective they "wander left and right and break stuff like a drunken elephant driving a tank through the Louvre".
>
> Christian
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From donald at stufft.io Tue Aug 6 08:47:29 2013
From: donald at stufft.io (Donald Stufft)
Date: Tue, 6 Aug 2013 02:47:29 -0400
Subject: [Distutils] What to do about the PyPI mirrors
In-Reply-To: <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net>
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
	<86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net>
Message-ID: <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io>

On Aug 6, 2013, at 2:31 AM, Noah Kantrowitz wrote:

> On Aug 5, 2013, at 11:11 PM, Christian Theune wrote:
>
>> Two more things:
>>
>> why is the CDN not suffering from the security problems you describe for the mirrors?
>>
>> a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!?
>
> They have a delegated SAN for it, which digicert (the CA) authorizes with the domain contact (the board in this case).
>
>> b) What does stop Fastly from introducing incorrect/rogue code in package downloads?
>
> Basically this one boils down to personal trust from me to the Fastly team combined with the other companies using them being very reputable. At the end of the day, there is not currently any cryptographic mechanism preventing Fastly from doing bad things.

To further expand on this answer: you need to trust *someone*. If we cut out
Fastly here you could say, well, what prevents Dyn Inc (the DNS host) from
simply redirecting the DNS to a different host?
What prevents OSUOSL from simply accessing the machines stored there and doing
bad things? Hell, how many people here know the entire infrastructure team and
have personally decided to trust them?

At the end of the day you need to pick and choose who you trust. Right now
we're working on narrowing down the number of people trusted. The Python
Infrastructure team has decided it is willing to extend trust to Fastly to
cover PyPI, the same as it was willing to extend trust to Dyn, and OSUOSL, and
even the members of the Infra team.

Now that being said, narrowing the list of people you need to trust is an
ongoing goal, and one that isn't going to stop with limiting the number of
places able to publish at varying python.org domain names who don't need to
be. We're not in a particularly well-off position yet but we are getting
better all the time.

> --Noah
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL: 

From noah at coderanger.net Tue Aug 6 08:49:45 2013
From: noah at coderanger.net (Noah Kantrowitz)
Date: Mon, 5 Aug 2013 23:49:45 -0700
Subject: [Distutils] What to do about the PyPI mirrors
In-Reply-To: 
References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io>
Message-ID: <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net>

On Aug 5, 2013, at 11:09 PM, Christian Theune wrote:

> Hi,
>
> looks like I'm late to the party to figure out that I'm going to be hurt again.
>
> I'd like to suggest explicitly considering what is going to break due to this and how much work you are forcefully inflicting on others.
> My whole experience around the packaging (distribute/setuptools) and mirroring/CDN this year puts the cost for my company somewhere between 10k-20k EUR just for keeping up with the breakage those changes incur. It might be that we're wonderfully stupid (..enough to contribute) and all of this causes no headaches for anybody else. Overall, guessing that the packaging infrastructure is used by probably multiple thousands of companies, I'd expect that at least 100 of them might be experiencing problems like us. Juggling arbitrary numbers, I can see that we're inflicting around a million EURs of cost that nobody asked for.
>
> More specific statements below.
>
> On 2013-08-04 22:25:01 +0000, Donald Stufft said:
>
> Here's my PEP for Deprecating and Removing the Official Public Mirrors
>
> It's source is at: https://github.com/dstufft/peps/blob/master/mirror-removal.rst
>
> Abstract
> =======
> This PEP provides a path to deprecate and ultimately remove the official
> public mirroring infrastructure for `PyPI`_. It does not propose the removal
> of mirroring support in general.
>
> -1 - maybe I don't have the right to speak up on CDN usage, but personally I feel it's a bad idea to delegate overall PyPI availability exclusively to a commercial third party. It's OK for me that we're using them to improve PyPI availability, but completely putting our faith in their hands doesn't sound right to me.
>
> Rationale
> ========
> The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to
> mirror the content of PyPI used by the automatic installers. It also provides
> a method for autodiscovery of mirrors and a consistent naming scheme.
>
> There are a number of problems with the official public mirrors:
>
> * They give control over a \*.python.org domain name to a third party,
> allowing that third party to set or read cookies on the pypi.python.org and
> python.org domain name.
>
> Agreed, that's a problem.
>
> * The use of a sub domain of pypi.python.org means that the mirror operators
> will never be able to get a certificate of their own, and giving them
> one for a python.org domain name is unlikely to happen.
>
> Agreed.
>
> * They are often out of date, most often by several hours to a few days, but
> regularly several days and even months.
>
> That's something that the mirroring infrastructure should have been constructed for. I completely agree that the way the mirroring was established was way sub-optimal. I think we can do better.
>
> * With the introduction of the CDN on PyPI the public mirroring infrastructure
> is not as important as it once was as the CDN is also a globally distributed
> network of servers which will function even if PyPI is down.
>
> Well, now we have one breakage point more which keeps annoying me. This argument is not completely true. They may be getting better over time but we have invested heavily to accommodate the breakage - that needs to be balanced with some benefit in the near future.

To be clear, the CDN and other server-side improvements are not a hard-HA
replacement like a local company mirror. You are exactly the use case that can
and should be using a mirror for your own use. We are doing _nothing_ that
disrupts this use case and will support it exactly as before.

>
> * Although there is provisions in place for it, there is currently no known
> installer which uses the authenticity checks discussed in `PEP381`_ which
> means that any download from a mirror is subject to attack by a malicious
> mirror operator, but further more due to the lack of TLS it also means that
> any download from a mirror is also subject to a MITM attack.
>
> Again, I think that was a mistake during the introduction of the mirroring infrastructure: too few people, too confusing PEP.
>
> * They have only ever been implemented by one installer (pip), and its
> implementation, besides being insecure, has serious issues with performance
> and is slated for removal with it's next release (1.5).
>
> Only if you consider the mirror auto-discovery protocol. I'm not sure whether using DNS was such a smart move. A simple HTTP request to find mirrors would have been nice. I think we can still do that.
>
> Also, not everyone wants or needs auto-detection the way that the protocol describes it. I personally just hand-pick a mirror (my own, hah) and keep using that.
>
> We are also thinking about providing system-level default configuration to hint tools like PIP and setuptools to a different default index that is closer from a network perspective. From a customer perspective this should be "PyPI".
>
> I'd like to avoid breakage. Again, if you don't let me choose where to spend my time, I'd rather invest the time I need for cleaning up the breakage into something constructive.
>
> The indices are in active use. f.pypi.python.org is seeing between 150-300GB of traffic per month, the patterns widely ranging over the last month. This is traffic that is not used internally by gocept.
>
> Due to the number of issues, some of them very serious, and the CDN which more
> or less provides much of the same benefits this PEP proposes to first
> deprecate and then remove the public mirroring infrastructure. The ability to
> mirror and the method of mirroring will not be affected and the existing
> public mirrors are encouraged to acquire their own domains to host their
> mirrors on if they wish to continue hosting them.
>
> The biggest benefit of the mirroring infrastructure is that it is intended to be de-centralized.
> As a community member I can step up and take over responsibility of availability, performance, and security of a mirror.
> > As a community member I have to completely submit to whatever the CDN does and to contacting another community member who hopefully will be with us for a long time and stay in good contact with the CDN for us. That's centralization and I don't like that a bit. > > Plan for Deprecation & Removal > ============================== > Immediately upon acceptance of this PEP, documentation on PyPI will be updated > to reflect the deprecated nature of the official public mirrors and will > direct users to external resources like http://www.pypi-mirrors.org/ to > discover unofficial public mirrors if they wish to use one. > > On October 1st, 2013, roughly 2 months from the date of this PEP, the DNS names > of the public mirrors ([a-g].pypi.python.org) will be changed to point back to > PyPI which will be modified to accept requests from those domains. At this > point in time the public mirrors will be considered deprecated. > > Then, roughly 2 months after the release of the first version of pip to have > mirroring support removed (currently slated for pip 1.5) the DNS entries for > [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will > no longer accept requests at those domains. > > Oh great. That means in about 4 months I have to go through *any installation that my company maintains* and sift through whether we're still referencing f.pypi.python.org anywhere. > Yes, sorry, no matter how we procedurally handle this shutdown, this will be required. From doing this myself more times than I care to remember, I've not found the actual time difference between ~one month and a year+ mattering much, but having to do it at all is unlikely to change, just when and where. > Can I write a check? > > Unofficial Public or Private Mirrors > ==================================== > The mirroring protocol will continue to exist as defined in `PEP381`_ and > people are encouraged to utilize it to host unofficial public and private mirrors > if they so desire.
For operators of unofficial public or private mirrors the > recommended mirroring client is `Bandersnatch`_. > > Thanks for the recommendation. > > Instead of this dance breaking many things yet again, I'd love it if we could find a way forward keeping the infrastructure. > > Some ideas: > > - Take control of *.pypi.python.org back > - Record other public names of the mirrors > - Use 301 redirects to send old installations over to the new mirror names. > - Make it easier for community members to help maintain the list of mirrors. > - Make a better (faster) removal policy of mirrors if the owners are not responsive. > - Make it easier for other community members to set up and maintain mirrors. I'm happy to improve bandersnatch where needed. Between now and the first DNS change, I would absolutely recommend that any current public mirrors redirect users to their new domain name if they intend to have one, and we'll do whatever we can to help make users aware of the switch. I would rather have a clear timeline with fewer steps than add another stage where we (PSF) are issuing redirects to non-PSF servers. Very very +1 on the easier bandersnatch-ing though, I really would love to see more mirrors out there, I just don't want them associated with PyPI or python.org, and I don't want pip to be trying to auto-discover them. I am also hoping that pypi-mirrors.org will continue to operate as a community project (side note, I would be happy to assist with hosting for it if that's a concern of his and Ken reads this list) and that the mirror operators can develop policies for things like this. I defer to Nick and Ken if they would like distutils-sig to be involved in that process, but as it stands Ken can apply whatever rules he wants to his mirror list. --Noah -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Aug 6 08:52:41 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 02:52:41 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> Message-ID: On Aug 6, 2013, at 2:49 AM, Noah Kantrowitz wrote: > I am also hoping that pypi-mirrors.org will continue to operate as a community project (side note, I would be happy to assist with hosting for it if Ken reads this list and if thats a concern of his) and that the mirror operators can develop policies for things like this. Additionally if anyone else wants to maintain a list like this I think it would be more than appropriate to link to it in addition to pypi-mirrors.org on the page about mirroring. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Tue Aug 6 08:56:37 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 6 Aug 2013 06:56:37 +0000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> Message-ID: <20130806065637.GF11961@merlinux.eu> On Mon, Aug 05, 2013 at 23:31 -0700, Noah Kantrowitz wrote: > On Aug 5, 2013, at 11:11 PM, Christian Theune wrote: > > > Two more things: > > > > why is the CDN not suffering from the security problems you describe for the mirrors? > > > > a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!? > > They have a delegated SAN for it, which digicert (the CA) authorizes with the domain contact (the board in this case). > > > b) What does stop Fastly from introducing incorrect/rogue code in package downloads? > > Basically this one boils down to personal trust from me to the Fastly team combined with the other companies using them being very reputable. At the end of the day, there is not currently any cryptographic mechanism preventing Fastly from doing bad things. The problem is not so much trusting individuals but that the companies in question are based in the US. If its government wants to temporarily serve backdoored packages to select regions, they could silently force Fastly to do it. I guess the only way around this is to work with pypi- and eventually author/maintainer-signatures and verification. 
best, holger From noah at coderanger.net Tue Aug 6 08:59:34 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 5 Aug 2013 23:59:34 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806065637.GF11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> Message-ID: <394BFD48-0096-47B7-AE92-772A22C2C2C6@coderanger.net> On Aug 5, 2013, at 11:56 PM, holger krekel wrote: > On Mon, Aug 05, 2013 at 23:31 -0700, Noah Kantrowitz wrote: >> On Aug 5, 2013, at 11:11 PM, Christian Theune wrote: >> >>> Two more things: >>> >>> why is the CDN not suffering from the security problems you describe for the mirrors? >>> >>> a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!? >> >> They have a delegated SAN for it, which digicert (the CA) authorizes with the domain contact (the board in this case). >> >>> b) What does stop Fastly from introducing incorrect/rogue code in package downloads? >> >> Basically this one boils down to personal trust from me to the Fastly team combined with the other companies using them being very reputable. At the end of the day, there is not currently any cryptographic mechanism preventing Fastly from doing bad things. > > The problem is not so much trusting individuals but that the companies > in question are based in the US. If its government wants to temporarily > serve backdoored packages to select regions, they could silently force Fastly > to do it. I guess the only way around this is to work with pypi- and > eventually author/maintainer-signatures and verification. No, I have carefully selected whom I trust to work with on the PSF infrastructure. I can promise you there is a 100% chance that the head of Fastly would sooner shut down the company than allow a government interdiction of any kind. I extend this trust to Dyn and OSL as well, and I do not do so lightly. 
--Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Aug 6 09:01:08 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:01:08 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806065637.GF11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> Message-ID: On Aug 6, 2013, at 2:56 AM, holger krekel wrote: > On Mon, Aug 05, 2013 at 23:31 -0700, Noah Kantrowitz wrote: >> On Aug 5, 2013, at 11:11 PM, Christian Theune wrote: >> >>> Two more things: >>> >>> why is the CDN not suffering from the security problems you describe for the mirrors? >>> >>> a) Fastly seems to be the one owning the certificate for pypi.python.org. What?!? >> >> They have a delegated SAN for it, which digicert (the CA) authorizes with the domain contact (the board in this case). >> >>> b) What does stop Fastly from introducing incorrect/rogue code in package downloads? >> >> Basically this one boils down to personal trust from me to the Fastly team combined with the other companies using them being very reputable. At the end of the day, there is not currently any cryptographic mechanism preventing Fastly from doing bad things. > > The problem is not so much trusting individuals but that the companies > in question are based in the US. If its government wants to temporarily > serve backdoored packages to select regions, they could silently force Fastly > to do it. I guess the only way around this is to work with pypi- and > eventually author/maintainer-signatures and verification. PyPI is hosted in the US. Anything the Government could do to Fastly it could do to OSUOSL where PyPI is hosted.
The solution to that is signature validation but I think it's premature to worry too much about that when there are lower-hanging fruit that don't require the US Government deciding to backdoor packages. > > best, > holger > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 6 09:01:20 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 17:01:20 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: On 6 August 2013 16:09, Christian Theune wrote: > Hi, > > > looks like I'm late to the party to figure out that I'm going to be hurt > again. That's why I asked for this to be put through the PEP process: to give it more visibility, and provide more opportunity for people potentially affected to have a chance to comment and offer alternatives. Giving third parties the opportunity to read python.org cookies indefinitely isn't an option. Everything else is negotiable. > I'd like to suggest explicitly considering what is going to break due to > this and how much work you are forcefully inflicting on others. My whole > experience around the packaging (distribute/setuptools) and mirroring/CDN in > this year estimates cost for my company somewhere between 10k-20k EUR just > for keeping up with the breakage those changes incur. It might be that > we're wonderfully stupid (..enough to contribute) and all of this causes no > headaches for anybody else.
Overall, guessing that the packaging > infrastructure is used by probably multiple thousands of companies, then I'd > expect that at least 100 of them might be experiencing problems like us. > Juggling arbitrary numbers, I can see that we're inflicting around a million > EURs of cost that nobody asked for. > > > More specific statements below. > > > On 2013-08-04 22:25:01 +0000, Donald Stufft said: > > > Here's my PEP for Deprecating and Removing the Official Public Mirrors > > > Its source is at: > https://github.com/dstufft/peps/blob/master/mirror-removal.rst > > > Abstract > > ======== > > This PEP provides a path to deprecate and ultimately remove the official > > public mirroring infrastructure for `PyPI`_. It does not propose the removal > > of mirroring support in general. > > > -1 - maybe I don't have the right to speak up on CDN usage, but personally I > feel it's a bad idea to delegate overall PyPI availability exclusively to a > commercial third party. Would you be happier if it said "the current incarnation of the public > mirroring infrastructure"? I have no objections to somebody proposing > a *new* less broken mirroring process. > That's something that the mirroring infrastructure should have been > constructed for. I completely agree that the way the mirroring was > established was way sub-optimal. I think we can do better. As noted above, this PEP is about killing off the *current* public mirroring system as being irredeemably broken. If that inspires somebody to come up with a more sensible alternative, so much the better. > * With the introduction of the CDN on PyPI the public mirroring > infrastructure > > is not as important as it once was as the CDN is also a globally > distributed > > network of servers which will function even if PyPI is down.
> > > Well, now we have one breakage point more which keeps annoying me. This > argument is not completely true. They may be getting better over time but we > have invested heavily to accommodate the breakage - that needs to be balanced > with some benefit in the near future. That's why explicit mirror usage is still supported and recommended. > * Although there are provisions in place for it, there is currently no known > > installer which uses the authenticity checks discussed in `PEP381`_, which > > means that any download from a mirror is subject to attack by a malicious > > mirror operator, but furthermore due to the lack of TLS it also means > that > > any download from a mirror is also subject to a MITM attack. > > > Again, I think that was a mistake during the introduction of the mirroring > infrastructure: too few people, too confusing PEP. Which is why *this* incarnation of it needs to go away. > * They have only ever been implemented by one installer (pip), and its > > implementation, besides being insecure, has serious issues with > performance > > and is slated for removal with its next release (1.5). > > > Only if you consider the mirror auto-discovery protocol. I'm not sure > whether using DNS was such a smart move. A simple HTTP request to find > mirrors would have been nice. I think we can still do that. > > And can be done regardless of what happens to the current system. > >> Also, not everyone wants or needs auto-detection the way that the protocol > describes it. I personally just hand-pick a mirror (my own, hah) and keep > using that. > > Which will be unaffected for anyone not relying on a pypi.python.org subdomain. > >> We are also thinking about providing system-level default configuration to > hint tools like PIP and setuptools to a different default index that is > closer from a network perspective. From a customer perspective this should > be "PyPI". > > I'd like to avoid breakage.
Again, if you don't let me choose where to spend > my time, I'd rather invest the time I need for cleaning up the breakage into > something constructive. > > The indices are in active use. f.pypi.python.org is seeing between 150-300GB > of traffic per month, the patterns widely ranging over the last month. This > is traffic that is not used internally from gocept. I think it would be suitable for the PEP to include an escape clause for maintainers of a domain to request that the PSF infrastructure team keep their subdomain active for longer than the general timeframe proposed, with a 301 redirect to a new host. This will need to be worked out between the infrastructure team and the maintainers of the specific instance. > > The biggest benefit of the mirroring infrastructure is that it is intended > to be de-centralized. > > As a community member I can step up and take over responsibility of > availability, performance, and security of a mirror. And, indeed, that is still fully supported. What's going away is the delegation of pypi.python.org subdomains and the associated mirror auto-discovery system. There is no near term plan to create a replacement. > As a community member I have to completely submit to whatever the CDN does > and contacting another community member who hopefully will be with us for a > long time and stay in good contact with the CDN for us. That's > centralization and I don't like that a bit. Strictly speaking, you're submitting to the PSF infrastructure team, who manage the relationship with Fastly. Those interested in joining the infrastructure SIG can sign up here: http://mail.python.org/mailman/listinfo/infrastructure > Then, roughly 2 months after the release of the first version of pip to have > > mirroring support removed (currently slated for pip 1.5) the DNS entries for > > [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will > > no longer accept requests at those domains. > > > Oh great.
That means in about 4 months I have to go through *any > installation that my company maintains* and sift through whether we're still > referencing f.pypi.python.org anywhere. > > > Can I write a check? I think it makes sense for maintainers of particular mirrors to request a stay of execution until their traffic logs show everything coming in under an updated FQDN. > Some ideas: > > - Take control of *.pypi.python.org back > > - Record other public names of the mirrors > > - Use 301 redirects to send old installations over to the new mirror names. > > I think it makes sense for mirror maintainers to be able to request this process over the default handling (redirection to the PyPI CDN) > - Make it easier for community members to help maintain the list of mirrors. > > - Make a better (faster) removal policy of mirrors if the owners are not > responsive. For these two points, I think having the PEP cover an addition and removal process for http://www.pypi-mirrors.org/ might make sense (assuming Ken is amenable to the idea). > - Make it easier for other community members to set up and maintain mirrors. > I'm happy to improve bandersnatch where needed. > > > Lastly, again, and I might be getting on everyone's nerves. > > > Why does it seem that other communities have figured this out much simpler, > with less hassle, and with no significant changes for years and we need to > keep changing stuff over and over and over and break things over and over > and over. Because the current structure of PyPI is fundamentally flawed, and we're still suffering the consequences more than a decade later. A software distribution index server should be a static filesystem that contains all the necessary metadata (including signatures) and can be mirrored with rsync. PyPI is far from being that :P Perl gets credit for CPAN, but something I only realised recently is that they probably deserve more credit for PAUSE, which is the *upload* side of CPAN.
Much of the CPAN metadata is derived directly from the distributed software by PAUSE rather than relying on client-side tools. That means CPAN can publish new metadata just by upgrading PAUSE - they don't need to worry about how people are doing the uploads. Also, CPAN, like Linux distro trees, can be mirrored with rsync rather than needing a custom client. It's much easier to maintain backwards compatibility when the only required server API is the ability to serve static files. The only things that have changed recently are that: - the rubygems.org compromise has made it obvious that sticking our heads in the sand and trusting the fact that there are easier targets out there to protect us is no longer an adequate answer - we've made the decision to try to fix the underlying brokenness rather than living with it forever - we have people willing to do the work to make that happen > It's really hard for me to write this mail without cussing - the situation > is very frustrating: the community dynamics seem to "want to move forward" > where they from my perspective "wander left and right and break stuff like a > drunken elephant driving a tank through the Louvre". Hence the PEP process. That's the opportunity for those that are more aware of the backwards compatibility issues to provide a course correction to those of us that are concerned with starting to close the many and varied security vulnerabilities in the PyPI ecosystem. Cheers, Nick. From regebro at gmail.com Tue Aug 6 08:36:21 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 6 Aug 2013 08:36:21 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: > -1 - maybe I don't have the right to speak up on CDN usage, but personally I > feel it's a bad idea to delegate overall PyPI availability exclusively to a > commercial third party.
Well, it's been done, and it was always a better idea than the way mirrors were implemented. > It's OK for me that we're using them to improve PyPI > availability, but completely putting our faith in their hands, doesn't sound > right to me. We must put our faith in somebody's hands with regards to PyPI. That hasn't changed. > That's something that the mirroring infrastructure should have been > constructed for. I completely agree that the way the mirroring was > established was way sub-optimal. I think we can do better. Only by building our own CDN. We won't do better than the ones that exist. > Well, now we have one breakage point more which keeps annoying me. We do? How? > Also, not everyone wants or needs auto-detection the way that the protocol > describes it. I personally just hand-pick a mirror (my own, hah) and keep > using that. I agree that this is probably the best choice, and you can still do that. > I'd like to avoid breakage. Again, if you don't let me choose where to spend > my time, I'd rather invest the time I need for cleaning up the breakage into > something constructive. The only breakage I can see in this proposal is that the [a-z] dns names go away. That would take four months. I think perhaps that's a bit short. I don't see why we can't keep them around for much longer. A way to find mirrors is needed, but perhaps not automatic, but for when pypi goes down. //Lennart From holger at merlinux.eu Tue Aug 6 09:10:32 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 6 Aug 2013 07:10:32 +0000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> Message-ID: <20130806071032.GG11961@merlinux.eu> On Mon, Aug 05, 2013 at 23:49 -0700, Noah Kantrowitz wrote: > On Aug 5, 2013, at 11:09 PM, Christian Theune wrote: > (...)
> Between now and the first DNS change, I would absolutely recommend any > current public mirrors to redirect users to their new domain name if > they intend to have one, and we'll do whatever we can to help make > users aware of the switch. I would rather have a clear timeline with > fewer steps than add another stage where we (PSF) are issuing > redirects to non-PSF servers. Very very +1 on the easier > bandersnatch-ing though, I really would love to see more mirrors out > there, I just don't want them associated with PyPI or python.org, and > I don't want pip to be trying to auto-discover them. PyPI mirrors _are_ associated with PyPI and pypi.python.org. (Why) Do you want to flatly rule out pip/pypi.python.org support for managing mirrors? The perl CPAN mirroring provides this nice little machine-readable file: http://www.cpan.org/indices/mirrors.json and a python-equivalent could be consumed by pip, i guess. best, holger From donald at stufft.io Tue Aug 6 09:11:49 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:11:49 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: On Aug 6, 2013, at 2:36 AM, Lennart Regebro wrote: > The only breakage I can see in this proposal is that the [a-z] dns > names go away. That would take four months. I think perhaps that's a > bit short. I don't see why we can't keep them around for much longer. I think we're all willing to increase the time :) The current timeframe was somewhat arbitrarily suggested by Noah and I just rolled with it for a first draft of the PEP figuring if it was too short someone would (hopefully!) speak up. The other breakage is people relying on --use-mirrors in PyPI but that is NOPd in the upcoming pip 1.5, and the side effect of removing these names is that older versions of pip simply won't get mirroring support (its mirroring support still hits PyPI itself).
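Holger's CPAN-style suggestion above could look something like this on the client side; the URL and JSON schema below are entirely hypothetical, modeled loosely on CPAN's mirrors.json:

```python
import json
from urllib.request import urlopen

# Hypothetical location of a machine-readable mirror list; no such
# file exists yet for PyPI.
MIRROR_LIST_URL = "https://example.org/pypi-mirrors.json"

def load_mirrors(url=MIRROR_LIST_URL):
    """Fetch and parse the (hypothetical) mirror list."""
    with urlopen(url) as resp:
        return json.load(resp)

def pick_mirror(mirrors, country=None):
    """Prefer a mirror in the given country; fall back to the first entry."""
    if country:
        for mirror in mirrors:
            if mirror.get("country") == country:
                return mirror["url"]
    return mirrors[0]["url"] if mirrors else None
```

pip could then treat the chosen URL as its default index, with the master as fallback, without any DNS-based auto-discovery.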
> > A way to find mirrors is needed, but perhaps not automatic, but for > when pypi goes down. Thank you for calling this out; I forgot to include in the PEP that moving the mirror listing off of PyPI itself means that we increase the chances that the listing will be available if PyPI itself happens to be down. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From martin at v.loewis.de Tue Aug 6 09:03:47 2013 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 06 Aug 2013 09:03:47 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806065637.GF11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> Message-ID: <20130806090347.Horde.c693aSIeCwcjV6GQWxyvag4@webmail.df.eu> Quoting holger krekel : > The problem is not so much trusting individuals but that the companies > in question are based in the US. If its government wants to temporarily > serve backdoored packages to select regions, they could silently force Fastly > to do it. I guess the only way around this is to work with pypi- and > eventually author/maintainer-signatures and verification. Both are actually in place, just not widely used. Each simple page gets a PyPI signature, in /serversig, which would allow one to validate that a mirror or the CDN has the copy that is also on the master. For author signatures, PGP has been available for quite some time. As with any author signature, you then need to convince yourself that the key actually belongs to the author.
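To illustrate the /serversig mechanism Martin describes: PyPI signs each /simple page with a server-side DSA key (the public half is published at /serverkey), and a client can check that a mirror's copy matches what the master signed. A local demonstration of the primitive using the third-party cryptography package; the key size, hash choice, and key format here are stand-ins, not necessarily what PyPI used:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

def verify_page(server_public_key, page_bytes, signature):
    """Return True if signature is a valid DSA signature over page_bytes."""
    try:
        server_public_key.verify(signature, page_bytes, hashes.SHA1())
        return True
    except InvalidSignature:
        return False

# Local stand-ins for PyPI's server key and a /simple page.
server_key = dsa.generate_private_key(key_size=1024)
page = b"<html><a href='Pelican-3.2.tar.gz'>Pelican-3.2.tar.gz</a></html>"
serversig = server_key.sign(page, hashes.SHA1())  # what /serversig would carry

assert verify_page(server_key.public_key(), page, serversig)
assert not verify_page(server_key.public_key(), page + b" tampered", serversig)
```

As Martin notes, this only proves a mirror matches the master; it says nothing about author authenticity, which is what the PGP signatures are for.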
Regards, Martin From noah at coderanger.net Tue Aug 6 09:13:01 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 6 Aug 2013 00:13:01 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: <69DE74B6-AAE6-45DC-99DB-070B73BDCC96@coderanger.net> On Aug 6, 2013, at 12:01 AM, Nick Coghlan wrote: > On 6 August 2013 16:09, Christian Theune wrote: >> Hi, >> >> >> looks like I'm late to the party to figure out that I'm going to be hurt >> again. > > That's why I asked for this to be put through the PEP process: to give > it more visibility, and provide more opportunity for people > potentially affected to have a chance to comment and offer > alternatives. Giving third parties the opportunity to read python.org > cookies indefinitely isn't an option. > > Everything else is negotiable. > >> I'd like to suggest explicitly considering what is going to break due to >> this and how much work you are forcefully inflicting on others. My whole >> experience around the packaging (distribute/setuptools) and mirroring/CDN in >> this year estimates cost for my company somewhere between 10k-20k EUR just >> for keeping up with the breakage those changes incure. It might be that >> we're wonderfully stupid (..enough to contribute) and all of this causes no >> headaches for anybody else ?. Overall, guessing that the packaging >> infrastructure is used by probably multiple thousands of companies then I'd >> expect that at least 100 of them might be experiencing problems like us. >> Juggling arbritrary numbers I can see that we're inflicting around a million >> EURs of cost that nobody asked for. >> >> >> More specific statements below. 
>> >> >> On 2013-08-04 22:25:01 +0000, Donald Stufft said: >> >> >> Here's my PEP for Deprecating and Removing the Official Public Mirrors >> >> >> It's source is at: >> https://github.com/dstufft/peps/blob/master/mirror-removal.rst >> >> >> Abstract >> >> ======= >> >> This PEP provides a path to deprecate and ultimately remove the official >> >> public mirroring infrastructure for `PyPI`_. It does not propose the removal >> >> of mirroring support in general. >> >> >> -1 - maybe I don't have the right to speak up on CDN usage, but personally I >> feel it's a bad idea to delegate overall PyPI availability exclusively to a >> commercial third party. It's OK for me that we're using them to improve PyPI >> availability, but completely putting our faith in their hands, doesn't sound >> right to me. > > Would you be happier if it said "the current incarnation of the public > mirroring infrastructure"? I have no objections to somebody proposing > a *new* less broken mirroring process. > >> That's something that the mirroring infrastructure should have been >> constructed for. I completely agree that the way the mirroring was >> established was way sub-optimal. I think we can do better. > > As noted above, this PEP is about killing off the *current* public > mirroring system as being irredeemably broken. If that inspires > somebody to come up with a more sensible alternative, so much the > better. > >> * With the introduction of the CDN on PyPI the public mirroring >> infrastructure >> >> is not as important as it once was as the CDN is also a globally >> distributed >> >> network of servers which will function even if PyPI is down. >> >> >> Well, now we have one breakage point more which keeps annoying me. This >> argument is not completely true. They may be getting better over time but we >> have invested heavily to accomodate the breakage - that needs to be balanced >> with some benefit in the near future. 
> > That's why explicit mirror usage is still supported and recommended. > >> * Although there is provisions in place for it, there is currently no known >> >> installer which uses the authenticity checks discussed in `PEP381`_ which >> >> means that any download from a mirror is subject to attack by a malicious >> >> mirror operator, but further more due to the lack of TLS it also means >> that >> >> any download from a mirror is also subject to a MITM attack. >> >> >> Again, I think that was a mistake during the introduction of the mirroring >> infrastructure: too few people, too confusing PEP. > > Which is why *this* incarnation of it needs to go away. > >> * They have only ever been implemented by one installer (pip), and its >> >> implementation, besides being insecure, has serious issues with >> performance >> >> and is slated for removal with it's next release (1.5). >> >> >> Only if you consider the mirror auto-discovery protocol. I'm not sure >> whether using DNS was such a smart move. A simple HTTP request to find >> mirrors would have been nice. I think we can still do that. > > And can be done regardless of what happens to the current system. > >> Also, not everyone wants or needs auto-detection the way that the protocol >> describes it. I personally just hand-pick a mirror (my own, hah) and keep >> using that. > > Which will be unaffected for anyone not relying on a pypi.python.org subdomain. > >> We are also thinking about providing system-level default configuration to >> hint tools like PIP and setuptools to a different default index that is >> closer from a network perspective. From a customer perspective this should >> be "PyPI". >> >> I'd like to avoid breakage. Again, if you don't let me choose where to spend >> my time, I'd rather invest the time I need for cleaning up the breakage into >> something constructive. >> >> The indices are in active use. 
f.pypi.python.org is seeing between 150-300GB >> of traffic per month, the patterns widely ranging over the last month. This >> is traffic that is not used internally from gocept. > > I think it would be suitable for the PEP to include an escape clause > for maintainers of a domain to request that the PSF infrastructure > team keep their subdomain active for longer than the general timeframe > proposed, with a 301 redirect to a new host. This will need to be > worked out between the infrastructure team and the maintainers of > the specific instance. > > >> The biggest benefit of the mirroring infrastructure is that it is intended >> to be de-centralized. >> >> As a community member I can step up and take over responsibility of >> availability, performance, and security of a mirror. > > And, indeed, that is still fully supported. What's going away is the > delegation of pypi.python.org subdomains and the associated mirror > auto-discovery system. There is no near term plan to create a > replacement. > >> As a community member I have to completely submit to whatever the CDN does >> and contacting another community member who hopefully will be with us for a >> long time and stay in good contact with the CDN for us. That's >> centralization and I don't like that a bit. > > Strictly speaking, you're submitting to the PSF infrastructure team, > who manage the relationship with Fastly. Those interested in joining > the infrastructure SIG can sign up here: > http://mail.python.org/mailman/listinfo/infrastructure All members of the Infra Staff team have access to the Fastly admin panel, and Fastly is aware of all of us as authorized to work on the PSF account. Bus factor of 4 is definitely not perfect, but as the one that is responsible for safeguarding such things, I am okay with it for now.
> > >> Then, roughly 2 months after the release of the first version of pip to have >> >> mirroring support removed (currently slated for pip 1.5) the DNS entries for >> >> [a-g].pypi.python.org and last.pypi.python.org will be removed and PyPI will >> >> no longer accept requests at those domains. >> >> >> Oh great. That means in about 4 months I have to go through *any >> installation that my company maintains* and sift through whether we're still >> referencing f.pypi.python.org anywhere. >> >> >> Can I write a check? > > I think it makes sense for maintainers of particular mirrors to > request a stay of execution until their traffic logs show everything > coming in under an updated FQDN. > >> Some ideas: >> >> - Take control of *.pypi.python.org back >> >> - Record other public names of the mirrors >> >> - Use 301 redirects to send old installations over to the new mirror names. > > I think it makes sense for mirror maintainers to be able to request > this process over the default handling (redirection to the PyPI CDN) > >> - Make it easier for community members to help maintain the list of mirrors. >> >> - Make a better (faster) removal policy of mirrors if the owners are not >> responsive. > > For these two points, I think having the PEP cover an addition and > removal process for http://www.pypi-mirrors.org/ might make sense > (assuming Ken is amenable to the idea). > >> - Make it easier for other community members to set up and maintain mirrors. >> I'm happy to improve bandersnatch where needed. >> >> >> Lastly, again, and I might be getting on everyones nerves. >> >> >> Why does it seem that other communities have figured this out much simpler, >> with less hassle, and with no significant changes for years and we need to >> keep changing stuff over and over and over and break things over and over >> and over. > > Because the current structure of PyPI is fundamentally flawed, and > we're still suffering the consequences more than a decade later. 
A > software distribution index server should be a static filesystem that > contains all the necessary metadata (including signatures) and can be > mirrored with rsync. PyPI is far from being that :P > > Perl gets credit for CPAN, but something I only realised recently is > that they probably deserve more credit for PAUSE, which is the > *upload* side of CPAN. Much of the CPAN metadata is derived directly > from the distributed software by PAUSE rather than relying on client > side tools. That means CPAN can publish new metadata just by upgrading > PAUSE - they don't need to worry about how people are doing the > uploads. > > Also, CPAN, like Linux distro trees, can be mirrored with rsync rather > than needing a custom client. It's much easier to maintain backwards > compatibility when the only required server API is the ability to > serve static files. > I will fight any attempt to do this with every fiber of my being. This kind of "dumb server" API means that any metadata indexing or searching either needs to be precomputed or implemented in a much more intelligent client. This is already somewhat the case with pip, and as someone that has to deal with multiple client implementations it makes me very sad that I can't just call a REST endpoint to know what will be installed when I do a thing. 
This is neither here nor there, but I wanted to stake out my grounds so I can growl when people get too close :) > The only things that have changed recently are that: > - the rubygems.org compromise has made it obvious that sticking our > heads in the sand and trusting the fact that there are easier targets > out there to protect us is no longer an adequate answer > - we've made the decision to try to fix the underlying brokenness > rather than living with it forever > - we have people willing to do the work to make that happen There is also finally much closer coordination between the whole stack of the packaging teams, which means that changes that once took years can now happen in a day or two. This definitely manifests as a possibly frustrating rate of changes compared to previously. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Tue Aug 6 09:15:32 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 6 Aug 2013 07:15:32 +0000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: <20130806071532.GH11961@merlinux.eu> On Tue, Aug 06, 2013 at 08:36 +0200, Lennart Regebro wrote: > > Well, now we have one breakage point more which keeps annoying me. > > We do? How? Christian, Donald and I invested considerable debugging time, repeatedly, to accommodate Fastly/CDN issues. It required multiple rounds of changes on bandersnatch, devpi and to pypi.python.org source code. Apart from that there have been intermittent install/cache-inconsistency failures. Due to the fast response times of the people involved most of these issues didn't last for too long but the CDN did introduce a new breakage point.
holger From martin at v.loewis.de Tue Aug 6 09:15:18 2013 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 06 Aug 2013 09:15:18 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: <20130806091518.Horde.AvYcGiqWmXqaqtxujgTzYg1@webmail.df.eu> Quoting Nick Coghlan : > On 6 August 2013 16:09, Christian Theune wrote: >> Hi, >> >> >> looks like I'm late to the party to figure out that I'm going to be hurt >> again. > > That's why I asked for this to be put through the PEP process: to give > it more visibility, and provide more opportunity for people > potentially affected to have a chance to comment and offer > alternatives. Giving third parties the opportunity to read python.org > cookies indefinitely isn't an option. Define "third party". There are a number of organisations other than the PSF that can read python.org cookies. As Noah explains, it's a matter of trust. Noah chooses to trust Fastly, I choose to trust Christian Theune. We both have then imposed our trust on the community. In any case, I consider the cookie issue a red herring. Mirror operators could only steal cookies if users actually pointed their web browsers to the mirrors. They typically don't, since they use setuptools or pip, which doesn't even have access to the cookies. And, if a mirror operator actually does request cookies, there is a high risk in being caught in doing so. If that happens, the mirror operator will not only lose the mirror, but also lose community trust. 
Regards, Martin From donald at stufft.io Tue Aug 6 09:17:35 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:17:35 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806090347.Horde.c693aSIeCwcjV6GQWxyvag4@webmail.df.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> <20130806090347.Horde.c693aSIeCwcjV6GQWxyvag4@webmail.df.eu> Message-ID: On Aug 6, 2013, at 3:03 AM, martin at v.loewis.de wrote: > > Quoting holger krekel : > >> The problem is not so much trusting individuals but that the companies >> in question are based in the US. If its government wants to temporarily >> serve backdoored packages to select regions, they could silently force Fastly >> to do it. I guess the only way around this is to work with pypi- and >> eventually author/maintainer-signatures and verification. > > Both are actually in place, just not widely used. Each simple page gets > a pypi signature, in /serversig, which would allow one to validate that a > mirror or the CDN has the copy that is also on the master. Unless I'm forgetting something there's no real way to get the server key without going through Fastly, and even if there was Fastly could just hijack an upload (and murder their entire business in the process). > > Regards, > Martin > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 6 09:19:39 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 17:19:39 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <69DE74B6-AAE6-45DC-99DB-070B73BDCC96@coderanger.net> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <69DE74B6-AAE6-45DC-99DB-070B73BDCC96@coderanger.net> Message-ID: On 6 August 2013 17:13, Noah Kantrowitz wrote: >> Also, CPAN, like Linux distro trees, can be mirrored with rsync rather >> than needing a custom client. It's much easier to maintain backwards >> compatibility when the only required server API is the ability to >> serve static files. >> > > I will fight any attempt to do this with every fiber of my being. This kind of "dumb server" API means that any metadata indexing or searching either needs to be precomputed or implemented in a much more intelligent client. This is already somewhat the case with pip, and as someone that has to deal with multiple client implementations it makes me very sad that I can't just call a REST endpoint to know what will be installed when I do a thing. This is neither here nor there, but I wanted to stake out my grounds so I can growl when people get too close :) I agree having a smart server is good, I just think exposing a dumb, easy to mirror, signed data store is good, too :) Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From noah at coderanger.net Tue Aug 6 09:19:51 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 6 Aug 2013 00:19:51 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806071032.GG11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> <20130806071032.GG11961@merlinux.eu> Message-ID: <50DEFF5A-314A-431A-9C9D-5F5AF8FCA037@coderanger.net> On Aug 6, 2013, at 12:10 AM, holger krekel wrote: > On Mon, Aug 05, 2013 at 23:49 -0700, Noah Kantrowitz wrote: >> On Aug 5, 2013, at 11:09 PM, Christian Theune wrote: >> (...) >> Between now and the first DNS change, I would absolutely recommend any >> current public mirrors to redirect users to their new domain name if >> they intend to have one, and we'll do whatever we can to help make >> users aware of the switch. I would rather have a clear timeline with >> fewer steps than add another stage where we (PSF) are issuing >> redirects to non-PSF servers. Very very +1 on the easier >> bandersnatch-ing though, I really would love to see more mirrors out >> there, I just don't want them associated with PyPI or python.org, and >> I don't want pip to be trying to auto-discover them. > > PyPI mirrors _are_ associated with PyPI and pypi.python.org. > (Why) Do you want to flatly rule out pip/pypi.python.org support > for managing mirrors? > > The perl CPAN mirroring provides this nice little machine-readable file: > > http://www.cpan.org/indices/mirrors.json > > and a python-equivalent could be consumed by pip, I guess. Because at this time there is no Python package installer that can install from a public mirror in a way that makes me comfortable supporting it as an official resource. This could be addressed in pip by verifying the /simple signatures, but this mostly precludes improved mirroring mechanisms like that used by Crate.
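The machine-readable mirror list holger points to could be consumed by a client along these lines. A minimal sketch: the document shape below is modelled loosely on CPAN's mirrors.json, and the field names (`mirrors`, `host`, `scheme`, `country`) are invented for illustration — no such PyPI document exists:

```python
import json

# Hypothetical mirror-list document; only pypi.hustunique.com is a
# real mirror mentioned in this thread, the rest is made up.
SAMPLE = """
{
  "mirrors": [
    {"host": "pypi.example.org", "scheme": "https", "country": "DE"},
    {"host": "pypi.hustunique.com", "scheme": "http", "country": "CN"}
  ]
}
"""

def index_urls(doc, country=None):
    """Turn a mirror-list document into /simple/ index URLs,
    optionally filtered by two-letter country code."""
    mirrors = json.loads(doc)["mirrors"]
    if country is not None:
        mirrors = [m for m in mirrors if m["country"] == country]
    return ["%s://%s/simple/" % (m["scheme"], m["host"]) for m in mirrors]
```

The appeal of this approach over DNS-based discovery is that the list is just a static file: it can be fetched over HTTPS from one trusted origin, cached, and audited, while the mirrors themselves stay on their own domains.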
More to the point, I as the head of infrastructure am responsible for *.python.org, but if there is an issue with a mirror, be it downtime, server compromise, or anything else, my team and I can't do anything to fix that. This is, again, not a situation I am comfortable with. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 235 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Aug 6 09:23:44 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:23:44 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806091518.Horde.AvYcGiqWmXqaqtxujgTzYg1@webmail.df.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806091518.Horde.AvYcGiqWmXqaqtxujgTzYg1@webmail.df.eu> Message-ID: <28593307-3E50-4D04-8267-41804A3054F0@stufft.io> On Aug 6, 2013, at 3:15 AM, martin at v.loewis.de wrote: > > Quoting Nick Coghlan : > >> On 6 August 2013 16:09, Christian Theune wrote: >>> Hi, >>> >>> >>> looks like I'm late to the party to figure out that I'm going to be hurt >>> again. >> >> That's why I asked for this to be put through the PEP process: to give >> it more visibility, and provide more opportunity for people >> potentially affected to have a chance to comment and offer >> alternatives. Giving third parties the opportunity to read python.org >> cookies indefinitely isn't an option. > > Define "third party". There are a number of organisations other than the > PSF that can read python.org cookies. > > As Noah explains, it's a matter of trust. Noah chooses to trust Fastly, > I choose to trust Christian Theune. We both have then imposed our trust > on the community. Sure, but there's also the matter of the *number* of people trusted; each new person trusted is another potential pain point. There's really no requirement to have the mirrors hosted on N.pypi.python.org.
The fact they do is a legacy issue that can be corrected with a much better story for reliability and security. > > In any case, I consider the cookie issue a red herring. Mirror operators > could only steal cookies if users actually pointed their web browsers to > the mirrors. They typically don't, since they use setuptools or pip, > which doesn't even have access to the cookies. And, if a mirror operator > actually does request cookies, there is a high risk in being caught in > doing so. If that happens, the mirror operator will not only lose the mirror, > but also lose community trust. The cookie issue is very serious because it does not require someone to knowingly point their browser at N.pypi.python.org. A mirror operator could simply inline an image tag in a package page; someone views the page, their browser automatically makes a request to N.pypi.python.org which is sent the cookie, and a script on N.pypi.python.org can read it. As for the claim that there is a high risk of being caught: there isn't, really. It would be very easy to do this nearly silently. > > Regards, > Martin > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From holger at merlinux.eu Tue Aug 6 09:24:30 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 6 Aug 2013 07:24:30 +0000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <69DE74B6-AAE6-45DC-99DB-070B73BDCC96@coderanger.net> Message-ID: <20130806072430.GI11961@merlinux.eu> On Tue, Aug 06, 2013 at 17:19 +1000, Nick Coghlan wrote: > On 6 August 2013 17:13, Noah Kantrowitz wrote: > >> Also, CPAN, like Linux distro trees, can be mirrored with rsync rather > >> than needing a custom client. It's much easier to maintain backwards > >> compatibility when the only required server API is the ability to > >> serve static files. > >> > > > > I will fight any attempt to do this with every fiber of my being. This kind of "dumb server" API means that any metadata indexing or searching either needs to be precomputed or implemented in a much more intelligent client. This is already somewhat the case with pip, and as someone that has to deal with multiple client implementations it makes me very sad that I can't just call a REST endpoint to know what will be installed when I do a thing. This is neither here nor there, but I wanted to stake out my grounds so I can growl when people get too close :) > > I agree having a smart server is good, I just think exposing a dumb, > easy to mirror, signed data store is good, too :) FWIW I think CPAN is structured such that search sites operate on mirrored data. The master server thus can remain dumb. Sounds like a good recipe to me. It's a bit sad but i think even now we are struggling to meet CPAN's architecture and ease-of-use, let alone improve on it. 
holger From martin at v.loewis.de Tue Aug 6 09:20:04 2013 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 06 Aug 2013 09:20:04 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> >> Well, now we have one breakage point more which keeps annoying me. > > We do? How? Installations that mention a specific mirror in their configuration file (such as f.pypi.python.org) will break when the DNS name is removed. >> Also, not everyone wants or needs auto-detection the way that the protocol >> describes it. I personally just hand-pick a mirror (my own, hah) and keep >> using that. > > I agree that this is probably the best choice, and you can still do that. See above. He did that, and the PyPI maintainers will break it. Regards, Martin From donald at stufft.io Tue Aug 6 09:27:43 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:27:43 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806072430.GI11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <69DE74B6-AAE6-45DC-99DB-070B73BDCC96@coderanger.net> <20130806072430.GI11961@merlinux.eu> Message-ID: <8F78ED33-736F-4FAA-9958-0447FEFED578@stufft.io> On Aug 6, 2013, at 3:24 AM, holger krekel wrote: > FWIW I think CPAN is structured such that search sites operate > on mirrored data. The master server thus can remain dumb. > Sounds like a good recipe to me. > > It's a bit sad but i think even now we are struggling to meet CPAN's > architecture and ease-of-use, let alone improve on it. Changes like this aren't off the table in the future. Right now a lot of the work is being done to bring some level of sanity to what is already there and then make it possible to iterate and hash out what we want a python package index to look like.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From martin at v.loewis.de Tue Aug 6 09:29:53 2013 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 06 Aug 2013 09:29:53 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> <20130806090347.Horde.c693aSIeCwcjV6GQWxyvag4@webmail.df.eu> Message-ID: <20130806092953.Horde.Fx0R_ljqOdWcoK5dAHvN2g5@webmail.df.eu> Quoting Donald Stufft : > Unless I'm forgetting something there's no real way to get the server key > without going through Fastly You should have a copy of the server key upfront, on your disk. You can still get it directly from pypi with HTTP request to pypi.into.python.org/serverkey. > and even if there was Fastly could just hijack > an upload (and murder their entire business in the process). Couldn't you also use pypi.int.python.org for uploading? Regards, Martin From donald at stufft.io Tue Aug 6 09:30:04 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:30:04 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> Message-ID: On Aug 6, 2013, at 3:20 AM, martin at v.loewis.de wrote: >>> Well, now we have one breakage point more which keeps annoying me. >> >> We do? How? 
> > Installations that mention a specific mirror in their configuration file > (such as f.pypi.python.org) will break when the DNS name is removed. I don't see how this is relevant to his statement? He was talking about the CDN and I was asking him to clarify. > >>> Also, not everyone wants or needs auto-detection the way that the protocol >>> describes it. I personally just hand-pick a mirror (my own, hah) and keep >>> using that. >> >> I agree that this is probably the best choice, and you can still do that. > > See above. He did that, and the PyPI maintainers will break it. I don't think anyone's claimed that removing the names won't break things for people who directly referenced them, but it's an important step that we do that. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue Aug 6 09:32:39 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:32:39 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806092953.Horde.Fx0R_ljqOdWcoK5dAHvN2g5@webmail.df.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <20130806065637.GF11961@merlinux.eu> <20130806090347.Horde.c693aSIeCwcjV6GQWxyvag4@webmail.df.eu> <20130806092953.Horde.Fx0R_ljqOdWcoK5dAHvN2g5@webmail.df.eu> Message-ID: <74CD61E7-FE3A-48E7-BC3F-7E6BF6E849F2@stufft.io> On Aug 6, 2013, at 3:29 AM, martin at v.loewis.de wrote: > > Quoting Donald Stufft : > >> Unless I'm forgetting something there's no real way to get the server key >> without going through Fastly > > You should have a copy of the server key upfront, on your disk.
> > You can still get it directly from pypi with HTTP request to > pypi.int.python.org/serverkey. > >> and even if there was Fastly could just hijack >> an upload (and murder their entire business in the process). > > Couldn't you also use pypi.int.python.org for uploading? > > Regards, > Martin > > pypi.int.python.org is not a public name and has no promise of existing tomorrow. Even if it were, it's HTTP-only, and thus now you have an attacker who can substitute his own key for the server key and his own serversig for packages downloaded over HTTP from a mirror. The same thing applies to uploading, so you remove the possibility of Fastly attacking you and open up the much wider chance that a MITM would attack you. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 6 09:47:38 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 17:47:38 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> Message-ID: On 6 August 2013 17:30, Donald Stufft wrote: > On Aug 6, 2013, at 3:20 AM, martin at v.loewis.de wrote: >> See above. He did that, and the PyPI maintainers will break it. > > I don't think anyone's claimed that removing the names won't break things for > people who directly referenced them, but it's an important step that we do that. Right, but I think it's one where we can offer responsive mirror maintainers a generous time frame. We're down to only 5 mirrors using the *.pypi.python.org naming scheme anyway, so we should probably include contacting the maintainers directly in the transition plan.
That makes the process: - we immediately stop handing out any new *.pypi.python.org mirror names (this has effectively happened already, the PEP will just be making it official) - the operators of the 5 current *.pypi.python.org mirrors are contacted directly, informing them of the plan to deprecate and remove those domain names, and offering the choice of two alternatives: 1. After 2 months (or earlier if requested), the domain name is redirected to the PyPI CDN and the mirror is effectively retired. 2 months after the release of pip 1.5, the name is removed entirely 2. The mirror operator establishes a 301 redirect to a HTTPS capable domain name they control and negotiates the time frame for retirement and removal of the *.pypi.python.org domain record with the PSF infrastructure team - after two months, last.pypi.python.org and any *.pypi.python.org mirror names which didn't request option 2 above are redirected to the CDN - two months after the release of pip 1.5, last.pypi.python.org and any *.pypi.python.org mirror names which didn't request option 2 above are removed from the DNS - the exact time frames for option 2 above will be worked out individually with the mirror operators that request it (that would be at least Christian for f.pypi.python.org, and perhaps some of the other mirror operators if they also choose option 2) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From regebro at gmail.com Tue Aug 6 09:53:05 2013 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 6 Aug 2013 09:53:05 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <20130806071032.GG11961@merlinux.eu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <0B107FE0-6211-4A53-BE7B-D0AFAE0AB7AE@coderanger.net> <20130806071032.GG11961@merlinux.eu> Message-ID: On Tue, Aug 6, 2013 at 9:10 AM, holger krekel wrote: > PyPI mirrors _are_ associated with PyPI and pypi.python.org. 
> (Why) Do you want to flatly rule out pip/pypi.python.org support > for managing mirrors? Automatic mirror discovery opens extra security holes until we have found some way to tighten up the security in general. Once we have a way of verifying packages that works and doesn't rely on the mirror you are using, we could add it back. Indeed, just having a json list makes sense. //Lennart From donald at stufft.io Tue Aug 6 09:59:47 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 03:59:47 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> Message-ID: <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> On Aug 6, 2013, at 3:47 AM, Nick Coghlan wrote: > On 6 August 2013 17:30, Donald Stufft wrote: >> On Aug 6, 2013, at 3:20 AM, martin at v.loewis.de wrote: >>> See above. He did that, and the PyPI maintainers will break it. >> >> I don't think anyone's claimed that removing the names won't break things for >> people who directly referenced them, but it's an important step that we do that. > > Right, but I think it's one where we can offer responsive mirror > maintainers a generous time frame. We're down to only 5 mirrors using > the *.pypi.python.org naming scheme anyway, so we should probably > include contacting the maintainers directly in the transition plan. > > That makes the process: > > - we immediately stop handing out any new *.pypi.python.org mirror > names (this has effectively happened already, the PEP will just be > making it official) > - the operators of the 5 current *.pypi.python.org mirrors are > contacted directly, informing them of the plan to deprecate and remove > those domain names, and offering the choice of two alternatives: Minor point but it's 4 mirrors. The a mirror is simply an alias for PyPI itself, which leaves c, e, f, g. > > 1.
After 2 months (or earlier if requested), the domain name is > redirected to the PyPI CDN and the mirror is effectively retired. 2 > months after the release of pip 1.5, the name is removed entirely > 2. The mirror operator establishes a 301 redirect to a HTTPS > capable domain name they control and negotiates the time frame for > retirement and removal of the *.pypi.python.org domain record with the > PSF infrastructure team > > - after two months, last.pypi.python.org and any *.pypi.python.org > mirror names which didn't request option 2 above are redirected to the > CDN > - two months after the release of pip 1.5, last.pypi.python.org and > any *.pypi.python.org mirror names which didn't request option 2 above > are removed from the DNS > - the exact time frames for option 2 above will be worked out > individually with the mirror operators that request it (that would be > at least Christian for f.pypi.python.org, and perhaps some of the > other mirror operators if they also choose option 2) It's probably simpler to just lengthen the timeframe and allow early opt in to having the N.pypi.python.org redirected back to PyPI (Minor point, it doesn't actually go directly through the CDN because the CDN is configured to require SSL). I would much rather have the details laid out in the PEP than have the Infra team being placed in the line of fire. I think it would even be reasonable to not have a forced redirect to the CDN and instead say in N amount of time the DNS entries will be removed, and allow mirror operators to ask us to redirect their N.pypi.python.org back to the CDN if they've felt their migration is complete before N amount of time happens. The big question then becomes what is a reasonable value for N amount of time, the original proposal essentially used 4 months for no real reason. Would 6 months be better? 8? 
I think making this window _too_ long doesn't really do anything except delay the inevitable; the window should be sized around what's a reasonable amount of time for people to move away from pointing directly at the N.pypi.python.org names, not around delaying the need to do it until a later date. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 6 10:11:06 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 18:11:06 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> Message-ID: On 6 August 2013 17:59, Donald Stufft wrote: > It's probably simpler to just lengthen the timeframe and allow early opt > in to having the N.pypi.python.org redirected back to PyPI (Minor point, > it doesn't actually go directly through the CDN because the CDN is configured > to require SSL). > > I would much rather have the details laid out in the PEP than have the Infra > team being placed in the line of fire. I think it would even be reasonable to > not have a forced redirect to the CDN and instead say in N amount of time > the DNS entries will be removed, and allow mirror operators to ask us to > redirect their N.pypi.python.org back to the CDN if they've felt their migration > is complete before N amount of time happens. Sounds good to me.
> The big question then becomes what is a reasonable value for N amount of time, > the original proposal essentially used 4 months for no real reason. Would 6 months > be better? 8? I think making this window _too_ long doesn't really do anything > except delay the inevitable and the window should be decided on for what's a reasonable > amount of time for people to move away from pointing directly at the N.pypi.python.org > not delaying the need to do it until a later date. I believe Christian gets to define reasonable on this point :) Plucking a date out of the air, though, why not: July 1, 2014, with 6 month, 3 month and 1 month warnings sent to the operators of mirrors that haven't yet been redirected back to PyPI. That's nearly 11 months away, and hopefully other changes will have settled down by then. If the mirror operators are happy their transition is complete before then, cool, otherwise they have a hard deadline to work with. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From martin at v.loewis.de Tue Aug 6 12:45:34 2013 From: martin at v.loewis.de (Martin v. Löwis) Date: Tue, 06 Aug 2013 12:45:34 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> Message-ID: <5200D3CE.7000006@v.loewis.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 06.08.13 09:59, Donald Stufft wrote: > The big question then becomes what is a reasonable value for N > amount of time, the original proposal essentially used 4 months for > no real reason. Would 6 months be better? 8?
I think making this > window _too_ long doesn't really do anything except delay the > inevitable and the window should be decided on for what's a > reasonable amount of time for people to move away from pointing > directly at the N.pypi.python.org not delaying the need to do it > until a later date. Assuming the main breakage comes from people having hard-coded the mirror names in configuration files: Why not leave the *.pypi names available "forever" (ten years), all pointing to the master? Regards, Martin -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.18 (Darwin) Comment: GPGTools - http://gpgtools.org Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iEYEARECAAYFAlIA084ACgkQavBT8H2dyNLqhQCdFa1N3X/x7K2pYyakDlfkAgDW u74Ani9rN6zQ9TTGxAtl48MI36SmzNxc =kAJm -----END PGP SIGNATURE----- From donald at stufft.io Tue Aug 6 13:08:06 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 07:08:06 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <5200D3CE.7000006@v.loewis.de> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <20130806092004.Horde.Jb7mNCI6fr7W8WRmYVCXTg1@webmail.df.eu> <57134EB7-CADC-4736-95DE-0652457A915C@stufft.io> <5200D3CE.7000006@v.loewis.de> Message-ID: <8CA9F729-21F6-4A06-8B97-A188AD7CB9B2@stufft.io> On Aug 6, 2013, at 6:45 AM, "Martin v. Löwis" wrote: > Assuming the main breakage comes from people having hard-coded the > mirror names in configuration files: Why not leave the *.pypi names > available "forever" (ten years), all pointing to the master? The major reason (for me, Noah might have others as Infra lead) is that they have never been available via TLS, so everyone using them hard-coded is using them hard-coded as HTTP. A lot of those people likely don't realize that by using them they are risking a man-in-the-middle attack.
So by continuing to support them we are essentially continuing to enable a grossly insecure setting, with the very likely case being that the folks vulnerable to it have not made an informed decision to do so and instead have merely done what they thought was best practice. Ensuring that the transport is safe is one of my primary goals right now. A secondary (but minor) reason is simply one of logistics. Throughout the various migrations as things on PyPI settled, the ones that do point back to PyPI have randomly become broken, sometimes for weeks or months. It's easy to miss checking that all of them continue to work, and I believe that it's better to have a clean break than to half-ass support for those names. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ct at gocept.com Tue Aug 6 08:59:09 2013 From: ct at gocept.com (Christian Theune) Date: Tue, 6 Aug 2013 08:59:09 +0200 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> Message-ID: <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> Hi, Thanks for all the feedback, I'll calm down a bit and ponder some more structured reply. However, you're responding to the technicalities. I didn't see any consideration to the user pain. It seems irrelevant. Almost like arguing with the TSA about taking off your shoes. f.pypi.python.org is going to go away. And *everyone* using it needs to change it. Manually - or else.
Other communities, like the Linux distributions, are doing simple, file-based stuff for ages. They did not learn from us, and AFAICT we didn't learn from them? My overall feeling is that we're telling the story to the outside world that they should not rely on Python. We can break anything, will find a good technical argument, and be done. If we're getting so much better with all the changes, then this goodness should be available to anyone invested with the platform already. Currently this seems to be: hurt the people who are with us now, so we can get more new ones when they leave. Sorry for the sarcasm, as promised, I'll come back with a more structured technical response later - need to go and calm down. Christian -- Christian Theune · ct at gocept.com gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany http://gocept.com · Tel +49 345 1229889-7 Python, Pyramid, Plone, Zope · consulting, development, hosting, operations -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 6 14:22:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 6 Aug 2013 22:22:46 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> Message-ID: On 6 August 2013 16:59, Christian Theune wrote: > Hi, > > Thanks for all the feedback, I'll calm down a bit and ponder some more structured reply. > > However, you're responding to the technicalities. I didn't see any consideration to the user pain. It seems irrelevant. Almost like arguing with the TSA about taking off your shoes.
User pain is the only reason for not making the change tomorrow. People need time to adjust, or to propose alternative solutions. > f.pypi.python.org is going to go away. And *everyone* using it needs to change it. Manually - or else. Delegating subdomains of python.org without a contractual relationship in place was a fundamental mistake. It should never have happened. We can either admit "We screwed up and set up a seriously flawed mirroring system" and take steps to fix it, or we can leave the HTTP-only mirrors open as a security hole forever. One means by which I could see an f.pypi.python.org DNS record being left in place indefinitely is if the TUF folks are able to come up with a scheme for offering end-to-end security for the *existing* PyPI metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the TUF client side integrity checks are invoked by pip. In that case, the security argument regarding the lack of TLS on the subdomains would be rendered moot, and the backwards compatibility argument for keeping it active would win. Another potential alternative might be for Gocept to approach the PSF about getting an SSL certificate for that domain, ensuring pip and setuptools both support HSTS, and then switching that mirror over to using HSTS (so even configurations hardcoded to use http://f.pypi.python.org will still get a validated secure connection). Both of those approaches would close the security hole, while leaving the domain in place. If upgrading pip and easy_install clients is a more acceptable solution than updating affected configurations to use a different domain name, then these are certainly options we should discuss. The only option which I consider completely out of the question is leaving f.pypi.python.org (or any other *.pypi.python.org subdomain) in place indefinitely as an insecure HTTP-only endpoint. > Other communities, like the Linux distributions, are doing simple, file-based stuff for ages. 
They did not learn from us, and AFAICT we didn't learn from them? This case *is* a matter of us learning from other mirroring systems: none of them are based on delegating subdomains to third parties, they're all based on lists of mirror URLs, and some mechanism for retrieving that list. However, as long as the flawed way remains blessed as the official mirroring network, it's difficult for an alternative model to gain any traction. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jcappos at poly.edu Tue Aug 6 15:13:25 2013 From: jcappos at poly.edu (Justin Cappos) Date: Tue, 6 Aug 2013 09:13:25 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> Message-ID: One means by which I could see an f.pypi.python.org DNS record being > left in place indefinitely is if the TUF folks are able to come up > with a scheme for offering end-to-end security for the *existing* PyPI > metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the > TUF client side integrity checks are invoked by pip. In that case, the > security argument regarding the lack of TLS on the subdomains would be > rendered moot, and the backwards compatibility argument for keeping it > active would win. > It seems like you've been reading our minds (or at least our mailing list)! Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Tue Aug 6 17:58:26 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 11:58:26 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> Message-ID: <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> On Aug 6, 2013, at 8:22 AM, Nick Coghlan wrote: > On 6 August 2013 16:59, Christian Theune wrote: >> Hi, >> >> Thanks for all the feedback, I'll calm down a bit and ponder some more structured reply. >> >> However, you're responding to the technicalities. I didn't see any consideration to the user pain. It seems irrelevant. Almost like arguing with the TSA about taking off your shoes. > > User pain is the only reason for not making the change tomorrow. > People need time to adjust, or to propose alternative solutions. > >> f.pypi.python.org is going to go away. And *everyone* using it needs to change it. Manually - or else. > > Delegating subdomains of python.org without a contractual relationship > in place was a fundamental mistake. It should never have happened. We > can either admit "We screwed up and set up a seriously flawed > mirroring system" and take steps to fix it, or we can leave the > HTTP-only mirrors open as a security hole forever. > > One means by which I could see an f.pypi.python.org DNS record being > left in place indefinitely is if the TUF folks are able to come up > with a scheme for offering end-to-end security for the *existing* PyPI > metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the > TUF client side integrity checks are invoked by pip. In that case, the > security argument regarding the lack of TLS on the subdomains would be > rendered moot, and the backwards compatibility argument for keeping it > active would win. 
It would be rendered moot as far as any tooling that was updated to work with it (such as pip, etc.); however, the browser-level attacks would still be in play. Solving those would essentially require giving mirror operators an SSL certificate for N.pypi.python.org (which still does not preclude a malicious mirror operator or a compromised mirror from being used to steal logins on PyPI). > > Another potential alternative might be for Gocept to approach the PSF > about getting an SSL certificate for that domain, ensuring pip and > setuptools both support HSTS, and then switching that mirror over to > using HSTS (so even configurations hardcoded to use > http://f.pypi.python.org will still get a validated secure > connection). FWIW this doesn't make it secure unless they change their configuration to point to https://f.pypi.python.org/simple instead of http://f.pypi.python.org/simple.* Programmatic libraries typically don't support HSTS (as HSTS is primarily used to prevent attacks that don't typically apply to command-line clients). And certainly none of the existing tools support it. So given that we'd be relying on the redirect that upgrades the connection to HTTPS, an attacker could simply return HTML instead of the redirect to HTTPS. Hence why it makes sense to remove them: either way users will need to edit their existing configurations if they intend to be secure, and moving to a different domain will prevent the in-browser attacks as well. * This is also true of anyone who has hard-coded a URL to http://pypi.python.org/simple/; however, there's no reasonable way to fix that.
> > The only option which I consider completely out of the question is > leaving f.pypi.python.org (or any other *.pypi.python.org subdomain) > in place indefinitely as an insecure HTTP-only endpoint. > >> Other communities, like the Linux distributions, are doing simple, file-based stuff for ages. They did not learn from us, and AFAICT we didn't learn from them? > > This case *is* a matter of us learning from other mirroring systems: > none of them are based on delegating subdomains to third parties, > they're all based on lists of mirror URLs, and some mechanism for > retrieving that list. However, as long as the flawed way remains > blessed as the official mirroring network, it's difficult for an > alternative model to gain any traction. > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From erik.m.bray at gmail.com Tue Aug 6 19:57:25 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 6 Aug 2013 13:57:25 -0400 Subject: [Distutils] How to disable PYTHONPATH checking when installing packages using distribute In-Reply-To: <51FBABD3.2050801@gmail.com> References: <51FBABD3.2050801@gmail.com> Message-ID: On Fri, Aug 2, 2013 at 8:53 AM, wrote: > Hi, > > During installing a package which uses distribute (matplotlib in this case), > it refuses to work with this message > > "running install > Checking .pth file support in > /usr/local/stow/matplotlib-1.3.0/lib/python2.7/site-packages/ > /usr/bin/python -E -c pass > TEST FAILED: /usr/local/stow/matplotlib-1.3.0/lib/python2.7/site-packages/ > does NOT support .pth files > error: bad install directory or PYTHONPATH > ... 
> Please make the appropriate changes for your system and try again." > > I install local packages using the stow approach, which installs each > package under its own sub-directory and later "stowed" > (https://www.gnu.org/software/stow/). Such "error" becomes a nuisance as a > different PYTHONPATH has to be set for each installation of a package. > > How can the checking be disabled? I don't seem to be able to find anything in > the documentation and would be grateful for any pointer. > > Or maybe it's better turned into a warning and users be reminded to add the > install directory to PYTHONPATH. By default setuptools (distribute) installs packages as eggs, and loading eggs requires the ability to write a .pth file to a directory that will be checked for .pth files at startup (i.e. is in PYTHONPATH or otherwise on sys.path by default). You can avoid doing an egg-based install by instead running: python setup.py install --single-version-externally-managed --prefix /usr/local/stow/matplotlib-1.3.0 or something to that effect. I think if you do this you also need to make sure to manually add the .egg-info directory as well. I think you can do this with python setup.py install_egg_info --install-dir /usr/local/stow/matplotlib-1.3.0/lib/python2.7/site-packages/ but YMMV. You might also try just installing with pip since it will basically install the package in the same way by default.
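For anyone scripting this, the two setup.py invocations above could be built up like so. This is a rough sketch only: the stow prefix and Python version are illustrative, and the extra --record flag reflects my understanding that setuptools generally insists on --record (or --root) alongside --single-version-externally-managed:

```python
# Sketch (assumptions noted above): build the two setup.py command lines
# for a GNU-stow-style, non-egg install into a given prefix.

def stow_install_commands(prefix, py="python2.7"):
    """Return the install and install_egg_info invocations for *prefix*."""
    site_packages = "%s/lib/%s/site-packages" % (prefix, py)
    install = [
        "python", "setup.py", "install",
        "--single-version-externally-managed",
        # setuptools typically requires --record (or --root) with this flag
        "--record", prefix + "/install-record.txt",
        "--prefix", prefix,
    ]
    egg_info = [
        "python", "setup.py", "install_egg_info",
        "--install-dir", site_packages,
    ]
    return install, egg_info
```

The returned lists can be passed straight to subprocess.check_call from the package's source directory; adjust the prefix and Python version to match the actual stow layout.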
Erik From noah at coderanger.net Tue Aug 6 20:37:02 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Tue, 6 Aug 2013 11:37:02 -0700 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> Message-ID: <40AB70F7-57BC-4E6D-B932-662E70A08E4C@coderanger.net> On Aug 6, 2013, at 5:22 AM, Nick Coghlan wrote: > On 6 August 2013 16:59, Christian Theune wrote: >> Hi, >> >> Thanks for all the feedback, I'll calm down a bit and ponder some more structured reply. >> >> However, you're responding to the technicalities. I didn't see any consideration to the user pain. It seems irrelevant. Almost like arguing with the TSA about taking off your shoes. > > User pain is the only reason for not making the change tomorrow. > People need time to adjust, or to propose alternative solutions. My reasoning for picking 4 months total on the migration is that an individual user switching their mirror hostnames is a relatively quick process (maybe a few days in a really big case) and anyone who doesn't hear about this change within a few months is highly unlikely to learn about it in a larger period of time. Humans are generally deadline-driven, so moving the deadline back doesn't get us much except moving the conversion work back with it. Basically I think past the 6-8 week mark, we are just hitting the long tail in terms of actual benefit to users, and it is better to just break the system and force them to notice they need to fix things (since one reason for doing this is that the current system is unsafe, and allowing that to exist for another year is not really on my list). --Noah -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed Aug 7 03:10:14 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 7 Aug 2013 11:10:14 +1000 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> Message-ID: On 7 August 2013 01:58, Donald Stufft wrote: > On Aug 6, 2013, at 8:22 AM, Nick Coghlan wrote: >> One means by which I could see an f.pypi.python.org DNS record being >> left in place indefinitely is if the TUF folks are able to come up >> with a scheme for offering end-to-end security for the *existing* PyPI >> metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the >> TUF client side integrity checks are invoked by pip. In that case, the >> security argument regarding the lack of TLS on the subdomains would be >> rendered moot, and the backwards compatibility argument for keeping it >> active would win. > > It would be rendered moot as far as any tooling that was updated to work > with it (such as pip etc) however the browser level attacks would still be > in play. If those can be solved essentially require giving mirror operators > a SSL certificate for N.pypi.python.org (which still does not preclude a > malicious mirror operator or a compromised mirror from being used to > steal logins on PyPI). We're not talking about "mirror operators" in an abstract sense any more: we're talking specifically about the possibility of a formal relationship with Gocept to keep the f.pypi.python.org domain alive indefinitely. 
It isn't Gocept's fault that the upstream mirroring system design was broken, so the burden is on us to work with Christian to come up with a transition plan that is acceptable to both sides. That may mean retiring the subdomain and Gocept changing the configuration at affected sites, or it may mean Gocept negotiating a more formal relationship with the PSF to continue operating f.pypi.python.org in particular. Both options need to be on the table from the upstream side, giving Gocept a chance to assess the alternatives (i.e. change existing sites to point to a new domain, or accept that some existing sites will remain insecure until they have been upgraded, just as we have to accept that old clients will remain insecure when accessing the main site over HTTP). >> Another potential alternative might be for Gocept to approach the PSF >> about getting an SSL certificate for that domain, ensuring pip and >> setuptools both support HSTS, and then switching that mirror over to >> using HSTS (so even configurations hardcoded to use >> http://f.pypi.python.org will still get a validated secure >> connection). > > FWIW this doesn't make it secure unless they change their configuration to > point to https://f.pypi.python.org/simple instead of http://f.pypi.python.org/simple.* > > Programmatic libraries typically don't support HSTS (as HSTS it primarily used > to prevent attacks that don't typically apply to command line clients). And certainly > none of the existing tools support it. So given that we'd be relying on the redirect > that upgrades the connection to HTTPS an attacker could simply return HTML > instead of the redirect to HTTPS. Yeah, I realised this flaw after posting. An alternative hack that would allow the problem to be solved through an "upgrade pip/easy_install" solution rather than an "ensure all configs have been edited" approach would be a simple URL translation map that converts legacy "http://f.pypi.python.org" references to a new URL. 
> Hence why it makes sense to remove, because either way users will need to edit > their existing configurations if they intend to be secure and moving to a different > domain will prevent the in browser attacks as well. > > * This is also true of anyone who has hard coded an url to http://pypi.python.org/simple/ > however there's no reasonable way to fix that. Hardcoded references to http://f.pypi.python.org/simple/ aren't *that* different from hardcoded references to the main site. The only addition is the inclusion of Gocept in the chain of trust. Given that Christian wrote the now recommended mirroring client, trusting Christian/Gocept is fairly unavoidable at this point :) We don't have a hard deadline for fixing this on the upstream side - it's in the "important but not currently urgent" category. If we can get the active legacy mirrors down to just f.pypi.python.org that will be solid progress, and then we can work out a specific arrangement for that last mirror which works for Gocept as well. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Wed Aug 7 03:36:50 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 21:36:50 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> Message-ID: <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> On Aug 6, 2013, at 9:10 PM, Nick Coghlan wrote: > On 7 August 2013 01:58, Donald Stufft wrote: >> On Aug 6, 2013, at 8:22 AM, Nick Coghlan wrote: >>> One means by which I could see an f.pypi.python.org DNS record being >>> left in place indefinitely is if the TUF folks are able to come up >>> with a scheme for offering end-to-end security for the *existing* PyPI >>> metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the >>> TUF client side integrity checks are invoked by pip. In that case, the >>> security argument regarding the lack of TLS on the subdomains would be >>> rendered moot, and the backwards compatibility argument for keeping it >>> active would win. >> >> It would be rendered moot as far as any tooling that was updated to work >> with it (such as pip etc) however the browser level attacks would still be >> in play. If those can be solved essentially require giving mirror operators >> a SSL certificate for N.pypi.python.org (which still does not preclude a >> malicious mirror operator or a compromised mirror from being used to >> steal logins on PyPI). > > We're not talking about "mirror operators" in an abstract sense any > more: we're talking specifically about the possibility of a formal > relationship with Gocept to keep the f.pypi.python.org domain alive > indefinitely. 
> > It isn't Gocept's fault that the upstream mirroring system design was > broken, so the burden is on us to work with Christian to come up with > a transition plan that is acceptable to both sides. I recognize this. The problem is that Gocept isn't the sole user of their mirror. If they were (or we knew the set of users who were) then we could approach each one of them. > > That may mean retiring the subdomain and Gocept changing the > configuration at affected sites, or it may mean Gocept negotiating a > more formal relationship with the PSF to continue operating > f.pypi.python.org in particular. Both options need to be on the table > from the upstream side, giving Gocept a chance to assess the > alternatives (i.e. change existing sites to point to a new domain, or > accept that some existing sites will remain insecure until they have > been upgraded, just as we have to accept that old clients will remain > insecure when accessing the main site over HTTP). Gocept can't make that decision for installations other than their own. Christian said that his mirror is seeing 150-300GB of traffic besides the traffic Gocept generates. Some of that is coming from ``pip --use-mirrors`` I'm sure, but some of it is also coming from other people who have hardcoded f.pypi.python.org in their config for whom we don't know who they are and we don't have a good way of informing them so they can make an informed consent on using an insecure transport. > >>> Another potential alternative might be for Gocept to approach the PSF >>> about getting an SSL certificate for that domain, ensuring pip and >>> setuptools both support HSTS, and then switching that mirror over to >>> using HSTS (so even configurations hardcoded to use >>> http://f.pypi.python.org will still get a validated secure >>> connection). 
>> FWIW this doesn't make it secure unless they change their configuration to >> point to https://f.pypi.python.org/simple instead of http://f.pypi.python.org/simple.* >> >> Programmatic libraries typically don't support HSTS (as HSTS is primarily used >> to prevent attacks that don't typically apply to command line clients). And certainly >> none of the existing tools support it. So given that we'd be relying on the redirect >> that upgrades the connection to HTTPS an attacker could simply return HTML >> instead of the redirect to HTTPS. > > Yeah, I realised this flaw after posting. An alternative hack that > would allow the problem to be solved through an "upgrade > pip/easy_install" solution rather than an "ensure all configs have > been edited" approach would be a simple URL translation map that > converts legacy "http://f.pypi.python.org" references to a new URL. > >> Hence why it makes sense to remove, because either way users will need to edit >> their existing configurations if they intend to be secure and moving to a different >> domain will prevent the in browser attacks as well. >> >> * This is also true of anyone who has hard coded an url to http://pypi.python.org/simple/ >> however there's no reasonable way to fix that. > > Hardcoded references to http://f.pypi.python.org/simple/ aren't *that* > different from hardcoded references to the main site. The only > addition is the inclusion of Gocept in the chain of trust. Given that > Christian wrote the now recommended mirroring client, trusting > Christian/Gocept is fairly unavoidable at this point :) The main difference is that hard coded references to PyPI are unlikely. The clients defaulted to that so there was no reason to point to it in your configuration. This is representative of the traffic we see coming into PyPI and the decline of HTTP connections to /simple/. > > We don't have a hard deadline for fixing this on the upstream side - > it's in the "important but not currently urgent" category.
If we can > get the active legacy mirrors down to just f.pypi.python.org that will > be solid progress, and then we can work out a specific arrangement for > that last mirror which works for Gocept as well. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mmericke at gmail.com Wed Aug 7 03:50:33 2013 From: mmericke at gmail.com (Michael Merickel) Date: Tue, 6 Aug 2013 20:50:33 -0500 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> Message-ID: How about building a deprecation period into the tooling? pip 1.5+ could warn users who are using *.pypi.python.org of the error in their ways and encourage them to switch to the new system and gives a date of total removal. After removal the code could also be removed from pip 1.x+. 
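Michael's suggestion is cheap to prototype. The sketch below is illustrative only: the function name, warning text, and warning category are invented here and are not pip's actual code.

```python
import warnings
from urllib.parse import urlparse  # modern Python; 2013-era code used the urlparse module


def warn_if_legacy_mirror(index_url):
    """Emit a deprecation warning when an index URL points at a legacy
    N.pypi.python.org mirror. Note that "pypi.python.org" itself does not
    end with ".pypi.python.org", so the main index is naturally excluded."""
    host = urlparse(index_url).hostname or ""
    if host.endswith(".pypi.python.org"):
        warnings.warn(
            "%s is a legacy PyPI mirror; mirror support is deprecated "
            "and will be removed in a future release" % host,
            DeprecationWarning,
        )
        return True
    return False
```

A check like this at index-resolution time would give users the grace period Michael describes before the DNS entries are finally removed.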
- Michael On Tue, Aug 6, 2013 at 8:36 PM, Donald Stufft wrote: > > On Aug 6, 2013, at 9:10 PM, Nick Coghlan wrote: > > > On 7 August 2013 01:58, Donald Stufft wrote: > >> On Aug 6, 2013, at 8:22 AM, Nick Coghlan wrote: > >>> One means by which I could see an f.pypi.python.org DNS record being > >>> left in place indefinitely is if the TUF folks are able to come up > >>> with a scheme for offering end-to-end security for the *existing* PyPI > >>> metadata, *and* the TUF metadata is mirrored by bandersnatch *and* the > >>> TUF client side integrity checks are invoked by pip. In that case, the > >>> security argument regarding the lack of TLS on the subdomains would be > >>> rendered moot, and the backwards compatibility argument for keeping it > >>> active would win. > >> > >> It would be rendered moot as far as any tooling that was updated to work > >> with it (such as pip etc) however the browser level attacks would still > be > >> in play. If those can be solved essentially require giving mirror > operators > >> a SSL certificate for N.pypi.python.org (which still does not preclude > a > >> malicious mirror operator or a compromised mirror from being used to > >> steal logins on PyPI). > > > > We're not talking about "mirror operators" in an abstract sense any > > more: we're talking specifically about the possibility of a formal > > relationship with Gocept to keep the f.pypi.python.org domain alive > > indefinitely. > > > > It isn't Gocept's fault that the upstream mirroring system design was > > broken, so the burden is on us to work with Christian to come up with > > a transition plan that is acceptable to both sides. > > I recognize this. The problem is that Gocept isn't the sole user of > their mirror. If they were (or we knew the set of users who were) then > we could approach each one of them. 
> > > > > That may mean retiring the subdomain and Gocept changing the > > configuration at affected sites, or it may mean Gocept negotiating a > > more formal relationship with the PSF to continue operating > > f.pypi.python.org in particular. Both options need to be on the table > > from the upstream side, giving Gocept a chance to assess the > > alternatives (i.e. change existing sites to point to a new domain, or > > accept that some existing sites will remain insecure until they have > > been upgraded, just as we have to accept that old clients will remain > > insecure when accessing the main site over HTTP). > > Gocept can't make that decision for installations other than their own. > Christian said that his mirror is seeing 150-300GB of traffic besides the > traffic Gocept generates. Some of that is coming from ``pip --use-mirrors`` > I'm sure, but some of it is also coming from other people who have > hardcoded f.pypi.python.org in their config for whom we don't know who > they are and we don't have a good way of informing them so they can > make an informed consent on using an insecure transport. > > > > >>> Another potential alternative might be for Gocept to approach the PSF > >>> about getting an SSL certificate for that domain, ensuring pip and > >>> setuptools both support HSTS, and then switching that mirror over to > >>> using HSTS (so even configurations hardcoded to use > >>> http://f.pypi.python.org will still get a validated secure > >>> connection). > >> > >> FWIW this doesn't make it secure unless they change their configuration > to > >> point to https://f.pypi.python.org/simple instead of > http://f.pypi.python.org/simple.* > >> > >> Programmatic libraries typically don't support HSTS (as HSTS it > primarily used > >> to prevent attacks that don't typically apply to command line clients). > And certainly > >> none of the existing tools support it. 
So given that we'd be relying on > the redirect > >> that upgrades the connection to HTTPS an attacker could simply return > HTML > >> instead of the redirect to HTTPS. > > > > Yeah, I realised this flaw after posting. An alternative hack that > > would allow the problem to be solved through an "upgrade > > pip/easy_install" solution rather than an "ensure all configs have > > been edited" approach would be a simple URL translation map that > > converts legacy "http://f.pypi.python.org" references to a new URL. > > > >> Hence why it makes sense to remove, because either way users will need > to edit > >> their existing configurations if they intend to be secure and moving to > a different > >> domain will prevent the in browser attacks as well. > >> > >> * This is also true of anyone who has hard coded an url to > http://pypi.python.org/simple/ > >> however there's no reasonable way to fix that. > > > > Hardcoded references to http://f.pypi.python.org/simple/ aren't *that* > > different from hardcoded references to the main site. The only > > addition is the inclusion of Gocept in the chain of trust. Given that > > Christian wrote the now recommended mirroring client, trusting > > Christian/Gocept is fairly unavoidable at this point :) > > The main difference is hard coded references to PyPI is unlikely. The > clients defaulted to that so there was no reason to point to it in your > configuration. This is representative of the traffic we see coming into > PyPI and the decline of HTTP connections to /simple/. > > > > > We don't have a hard deadline for fixing this on the upstream side - > > it's in the "important but not currently urgent" category. If we can > > get the active legacy mirrors down to just f.pypi.python.org that will > > be solid progress, and then we can work out a specific arrangement for > > that last mirror which works for Gocept as well. > > > > Cheers, > > Nick. 
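The "URL translation map" hack quoted above is small enough to sketch. The legacy prefix is taken from the thread; the replacement URL is purely illustrative, since the thread doesn't settle on a new domain:

```python
# Map of known legacy mirror prefixes to their modern replacements. An
# upgraded pip/easy_install could apply this before any network I/O, fixing
# old configs without requiring users to edit them.
LEGACY_URL_MAP = {
    "http://f.pypi.python.org/simple/": "https://mirror.example.org/simple/",  # target is illustrative
}


def translate_index_url(url):
    """Rewrite a configured index URL if it starts with a known legacy prefix."""
    for legacy, modern in LEGACY_URL_MAP.items():
        if url.startswith(legacy):
            return modern + url[len(legacy):]
    return url
```

Unlike the HSTS-redirect approach, the rewrite happens entirely client-side, so a man-in-the-middle never gets the chance to serve HTML in place of the redirect.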
> > > > -- > > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 7 03:59:58 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 21:59:58 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> Message-ID: On Aug 6, 2013, at 9:50 PM, Michael Merickel wrote: > How about building a deprecation period into the tooling? pip 1.5+ could warn users who are using *.pypi.python.org of the error in their ways and encourage them to switch to the new system and gives a date of total removal. After removal the code could also be removed from pip 1.x+. > > - Michael pip 1.5 already warns if you use ``--use-mirrors`` or ``--mirrors``. I suppose a warning could be added if you use -i N.pypi.python.org as well. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Wed Aug 7 04:14:17 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 6 Aug 2013 22:14:17 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: <5201ACBD.7080700@students.poly.edu> References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> <5201ACBD.7080700@students.poly.edu> Message-ID: On Aug 6, 2013, at 10:11 PM, Trishank Karthik Kuppusamy wrote: > On 08/06/2013 09:59 PM, Donald Stufft wrote: >> >> On Aug 6, 2013, at 9:50 PM, Michael Merickel wrote: >> >>> How about building a deprecation period into the tooling? pip 1.5+ could warn users who are using *.pypi.python.org of the error in their ways and encourage them to switch to the new system and gives a date of total removal. After removal the code could also be removed from pip 1.x+. >>> >>> - Michael >> >> pip 1.5 already warns if you use ``--use-mirrors`` or ``--mirrors``. I suppose a warning could be added if you use -i N.pypi.python.org as well. >> > > Does anyone use anything other than pip to download from N.pypi.python.org? > > Yes. Other tooling can be pointed to N.pypi.python.org by specifying them as an index. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tk47 at students.poly.edu Wed Aug 7 04:11:09 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Tue, 6 Aug 2013 22:11:09 -0400 Subject: [Distutils] What to do about the PyPI mirrors In-Reply-To: References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> <86A9F496-C1FD-4A93-808A-94457A23EA01@coderanger.net> <26AFD176-B231-4678-91B8-B20A674F845E@stufft.io> <5AAD608A-29B1-4C03-9248-404F55C8878A@gocept.com> <0A38A9F2-B6A6-4F5C-BB84-5870DD222149@stufft.io> <9385C0C4-F915-4FA3-B0F8-7A0E487E5C26@stufft.io> Message-ID: <5201ACBD.7080700@students.poly.edu> On 08/06/2013 09:59 PM, Donald Stufft wrote: > > On Aug 6, 2013, at 9:50 PM, Michael Merickel > wrote: > >> How about building a deprecation period into the tooling? pip 1.5+ >> could warn users who are using *.pypi.python.org >> of the error in their ways and encourage >> them to switch to the new system and gives a date of total removal. >> After removal the code could also be removed from pip 1.x+. >> >> - Michael > > pip 1.5 already warns if you use ``--use-mirrors`` or ``--mirrors``. I > suppose a warning could be added if you use -i N.pypi.python.org > as well. > Does anyone use anything other than pip to download from N.pypi.python.org? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Wed Aug 7 14:46:25 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 7 Aug 2013 12:46:25 +0000 (UTC) Subject: [Distutils] What to do about the PyPI mirrors References: <86867B11-93C9-4940-94B6-F470BEEE526E@stufft.io> Message-ID: Christian Theune gocept.com> writes: > It's really hard for me to write this mail without cussing - the situation is very frustrating: the > community dynamics seem to "want to move forward" where they from my perspective "wander left and > right and break stuff like a drunken elephant driving a tank throught the Louvre". Le Louvre has a rather large inner yard, so it may actually be doable - the elephant will have to duck at times, though. Regards Antoine. From donald at stufft.io Thu Aug 8 06:49:15 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 8 Aug 2013 00:49:15 -0400 Subject: [Distutils] pip 1.4.1 and virtualenv 1.10.1 released Message-ID: We've released pip 1.4.1 and virtualenv 1.10.1. One major change is both of these releases are signed with My GPG key instead of Jannis's. pip 1.4.1 (2013-08-07) ------------------------------- * **New Signing Key** Release 1.4.1 is using a different key than normal with fingerprint: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA * Fixed issues with installing from pybundle files (Pull #1116). * Fixed error when sysconfig module throws an exception (Pull #1095). * Don't ignore already installed pre-releases (Pull #1076). * Fixes related to upgrading setuptools (Pull #1092). * Fixes so that --download works with wheel archives (Pull #1113). * Fixes related to recognizing and cleaning global build dirs (Pull #1080). 
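A note for readers verifying the new key: for OpenPGP v4 keys the 64-bit "long" key ID (the 0x6E3CBCE93372DCFA form in Donald's signature) is simply the last 16 hex digits of the full fingerprint, so the two forms can be cross-checked. A small helper, illustrative and not part of any tool:

```python
def key_id_matches_fingerprint(long_key_id, fingerprint):
    """True if a 64-bit long key ID is the tail of a v4 fingerprint,
    ignoring spacing, case, and an optional 0x prefix."""
    fp = fingerprint.replace(" ", "").upper()
    kid = long_key_id.upper()
    if kid.startswith("0X"):
        kid = kid[2:]
    return fp.endswith(kid)
```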
virtualenv 1.10.1 (2013-08-07) ------------------------------------------ * **New Signing Key** Release 1.10.1 is using a different key than normal with fingerprint: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA * Upgraded pip to v1.4.1 * Upgraded setuptools to v0.9.8 ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From bill at baddogconsulting.com Thu Aug 8 19:23:11 2013 From: bill at baddogconsulting.com (William Deegan) Date: Thu, 8 Aug 2013 10:23:11 -0700 Subject: [Distutils] Problem uploading SCons package to pypi. Receiving 403 You are not allowed to edit 'scons' package information Message-ID: Greetings, I apologize in advance if this is the wrong place for this. I've filed a support ticket and a bug request at the following locations: https://bitbucket.org/pypa/pypi/issue/49/upload-failed-403-you-are-not-allowed-to http://sourceforge.net/p/pypi/support-requests/285/ Here's the summary of the problem. The SCons project maintainer has been changed from Steven Knight (knight) to Gary Oberbrunner (garyo) and myself (William Deegan (bdbaddog)). The appropriate (I believe) roles have been added to Gary and my accounts: https://pypi.python.org/pypi?:action=role_form&package_name=SCons (We're both marked Owner). I updated my setup.py to indicate that I'd be packaging and tried:

    python setup.py --verbose sdist upload --show-response

Which yields:

    ...
    Creating tar archive
    removing 'scons-2.3.0' (and everything under it)
    running upload
    Using PyPI login from /Users/bdbaddog/.pypirc
    Submitting dist/scons-2.3.0.tar.gz to http://pypi.python.org/pypi
    Upload failed (403): You are not allowed to edit 'scons' package information

The last package uploaded to pypi is ancient and we'd like to put a current version there.
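For reference, the ~/.pypirc that `setup.py upload` reads (the "Using PyPI login from /Users/bdbaddog/.pypirc" line above) conventionally looked like this at the time, with placeholders rather than real values:

```ini
[distutils]
index-servers =
    pypi

[pypi]
repository: http://pypi.python.org/pypi
username: <your-pypi-username>
password: <your-pypi-password>
```

When the credentials themselves are accepted, a 403 on upload generally means the authenticated account has no Owner/Maintainer role on the name exactly as PyPI has it registered, so the role form linked above is the right place to start.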
Can anyone either correct the permissions and/or point us to what we're doing wrong? Thanks, Bill From scott.e.townsend at nasa.gov Thu Aug 8 21:19:43 2013 From: scott.e.townsend at nasa.gov (Townsend, Scott E. (GRC-RTM0)[Vantage Partners, LLC]) Date: Thu, 8 Aug 2013 14:19:43 -0500 Subject: [Distutils] Unexpected VersionConflict Message-ID: During easy_install of an egg where two versions of pyparsing were available (1.5.2 and 1.5.6), a VersionConflict was raised: pkg_resources.VersionConflict: (pyparsing 1.5.6 (/usr/lib/python2.7/dist-packages), Requirement.parse('pyparsing==1.5.2')) This was unexpected since sys.path (via virtualenv) has version 1.5.2 before 1.5.6. And the system gets 1.5.2 from 'import pyparsing', not 1.5.6. I've traced this to the line calling _sort_dists(dists), line 801 in my copy of pkg_resources.py:

    def __getitem__(self,project_name):
        """Return a newest-to-oldest list of distributions for `project_name`
        """
        try:
            return self._cache[project_name]
        except KeyError:
            project_name = project_name.lower()
            if project_name not in self._distmap:
                return []

        if project_name not in self._cache:
            dists = self._cache[project_name] = self._distmap[project_name]
            _sort_dists(dists)

        return self._cache[project_name]

The problem is that one dependent package of the egg has a requirement of 'pyparsing' while a subsequent dependent package has a requirement of 'pyparsing==1.5.2'. The intent was that by using virtualenv with a correct sys.path version 1.5.2 would be used for both requirements. Unfortunately, because of the call to _sort_dists(), the 'pyparsing' requirement is resolved to 1.5.6 by env.best_match() in WorkingSet.resolve(). Once that resolution was made, the more explicit requirement fails. Note that without the _sort_dists() call the egg loads and runs correctly, using pyparsing 1.5.2.
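Scott's report can be reproduced in miniature. The toy resolver below mimics the greedy behaviour he describes (activate the newest matching version for each requirement as it arrives, and fail once a later, stricter requirement disagrees). It is an illustration of the failure mode, not the real pkg_resources code:

```python
def greedy_resolve(available, requirements):
    """available: {project: [versions, newest first]};
    requirements: [(project, predicate)] processed in order."""
    activated = {}
    for project, predicate in requirements:
        if project in activated:
            # A version was already chosen; a stricter requirement seen
            # later can only conflict with it, never replace it.
            if not predicate(activated[project]):
                raise RuntimeError("VersionConflict: %s %s already activated"
                                   % (project, activated[project]))
            continue
        for version in available[project]:  # newest first, like _sort_dists
            if predicate(version):
                activated[project] = version
                break
    return activated


available = {"pyparsing": ["1.5.6", "1.5.2"]}

# Unversioned requirement seen first: 1.5.6 wins, then the exact pin conflicts.
try:
    greedy_resolve(available, [("pyparsing", lambda v: True),
                               ("pyparsing", lambda v: v == "1.5.2")])
    conflicted = False
except RuntimeError:
    conflicted = True

# Exact pin seen first: 1.5.2 is activated and satisfies the looser requirement.
ok = greedy_resolve(available, [("pyparsing", lambda v: v == "1.5.2"),
                                ("pyparsing", lambda v: True)])
```

The order-dependence is the point: the identical requirement set succeeds when the exact pin happens to be processed first, which is why reordering dependencies can mask the problem.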
It's not clear to me that removing the _sort_dists() call is correct in general, but it appears to be a bug that an egg which would load and run correctly reports a VersionConflict. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Fri Aug 9 01:44:03 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 8 Aug 2013 19:44:03 -0400 Subject: [Distutils] Unexpected VersionConflict In-Reply-To: References: Message-ID: On Thu, Aug 8, 2013 at 3:19 PM, Townsend, Scott E. (GRC-RTM0)[Vantage Partners, LLC] wrote: > During easy_install of an egg where two versions of pyparsing were available > (1.5.2 and 1.5.6), a VersionConflict was raised: > > pkg_resources.VersionConflict: (pyparsing 1.5.6 > (/usr/lib/python2.7/dist-packages), Requirement.parse('pyparsing==1.5.2')) > > This was unexpected since sys.path (via virtualenv) has version 1.5.2 before > 1.5.6. And the system gets 1.5.2 from 'import pyparsing', not 1.5.6. Have you tried declaring the 1.5.2 dependency from your main project? IIRC, that should make it take precedence over either of the indirect dependencies. From jess.austin at gmail.com Fri Aug 9 14:04:27 2013 From: jess.austin at gmail.com (Jess Austin) Date: Fri, 9 Aug 2013 07:04:27 -0500 Subject: [Distutils] help requested getting existing project "gv" into PyPI Message-ID: hi, I recently suggested that the "gv" package be added to PyPI. This is a close-to-C Graphviz port distributed by the Graphviz graph visualization project. They currently distribute this package in source, and in rpms, debs, and the equivalents of those for Windows, Mac, and Solaris. It seemed only natural that a small entry in PyPI be added so that "pip" would be able to install this package in a straightforward way (i.e., without using "git" references etc).
The issue tracker url for this suggestion: http://www.graphviz.org/mantisbt/view.php?id=2324 As you can see at that link, the maintainer had some difficulty when trying to create the project through the web form, culminating in a "Not found()" error message. I unfortunately don't have a great deal of insight into this, but I thought someone on this list might be able to help. Alternatively, please tell me if I'm making an unreasonable suggestion. It was my understanding that "pip" is able to find installable files from even quite rudimentary PyPI entries, even if they're hosted elsewhere. However, if that's not the case, and the PyPI entry would require extensive ongoing interaction from the "gv" maintainers, then this probably isn't going to work. thanks, Jess -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Fri Aug 9 19:46:33 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 9 Aug 2013 13:46:33 -0400 Subject: [Distutils] help requested getting existing project "gv" into PyPI In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 8:04 AM, Jess Austin wrote: > hi, > > I recently suggested that the "gv" package be added to PyPI. This is a > close-to-C Graphviz port distributed by the Graphviz graph visualization > project. They currently distribute this package in source, and in rpms, > debs, and the equivalents of those for Windows, Mac, and Solaris. It seemed > only natural that a small entry in PyPI be added so that "pip" would be able > to install this package in a straightforward way (i.e., without using "git" > references etc). > > The issue tracker url for this suggestion: > > http://www.graphviz.org/mantisbt/view.php?id=2324 > > As you can see at that link, the maintainer had some difficulty when trying > to create the project through the web form, culminating in a "Not found()" > error message. 
I unfortunately don't have a great deal of insight into this, > but I thought someone on this list might be able to help. > > Alternatively, please tell me if I'm making an unreasonable suggestion. It > was my understanding that "pip" is able to find installable files from even > quite rudimentary PyPI entries, even if they're hosted elsewhere. However, > if that's not the case, and the PyPI entry would require extensive ongoing > interaction from the "gv" maintainers, then this probably isn't going to > work. It would require ongoing interaction with *somebody*, in order to update the download links whenever a new version is released. Note, too, that unless there is a specific download for the Python binding that uses setup.py to invoke its build process, a PyPI listing won't make the binding pip-installable. From pje at telecommunity.com Fri Aug 9 20:06:59 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 9 Aug 2013 14:06:59 -0400 Subject: [Distutils] Unexpected VersionConflict In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 9:04 AM, Townsend, Scott E. (GRC-RTM0)[Vantage Partners, LLC] wrote: > That does indeed fix this problem, but requiring an egg writer to > interrogate all dependent packages (and their dependent packages?) and > then hoist the dependencies up won't be robust if those dependent packages > change their requirements between the time the egg is written and the time > it's loaded. That's why it's generally left up to the application installer/integrator to address these sorts of conflicts, and why it's usually a bad idea for anybody to be requiring exact versions. (I'd suggest asking your dependency to not specify exact point releases, too.) There is one other possibility, though: have you tried reversing the list of your project's dependencies so that the more-specific project's dependencies are processed first? 
(i.e., so that 1.5.2 will be selected as "best" before the non-version-specific one is used) That might fix it without requiring you to pin a version yourself. > It seems to me that if a requirement has no version specified, then it > shouldn't have a way to cause a VersionConflict. One possible way of > implementing this would be to have resolve() only check that a > distribution exists if no version is specified, do not update 'best'. > 'to_activate' would need to be updated with 'generic' distributions only > if a requirement with a version specifier hadn't been seen. Thing is, the complete lack of a version requirement is pretty rare, AFAIK, and so is the exact version match that's causing your problem. The combination existing on the same library is therefore that much rarer, so such a change would just be something of a complex kludge that wouldn't improve any other use cases. Probably a better way would be to change the version resolution algorithm to be less "greedy", and simply rule out unacceptable versions as the process goes along, then picking the most recent versions left when everything necessary has been eliminated. (Ideally, such an algorithm would still track which distributions had the conflicting requirements, though.) That would be a pretty significant change, but potentially worth someone investigating. There are some big downsides, however: * It's not really a suitable algorithm for installation tools that don't have access to a universal dependency graph, because they can't tell what the next level of dependencies will be * Recursion causes a combinatorial explosion, because what if you select a different version and it has different dependencies (recursively)? Now you need backtracking, and there's a possibility that the algorithm will take a ridiculous amount of time to still conclude that there's nothing you can do about the conflict. 
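The less-greedy scheme sketched above (rule out unacceptable versions as requirements arrive, then pick the newest survivor) is easy to express for the flat, non-recursive case. Names and data shapes here are illustrative:

```python
def eliminate_resolve(available, requirements):
    """available: {project: [versions]}; requirements: [(project, predicate)].
    Narrow the acceptable set per project as each requirement is seen, then
    pick the newest survivor. No recursion or backtracking, so the
    combinatorial blow-up described above for nested dependency graphs is
    out of scope for this sketch."""
    candidates = {project: list(versions) for project, versions in available.items()}
    for project, predicate in requirements:
        candidates[project] = [v for v in candidates[project] if predicate(v)]
        if not candidates[project]:
            raise RuntimeError("no version of %s satisfies every requirement" % project)
    # Tuple versions compare element-wise, so max() picks the newest survivor.
    return {project: max(versions) for project, versions in candidates.items()}


# The thread's scenario: an unversioned requirement plus an exact pin.
resolved = eliminate_resolve(
    {"pyparsing": [(1, 5, 2), (1, 5, 6)]},
    [("pyparsing", lambda v: True), ("pyparsing", lambda v: v == (1, 5, 2))],
)
```

Either requirement order yields 1.5.2 here; the recursion and backtracking issues raised above only appear once selecting a different version can change the dependency set itself.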
These drawbacks are basically why I just wrote a simple greedy match algorithm in the first place, figuring it could always be improved later if it turned out to be needed in practice. There have been occasional comments over the last decade or so by people with ideas for better algorithms, but no actual code yet, as far as I know. From ncoghlan at gmail.com Fri Aug 9 23:41:06 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 10 Aug 2013 07:41:06 +1000 Subject: [Distutils] Unexpected VersionConflict In-Reply-To: References: Message-ID: On 10 August 2013 04:06, PJ Eby wrote: > Probably a better way would be to change the version resolution > algorithm to be less "greedy", and simply rule out unacceptable > versions as the process goes along, then picking the most recent > versions left when everything necessary has been eliminated. > (Ideally, such an algorithm would still track which distributions had > the conflicting requirements, though.) The part I find most surprising is the fact that pkg_resources ignores sys.path order entirely when choosing between multiple acceptable versions. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From scott.e.townsend at nasa.gov Fri Aug 9 15:04:50 2013 From: scott.e.townsend at nasa.gov (Townsend, Scott E. (GRC-RTM0)[Vantage Partners, LLC]) Date: Fri, 9 Aug 2013 08:04:50 -0500 Subject: [Distutils] Unexpected VersionConflict In-Reply-To: Message-ID: That does indeed fix this problem, but requiring an egg writer to interrogate all dependent packages (and their dependent packages?) and then hoist the dependencies up won't be robust if those dependent packages change their requirements between the time the egg is written and the time it's loaded. It seems to me that if a requirement has no version specified, then it shouldn't have a way to cause a VersionConflict. 
One possible way of implementing this would be to have resolve() only check that a distribution exists if no version is specified, do not update 'best'. 'to_activate' would need to be updated with 'generic' distributions only if a requirement with a version specifier hadn't been seen. On 8/8/13 7:44 PM, "PJ Eby" wrote: >On Thu, Aug 8, 2013 at 3:19 PM, Townsend, Scott E. (GRC-RTM0)[Vantage >Partners, LLC] wrote: >> During easy_install of an egg where two versions of pyparsing were >>available >> (1.5.2 and 1.5.6), a VersionConflict was raised: >> >> pkg_resources.VersionConflict: (pyparsing 1.5.6 >> (/usr/lib/python2.7/dist-packages), >>Requirement.parse('pyparsing==1.5.2')) >> >> This was unexpected since sys.path (via virtualenv) has version 1.5.2 >>before >> 1.5.6. And the system gets 1.5.2 from 'import pyparsing', not 1.5.6. > >Have you tried declaring the 1.5.2 dependency from your main project? >IIRC, that should make it take precedence over either of the indirect >dependencies. From pje at telecommunity.com Sat Aug 10 17:52:27 2013 From: pje at telecommunity.com (PJ Eby) Date: Sat, 10 Aug 2013 11:52:27 -0400 Subject: [Distutils] Unexpected VersionConflict In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 5:41 PM, Nick Coghlan wrote: > On 10 August 2013 04:06, PJ Eby wrote: >> Probably a better way would be to change the version resolution >> algorithm to be less "greedy", and simply rule out unacceptable >> versions as the process goes along, then picking the most recent >> versions left when everything necessary has been eliminated. >> (Ideally, such an algorithm would still track which distributions had >> the conflicting requirements, though.) > > The part I find most surprising is the fact that pkg_resources ignores > sys.path order entirely when choosing between multiple acceptable > versions. 
Technically, it doesn't ignore it: if a distribution is listed in sys.path, it takes precedence over any distribution listed later, or that has to be found *in* a directory on sys.path, and will in fact cause a VersionConflict if you ask for a version spec that it doesn't match. However, where the distributions aren't listed in sys.path, but merely *found in a directory on sys.path*, then sys.path has no bearing. It would make things a lot more complicated, and not just in an "implementation is hard to explain" kind of way. (In principle, you could write an Environment subclass that had a different precedence, but I'm not sure what benefit you would gain from the added complexity. The core version resolution algorithm wouldn't be affected, though, since it delegates the "find me something I haven't already got on sys.path" operation to an Environment instance's best_match() method.) From vinay_sajip at yahoo.co.uk Sat Aug 10 19:12:39 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 10 Aug 2013 17:12:39 +0000 (UTC) Subject: [Distutils] Last PEP 426 update for a while References: Message-ID: Nick Coghlan gmail.com> writes: > 6. Qualified names are currently restricted to Python 2 compatible > identifiers. This constraint causes some problems in practice, some of which might be more problematic than others: 1. Some existing distributions have group names which violate the constraint. For example, nose uses group names such as "nose.plugins.0.10". Around a hundred other distributions advertise plugins under group names like this. 2. Some distributions have package names which aren't actual packages but which nevertheless contain source files. For example, Django lists "django.utils.2to3_fixers" in its list of packages, though this is just a directory containing fixers and not an actual package. This, of course, also violates the constraint. 
It seems like it might be worth relaxing the restriction so that a qualified name starts with a Python 2 compatible segment, but can have subsequent components matching the [A-Za-z0-9]+ pattern. Regards, Vinay Sajip From ncoghlan at gmail.com Sat Aug 10 23:17:34 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 10 Aug 2013 17:17:34 -0400 Subject: [Distutils] Last PEP 426 update for a while In-Reply-To: References: Message-ID: On 10 August 2013 13:12, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> 6. Qualified names are currently restricted to Python 2 compatible >> identifiers. > > This constraint causes some problems in practice, some of which might be > more problematic than others: > > 1. Some existing distributions have group names which violate the constraint. > For example, nose uses group names such as "nose.plugins.0.10". Around a > hundred other distributions advertise plugins under group names like this. > > 2. Some distributions have package names which aren't actual packages but > which nevertheless contain source files. For example, Django lists > "django.utils.2to3_fixers" in its list of packages, though this is just > a directory containing fixers and not an actual package. This, of course, > also violates the constraint. > > It seems like it might be worth relaxing the restriction so that a qualified > name starts with a Python 2 compatible segment, but can have subsequent > components matching the [A-Za-z0-9]+ pattern. Done. Cheers, Nick. From donald at stufft.io Sun Aug 11 03:07:07 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 10 Aug 2013 21:07:07 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme Message-ID: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Ok round two of this PEP. I've made a number of modifications to it: * Changed the title to be more relevant to the actual proposal. 
* Cleared up language to make it obvious that this was explicitly the naming and auto discovery being removed and not the mirrors in general. * Removed pointless distinctions between "Official" and "Unofficial" mirrors. * Removed comments about staleness and points about the CDN which were not directly related to this PEP, as the PEP doesn't attempt to solve issues with the mirroring protocol, just the discovery and naming scheme. * Incorporated feedback from the thread and simplified the proposal as well as extended the time frame. * Responded to some of the feedback from the thread that wasn't incorporated. I've explicitly CC'd Christian on this email so he can hopefully see it more easily. Abstract ======== This PEP provides a path to deprecate and ultimately remove the auto discovery of PyPI mirrors as well as the hard coded naming scheme which requires delegating a domain name under pypi.python.org to a third party. Rationale ========= The PyPI mirroring infrastructure (defined in `PEP381`_) provides a means to mirror the content of PyPI used by the automatic installers. It also provides a method for auto discovery of mirrors and a consistent naming scheme. There are a number of problems with the auto discovery protocol and the naming scheme: * They give control over a \*.python.org domain name to a third party, allowing that third party to set or read cookies on the pypi.python.org and python.org domain names. * The use of a subdomain of pypi.python.org means that the mirror operators will never be able to get an SSL certificate of their own, and giving them one for a python.org domain name is unlikely to happen. * The auto discovery uses an unauthenticated protocol (DNS). * The lack of a TLS certificate on these domains means that clients cannot be sure that they have not been a victim of DNS poisoning or a MITM attack. * The auto discovery protocol was designed to enable a client to automatically select a mirror for use.
This is no longer a requirement because the CDN that PyPI is now using is a globally distributed network of servers which will automatically select one close to the client without any effort on the client's part. * The auto discovery protocol and use of the consistent naming scheme have only ever been implemented by one installer (pip), and its implementation, besides being insecure, has serious issues with performance and is slated for removal with its next release (1.5). * While there are provisions in `PEP381`_ that would solve *some* of these issues for a dedicated client, they would not solve the issues that affect a user's browser. Additionally, these provisions have not been implemented by any installer to date. Due to the number of issues, some of them very serious, and the CDN which provides most of the benefit of the auto discovery and consistent naming scheme, this PEP proposes to first deprecate and then remove the [a..z].pypi.python.org names for mirrors and the last.pypi.python.org name for the auto discovery protocol. The ability to mirror and the method of mirroring will not be affected and will continue to exist as written in `PEP381`_. Operators of existing mirrors are encouraged to acquire their own domains and certificates to use for their mirrors if they wish to continue hosting them. Plan for Deprecation & Removal ============================== Immediately upon acceptance of this PEP, documentation on PyPI will be updated to reflect the deprecated nature of the official public mirrors and will direct users to external resources like http://www.pypi-mirrors.org/ to discover unofficial public mirrors if they wish to use one. Mirror operators, if they wish to continue operating their mirror, should acquire a domain name to represent their mirror and, if they are able, a TLS certificate. Once they have acquired a domain they should redirect their assigned N.pypi.python.org domain name to their new domain.
On Feb 15th, 2014 the DNS entries for [a..z].pypi.python.org and last.pypi.python.org will be removed. At any time prior to Feb 15th, 2014 a mirror operator may request that their domain name be reclaimed by PyPI and pointed back at the master. Why Feb 15th, 2014 ------------------ The most critical decision of this PEP is the final cut off date. If the date is too soon then it needlessly punishes people by forcing them to drop everything to update their deployment scripts. If the date is too far away then the extended period of time does not help with the migration effort and merely puts off the migration until a later date. The date of Feb 15th, 2014 has been chosen because it is roughly 6 months from the date of the PEP. This should ensure a lengthy period of time to enable people to update their deployment procedures to point to the new domain names without merely padding the cut off date. Why the DNS entries must be removed ----------------------------------- While it would be possible to simply reclaim the domain names used in mirroring and direct them back at PyPI in order to prevent users from needing to update configurations to point away from those domains, this has a number of issues. * Anyone who currently has these names hard coded in their configuration has them hard coded as HTTP. This means that by allowing these names to continue resolving we make it simple for a MITM operator to attack users by rewriting the redirect to HTTPS prior to giving it to the client. * The overhead of maintaining several domains pointing at PyPI has proved troublesome for the small number of N.pypi.python.org domains that have already been reclaimed. They oftentimes get misconfigured when things change on the service, which often leaves them broken for months at a time until somebody notices. By leaving them in, we leave users of these domains open to random breakages which are less likely to get caught or noticed.
* People using these domains have explicitly chosen to use them for one reason or another. One such reason may be because they do not wish to deploy from a host located in a particular country. If these domains continue to resolve but do not point at their existing locations, we have silently removed this choice from the existing users of those domains. That being said, removing the entries *will* require users who have modified their configuration to either point back at the master (PyPI) or select a new mirror name to point at. This is regarded as a regrettable requirement to protect PyPI itself and the users of the mirrors from the attacks outlined above or, at the very least, to require them to make an informed decision about the insecurity. Public or Private Mirrors ========================= The mirroring protocol will continue to exist as defined in `PEP381`_ and people are encouraged to host public and private mirrors if they so desire. The recommended mirroring client is `Bandersnatch`_. .. _PyPI: https://pypi.python.org/ .. _PEP381: http://www.python.org/dev/peps/pep-0381/ .. _Bandersnatch: https://pypi.python.org/pypi/bandersnatch ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From jaraco at jaraco.com Sun Aug 11 16:38:21 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Sun, 11 Aug 2013 14:38:21 +0000 Subject: [Distutils] How to handle launcher script importability?
Message-ID: In Setuptools 1.0 (currently in beta), I've added an experimental, opt-in feature to install pure Python launcher scripts on Windows instead of installing a launcher executable for each script, with the intention that these scripts will be launched by pylauncher or Python directly, eventually obviating the need for a launcher executable in setuptools at all. This means that instead of installing, for example: Scripts\my-command.exe Scripts\my-command-script.py Scripts\my-command.exe.manifest Instead Setuptools just installs: Scripts\my-command.py This technique is much like the scripts that get installed to bin/ on Unix, except that due to the nature of launching commands on Windows, the .py extension is essentially required. One problem with this technique is that if the script is also a valid module name, it can be imported, and because Python puts the script's directory at the top of sys.path, it _will_ be imported if that name is imported. This happens, for example, after installing Cython. Cython provides a command, 'cython', and a (extension) module called 'cython'. If one launches cython using the script launcher, the 'cython' module will not be importable (because "import cython" will import the launcher script). Presumably, this is why '-script' was added to the launcher scripts in previous versions. This is a rather unfortunate situation, and I'd like to solicit comments for a way to avoid this situation. I see a few options: 1. Have the setuptools-generated launcher scripts del sys.path[0] before launching. 2. Accept the implementation and file bugs with the offending projects like Cython to have them either rename their script or rename their internal module. 3. Continue to generate the script names with '-script.py' appended, requiring invocation to always include -script.py on Windows. 4. Continue to generate executables, duplicating the effort of pylauncher, and dealing with the maintenance burden of that functionality. 
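[Editorial sketch] For concreteness, option (1) amounts to something like the following demonstration. The "mytool" name is made up, and this only shows the sys.path effect, not setuptools' actual script generation:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# Demonstrate option (1): a launcher script that deletes sys.path[0]
# (the Scripts\ directory) so a same-named module elsewhere on sys.path
# would be imported instead of the launcher itself.
launcher_src = textwrap.dedent("""\
    import sys
    del sys.path[0]   # option (1): drop the script's own directory
    import mytool     # no longer resolves to this launcher file
""")

with tempfile.TemporaryDirectory() as scripts_dir:
    launcher = os.path.join(scripts_dir, "mytool.py")
    with open(launcher, "w") as f:
        f.write(launcher_src)
    proc = subprocess.run([sys.executable, launcher],
                          capture_output=True, text=True)

# With the fix, the import fails (no real 'mytool' is installed here)
# instead of silently importing the launcher script as the module.
print("ModuleNotFoundError" in proc.stderr)
```

Without the `del sys.path[0]` line, `import mytool` would find the launcher itself, which is the cython collision described above.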
I don't see (2), (3), or (4) as really viable, so my proposal is to move forward with (1) if there aren't any better suggestions. If we move forward with (1), there are a few concerns that come to mind. First, this approach would not apply to package-supplied scripts. What should be done about those (if anything)? Second, this would apply to Unix and Windows scripts (I'd rather not make the distinction if possible) - are there any concerns about removing sys.path[0] in the launch scripts on Unix? Third, is it possible some users are depending on the presence of sys.path[0] or the assumed positions of other items in sys.path that should be protected by this change? Would it make sense to replace sys.path[0] with a non-existent directory instead? My instinct is that these concerns are not of sufficient importance to account for them, and that we should just implement (1) as simply as possible. Your comments and suggestions are appreciated. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From pje at telecommunity.com Sun Aug 11 18:17:14 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 11 Aug 2013 12:17:14 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: On Sun, Aug 11, 2013 at 10:38 AM, Jason R. Coombs wrote: > In Setuptools 1.0 (currently in beta), I've added an experimental, opt-in > feature to install pure Python launcher scripts on Windows instead of > installing a launcher executable for each script, with the intention that > these scripts will be launched by pylauncher or Python directly, eventually > obviating the need for a launcher executable in setuptools at all. 
> > This means that instead of installing, for example: > > Scripts\my-command.exe > Scripts\my-command-script.py > Scripts\my-command.exe.manifest > > Instead Setuptools just installs: > > Scripts\my-command.py > > This technique is much like the scripts that get installed to bin/ on Unix, > except that due to the nature of launching commands on Windows, the .py > extension is essentially required. > > One problem with this technique is that if the script is also a valid module > name, it can be imported, and because Python puts the script's directory at > the top of sys.path, it _will_ be imported if that name is imported. > > This happens, for example, after installing Cython. Cython provides a > command, 'cython', and a (extension) module called 'cython'. If one launches > cython using the script launcher, the 'cython' module will not be importable > (because "import cython" will import the launcher script). Presumably, this > is why '-script' was added to the launcher scripts in previous versions. > > This is a rather unfortunate situation, and I'd like to solicit comments for > a way to avoid this situation. I see a few options: > > 1. Have the setuptools-generated launcher scripts del sys.path[0] before > launching. > 2. Accept the implementation and file bugs with the offending projects like > Cython to have them either rename their script or rename their internal > module. > 3. Continue to generate the script names with '-script.py' appended, > requiring invocation to always include -script.py on Windows. > 4. Continue to generate executables, duplicating the effort of pylauncher, > and dealing with the maintenance burden of that functionality. > > I don't see (2), (3), or (4) as really viable, so my proposal is to move > forward with (1) if there aren't any better suggestions. > > If we move forward with (1), there are a few concerns that come to mind. 
Here's another problem with #1: you will break single-directory standalone portable app installs, where you use "easy_install -mad somedir" to install all of an app's dependencies to a single directory that the app can then be run from (assuming Python is available). In order to work around this issue, you'd need to hardcode sys.path entries for the dependencies, or do something else more complicated in order to ensure that dependency resolution will pick up the adjacent distributions before searching anything else on sys.path. > Third, is it possible some users are depending on the presence of > sys.path[0] Absolutely. It's a documented feature of Python that the script directory is always first on sys.path, so that you can provide modules and packages adjacent to it. That's how portable app installs work with easy_install. May I suggest an option 5 instead? Use the new .pyz (or .pyzw for non-console apps) as a zipped Python application. .pyz files aren't importable, but *are* executable. That's basically all that's needed to prevent importing -- a file extension that's launchable but not importable. (There's also an option 6, which is to use import system hooks to prevent the script modules from being found in the sys.path[0] entry, but that's rather hairier.) Using option 5 means the feature can only work with versions of Python on Windows that install the necessary PATHEXT support to allow that extension to work, but you're kind of limited to that anyway, because by default .py files aren't findable via PATH on Windows. Your post doesn't make it clear whether you're aware of that, btw: IIUC, on most Windows setups, executing a .py file via PATH doesn't work unless you've set up PATHEXT to include .py. 
So your feature's going to break until that's fixed, and AFAIK there is *no* Windows Python that fixes this, with the possible exception of 3.4 alpha, possibly a future alpha that hasn't been released yet, because last I saw on Python-Dev it was still being discussed *how* to update PATHEXT safely from the installer. In short: dropping .exe wrappers is not supportable on *any* current version of Python for Windows, in the sense that anybody who uses it will not yet be able to execute the scripts if they are currently doing so via PATH (and haven't manually fixed their PATHEXT). (This was one of the main reasons for using .exe wrappers in the first place.) The .pyz approach of course has the same drawback, but at least it should be viable for future Python versions, and doesn't have the sys.path[0] problems. I think you are going to have to keep .exe wrappers the default for all Python versions < 3.4. From pje at telecommunity.com Sun Aug 11 18:23:48 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 11 Aug 2013 12:23:48 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: On Sun, Aug 11, 2013 at 12:17 PM, PJ Eby wrote: > May I suggest an option 5 instead? Use the new .pyz (or .pyzw for > non-console apps) as a zipped Python application. .pyz files aren't > importable, but *are* executable. That's basically all that's needed > to prevent importing -- a file extension that's launchable but not > importable. (Details I forgot to mention: the script would be in __main__.py inside the zipped application file, and it would need to change sys.path[0], because sys.path[0] will be the .pyz file itself; it should replace it with the directory containing the .pyz file before doing anything else. That would be the correct way to simulate the existing .exe approach.) From jaraco at jaraco.com Sun Aug 11 19:58:26 2013 From: jaraco at jaraco.com (Jason R. 
Coombs) Date: Sun, 11 Aug 2013 17:58:26 +0000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: > -----Original Message----- > From: PJ Eby [mailto:pje at telecommunity.com] > Sent: Sunday, 11 August, 2013 12:17 > > > Here's another problem with #1: you will break single-directory standalone > portable app installs, where you use "easy_install -mad somedir" to install all > of an app's dependencies to a single directory that the app can then be run > from (assuming Python is available). > > In order to work around this issue, you'd need to hardcode sys.path entries > for the dependencies, or do something else more complicated in order to > ensure that dependency resolution will pick up the adjacent distributions > before searching anything else on sys.path. > > > > Third, is it possible some users are depending on the presence of > > sys.path[0] > > Absolutely. It's a documented feature of Python that the script directory is > always first on sys.path, so that you can provide modules and packages > adjacent to it. That's how portable app installs work with easy_install. All good points to be considered. Thanks. > May I suggest an option 5 instead? Use the new .pyz (or .pyzw for non- > console apps) as a zipped Python application. .pyz files aren't importable, > but *are* executable. That's basically all that's needed to prevent importing > -- a file extension that's launchable but not importable. This sounds like a suitable idea, but as you mention in a subsequent message, this format has issues with sys.path assumptions as well. In this case, I'm inclined to suggest yet another option (7) - create another extension to specifically represent executable but not importable scripts, perhaps .pys/.pysw (or .pycs/.pygs to more closely match console script and gui script). It sounds as if there is a fundamental need for Python to define an extension that distinguishes a script from a module. 
> (There's also an option 6, which is to use import system hooks to prevent the > script modules from being found in the sys.path[0] entry, but that's rather > hairier.) Agreed, this sounds like it has its own subtle challenges. > Using option 5 means the feature can only work with versions of Python on > Windows that install the necessary PATHEXT support to allow that extension > to work, but you're kind of limited to that anyway, because by default .py > files aren't findable via PATH on Windows. > > Your post doesn't make it clear whether you're aware of that, btw: > IIUC, on most Windows setups, executing a .py file via PATH doesn't work > unless you've set up PATHEXT to include .py. So your feature's going to break > until that's fixed, and AFAIK there is *no* Windows Python that fixes this, > with the possible exception of 3.4 alpha, possibly a future alpha that hasn't > been released yet, because last I saw on Python-Dev it was still being > discussed *how* to update PATHEXT safely from the installer. > > In short: dropping .exe wrappers is not supportable on *any* current version > of Python for Windows, in the sense that anybody who uses it will not yet be > able to execute the scripts if they are currently doing so via PATH (and > haven't manually fixed their PATHEXT). (This was one of the main reasons > for using .exe wrappers in the first > place.) > > The .pyz approach of course has the same drawback, but at least it should be > viable for future Python versions, and doesn't have the sys.path[0] problems. > I think you are going to have to keep .exe wrappers the default for all Python > versions < 3.4. I am aware of the PATHEXT factor. I personally add .py and .pyw to the PATHEXT (for all users) on my systems, so I was unsure if Python 3.3 did add those or if pylauncher would add them (if installed separately). I was _hoping_ that was the case, but it sounds like it is not. 
I did include in the documentation notes about this requirement (https://bitbucket.org/pypa/setuptools/src/1.0b1/docs/easy_install.txt#cl-98 ). I do want to explore the possibility of setuptools facilitating this configuration such that it's easy for a Windows user to enable these settings even if Python does not. The nice thing about (7) is it would define an extension that's specifically meant to be executable, so would be an obvious (and potentially less controversial) choice to include in PATHEXT. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From pje at telecommunity.com Sun Aug 11 21:11:09 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 11 Aug 2013 15:11:09 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: On Sun, Aug 11, 2013 at 1:58 PM, Jason R. Coombs wrote: > This sounds like a suitable idea, but as you mention in a subsequent > message, this format has issues with sys.path assumptions as well. Meh. It's basically, a one-line fix in the __main__.py, i.e.: import sys,os.path; sys.path[0] = os.path.dirname(sys.path[0]) Not exactly rocket science. ;-) > In this case, I'm inclined to suggest yet another option (7) - create another > extension to specifically represent executable but not importable scripts, > perhaps .pys/.pysw (or .pycs/.pygs to more closely match console script and > gui script). Probably .pys/.pyws or .pws would be needed, due to issues with some Windows shells using extensions longer than three characters. (This came up in PEP 441 discussions on Python-Dev.) > It sounds as if there is a fundamental need for Python to define an > extension that distinguishes a script from a module. Yep. > I am aware of the PATHEXT factor. 
I personally add .py and .pyw to the > PATHEXT (for all users) on my systems, so I was unsure if Python 3.3 did add > those or if pylauncher would add them (if installed separately). I was > _hoping_ that was the case, but it sounds like it is not. I did include in > the documentation notes about this requirement > (https://bitbucket.org/pypa/setuptools/src/1.0b1/docs/easy_install.txt#cl-98 > ). > > I do want to explore the possibility of setuptools facilitating this > configuration such that it's easy for a Windows user to enable these > settings even if Python does not. It would definitely make sense to have an installer that sets this up, but it would need to be a Windows installer, I think, not a Python program. That is, I don't think setuptools can really do anything about it directly. Personally, I'm not sure I see the point of pushing for early elimination of .exe's - they don't depend on the registry or environment variables or anything else, which makes them great for standalone applications, and they work across all Python versions. Meanwhile, the experimental nature of your change -- and its inability to be the default on versions below 3.4 -- means you're going to be maintaining two sets of code for a very long time. OTOH, implementing a way to deploy an app as a .pyz/.pwz file is a useful feature in its own right, so it might not be doubling up as much. ;-) From ncoghlan at gmail.com Sun Aug 11 23:13:49 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 11 Aug 2013 17:13:49 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: We actually have a proposal on import-sig to allow module specific import path manipulation (including the ability to say "don't import this module from this directory, even though it looks like it is here"). I'd favour that mechanism over a new "not importable" file extension. 
If that doesn't make it into 3.4, the proposed zipapp extensions would also serve a similar purpose, with some straightforward sys.path manipulation in __main__.py (as PJE pointed out). Regards, Nick. P.S. Has anyone heard from Daniel lately, or know if he's away? I pinged him about getting the zipapp utility module PEP moving again a couple of weeks ago and haven't heard anything back. From jaraco at jaraco.com Mon Aug 12 01:31:07 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Sun, 11 Aug 2013 23:31:07 +0000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> > -----Original Message----- > From: Nick Coghlan [mailto:ncoghlan at gmail.com] > Sent: Sunday, 11 August, 2013 17:14 > > We actually have a proposal on import-sig to allow module specific import > path manipulation (including the ability to say "don't import this module > from this directory, even though it looks like it is here"). I'd favour that > mechanism over a new "not importable" file extension. I don't believe this mechanism would suffice. My previous example was over-simplified to the general problem, which is that any script could potentially be imported as a module of the same name. So if I were to launch easy_install.py, it would set sys.path[0] to Scripts\ and if it were then to import cython (which it does), it would import Scripts/cython.py as cython, unless there were some way to globally declare all installed scripts somewhere so they're excluded from import. > If that doesn't make it into 3.4, the proposed zipapp extensions would also > serve a similar purpose, with some straightforward sys.path manipulation in > __main__.py (as PJE pointed out). Regardless what solution might be made available for Python 3.4, I'd prefer to work toward a solution that leverages pylauncher under older Pythons. 
After all, one of the huge benefits of pylauncher is that it supports multiple Pythons. If zipapp is the preferred mechanism for that, then so be it. I do agree that we should devise a best approach within the context of Python 3.4, and consider the backward-compatibility implications separately. My feeling is that zipapp is somewhat too convoluted in that it alters the sys.path, but then has to alter it back to simulate not being a zipapp. It's also a file within a file, meaning it can't be readily edited with a text editor, but requires a routine even just to inspect it. I guess that's more transparent than an executable, though. This approach also means that the script generation is not congruent with that on Unix systems. Using a zipapp means that the whole script generation needs to be special-cased for Windows. One of great benefits of using a simple script was that the code becomes largely unified (only requiring appending of an extension when on Windows). That is, unless zipapps can be made executable on Unix as well. Given these obstacles, do you still feel that zipapp is the best approach? -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From pje at telecommunity.com Mon Aug 12 03:37:28 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 11 Aug 2013 21:37:28 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Sun, Aug 11, 2013 at 7:31 PM, Jason R. 
Coombs wrote: >> -----Original Message----- >> From: Nick Coghlan [mailto:ncoghlan at gmail.com] >> Sent: Sunday, 11 August, 2013 17:14 >> >> We actually have a proposal on import-sig to allow module specific import >> path manipulation (including the ability to say "don't import this module >> from this directory, even though it looks like it is here"). I'd favour that >> mechanism over a new "not importable" file extension. > > I don't believe this mechanism would suffice. My previous example was > over-simplified to the general problem, which is that any script could > potentially be imported as a module of the same name. So if I were to launch > easy_install.py, it would set sys.path[0] to Scripts\ and if it were then to > import cython (which it does), it would import Scripts/cython.py as cython, > unless there were some way to globally declare all installed scripts somewhere > so they're excluded from import. Indeed. It really *does* need to be a "don't import this" extension, though it doesn't much matter what that extension is. Except on Windows, of course, where it has to be something associated with Python that also still works as a console app and is listed in PATHEXT. (As you surmised earlier, my choice of '-script.py' was indeed chosen to prevent accidental importing, as the '-' ensures it's not a valid module name.) >> If that doesn't make it into 3.4, the proposed zipapp extensions would also >> serve a similar purpose, with some straightforward sys.path manipulation in >> __main__.py (as PJE pointed out). > > Regardless what solution might be made available for Python 3.4, I'd prefer to > work toward a solution that leverages pylauncher under older Pythons. After > all, one of the huge benefits of pylauncher is that it supports multiple > Pythons. If zipapp is the preferred mechanism for that, then so be it. For 2.6+, zipapps would work as long as pylauncher supported them and put the requisite extensions in PATHEXT. 
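[Editorial sketch] The zipapp shape being discussed can be sketched as below: a zip archive whose __main__.py applies PJE's one-line sys.path[0] fix. File names here are illustrative:

```python
import os
import subprocess
import sys
import tempfile
import zipfile

# Build a tiny "zipapp": a zip whose __main__.py rewrites sys.path[0]
# (which points at the archive itself) to the archive's directory,
# approximating the environment an .exe wrapper would provide.
main_py = (
    "import sys, os.path\n"
    "sys.path[0] = os.path.dirname(sys.path[0])\n"
    "print(sys.path[0])\n"
)

with tempfile.TemporaryDirectory() as d:
    pyz = os.path.join(d, "mytool.pyz")   # 'mytool' is a made-up name
    with zipfile.ZipFile(pyz, "w") as zf:
        zf.writestr("__main__.py", main_py)
    out = subprocess.run([sys.executable, pyz],
                         capture_output=True, text=True).stdout.strip()

# The app now sees its containing directory first, not the archive.
print(out == d)
```

Because the archive is not importable as a module, the name-collision problem from earlier in the thread does not arise.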
> This approach also means that the script generation is not congruent with that > on Unix systems. Using a zipapp means that the whole script generation needs > to be special-cased for Windows. One of great benefits of using a simple > script was that the code becomes largely unified (only requiring appending of > an extension when on Windows). That is, unless zipapps can be made executable > on Unix as well. They're already executable on Unix (as of 2.6+), as they contain a #! line. And they don't need a special extension; on both Unix and Windows, Python detects zipapps by inspecting the tail signature, not by the extension. (Of course, you could just continue using the existing wrapper mechanism on Unix, which has the advantage of added transparency, at the cost of having two code paths.) > Given these obstacles, do you still feel that zipapp is the best approach? Long-term, yes. I would slightly prefer the "this is a script" extension, though, as it has the extra transparency benefit. (Still, nobody should be editing an installed script anyway.) From ncoghlan at gmail.com Mon Aug 12 13:33:18 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 12 Aug 2013 07:33:18 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 11 Aug 2013 21:37, "PJ Eby" wrote: > > On Sun, Aug 11, 2013 at 7:31 PM, Jason R. Coombs wrote: > >> -----Original Message----- > >> From: Nick Coghlan [mailto:ncoghlan at gmail.com] > >> Sent: Sunday, 11 August, 2013 17:14 > >> > >> We actually have a proposal on import-sig to allow module specific import > >> path manipulation (including the ability to say "don't import this module > >> from this directory, even though it looks like it is here"). I'd favour that > >> mechanism over a new "not importable" file extension. > > > > I don't believe this mechanism would suffice. 
My previous example was > > over-simplified to the general problem, which is that any script could > > potentially be imported as a module of the same name. So if I were to launch > > easy_install.py, it would set sys.path[0] to Scripts\ and if it were then to > > import cython (which it does), it would import Scripts/cython.py as cython, > > unless there were some way to globally declare all installed scripts somewhere > > so they're excluded from import. > > Indeed. It really *does* need to be a "don't import this" extension, > though it doesn't much matter what that extension is. Except on > Windows, of course, where it has to be something associated with > Python that also still works as a console app and is listed in > PATHEXT. > > (As you surmised earlier, my choice of '-script.py' was indeed chosen > to prevent accidental importing, as the '-' ensures it's not a valid > module name.) Having an empty "cython.ref" file (extension TBC) would tell Python to skip that directory for "import cython" regardless of the presence of something that otherwise would be considered. The actual downside is that it won't land until 3.4 at the earliest, while PyLauncher can associate additional extensions with itself (and modify PATHEXT) earlier than that. Having pys and pyz for "executable, but not importable" (source and zip archive forms) could be quite clean. In effect, the pys extension would bring Windows to parity with *nix, where "no extension at all" has traditionally served the purpose of making it impossible to import a script. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Aug 12 15:01:10 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Aug 2013 09:01:10 -0400 Subject: [Distutils] How to handle launcher script importability?
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Mon, Aug 12, 2013 at 7:33 AM, Nick Coghlan wrote: > Having pys and pyz for "executable, but not importable" (source and zip > archive forms) could be quite clean. In effect, the pys extension would > bring windows to parity with *nix, where "no extension at all" has > traditionally served the purpose of making it impossible to import a script. Oh, that reminds me: IIUC, it's not necessary to *actually* zip a .pyz. Remember, Python doesn't use the extension to determine the contents, it sniffs for a zip trailer. Likewise, there was IIRC no plan for pylauncher to inspect zip contents -- it just reads the #! line and runs python on the file. This means that you can actually write source as a .pyz or .pwz file on Windows, and it would Just Work -- *without any sys.path modification*. For *all versions of Python*, provided PyLauncher is installed. (And setuptools could check the environment and registry for that, if you opt into the non-.exe scripts, and error out if you don't have them set up correctly.) IOW, implementing PEP 441 in PyLauncher gives us the "executable, not importable" format for free. (Granted, I can see reasons for not wanting to use the same extension for source and zipped versions, mostly in the area of tools other than pylauncher, but if you do have different extensions then there have to be *four*, due to console vs. windowed and source vs. zipped.) From p.f.moore at gmail.com Mon Aug 12 16:32:43 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 12 Aug 2013 15:32:43 +0100 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 12 August 2013 14:01, PJ Eby wrote: > This means that you can actually write source as a .pyz or .pwz file > on Windows, and it would Just Work -- *without any sys.path > modification*. > Conversely, you can right now rename a zipped file as xxx.py and it will be run happily as a Python standalone zip file. It confuses the heck out of things like text editors and zip file managers, of course :-) We currently have 2 extensions on Windows - .py and .pyw. These get a default run action of the launcher (py.exe and pyw.exe versions respectively) and both are treated as marking Python modules (I don't know why anyone would create a Python module with a .pyw extension - sounds like an unnecessary attempt to apply "consistency" to me). Having two more that had the launcher as default action but did *not* mark Python modules might make sense. Having further extensions is fundamentally a documentation-only exercise - they will not be treated any differently by any code shipped by Python. The problem is that documentation (and user expectation) is important - nobody expects a .py file to be a binary zip file (and they'd get a shock if they opened it in a text editor). On the other hand, grabbing a huge host of file extensions (.py, .pyw, .pyo, .pyc, .pyz, .pwz, .pys, .pws) is not very friendly, as well as adding the burden of clearly documenting what all these various extensions *mean*. My view would be: 1. We can't touch .py/.pyw behaviour for backward compatibility reasons 2. A new suffix that is associated with the launcher but which does *not* mark importable modules would be good. (Call it .pys for now). 3. I'd rather not see further extensions added, six is plenty. As far as zipped Python applications are concerned (pyz), these can be created by just using a pys file containing a #! line prepended to the zip file. 
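The construction described here, a #! line prepended to a zip archive whose __main__.py is the entry point, can be sketched with the stdlib alone. The file names and the python3 interpreter path below are illustrative assumptions:

```python
import os
import stat
import zipfile

# Build a single-file runnable app by hand: a shebang line followed by
# a zip archive containing __main__.py.  Python locates the zip by its
# trailing central-directory record, so the prepended text is harmless.
with zipfile.ZipFile("payload.zip", "w") as zf:
    zf.writestr("__main__.py", "print('hello from the archive')\n")

with open("myapp", "wb") as out:
    out.write(b"#!/usr/bin/env python3\n")
    with open("payload.zip", "rb") as src:
        out.write(src.read())

# Mark it executable for Unix; on Windows a file association or
# launcher would do this job instead.
os.chmod("myapp", os.stat("myapp").st_mode | stat.S_IXUSR)
```

After this, both `./myapp` (on Unix) and `python myapp` run the archive's __main__.py, regardless of what extension, if any, the file carries.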
Certainly, it's a binary file with a filename that would normally indicate a text file format, but is that any less true on Unix when users create these files? I don't know what the user experience with zipped Python applications on Unix is like - I doubt it's *that* much better than on Windows. Probably the reality is that nobody uses zipped applications anyway, so the problems haven't been identified yet. Maybe the pyz PEP would be better rewritten to propose providing tools to create and manage zipped Python applications, but *not* to require new extensions, merely to reuse existing ones (pys on Windows, no extension on Unix) with binary (zipped) content. Apologies if I've missed an obvious flaw - I'm way behind on sleep at the moment... Paul. PS Either the ref file marker approach, or a new Python command line argument with appropriate behaviour, could avoid the need for even the pys/pws extension, if people prefer to reduce the number of extensions claimed still further. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Aug 12 17:21:50 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Aug 2013 11:21:50 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Mon, Aug 12, 2013 at 10:32 AM, Paul Moore wrote: > On 12 August 2013 14:01, PJ Eby wrote: > > As far as zipped Python applications are concerned (pyz), these can be > created by just using a pys file containing a #! line prepended to the zip > file. Certainly, it's a binary file with a filename that would normally > indicate a text file format, but is that any less true on Unix when users > create these files? I don't know what the user experience with zipped Python > applications on Unix is like - I doubt it's *that* much better than on > Windows.
Probably the reality is that nobody uses zipped applications > anyway, so the problems haven't been identified yet. Maybe the pyz PEP would > bet better rewritten to propose providing tools to create and manage zipped > Python applications, but *not* to require new extensions, merely to reuse > existing ones (pys on Windows, no extension on Unix) with binary (zipped) > content. Seems reasonable... but then somebody will need to write another PEP for the file extension(s) issue. I think the issue of "too many extensions" vs. "source/binary confusion" is going to boil down to a BDFL judgment call, whether it's by Nick, Guido, or some more Windows-specific BDFL For One PEP. If we go with One Extension To Rule Them All, I would actually suggest '.pyl' (for PyLauncher), since really all that extension does is say, "hey, run this as a console app via PyLauncher", not that it's a "script" (which would be assumed to be text). And that all you can be sure of is that a .pyl files will start with a #! line, and launch whatever other program is specified there, on the contents of the file -- which may actually be a zipfile. > PS Either the ref file marker approach, or a new Python command line > argument with appropriate behaviour, could avoid the need for even the > pys/pws extension, if people prefer to reduce the number of extensions > claimed still further. But those would only be available for future Python versions. A file extension would solve the problem upon installing PyLauncher and PATHEXT, at least for those OSes and shells that recognize PATHEXT. Hm, here's a side thought: what if PyLauncher added the ability to serve as a script wrapper, just like setuptools' existing wrappers? Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or .pyw, and presto! No PATHEXT compatibility needed, but users could still opt out of using the .exe wrappers if they're sure their shell works right without it. 
(The wrapper facility would be implemented by simply checking for an adjacent file of matching filename and extension (.pyl for py.exe, .pyw for pyw.exe), and if found, insert that filename as argv[1] before proceeding with the normal launch process. For efficiency, the file check could be skipped if the executable has its original name, at the minor cost of it not being possible to name a console script 'py' or a windows app 'pyw'. But that's an optional tweak.) From ncoghlan at gmail.com Mon Aug 12 17:35:35 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 12 Aug 2013 11:35:35 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 12 August 2013 11:21, PJ Eby wrote: > On Mon, Aug 12, 2013 at 10:32 AM, Paul Moore wrote: >> On 12 August 2013 14:01, PJ Eby wrote: >> >> As far as zipped Python applications are concerned (pyz), these can be >> created by just using a pys file containing a #! line prepended to the zip >> file. Certainly, it's a binary file with a filename that would normally >> indicate a text file format, but is that any less true on Unix when users >> create these files? I don't know what the user experience with zipped Python >> applications on Unix is like - I doubt it's *that* much better than on >> Windows. Probably the reality is that nobody uses zipped applications >> anyway, so the problems haven't been identified yet. Maybe the pyz PEP would >> bet better rewritten to propose providing tools to create and manage zipped >> Python applications, but *not* to require new extensions, merely to reuse >> existing ones (pys on Windows, no extension on Unix) with binary (zipped) >> content. > > Seems reasonable... but then somebody will need to write another PEP > for the file extension(s) issue. > > I think the issue of "too many extensions" vs. 
"source/binary > confusion" is going to boil down to a BDFL judgment call, whether it's > by Nick, Guido, or some more Windows-specific BDFL For One PEP. > > If we go with One Extension To Rule Them All, I would actually suggest > '.pyl' (for PyLauncher), since really all that extension does is say, > "hey, run this as a console app via PyLauncher", not that it's a > "script" (which would be assumed to be text). And that all you can be > sure of is that a .pyl files will start with a #! line, and launch > whatever other program is specified there, on the contents of the file > -- which may actually be a zipfile. I like this idea. >> PS Either the ref file marker approach, or a new Python command line >> argument with appropriate behaviour, could avoid the need for even the >> pys/pws extension, if people prefer to reduce the number of extensions >> claimed still further. > > But those would only be available for future Python versions. A file > extension would solve the problem upon installing PyLauncher and > PATHEXT, at least for those OSes and shells that recognize PATHEXT. > > Hm, here's a side thought: what if PyLauncher added the ability to > serve as a script wrapper, just like setuptools' existing wrappers? > Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or > .pyw, and presto! No PATHEXT compatibility needed, but users could > still opt out of using the .exe wrappers if they're sure their shell > works right without it. > > (The wrapper facility would be implemented by simply checking for an > adjacent file of matching filename and extension (.pyl for py.exe, > .pyw for pyw.exe), and if found, insert that filename as argv[1] > before proceeding with the normal launch process. For efficiency, the > file check could be skipped if the executable has its original name, > at the minor cost of it not being possible to name a console script > 'py' or a windows app 'pyw'. But that's an optional tweak.) 
This sounds like a plausible approach, especially if we add the bootstrapping being considered for 3.4+ to PyLauncher for earlier versions. (Donald has a draft PEP for that, he's just making a few tweaks before publishing it for broader comment) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Mon Aug 12 17:41:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 12 Aug 2013 16:41:46 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 12 August 2013 16:35, Nick Coghlan wrote: > > Hm, here's a side thought: what if PyLauncher added the ability to > > serve as a script wrapper, just like setuptools' existing wrappers? > > Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or > > .pyw, and presto! No PATHEXT compatibility needed, but users could > > still opt out of using the .exe wrappers if they're sure their shell > > works right without it. > > > > (The wrapper facility would be implemented by simply checking for an > > adjacent file of matching filename and extension (.pyl for py.exe, > > .pyw for pyw.exe), and if found, insert that filename as argv[1] > > before proceeding with the normal launch process. For efficiency, the > > file check could be skipped if the executable has its original name, > > at the minor cost of it not being possible to name a console script > > 'py' or a windows app 'pyw'. But that's an optional tweak.) > > This sounds like a plausible approach, especially if we add the > bootstrapping being considered for 3.4+ to PyLauncher for earlier > versions. (Donald has a draft PEP for that, he's just making a few > tweaks before publishing it for broader comment) > Do you want the time machine keys back? 
:-) http://bugs.python.org/issue18491 Committed by Vinay in http://hg.python.org/cpython/rev/4123e002a1af The wrapper source can be built that way if SCRIPT_WRAPPER is defined, but the build infrastructure does not currently define that. See the patch and issue log for details. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Mon Aug 12 19:00:42 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 12 Aug 2013 17:00:42 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: PJ Eby telecommunity.com> writes: > If we go with One Extension To Rule Them All, I would actually suggest > '.pyl' (for PyLauncher), since really all that extension does is say, > "hey, run this as a console app via PyLauncher", not that it's a > "script" (which would be assumed to be text). And that all you can be > sure of is that a .pyl files will start with a #! line, and launch > whatever other program is specified there, on the contents of the file I know I'm bike-shedding here, but my preference for extension would be '.pye' as it indicates something to execute, but without indicating exactly how (i.e. that it's via a separate launcher executable). The standalone launcher already adds bindings for zip extensions but does not change PATHEXT. Any changes in this area would not be in the launcher itself, but in the standalone launcher installer (and hence would need to be replicated in the Python installer). Regards, Vinay Sajip From jaraco at jaraco.com Mon Aug 12 20:14:34 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Mon, 12 Aug 2013 18:14:34 +0000 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> > -----Original Message----- > From: Distutils-SIG [mailto:distutils-sig- > bounces+jaraco=jaraco.com at python.org] On Behalf Of PJ Eby > Sent: Monday, 12 August, 2013 11:22 > > On Mon, Aug 12, 2013 at 10:32 AM, Paul Moore > wrote: > > On 12 August 2013 14:01, PJ Eby wrote: > > > > As far as zipped Python applications are concerned (pyz), these can be > > created by just using a pys file containing a #! line prepended to the > > zip file. Certainly, it's a binary file with a filename that would > > normally indicate a text file format, but is that any less true on > > Unix when users create these files? I don't know what the user > > experience with zipped Python applications on Unix is like - I doubt > > it's *that* much better than on Windows. Probably the reality is that > > nobody uses zipped applications anyway, so the problems haven't been > > identified yet. Maybe the pyz PEP would bet better rewritten to > > propose providing tools to create and manage zipped Python > > applications, but *not* to require new extensions, merely to reuse > > existing ones (pys on Windows, no extension on Unix) with binary (zipped) > content. > > Seems reasonable... but then somebody will need to write another PEP for > the file extension(s) issue. My preference is to reject the idea of the side-by-side executable launcher. There are several downsides that I'm trying to avoid by moving away from the executable: 1. Disparity with Unix. Better parity means cleaner code, easier documentation, and less confusion moving from platform to platform. 2. Executables that look like installers. 
If a launcher executable is used and Windows detects that it "looks like" an installer and it's a 32-bit executable and it doesn't have a manifest to disable the functionality, Windows will execute the file in a separate UAC context (often in a separate Window). 3. Additional files are needed. In particular, due to (2), a manifest must be provided for 32-bit executables. 4. Word size accounting. It's not clear to me what word size is needed. 32-bit may be sufficient, though 64-bit seem to have some advantages: a manifest is not needed, and it can match the word size of the installed Python executable (for consistency). Setuptools currently provides both (and installs the one that matches the Python executable). 5. Platform support. We're fortunate that Windows is one of the most stable binary platforms out there. Nevertheless, Setuptools recently got support for AMD binaries in the launcher. By relying on an external launcher, the launcher becomes responsible for platform support. 6. Two to three files to do the job of one. In fact, the "job" isn't much more than to invoke code elsewhere, so it seems ugly to require as many as three files to do the job. Then multiply that by the Python-specific version and you have up to six files for a single script. 7. Obfuscation of purpose. A single script pretty directly communicates its purpose. When there are multiple files, it's not obvious why they exist or what their purpose is. Indeed, I went years without realizing we had an open issue in Distribute due to a missing manifest (which was fixed in Setuptools), all because I used the 64-bit executable. While it may take some time for the community to learn what a '.pyl' is, it's easily documented and simple to grasp, unlike the subtle and sometimes implicit nuances (and fragility) of a side-by-side executable. 8. Unwanted content. 
Some Unix users have complained about finding Windows executables in their Linux packages, so now Setuptools has special handling to omit the launchers when installed on Unix systems. This is far from beautiful. > I think the issue of "too many extensions" vs. "source/binary confusion" is > going to boil down to a BDFL judgment call, whether it's by Nick, Guido, or > some more Windows-specific BDFL For One PEP. > > If we go with One Extension To Rule Them All, I would actually suggest > '.pyl' > (for PyLauncher), since really all that extension does is say, "hey, run > this as a > console app via PyLauncher", not that it's a "script" (which would be > assumed to be text). And that all you can be sure of is that a .pyl files > will > start with a #! line, and launch whatever other program is specified there, > on > the contents of the file > -- which may actually be a zipfile. If it's '.py*', I don't see why it's not reasonable to allow omission of the shebang, and assume the default python. After encountering and now understanding the subtle import semantics, I'm hoping that this new extension can also be used in my personal 'scripts' collection to serve the same purpose it does for setuptools console entry points. I guess one could require #!/usr/bin/python in each, but that seems superfluous on Windows. I don't feel at all strongly on this point. > > PS Either the ref file marker approach, or a new Python command line > > argument with appropriate behaviour, could avoid the need for even the > > pys/pws extension, if people prefer to reduce the number of extensions > > claimed still further. > > But those would only be available for future Python versions. A file > extension would solve the problem upon installing PyLauncher and PATHEXT, > at least for those OSes and shells that recognize PATHEXT. 
Also, in my mind, this approach is most directly addressing the fundamental challenge (distinguishing a (executable) script from a module) in much the way Unix has previously enjoyed. > Hm, here's a side thought: what if PyLauncher added the ability to serve as > a > script wrapper, just like setuptools' existing wrappers? > Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or .pyw, > and presto! No PATHEXT compatibility needed, but users could still opt out > of using the .exe wrappers if they're sure their shell works right without > it. > > (The wrapper facility would be implemented by simply checking for an > adjacent file of matching filename and extension (.pyl for py.exe, .pyw for > pyw.exe), and if found, insert that filename as argv[1] before proceeding > with the normal launch process. For efficiency, the file check could be > skipped if the executable has its original name, at the minor cost of it not > being possible to name a console script 'py' or a windows app 'pyw'. But > that's an optional tweak.) I'm warming up to this idea a bit, especially how it supports the most elegant approach but degrades gracefully. Some questions that arise: Where would Setuptools expect to find these launchers? Would it expect them to be present on the system? Would it symlink or hardlink them or simply copy? Is py.exe subject to the 'looks like installer' behavior, such that it would need a manifest? I still feel like this approach would require substantial special-casing, but since it provides a transition to the simple, elegant approach, I'm not opposed. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From Steve.Dower at microsoft.com Mon Aug 12 20:35:28 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Mon, 12 Aug 2013 18:35:28 +0000 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: > I know I'm bike-shedding here, but my preference for extension would be > '.pye' as it indicates something to execute, but without indicating exactly > how (i.e. that it's via a separate launcher executable). +1 (I spent the whole time reading this thread thinking "I'd prefer pye, or maybe pyx (pee-why-ex-ecutable)") I don't think we're voting on it yet, but also +1 for putting .PYE in PATHEXT instead of .PY. As I've mentioned before, I'll be upset if one day typing "pip install ..." opens my editor instead of running pip... And any reason we need a separate extension for pyw.exe? Can't that be specified in the shebang? Cheers, Steve From Steve.Dower at microsoft.com Mon Aug 12 21:03:00 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Mon, 12 Aug 2013 19:03:00 +0000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> Jason R. Coombs wrote: > My preference is to reject the idea of the side-by-side executable launcher. > There are several downsides that I'm trying to avoid by moving away from the > executable: > [SNIP] > 2. Executables that look like installers. If a launcher executable is used and > Windows detects that it "looks like" an installer and it's a 32-bit executable > and it doesn't have a manifest to disable the functionality, Windows will > execute the file in a separate UAC context (often in a separate Window). The problematic part of detection is the filename. 
The "Installer Detection" section of http://msdn.microsoft.com/en-us/library/bb530410.aspx has more details, but isn't 100% precise. I'm sure there's a precise description somewhere, but I don't know where it is. > 3. Additional files are needed. In particular, due to (2), a manifest must be > provided for 32-bit executables. > 4. Word size accounting. It's not clear to me what word size is needed. 32-bit > may be sufficient, though 64-bit seem to have some advantages: a manifest is > not needed, and it can match the word size of the installed Python executable > (for consistency). Setuptools currently provides both (and installs the one > that matches the Python executable). 32-bit is sufficient, and the manifest can be embedded. Since all the executable is doing is looking for a matching/similarly-named file in its directory and launching it, there's no need for a 64-bit version, or for any differences to exist between the executables. They could all be copied from the same source whenever one is needed. (A 64-bit binary is only required when loading 64-bit DLLs. It is not required to launch a 64-bit process.) > [SNIP] > 6. Two to three files to do the job of one. In fact, the "job" isn't much more > than to invoke code elsewhere, so it seems ugly to require as many as three > files to do the job. Then multiply that by the Python-specific version and you > have up to six files for a single script. While I can understand this from the POV of the implementer/maintainer, I've never heard a single Windows user mention it. And with an embedded manifest, it's no more than one .py and one .exe-per-Python-version. > [SNIP] > 8. Unwanted content. Some Unix users have complained about finding Windows > executables in their Linux packages, so now Setuptools has special handling to > omit the launchers when installed on Unix systems. This is far from beautiful. Do us Windows users get .sh and .DSStore files filtered out too? 
:) (More seriously, why isn't the onus on package developers to specify platform-specific files?) >> Hm, here's a side thought: what if PyLauncher added the ability to serve as >> a script wrapper, just like setuptools' existing wrappers? >> Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or .pyw, >> and presto! No PATHEXT compatibility needed, but users could still opt out >> of using the .exe wrappers if they're sure their shell works right without >> it. >> >> (The wrapper facility would be implemented by simply checking for an >> adjacent file of matching filename and extension (.pyl for py.exe, .pyw for >> pyw.exe), and if found, insert that filename as argv[1] before proceeding >> with the normal launch process. For efficiency, the file check could be >> skipped if the executable has its original name, at the minor cost of it not >> being possible to name a console script 'py' or a windows app 'pyw'. But >> that's an optional tweak.) > > I'm warming up to this idea a bit, especially how it supports the most elegant > approach but degrades gracefully. py.exe is a great file to use as a launcher, but it would also be easy to make a more specific one. > Some questions that arise: > Where would Setuptools expect to find these launchers? Would it expect them to > be present on the system? I don't think that's unreasonable. Perhaps they should always install into Python's path (C:\Python##\py.exe) so they are discoverable? Not everybody can install into C:\Windows\System32 (or SysWOW64) > Would it symlink or hardlink them or simply copy? Copying is most reliable and the only way to handle installation onto different drive partitions. Some people will worry about the size, but I don't know that much can be done about that. A smaller executable could be made with no icons and relying on the OS to find "py.exe" wherever it's been installed. You can hardlink the version-specific executables safely. 
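The adjacent-file lookup under discussion is simple enough to model. The sketch below only illustrates the proposed rule in Python; the real launcher would implement it in C, and the '.pyl' extension and the py/pyw name exemption are proposals from this thread, not shipped behaviour:

```python
import os

# Model of the proposed wrapper facility: a launcher copied next to
# "name.pyl" (console) or "name.pyw" (windowed) would insert that
# script as argv[1] before proceeding with its normal launch process.
# A launcher keeping its original name (py/pyw) skips the lookup.
def adjacent_script(exe_path, ext=".pyl"):
    stem, _ = os.path.splitext(exe_path)
    if os.path.basename(stem).lower() in ("py", "pyw"):
        return None  # original launcher name: no wrapping
    candidate = stem + ext
    return candidate if os.path.exists(candidate) else None
```

Under this rule a copy named foo.exe would launch an adjacent foo.pyl if one exists, while py.exe itself keeps its ordinary command-line behaviour.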
> Is py.exe subject to the 'looks like installer' behavior, such that it would need > a manifest? The one that's in Python 3.4 Alpha already has an embedded manifest, but I don't believe it would trigger the installer heuristics anyway. > I still feel like this approach would require substantial special-casing, but > since it provides a transition to the simple, elegant approach, I'm not > opposed. Shouldn't require that much special casing, except that you probably wouldn't be able to use it with an adjacent "py.py" file (that is, you'd special case "py.exe" to have the default behaviour). Expecting the adjacent file to be "name-script.py" or "name.pyl" for "name.exe" seems reasonable to me, and neither of those modules will be importable. Cheers, Steve From p.f.moore at gmail.com Mon Aug 12 21:30:29 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 12 Aug 2013 20:30:29 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 12 August 2013 19:35, Steve Dower wrote: > And any reason we need a separate extension for pyw.exe? Can't that be > specified in the shebang? The association specifies the exe to run, that exe then relaunches whatever the shebang specifies. But it's the *initial* executable's console/GUI flag that specifies the behaviour, so we need two launchers and hence two extensions. You could put pythonw in the shebang of a .py file, but that would open a console window when double clicked. Similarly, putting python in the shebang of a .pyw file will detach the script from the current console when run from the command line. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 12 22:04:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 12 Aug 2013 21:04:53 +0100 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 12 August 2013 19:14, Jason R. Coombs wrote: > My preference is to reject the idea of the side-by-side executable > launcher. > There are several downsides that I'm trying to avoid by moving away from > the > executable: > Using a dedicated filetype and associated systemwide launcher is the ideal solution in many ways, but the behaviour is subtle, shell-dependent, and under-documented. I have been adding .PY to PATHEXT for ages now, and running .py scripts as commands with no issues - but people have raised objections to the proposal that we do this for exe wrappers, and I don't have convincing answers in all cases (mostly because "I've never seen that case" is often my response...). I can't recall details, but look back over recent threads on distutils-sig and python-dev that I've been involved in, for details. Also, a dedicated filetype won't be registered for users of older Pythons - unless they install the latest launcher. Whether that is an issue to be concerned about needs to be agreed. (Backward compatibility vs making progress, I guess...) > If it's '.py*', I don't see why it's not reasonable to allow omission of > the > shebang, and assume the default python. After encountering and now > understanding the subtle import semantics, I'm hoping that this new > extension > can also be used in my personal 'scripts' collection to serve the same > purpose > it does for setuptools console entry points. I guess one could require > #!/usr/bin/python in each, but that seems superfluous on Windows. I don't > feel > at all strongly on this point. > IIRC, the shebang can be omitted, and in that case you get the launcher's configured default Python (from either py.ini, or built in). 
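For reference, the "configured default Python" Paul mentions comes from the PEP 397 launcher's py.ini file. A minimal example (the version numbers here are illustrative):

```ini
; %LOCALAPPDATA%\py.ini -- consulted by py.exe when a script has no
; shebang line, or a "virtual" shebang like #!/usr/bin/python.
[defaults]
; Which Python runs shebang-less scripts and plain "python" shebangs:
python=3
; Which 3.x version "python3" resolves to, if several are installed:
python3=3.3
```

If no py.ini is present, the launcher falls back to its built-in defaults.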
I tend to add /usr/bin/python{2,3} (or /usr/bin/env python if I want to respect PATH) for Unix compatibility even though I never expect the scripts to be used on Unix. Also, in my mind, this approach is most directly addressing the fundamental > challenge (distinguishing a (executable) script from a module) in much the > way > Unix has previously enjoyed. > Agreed - and I think that's a more important point than the merely technical issue of whether it manipulates sys.path the way we want. > I'm warming up to this idea a bit, especially how it supports the most > elegant > approach but degrades gracefully. Some questions that arise: > > Where would Setuptools expect to find these launchers? Would it expect > them to > be present on the system? Would it symlink or hardlink them or simply > copy? Is > py.exe subject to the 'looks like installer' behavior, such that it would > need > a manifest? > > I still feel like this approach would require substantial special-casing, > but > since it provides a transition to the simple, elegant approach, I'm not > opposed. > You ask important questions (and I don't know the answers :-)). I suggested (in the issue where I implemented the wrapper functionality) that we add a stdlib module that "knows" where the wrapper is and can wrap a script for you (something like distlib's "distlib.script" module). But that felt a bit over-engineered and the idea was abandoned almost immediately. In practical terms, of course, there are many, many ways of writing wrappers, depending on what you want them to do, and how manageable you want to make the job of manipulating them. The current setuptools wrappers make manipulating them easy (it's just an exe plus a script, with related names) at the cost of needing multiple files. You could append the script to the exe, which is a one-file solution but harder to edit the script. 
You could append a zipfile to the exe, which has the advantage that you can prepend arbitrary content to a zipfile, so there are no "mixed content" issues to worry about, in theory. The exe would be coded slightly differently in each case, of course, so there's a maintenance cost... Also, of course, as was mentioned elsewhere in the thread, you need separate wrapper exes for every architecture/platform you intend to support. And unless the wrappers are added at install time, wrapped scripts change a platform-neutral script into a platform-specific exe. I would still like to see the standard be registered .pye (I'm happy with a bikeshed of this colour) and .pwe extensions which are added to PATHEXT and associated with the launcher. But someone would need to collect and document the issues which have been raised with this approach, confirm which ones are real and which are merely potential, offer solutions for the real ones and confirm that the potential ones don't actually happen (or they do!). I won't have the time to do this in the near future - but I could consider it in a month or two if someone reminds me. It should probably be a PEP, although I'm not 100% sure what that PEP would propose (what changes in Python and wording in a PEP would be needed for you to change setuptools to abandon wrappers, for instance? And why would you need a PEP?) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Mon Aug 12 22:18:24 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 12 Aug 2013 20:18:24 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: Jason R. Coombs jaraco.com> writes: > My preference is to reject the idea of the side-by-side executable > launcher. 
I agree that it's not ideal to have side-by-side executables, but how do you propose to address PJE's point about older Python versions? You can't force people to install the standalone launcher where they have only older Python versions installed. Are you proposing that an installer looks for py.exe at installation time, and does one thing (install as foo.pye) if it is found, and another thing (install as foo-script.py + foo.exe) if it isn't? This could cause breakage if a user subsequently uninstalled the launcher (perhaps unknowingly, by uninstalling the version of Python it came with) - are we OK with that? They might seem ugly and redundant (multiple copies of identical executables - yecch), but the years of service given by setuptools executable launchers suggests that they are not a problem for end users, whereas the user experience with the alternatives might be problematic in some scenarios. While it's good to look at alternatives, it seems like this area is a "solved problem" even if the solution isn't especially elegant, and I would guess there are probably other issues which should have a higher priority. Regards, Vinay Sajip From pje at telecommunity.com Mon Aug 12 22:18:57 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Aug 2013 16:18:57 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Mon, Aug 12, 2013 at 2:14 PM, Jason R.
Coombs wrote: > > >> -----Original Message----- >> From: Distutils-SIG [mailto:distutils-sig- >> bounces+jaraco=jaraco.com at python.org] On Behalf Of PJ Eby >> Sent: Monday, 12 August, 2013 11:22 >> >> On Mon, Aug 12, 2013 at 10:32 AM, Paul Moore >> wrote: >> > On 12 August 2013 14:01, PJ Eby wrote: >> > >> > As far as zipped Python applications are concerned (pyz), these can be >> > created by just using a pys file containing a #! line prepended to the >> > zip file. Certainly, it's a binary file with a filename that would >> > normally indicate a text file format, but is that any less true on >> > Unix when users create these files? I don't know what the user >> > experience with zipped Python applications on Unix is like - I doubt >> > it's *that* much better than on Windows. Probably the reality is that >> > nobody uses zipped applications anyway, so the problems haven't been >> > identified yet. Maybe the pyz PEP would bet better rewritten to >> > propose providing tools to create and manage zipped Python >> > applications, but *not* to require new extensions, merely to reuse >> > existing ones (pys on Windows, no extension on Unix) with binary (zipped) >> content. >> >> Seems reasonable... but then somebody will need to write another PEP for >> the file extension(s) issue. > > My preference is to reject the idea of the side-by-side executable launcher. > There are several downsides that I'm trying to avoid by moving away from the > executable: > > 1. Disparity with Unix. Better parity means cleaner code, easier > documentation, and less confusion moving from platform to platform. > 2. Executables that look like installers. If a launcher executable is used and > Windows detects that it "looks like" an installer and it's a 32-bit executable > and it doesn't have a manifest to disable the functionality, Windows will > execute the file in a separate UAC context (often in a separate Window). > 3. Additional files are needed. 
In particular, due to (2), a manifest must be > provided for 32-bit executables. > 4. Word size accounting. It's not clear to me what word size is needed. 32-bit > may be sufficient, though 64-bit seem to have some advantages: a manifest is > not needed, and it can match the word size of the installed Python executable > (for consistency). Setuptools currently provides both (and installs the one > that matches the Python executable). > 5. Platform support. We're fortunate that Windows is one of the most stable > binary platforms out there. Nevertheless, Setuptools recently got support for > AMD binaries in the launcher. By relying on an external launcher, the launcher > becomes responsible for platform support. > 6. Two to three files to do the job of one. In fact, the "job" isn't much more > than to invoke code elsewhere, so it seems ugly to require as many as three > files to do the job. Then multiply that by the Python-specific version and you > have up to six files for a single script. > 7. Obfuscation of purpose. A single script pretty directly communicates its > purpose. When there are multiple files, it's not obvious why they exist or > what their purpose is. Indeed, I went years without realizing we had an open > issue in Distribute due to a missing manifest (which was fixed in Setuptools), > all because I used the 64-bit executable. While it may take some time for the > community to learn what a '.pyl' is, it's easily documented and simple to > grasp, unlike the subtle and sometimes implicit nuances (and fragility) of a > side-by-side executable. > 8. Unwanted content. Some Unix users have complained about finding Windows > executables in their Linux packages, so now Setuptools has special handling to > omit the launchers when installed on Unix systems. This is far from beautiful. > >> I think the issue of "too many extensions" vs. 
"source/binary confusion" is >> going to boil down to a BDFL judgment call, whether it's by Nick, Guido, or >> some more Windows-specific BDFL For One PEP. >> >> If we go with One Extension To Rule Them All, I would actually suggest >> '.pyl' >> (for PyLauncher), since really all that extension does is say, "hey, run >> this as a >> console app via PyLauncher", not that it's a "script" (which would be >> assumed to be text). And that all you can be sure of is that a .pyl files >> will >> start with a #! line, and launch whatever other program is specified there, >> on >> the contents of the file >> -- which may actually be a zipfile. > > If it's '.py*', I don't see why it's not reasonable to allow omission of the > shebang, and assume the default python. After encountering and now > understanding the subtle import semantics, I'm hoping that this new extension > can also be used in my personal 'scripts' collection to serve the same purpose > it does for setuptools console entry points. I guess one could require > #!/usr/bin/python in each, but that seems superfluous on Windows. I don't feel > at all strongly on this point. > >> > PS Either the ref file marker approach, or a new Python command line >> > argument with appropriate behaviour, could avoid the need for even the >> > pys/pws extension, if people prefer to reduce the number of extensions >> > claimed still further. >> >> But those would only be available for future Python versions. A file >> extension would solve the problem upon installing PyLauncher and PATHEXT, >> at least for those OSes and shells that recognize PATHEXT. > > Also, in my mind, this approach is most directly addressing the fundamental > challenge (distinguishing a (executable) script from a module) in much the way > Unix has previously enjoyed. > >> Hm, here's a side thought: what if PyLauncher added the ability to serve as >> a >> script wrapper, just like setuptools' existing wrappers? 
>> Then setuptools could just copy py.exe or pyw.exe alongside a .pyl or .pyw, >> and presto! No PATHEXT compatibility needed, but users could still opt out >> of using the .exe wrappers if they're sure their shell works right without >> it. >> >> (The wrapper facility would be implemented by simply checking for an >> adjacent file of matching filename and extension (.pyl for py.exe, .pyw for >> pyw.exe), and if found, insert that filename as argv[1] before proceeding >> with the normal launch process. For efficiency, the file check could be >> skipped if the executable has its original name, at the minor cost of it not >> being possible to name a console script 'py' or a windows app 'pyw'. But >> that's an optional tweak.) > > I'm warming up to this idea a bit, especially how it supports the most elegant > approach but degrades gracefully. Some questions that arise: > > Where would Setuptools expect to find these launchers? Would it expect them to > be present on the system? It could, which would incidentally would address your issue #8 (people whining about Windows in their Linux). ;-) Basically, if the launcher is globally installed, you don't need to copy unless you're trying to be compatible with shells that don't support PATHEXT properly. If the launcher isn't installed, you'll need it bundled, or download it on the fly from a binary distribution dependency. > Would it symlink or hardlink them or simply copy? Hardlinks and symlinks are essentially useless on Windows, so copy. > Is > py.exe subject to the 'looks like installer' behavior, such that it would need > a manifest? Yep, but it can be embedded, as Steve points out. The only reason I never did this is because the Force is insufficiently strong in this one. ;-) > I still feel like this approach would require substantial special-casing, but > since it provides a transition to the simple, elegant approach, I'm not > opposed. Well, the transition would probably go something like: 1. 
Start embedding pylauncher (or having a dependency on an egg or wheel that contains the launcher binaries) to use in place of the existing script mechanism 2. Allow making executables without copying the launcher, provided that a global pylauncher is installed w/file association and proper PATHEXT 3. Switch off using launcher copies by default, leave it as a backward compatibility option Basically step 1 gets manifest files and launcher maintenance off of setuptools' plate, which IIUC is mainly what you want. From pje at telecommunity.com Mon Aug 12 22:26:31 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Aug 2013 16:26:31 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Mon, Aug 12, 2013 at 4:04 PM, Paul Moore wrote: > I would still like to see the standard be registered .pye (I'm happy with a > bikeshed of this colour) and .pwe extensions which are added to PATHEXT and As long as we're discussing bikeshed colors, I'd like to counterpropose .pya and .pwa, to be registered in the Windows class registry as Python Console Application and Python Windowed Application, respectively. Since the *only* reason we need these extensions is for Windows (other OSes do fine at making things executable without an extension), and Windows calls things like these "Applications" or "Apps" in Explorer normally, I think it's better to call them what Windows calls them. From donald at stufft.io Mon Aug 12 22:29:50 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 12 Aug 2013 16:29:50 -0400 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Aug 12, 2013, at 4:04 PM, Paul Moore wrote: > Also, of course, as was mentioned elsewhere in the thread, you need separate wrapper exes for every architecture/platform you intend to support. And unless the wrappers are added at install time, wrapped scripts change a platform-neutral script into a platform-specific exe. Hopefully this all will solve this problem, as it is right now if you use setuptools entry points then Wheels erroneously pretend to be platform agnostic. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Mon Aug 12 22:50:16 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 12 Aug 2013 16:50:16 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Mon, Aug 12, 2013 at 4:29 PM, Donald Stufft wrote: > Hopefully this all will solve this problem, as it is right now if you use > setuptools entry points then Wheels erroneously pretend to be platform > agnostic. IMO it's okay to give up having ready-to-use scripts in a platform-agnostic wheel; scripts are sadly not a platform-agnostic thing. (If only MS-DOS had been a little bit more Unix, and a little less CP/M...) 
From vinay_sajip at yahoo.co.uk Mon Aug 12 22:55:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 12 Aug 2013 20:55:43 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: Donald Stufft stufft.io> writes: > Hopefully this all will solve this problem, as it is right now if you use > setuptools entry points then Wheels erroneously pretend to be platform > agnostic. That's not unreasonable, as long as they don't contain executables. With the current version of distil, built wheels don't contain executables for scripts. The executable launchers are determined / written at wheel installation time. While distlib currently uses bespoke launchers, I plan to update it before the next release to use the PEP 397 launcher compiled with SCRIPT_WRAPPER. Regards, Vinay Sajip From greg.ewing at canterbury.ac.nz Tue Aug 13 02:01:00 2013 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 13 Aug 2013 12:01:00 +1200 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <5209773C.8080908@canterbury.ac.nz> PJ Eby wrote: > (Granted, I can see reasons for not wanting to use the same extension > for source and zipped versions, mostly in the area of tools other than > pylauncher, but if you do have different extensions then there have to > be *four*, due to console vs. windowed and source vs. zipped.) Just a thought -- is there any need in this day and age for extensions to be limited to 3 characters? Going beyond that would give us more room to define a consistent scheme for the various combinations. 
-- Greg From p.f.moore at gmail.com Tue Aug 13 11:17:19 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Aug 2013 10:17:19 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <5209773C.8080908@canterbury.ac.nz> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <5209773C.8080908@canterbury.ac.nz> Message-ID: On 13 August 2013 01:01, Greg Ewing wrote: > Just a thought -- is there any need in this day and age > for extensions to be limited to 3 characters? > There's a bug affecting PowerShell, which Microsoft have pretty much confirmed that they won't fix, which means that longer extensions aren't handled properly when used in PATHEXT. Longer extensions *can* be used, but not for the purposes we want them :-( Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Tue Aug 13 12:27:58 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 13 Aug 2013 10:27:58 +0000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <20130813102758.GE9241@merlinux.eu> On Mon, Aug 12, 2013 at 20:55 +0000, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > > > Hopefully this all will solve this problem, as it is right now if you use > > setuptools entry points then Wheels erroneously pretend to be platform > > agnostic. > > That's not unreasonable, as long as they don't contain executables. > > With the current version of distil, built wheels don't contain executables > for scripts. The executable launchers are determined / written at wheel > installation time. While distlib currently uses bespoke launchers, I plan to > update it before the next release to use the PEP 397 launcher compiled with > SCRIPT_WRAPPER. FWIW, i think that's the way to go. 
I also understood from a discussion with Daniel Holth that this matches his plans. cheers, holger -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 490 bytes Desc: Digital signature URL: From vinay_sajip at yahoo.co.uk Tue Aug 13 14:33:45 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 13 Aug 2013 12:33:45 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: Vinay Sajip yahoo.co.uk> writes: > While distlib currently uses bespoke launchers, I plan to update it before > the next release to use the PEP 397 launcher compiled with SCRIPT_WRAPPER. One more data point - the launcher currently used by distlib is found at [1]. Since it doesn't have all the bells and whistles that the PEP 397 launcher has (e.g. no searching for Pythons in the registry - the shebang is assumed to point at the Python to launch; no customised command handling; no configuration options; no support for passing parameters to Python itself), it is a lot simpler and has smaller executables as a result (around 64K, as opposed to 100-150K for the PEP 397 launcher). This might be an issue for some, when there would potentially be lots of copies of these executables. Regards, Vinay Sajip [1] https://bitbucket.org/vinay.sajip/simple_launcher/ From jaraco at jaraco.com Tue Aug 13 14:54:27 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Tue, 13 Aug 2013 12:54:27 +0000 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> > -----Original Message----- > From: Steve Dower [mailto:Steve.Dower at microsoft.com] > Sent: Monday, 12 August, 2013 15:03 > > Jason R. Coombs wrote: > > 6. Two to three files to do the job of one. In fact, the "job" isn't > > much more than to invoke code elsewhere, so it seems ugly to require > > as many as three files to do the job. Then multiply that by the > > Python-specific version and you have up to six files for a single script. > > While I can understand this from the POV of the implementer/maintainer, > I've never heard a single Windows user mention it. And with an embedded > manifest, it's no more than one .py and one .exe-per-Python-version. Thanks Steve (for this and other comments). I do want to remind that silence is not consent. The multiple files per script has always bugged me (as a user, not implementer), but up until now, I've always considered it a necessary evil. And now that it's not necessary, it's just evil. Over the past day, I've realized/recalled even more problems stemming from side-by-side executables (some of which also apply to other non-executable side-by-side solutions such as markers and manifests): 1. Renames, deletes, and other actions must be synchronized. There's an implicit connection between the files, but it's implicit. And while it's relatively easy to imagine how one can manage the synchronicity, in practice, it's harder. For example, to delete a script, one has to be careful to delete {script_name}-script.py and {script_name}.exe. If one wants to rename, two renames have to occur. 
What would otherwise be a simple, intuitive operation now has a semantic and technical burden. 2. Discoverability is diminished. Imagine for example that you want to delete all scripts that reference a particular package (as one is wont to do when that package is removed). If there's any side-by-side content, it's not sufficient to grep the files and delete the matching files, but one must instead write a routine or otherwise resolve the matches to their side-by-side equivalents and perform the same operation on them. 3. A directory listing is distracting and unnecessarily twice as long, as there's two files per script. This necessarily means more scrolling and actual human time spent parsing and organizing that structure. And since there's not exactly two files per script (only those with a launcher), the pairing isn't consistent. With one file per script, the listing matches the essence of the directory's content. 4. Updates to the launcher won't apply to existing scripts. If the launcher is updated, the side-by-side versions will remain out-of-date until their scripts are re-installed. If the launcher is associated with the file type, then the state of the launcher can be managed independently of the scripts. 5. Files in use can't be replaced. Because a Windows executable that's in use is not allowed to be overwritten, it's not possible to use a script to update itself. For example, running 'easy_install -U setuptools' will result in an error because the easy_install.exe is in use. 6. There are potentially privilege issues and security aspects to the separate files that haven't yet been uncovered. On a multi-user system, there are considerations about ownership and permissions on the files. When there are multiple files per script, it's less obvious how permissions should be assigned. It's not obvious to me this poses a significant issue, but it sure seems easier to manage these issues on a single file rather than multiple. 
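The synchronisation burden in point 1 can be made concrete with a small sketch. This is illustrative only (`remove_wrapped_script` is a hypothetical helper, not setuptools API); the candidate names reflect the two-to-three files per script discussed earlier in the thread:

```python
import os

# With the side-by-side layout, "removing foo" really means removing
# "foo-script.py" (or "foo-script.pyw"), "foo.exe", and possibly a
# "foo.exe.manifest" together -- one logical script, several files.
def remove_wrapped_script(scripts_dir, name):
    """Delete a script and its companion launcher files, if present."""
    removed = []
    for candidate in (name + "-script.py", name + "-script.pyw",
                      name + ".exe", name + ".exe.manifest"):
        path = os.path.join(scripts_dir, candidate)
        if os.path.exists(path):
            os.remove(path)
            removed.append(candidate)
    return removed
```

A single-file scheme reduces the same operation to one os.remove call, which is the elegance being argued for here.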
All of these issues except for (6) have impacted my ability to work effectively with the side-by-side launchers currently on the system. It's one of the things I love about working on the Unix systems. Now that I feel like we're so close to a similarly-elegant solution on Windows, I want to see it employed (even if it's only opt-in, although I would prefer we work toward a solution that ultimately defaults to a single file mode for scripts). -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From pje at telecommunity.com Tue Aug 13 17:58:19 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 13 Aug 2013 11:58:19 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Tue, Aug 13, 2013 at 8:54 AM, Jason R. Coombs wrote: > 1. Renames, deletes, and other actions must be synchronized. Why are you manually deleting or altering executables? Why are you renaming them at all? I've been using .exe wrappers since they were written, and have never had a single one of the issues you mention, because I never do any of the things you mention by hand. IMO that's what tools are for. Doesn't pip uninstall scripts? I may be slightly biased in my preference for .exe, because files with other extensions don't work with Cygwin (which doesn't support PATHEXT), but I work primarily with Windows Python rather than Cygwin Python. 
So, if there *has* to be a single file, I would greatly prefer an .exe with the script embedded, rather than a non-.exe file. It's a bit less discoverable, but at least it'll discourage anybody from editing the contents. (Because nobody should be editing generated scripts anyway.) (Also relevant: not every situation where wrapper scripts are used is going to be one where a PyLauncher install is possible. For example, portable deployment of an app to USB stick with a bundled Python can't assume PATHEXT and a globally-installed PyLauncher.) > 4. Updates to the launcher won't apply to existing scripts. If the launcher is > updated, the side-by-side versions will remain out-of-date until their scripts > are re-installed. This is kind of a bogus point; *any* update to how scripts are generated isn't automatically applied to existing scripts; the format in which they're written is of no relevance. > 5. Files in use can't be replaced. Because a Windows executable that's in use > is not allowed to be overwritten, But they can be renamed, and deleted afterwards. For example, when updating, you can do the simple dance of: 1. delete scriptname.exe.deleteme if it exists 2. rename scriptname.exe to scriptname.exe.deleteme 3. replace scriptname.exe 4. try to delete the .deleteme file created in step 2, ignoring errors. And since this only needs to be done for the wrappers on installation tools themselves (pip, easy_install, etc.), it's not like a lot of people are going to have to write this code. It can also be further enhanced, by having the .exe wrapper check (as it exits) whether it was renamed, and if so, spin off a 'python -c "import os, time; time.sleep(0.1); os.unlink('path to .deleteme')"' and immediately exit. 
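[Editor's note: the four-step dance above can be sketched in Python. This is a sketch only -- it exercises the rename-then-replace logic with ordinary file operations; the point on Windows is that the rename in step 2 succeeds even while scriptname.exe is running, which overwriting does not:]

```python
import os

def replace_exe(exe_path, new_contents):
    """Replace a possibly-in-use executable via the rename/delete dance."""
    deleteme = exe_path + ".deleteme"
    # 1. delete scriptname.exe.deleteme if it exists
    #    (a leftover from a previous update)
    if os.path.exists(deleteme):
        try:
            os.unlink(deleteme)
        except OSError:
            pass  # still locked; a later update will retry
    # 2. rename scriptname.exe out of the way -- Windows allows
    #    renaming an in-use executable, just not overwriting it
    if os.path.exists(exe_path):
        os.rename(exe_path, deleteme)
    # 3. replace scriptname.exe
    with open(exe_path, "wb") as f:
        f.write(new_contents)
    # 4. try to delete the .deleteme file, ignoring errors
    #    (step 1 of the next update cleans it up if this fails)
    try:
        os.unlink(deleteme)
    except OSError:
        pass
```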
(Or use one of the other tricks from http://www.catch22.net/tuts/self-deleting-executables -- but I think this one is the simplest and best for our purposes, since the wrapper already knows at this point it can invoke Python using the path it previously found, and it's not doing anything questionable with process invocations that might raise red flags with security tools.) From p.f.moore at gmail.com Tue Aug 13 18:33:36 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Aug 2013 17:33:36 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 16:58, PJ Eby wrote: > > 5. Files in use can't be replaced. Because a Windows executable that's > in use > > is not allowed to be overwritten, > > But they can be renamed, and deleted afterwards. For example, when > updating, you can do the simple dance of: > > 1. delete scriptname.exe.deleteme if it exists > 2. rename scriptname.exe to scriptname.exe.deleteme > 3. replace scriptname.exe > 4. try to delete the .deleteme file created in step 2, ignoring errors. > > And since this only needs to be done for the wrappers on installation > tools themselves (pip, easy_install, etc.), it's not like a lot of > people are going to have to write this code. This works, but is an ugly, fragile workaround. It's *not* a huge problem, it's just how executables work on Windows, and all installers have to deal with this dance (it's why a lot of things need a reboot to complete installation - the "delete on next reboot" API). But it's not *nice*. I would never use exe wrappers for my own personal scripts - I *always* write them as .py files and rely on PATHEXT. 
I only use exe wrappers for commands installed as part of a Python package (pip.exe, nosetests.exe, etc). That says something about how friendly they are as a general tool. On the other hand, it also acts as a reminder that when used in a suitably managed situation (stuff installed by pip/easy_install) the ugliness of exe wrappers is hidden well enough to be a non-issue. So while Jason may be persuaded to retain exe wrappers for setuptools, I doubt he'd want to use them for his personal scripts (if I read his posts correctly). I know I won't. On another point you mention, Cygwin Python should be using Unix-style shell script wrappers, not Windows-style exes, surely? The whole point of Cygwin is that it emulates Unix, after all... So I don't see that as an argument either way. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Tue Aug 13 19:08:21 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 13 Aug 2013 18:08:21 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 17:33, Paul Moore wrote: > > On another point you mention, Cygwin Python should be using Unix-style shell > script wrappers, not Windows-style exes, surely? The whole point of Cygwin > is that it emulates Unix, after all... So I don't see that as an argument > either way. So say I have a ~/bin directory where I put my scripts that I want to be generally available. 
I install something with python setup.py install --install-scripts=~/bin so that the scripts/script-wrappers go in there because I want to be able to always access that program under that name. Don't be fooled by the unixy tilde: I'm running ordinary Windows Python in that command in git-bash, not Cygwin. Now if that folder is on PATH while I am in Cygwin I can run the program with the same name if an .exe wrapper was added. I can't run it with the same name if it's a .py/.bat file because Cygwin doesn't have the implicit strip-the-extension PATHEXT feature and can't run .bat files. Oscar From donald at stufft.io Tue Aug 13 19:56:28 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 13 Aug 2013 13:56:28 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: On Aug 10, 2013, at 9:07 PM, Donald Stufft wrote: > [snip] Bueller? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: not available URL: From pje at telecommunity.com Tue Aug 13 20:55:07 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 13 Aug 2013 14:55:07 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Tue, Aug 13, 2013 at 12:33 PM, Paul Moore wrote: > This works, but is an ugly, fragile workaround.
It's *not* a huge problem, > it's just how executables work on Windows, and all installers have to deal > with this dance (it's why a lot of things need a reboot to complete > installation - the "delete on next reboot" API). But it's not *nice*. > > I would never use exe wrappers for my own personal scripts - I *always* > write them as .py files and rely on PATHEXT. I only use exe wrappers for > commands installed as part of a Python package (pip.exe, nosetests.exe, > etc). That says something about how friendly they are as a general tool. In an ironic reversal, I use them for any command I plan to use frequently. In other words, if I use it often enough to care about how easy it is to use, I take the trouble to wrap it in a project and then use setup.py develop to create the script wrappers. From then on, I can edit the *source* scripts, and the wrappers run the right thing. (I don't edit the -script.py's directly, since they're not where the real code is.) > On another point you mention, Cygwin Python should be using Unix-style shell > script wrappers, not Windows-style exes, surely? The whole point of Cygwin > is that it emulates Unix, after all... So I don't see that as an argument > either way. I said I'm using *Windows* Python from the Cygwin shell. I often test my projects with Cygwin Python, to ensure coverage of Unixisms, but I only write dedicated Cygwin Python scripts if I need to use Cygwin paths or APIs, which is relatively infrequent. In any case, the use of .exe means that my invocation patterns are unchanged between commands I've implemented in Cygwin Python vs. Windows Python. If the Windows Python versions used a different extension, then I'd have to remember which language a specific command was written in in order to invoke it. From p.f.moore at gmail.com Tue Aug 13 21:58:57 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Aug 2013 20:58:57 +0100 Subject: [Distutils] How to handle launcher script importability?
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 18:08, Oscar Benjamin wrote: > On 13 August 2013 17:33, Paul Moore wrote: > > > > On another point you mention, Cygwin Python should be using Unix-style > shell > > script wrappers, not Windows-style exes, surely? The whole point of > Cygwin > > is that it emulates Unix, after all... So I don't see that as an argument > > either way. > > So say I have a ~/bin directory where I put my scripts that I want to > be generally available. I install something with > python setup.py install --install-scripts=~/bin > so that the scripts/script-wrappers go in there because I want to be > able to always access that program under that name. Don't be fooled by > the unixy tilde: I'm running ordinary Windows Python in that command > in git-bash, not Cygwin. Now if that folder is on PATH while I am in > Cygwin I can run the program with the same name if an .exe wrapper was > added. I can't run it with the same name if it's a .py/.bat file > because Cygwin doesn't have the implicit strip-the-extension PATHEXT > feature and can't run .bat files. Ah, OK, thanks for the clarification. In that case I can see why you'd prefer exe wrappers (or maybe cygwin bash shell wrappers, or shell aliases...). Maybe an option to still use exe wrappers is worth it - but honestly, I'd say that in that context you probably have enough expertise to understand the issue and make your own solution relatively easily. What about having in your .bashrc: for prog in ~/bin/*.py; do alias $(basename $prog .py)=$prog; done (Excuse me if I got the precise details wrong there). OK, you need to rerun .bashrc if you add new scripts. It's not perfect.
But it's not a showstopper either. I do think, as I said before, that this needs some sort of policy-type PEP on the standard approach for wrapping scripts, with all the pros and cons of the various approaches documented and reviewed. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Aug 13 22:20:45 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 13 Aug 2013 13:20:45 -0700 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: Just $0.02 from a user... I'm primarily an OS-X user these days, but have to do Windows once in a while, and help others do Windows (including as an intro to Python instructor) Once I discovered setuptools "develop" mode, I never looked back -- it is simply THE way to develop code, particularly if you are working on a lib you want to use from other projects and/or actively developing utility scripts. I really like that I can add some scripts to setup.py, and use either develop mode or regular old install and get nice commands, and it works the same way on Windows and *nix (with the hood closed) I did all this for a good while before I even noticed the exe launchers -- it "just worked". In fact, the only time I noticed the launchers was a couple years ago when a beta version of setuptools released a broken version -- very frustrating -- it would fire up another command window that would then close when the script was done -- not very helpful for nosetests and the like...
When I did discover how it all worked, I did think it was a little weird, but Windows simply hasn't been built for command line stuff, so you do what you have to do. Conclusions: 1) an extra bunch of files is a non-issue for most users -- we just need something that works. 2) the exe launcher is a bit fragile and hard to maintain (and even harder to debug) -- but there are smart people working on this. 3) I'd rather not have to mess with PATHEXT, and I particularly don't want to have to tell my students to do it -- environment variables are a pain, and somehow PATHEXT has been fragile for me (and I don't use Cygwin) I can't help thinking a more elegant solution exists, but maybe not. Thanks to everyone hashing this out! -Chris On Tue, Aug 13, 2013 at 12:58 PM, Paul Moore wrote: > On 13 August 2013 18:08, Oscar Benjamin wrote: >> >> On 13 August 2013 17:33, Paul Moore wrote: >> > >> > On another point you mention, Cygwin Python should be using Unix-style >> > shell >> > script wrappers, not Windows-style exes, surely? The whole point of >> > Cygwin >> > is that it emulates Unix, after all... So I don't see that as an >> > argument >> > either way. >> >> So say I have a ~/bin directory where I put my scripts that I want to >> be generally available. I install something with >> python setup.py install --install-scripts=~/bin >> so that the scripts/script-wrappers go in there because I want to be >> able to always access that program under that name. Don't be fooled by >> the unixy tilde: I'm running ordinary Windows Python in that command >> in git-bash, not Cygwin. Now if that folder is on PATH while I am in >> Cygwin I can run the program with the same name if an .exe wrapper was >> added. I can't run it with the same name if it's a .py/.bat file >> because Cygwin doesn't have the implicit strip-the-extension PATHEXT >> feature and can't run .bat files. > > Ah, OK, thanks for the clarification.
> > In that case I can see why you'd prefer exe wrappers (or maybe cygwin bash > shell wrappers, or shell aliases...). Maybe an option to still use exe > wrappers is worth it - but honestly, I'd say that in that context you > probably have enough expertise to understand the issue and make your own > solution relatively easily. > > What about having in your .bashrc: > > for prog in ls ~/bin/*.py; do > alias $(basename $prog .py)=$prog > done > > (Excuse me if I got the precise details wrong there). OK, you need to rerun > .bashrc if you add new scripts. It's not perfect. But it's not a showstopper > either. > > I do think, as I said before, that this needs some sort of policy-type PEP > on the standard approach for wrapping scripts, with all the pros and cons of > the various approaches documented and reviewed. > > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From p.f.moore at gmail.com Tue Aug 13 23:27:45 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 13 Aug 2013 22:27:45 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 21:20, Chris Barker - NOAA Federal wrote: > Conclusions: > > 1) an extra bunch of files is a on-issue for most users -- we just > need something that works. > Agreed - the extra files "clutter" is a relatively small issue. 
2) the exe launcher is a bit fragile and hard to maintain (and even > harder to debug) -- but there are smart people working on this. > Nobody is really working on the launcher itself AIUI. The code is pretty much static, except when it breaks (for example, the whole UAC/manifest issue). 3) I'd rather not have to mess with PATHEXT, and I particularly don't > want to have to tell my students to do it -- environment variables are > a pain, and somehow PATHEXT has been fragile for me (and I don't use > Cygwin) > Nobody is suggesting that end users mess with PATHEXT. The proposal is that the Python installer does this (indeed, that's been done for Python 3.4, I don't know if the installer for the standalone launcher has been updated in the same way yet). I'm suggesting that we collect specifics on any "fragility" (can you provide details of what has gone wrong for you?) so that we can document and address any genuine issues. But without specifics, we're currently faced with nothing more than a two-pronged "I think it might not work"/"why change it if it works at the moment" argument that has nothing we can actually address... (Not criticising anyone here, it is often hard to be specific). I can't help thinking a more elegant solution exists, but maybe not. > Personally, I believe that executable .py scripts (or maybe a dedicated .pys/.pye/.pya/whatever extension) *is* a more elegant solution - but equally, I concede, "maybe not"... I think it's worth trying, though. Also, you mention develop mode. I don't use develop mode, most of my standalone scripts are single-file scripts, not anything that I'd bother packaging up with a setup.py, etc. Or I run them via python -m, or I install a supporting package and have a (standalone, as previously) driver script. And many of the issues I've had with the exe wrappers are particular to built installers (wheels, wininsts) and *not* develop mode.
Those also need to be classified precisely and reviewed - I don't claim they are any more important than the issues you mention with pure scripts - but I'd prefer that we understand the trade-offs and make an informed decision. It may even be that different solutions are better depending on how you work (develop mode vs installing). In the interests of getting more concrete data, can I suggest that setuptools add an off-by-default option, which can be set globally using a config file or environment variable or something (so that it can be used transparently by tools like pip) to use scripts rather than exe wrappers? People can then try it and see whether they hit issues with it. Otherwise, I think we're going to remain forever stuck with theorising and guesswork. Paul. PS I still think that long-term a policy PEP on the recommended way of making "executable scripts" is worthwhile. As I've said, if no-one else wants to pick it up, ask me again in October or so... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaraco at jaraco.com Wed Aug 14 00:53:13 2013 From: jaraco at jaraco.com (Jason R. Coombs) Date: Tue, 13 Aug 2013 22:53:13 +0000 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <0bde35afa8934ba781641fa7bce788e1@BLUPR06MB003.namprd06.prod.outlook.com> From: Distutils-SIG [mailto:distutils-sig-bounces+jaraco=jaraco.com at python.org] On Behalf Of Paul Moore Sent: Tuesday, 13 August, 2013 17:28 In the interests of getting more concrete data, can I suggest that setuptools add an off-by-default option, which can be set globally using a config file or environment variable or something (so that it can be used transparently by tools like pip) to use scripts rather than exe wrappers? People can then try it and see whether they hit issues with it. Otherwise, I think we're going to remain forever stuck with theorising and guesswork. That's exactly what setuptools 1.0 does, but I want to get the first draft right, so I'm seeking comment on the technique (using env var to enable), the proper extension to use (.py has problems), and any other suggestions. I'm glad you mentioned the issues on powershell with >4 characters. I'm going to explore that issue to confirm that something like .pygs isn't viable (I use powershell and PATHEXT quite a bit). -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6572 bytes Desc: not available URL: From chris.barker at noaa.gov Wed Aug 14 00:30:45 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 13 Aug 2013 15:30:45 -0700 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On Tue, Aug 13, 2013 at 2:27 PM, Paul Moore wrote: >> 3) I'd rather not have to mess with PATHEXT, and I particularly don't >> want to have to tell my students to do it -- environment variables are >> a pain, and somehow PATHEXT has been fragile for me (and I don't use >> Cygwin) > > Nobody is suggesting that end users mess with PATHEXT. The proposal is that > the Python installer does this ouch! I don't think I'd want PATHEXT set for *.py files -- I'd rather they get opened by an editor by default than run...or is point+click behavior different than command line -- shows you how well I know Windows. > (indeed, that's been done for Python 3.4, I > don't know if the installer for the standalone launcher has been updated in > the same way yet). There are those of us still in the 2.7 world -- and I suspect for a good while. > I'm suggesting that we collect specifics on any "fragility" (can you provide > details of what has gone wrong for you?) well, for PATHEXT, the env variable has to be set right, and Windows kind of hides all that from you -- it's really a pain to edit them by hand, so if the installer doesn't do it, or someone re-builds their registry or profile, or what have you, then it'll break. Oh and the cygwin (and who knows what other shell alternatives) issue. At least we can pretty much count on an exe running if the shell can find it... Fragility for the exe approach -- all I know is that the setuptools binary was broken a while back -- but setuptools itself was in a bit of a void of not-quite-sure-if-its maintained, and not-sure-even-how-to-report-a-bug state.
It seems that setuptools (or whatever this will be part of) has now been adopted by the core Python community, so we can expect good support -- let's hope so. Anyway, whether the exe approach was more or less fragile than anything else, it sure was harder to debug/fix for anyone that doesn't know Windows and non-python development well. >> I can't help thinking a more elegant solution exists, but maybe not. > > Personally, I believe that executable .py scripts (or maybe a dedicated > .pys/.pye/.pya/whatever extension) *is* a more elegant solution - but > equally, I concede, "maybe not"... well, I guess that's the way Windows does things -- *nix has the executable bit, Windows uses extensions, so I suppose that is "the way" to get an executable script -- but it really SHOULDN'T be *.py! > I think it's worth trying, though. agreed. I've lost track of what needs to be tried -- can't we just associate an extension with py and give it a go? (got to get that Windows VM working....) > Also, you mention develop mode. I don't use develop mode, most of my > standalone scripts are single-file scripts, not anything that I'd bother > packaging up with a setup.py, etc. Good point -- it seems most of my scripts are part of a larger package, either a set of scripts, or a couple scripts that all rely on a particular package, so the whole setup.py thing works well. but for a single stand-alone script, the PATHEXT and py launcher approach seems really natural. > And many of the issues I've had with the exe wrappers are particular to > built installers (wheels, wininsts) and *not* develop mode. in theory, wheels and develop mode should do the same thing -- not sure about wininsts -- previously, the launcher thing was setuptools, not plain distutils, not sure how that was handled. but it would be best if they all did it the same way. > PS I still think that long-term a policy PEP on the recommended way of > making "executable scripts" is worthwhile.
Probably, yes: "There should be one-- and preferably only one --obvious way to do it." I'm leaning toward the PATHEXT approach -- it seems more similar to the *nix way, and perhaps easier for lay folks to debug and fix. But I'm saying that the exe method worked fine for many of us, too. -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From maphew at gmail.com Wed Aug 14 09:33:30 2013 From: maphew at gmail.com (Matt Wilkie) Date: Wed, 14 Aug 2013 00:33:30 -0700 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: > I know I'm bike-shedding here, but my preference for extension would be > '.pye' as it indicates something to execute, but without indicating > exactly how (i.e. that it's via a separate launcher executable). .pyx that said, both .pye and .pyx are superior to .pyl (is that last character an "ell", an "eye" or a "one"?) -matt From vinay_sajip at yahoo.co.uk Wed Aug 14 11:43:12 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 14 Aug 2013 10:43:12 +0100 (BST) Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: <1376473392.17287.YahooMailNeo@web171404.mail.ir2.yahoo.com> > > .pyx > > that said, both .pye and .pyx are superior to .pyl (is that last > character an "ell", an "eye" or a "one"?) I agree that .pyl is less readable/more confusable. We can't use .pyx, though, as that is already used in the Python ecosystem for Pyrex files (a forerunner of Cython).
Regards, Vinay Sajip From p.f.moore at gmail.com Wed Aug 14 12:27:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 14 Aug 2013 11:27:59 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 23:30, Chris Barker - NOAA Federal wrote: > On Tue, Aug 13, 2013 at 2:27 PM, Paul Moore wrote: > > >> 3) I'd rather not have to mess with PATHEXT, and I particularly don't > >> want to have to tell my students to do it -- environment variables are > >> a pain, and somehow PATHEXT has been fragile for me (and I don't use > >> Cygwin) > > > > Nobody is suggesting that end users mess with PATHEXT. The proposal is > that > > the Python installer does this > > ouch! I don't think I'd want PATHEXT set for *.py files -- I'd rather > they get opened by an editor by default than run...or is point+click > behavior different than command line -- shows you how well I know > Windows. > It's OK, PATHEXT has nothing to do with double clicks - that's the file associations (and "run the script" is the default action set by the installer, and has been for many, many versions). > > > (indeed, that's been done for Python 3.4, I > > don't know if the installer for the standalone launcher has been updated > in > > the same way yet). > > There are those of us still in the 2.7 world -- and I suspect for a good > while. > I understand this. It's the biggest issue here, that any changes have to not forget users of older Pythons. Personally, I tend to err towards a view that we improve things for current Pythons (3.3+) and make sure we don't ruin things for older versions, possibly by providing workarounds the user needs to apply. 
I view "install the standalone launcher" to be in this category of workaround. There are others who are more conservative, so don't worry your views are well represented! > I'm suggesting that we collect specifics on any "fragility" (can you > provide > > details of what has gone wrong for you?) > > well, for PATHEXT, the env variable has to be set right, and Windows > kind of hides all that from you -- it's really a pain to edit them by > hand, so if the installer doesn't do it, or someone re-builds their > registry or profile, or what have you, then it'll break. Oh and the > cygwin (and who know what other shell alternatives) issue. At least we > can pretty much count on an exe running if the shell can find it.. > Thanks. I'd class most of that as relatively non-specific or fixed. Users shouldn't have to set PATHEXT if the installer does it, and in any case I'd like to clearly understand what sort of users are writing command line Python scripts that they want to run transparently as if they were executables, who still wouldn't know how to set an environment variable (or be able to understand whatever documentation we produce on how to do it). The cygwin issue has been mentioned/addressed elsewhere, but thanks for that. It's certainly something that needs to be weighed in the balance. Fragiity for the exe approach -- all I know is that the setuptools > binary was proken a while back -- but setuptools itself was in a bit > of a void of not-quite-sure-if-its maintained, and > not-sure-even-how-to-report-a-bug state. It seems that setuptools (or > whatever this will be part of) has now been adopted by the core Python > community, so we can expect good support -- let's hope so. Anyway, > whether the exe approach was more or less fragile than anyting else, > is sure was harder to debug/fix for anyone that doesn't know Windows > and non-python development well. > My personal experience with the exes: 1. 
If the shebang line in the script gets corrupted, the resulting error is misleading and extremely hard to debug. 2. The existence of the exe in an installer makes it platform-specific, which is a problem for otherwise portable scripts/packages. 3. To use the exes, you really need to create them via setuptools, so they are not practical for standalone scripts - and having 2 distinct ways of setting scripts up to be runnable is less than ideal. 4. If you are upgrading/reinstalling the script, you cannot do so without workarounds if the exe is currently running (mostly a nice case for pip upgrading itself). Also, not directly related to the exes, but definitely to setuptools wrappers (see point 3 above): 1. The wrappers need pkg_resources installed *at runtime* not just at build time. >> I cant help thinking a more elegant solution exists, but maybe not. > > > > Personally, I believe that executable .py scripts (or maybe a dedicated > > .pys/.pye/.pya/whatever extension) *is* a more elegant solution - but > > equally, I concede, "maybe not"... > > well, I guess that's the way Windows does things -- *nix has the > executable bit, Windows uses extensions, so I suppose that is "the > way" to get an executable script -- but it really SHOULDN'T be *.py! > Agreed, we need two concepts "Python module" (.py) and "Python script" (the new extension). > I've lost track of what needs to be tried -- can't we just associate > an extension with py and give it a go? (got to get that Windos VM > working....) > Yes, essentially. I can confidently report that the solution is flawless, and should be adopted at once :-) More seriously, we need *more* people to try the approach and report *real* issues. At the moment, I feel that we have a few people saying it works (mostly me!) and some people saying that they suspect there may be issues but without providing much that's reproducible. 
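[Editor's note: for anyone trying the approach, the mechanism being relied on is simple: given a bare command name, cmd.exe tries each directory on PATH in order and, within each directory, each extension listed in PATHEXT in order. A rough cross-platform simulation of that lookup (illustrative only -- the .pys extension is hypothetical, and real cmd.exe has extra rules, e.g. for names that already include an extension):]

```python
import os

def resolve_command(name, path_dirs, pathext):
    """Simulate cmd.exe's bare-name lookup: each PATH directory is
    searched in order, trying each PATHEXT extension in order."""
    for directory in path_dirs:
        for ext in pathext:
            candidate = os.path.join(directory, name + ext)
            if os.path.isfile(candidate):
                return candidate
    return None
```

This is why adding a new entry such as .PYS to PATHEXT (together with a file association pointing at the launcher) would let a bare `foo` find and run `foo.pys`.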
(I worry that I'm exhibiting extreme confirmation bias here, though, which is why I want someone to collate a proper list). > > Also, you mention develop mode. I don't use develop mode, most of my > > standalone scripts are single-file scripts, not anything that I'd bother > > packaging up with a setup.py, etc. > > Good point -- it seems most of my scripts are part of a larger package, > either a set of scripts, or a couple scripts that all rely on a > particular package, so the whole setup.py thing works well. > > but for a single stand-alone script, the PATHEXT and py launcher > approach seems really natural. > > > And many of the issues I've had with the exe wrappers are particular to > > built installers (wheels, wininsts) and *not* develop mode. > > in theory, wheels and develop mode should do the same thing -- not > sure about wininsts -- previously, the launcher thing was setuptools, > not plain distutils, not sure how that was handled. > Yes, I think the consensus is that wheels including the exe wrappers is essentially a bug - they should contain details on how to *create* the scripts/wrappers at install time. But it's a messy bug to address, because of history and lack of standardisation in this area. That's why I think a PEP saying "how to do scripts" the official way would be useful. It was debated many years ago (when distutils was first added to the stdlib!) but things have got much better since then and it should be easier to come up with something acceptable. but it would be best if they all did it the same way. > Precisely. Then we'd only have "legacy" non-standard approaches to address, and we are better able to do the best we can and acknowledge it's not perfect, if we can say "follow the PEP and it works". > PS I still think that long-term a policy PEP on the recommended way of > > making "executable scripts" is worthwhile. > > Probably, yes: "There should be one-- and preferably only one --obvious way to do it."
> > I'm leaning toward the PATHEXT approach -- it seems more similar to > the *nix way, and perhaps easier for lay folks to debug and fix. But > I'm saying that the exe method worked fine for many of us, too. The exe method has many, many advantages, and I think it's important that anything new is measured against it. But I do think that a carefully judged approach of considering whether to abandon certain specific benefits because that allows us to fix some of the disadvantages is a good exercise. Thanks for the feedback! Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 14 13:34:29 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 14 Aug 2013 11:34:29 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: Message-ID: Jason R. Coombs jaraco.com> writes: > This means that instead of installing, for example: > > Scripts\my-command.exe > Scripts\my-command-script.py > Scripts\my-command.exe.manifest > Just to muddy the waters a little, I'd like to suggest an alternative approach which doesn't appear to have been tried: 1.
No additional file clutter: Just one installed file per script. 2. No need to worry about PATHEXT, or whether the PEP 397 launcher is installed. 3. Since the script is foo-script.py, no import clashes will occur. Disadvantages of this approach: 1. The scripts are hard to inspect because they're in the .exe. OTOH, (a) we don't really want people to mess with the scripts, and (b) they should be stock wrappers for other code which does the real work. (There are developer tools which allow inspection of resources, if really needed.) 2. Requires ctypes in the installer, so perhaps problematic for versions of Python < 2.5. 3. Scripts take up more space than a .py-with-PEP 397-launcher solution, but that approach can still give rise to the import issue, as well as requiring PATHEXT setup. Thoughts? Regards, Vinay Sajip From oscar.j.benjamin at gmail.com Wed Aug 14 13:42:23 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 14 Aug 2013 12:42:23 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 13 August 2013 20:58, Paul Moore wrote: > > On 13 August 2013 18:08, Oscar Benjamin wrote: >> >> On 13 August 2013 17:33, Paul Moore wrote: >> > >> > On another point you mention, Cygwin Python should be using Unix-style shell >> > script wrappers, not Windows-style exes, surely? The whole point of Cygwin >> > is that it emulates Unix, after all... So I don't see that as an argument >> > either way. >> >> So say I have a ~/bin directory where I put my scripts that I want to >> be generally available. 
I install something with >> python setup.py install --install-scripts=~/bin >> so that the scripts/script-wrappers go in there because I want to be >> able to always access that program under that name. Don't be fooled by >> the unixy tilde: I'm running ordinary Windows Python in that command >> in git-bash, not Cygwin. Now if that folder is on PATH while I am in >> Cygwin I can run the program with the same name if an .exe wrapper was >> added. I can't run it with the same name if it's a .py/.bat file >> because Cygwin doesn't have the implicit strip-the-extension PATHEXT >> feature and can't run .bat files. > Ah, OK, thanks for the clarification. > > In that case I can see why you'd prefer exe wrappers (or maybe cygwin bash shell wrappers, or shell aliases...). Maybe an option to still use exe wrappers is worth it - but honestly, I'd say that in that context you probably have enough expertise to understand the issue and make your own solution relatively easily. Yes, but I'd like it if pip install some_cmd would "just work". > What about having in your .bashrc: > > for prog in ~/bin/*.py; do > alias $(basename $prog .py)=$prog > done > > (Excuse me if I got the precise details wrong there). OK, you need to rerun .bashrc if you add new scripts. It's not perfect. But it's not a showstopper either. There are ways to make it work for every different environment where I would type the command. Really though it's a pain to have to set these things up everywhere. Also this still doesn't work with subprocess(..., shell=False). There are a huge range of programs that can invoke subprocesses of a given name and I want them all to work with commands that I install from pypi. There are good reasons to use shell=False: the subprocess documentation contains no less than 5 warning boxes about shell=True! This is not peculiar to Python's subprocess module: it comes down to the underlying Windows API calls, regardless of which language the parent process is implemented in.
Here's a demo of what happens with Robert Kern's kernprof.py script that doesn't have an .exe wrapper (on my system; it's possible that I didn't install it with pip). $ python Python 2.7.5 (default, May 15 2013, 22:43:36) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> subprocess.call(['kernprof.py'], shell=True) # Uses file-association Usage: kernprof.py [-s setupfile] [-o output_file_path] scriptfile [arg] ... 2 >>> import os >>> os.environ['PATHEXT'] '.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.PY;.PYC;.PSC1;.RB;.RBW' >>> subprocess.call(['kernprof'], shell=True) # Uses PATHEXT Usage: kernprof.py [-s setupfile] [-o output_file_path] scriptfile [arg] ... 2 >>> subprocess.call(['kernprof'], shell=False) # Needs an .exe wrapper! Traceback (most recent call last): File "", line 1, in File "q:\tools\Python27\lib\subprocess.py", line 524, in call return Popen(*popenargs, **kwargs).wait() File "q:\tools\Python27\lib\subprocess.py", line 711, in __init__ errread, errwrite) File "q:\tools\Python27\lib\subprocess.py", line 948, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> subprocess.call(['kernprof.py'], shell=False) # Needs an .exe wrapper! 
Traceback (most recent call last): File "", line 1, in File "q:\tools\Python27\lib\subprocess.py", line 524, in call return Popen(*popenargs, **kwargs).wait() File "q:\tools\Python27\lib\subprocess.py", line 711, in __init__ errread, errwrite) File "q:\tools\Python27\lib\subprocess.py", line 948, in _execute_child startupinfo) WindowsError: [Error 193] %1 is not a valid Win32 application Here's what happens if I put kernprof.bat next to kernprof.py (the .bat file just @echos "running kernprof"): >>> import subprocess >>> subprocess.call(['kernprof'], shell=True) # PATHEXT in action running kernprof 0 >>> subprocess.call(['kernprof'], shell=False) # No PATHEXT Traceback (most recent call last): File "", line 1, in File "q:\tools\Python27\lib\subprocess.py", line 524, in call return Popen(*popenargs, **kwargs).wait() File "q:\tools\Python27\lib\subprocess.py", line 711, in __init__ errread, errwrite) File "q:\tools\Python27\lib\subprocess.py", line 948, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> subprocess.call(['kernprof.bat'], shell=False) # Works but not what I want running kernprof > > I do think, as I said before, that this needs some sort of policy-type PEP on the standard approach for wrapping scripts, with all the pros and cons of the various approaches documented and reviewed. There have been so many emails on this list that I can't immediately find it but somewhere Steve Dower of Microsoft said something like: ''' Don't worry. Exe wrappers are *definitely* the best solution for this. ''' (quote is approximate and emphasis added by me). My experience has led me to the same conclusion as Steve. It may be worth documenting the reasons why but make no mistake about it: .exe wrappers of some form are the way to go.
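The asymmetry in the transcripts above can be modelled in portable Python. This is only a rough sketch: the helper names are mine, and the real Windows search rules (current directory, system directories, the undocumented .bat special-casing) are more involved.

```python
import os
import tempfile


def resolve_like_shell(name, path_dirs, pathext):
    # Roughly what cmd.exe (and hence shell=True) does for a bare command
    # name: try the name itself, then each PATHEXT extension in turn.
    # Extensions are lowered here so the sketch also runs on a
    # case-sensitive filesystem; Windows matches case-insensitively.
    for d in path_dirs:
        for ext in [""] + [e.lower() for e in pathext]:
            candidate = os.path.join(d, name + ext)
            if os.path.isfile(candidate):
                return candidate
    return None


def resolve_like_createprocess(name, path_dirs):
    # Roughly what CreateProcess (and hence shell=False) does: only the
    # exact name, plus an implied .exe, is searched; PATHEXT is ignored.
    for d in path_dirs:
        for candidate in (os.path.join(d, name),
                          os.path.join(d, name + ".exe")):
            if os.path.isfile(candidate):
                return candidate
    return None


# A directory containing only kernprof.py, as in the transcript:
bindir = tempfile.mkdtemp()
open(os.path.join(bindir, "kernprof.py"), "w").close()

pathext = [".COM", ".EXE", ".BAT", ".PY"]
print(resolve_like_shell("kernprof", [bindir], pathext))   # finds kernprof.py
print(resolve_like_createprocess("kernprof", [bindir]))    # None
```

Run anywhere, this reproduces the shape of the failure: the PATHEXT-style lookup finds kernprof.py, the CreateProcess-style lookup finds nothing (Error 2), and even an explicit 'kernprof.py' would then fail with Error 193 because a .py file is not an executable image.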
Oscar From pje at telecommunity.com Wed Aug 14 14:57:01 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Aug 2013 08:57:01 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: On Wed, Aug 14, 2013 at 7:34 AM, Vinay Sajip wrote: > Jason R. Coombs jaraco.com> writes: > >> This means that instead of installing, for example: >> >> Scripts\my-command.exe >> Scripts\my-command-script.py >> Scripts\my-command.exe.manifest >> > > Just to muddy the waters a little, I'd like to suggest an alternative > approach which doesn't appear to have been tried: > > 1. The installer just installs foo.exe on Windows for script 'foo', > where the foo.exe contains the actual script 'foo' as a resource. > > 2. The launcher, before looking for foo-script.py, examines its resources. > If a script resource is found, it is extracted and written to > foo-script.py in the same directory as foo.exe. If such a resource isn't > found, it continues to the next step. > > 3. The launcher looks for 'foo-script.py' in its directory, invokes it and > waits for it to complete. > > 4. If a 'foo-script.py' was written in step 2, it is deleted. > > The launcher comes with an embedded manifest, so no external manifest is > needed. Better suggestion: just append a PEP 441 .pyz to the .exe, and no extraction is necessary; the .exe just reads out the #! part. For Python 2.6 and up, the .exe can simply pass itself as argv[1] to the interpreter. (For older Pythons, a little -c and PYTHONPATH munging is required, but should be doable.) For bonus points, you can actually stick a compatibly-built wheel on the end of the .exe instead, and embed the entire relevant project. ;-) > Thoughts? Writing the script.py file means the current user needs write access to a program installation directory, which is probably not a good idea. Also, what if two instances are running, or you overwrite an existing script while it's being read by Python in another process? 
No, if you're taking the embedding route, it's got to be either a zipfile, or else you have to use -c and give Python an offset to seek to in the file. In any case, it'd probably be a good idea to offer some command line tools for manipulating such .exes, to e.g. show/change what Python it's set to use, extract/dump/replace the zip, etc. (As for ctypes, if that's needed for this approach (which I somewhat doubt), there are official Windows binaries available for 2.3 and 2.4.) From p.f.moore at gmail.com Wed Aug 14 15:48:47 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 14 Aug 2013 14:48:47 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 14 August 2013 12:42, Oscar Benjamin wrote: > > I do think, as I said before, that this needs some sort of policy-type > PEP on the standard approach for wrapping scripts, with all the pros and > cons of the various approaches documented and reviewed. > > There have been so many emails on this list that I can't immediately > find it but somewhere Steve Dower of Microsoft said something like: > ''' > Don't worry. Exe wrappers are *definitely* the best solution for this. > ''' > (quote is approximate and emphasis added by me). > > My experience has lead me to the same conclusion as Steve. It may be > worth documenting the reasons why but make no mistake about it: .exe > wrappers of some form are the way to go. I would like it documented, with all the reasons, so that we don't keep rehashing this whole thing over and over. 
Also, if we do recommend exes, I'd like to see something (in the stdlib ultimately) that makes it trivially easy to "wrap" an existing .py script (a standalone one, not one with an associated setup.py) into an exe. Having two ways to write a command using Python is icky. Also, I'd like a single-file solution. More times than I care to remember I have put a program on a USB key or in my dropbox/skydrive and forgotten the "support stuff" (whether that's required DLLs, resource files, or whatever - it's not a Python-specific problem). But I do see your point regarding things like subprocess. It's a shame, but anything other than exes do seem to be second class citizens on Windows. BTW, you mention bat files - it bugs me endlessly that bat files seem to have a more privileged status than "other" script formats, whether that's .py or .ps1 or whatever. I've never managed to 100% convince myself that they are special in a way that you can't replicate with suitable settings (PATHEXT, etc, etc). I think it's that .bat is hard-coded in the OS search algorithm or something, though. The docs are not easy to locate on the various aspects of the matter. (If bat files didn't have their horrible nesting and ctrl-C handling behaviours, they'd be a viable solution...) Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 14 15:56:28 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 14 Aug 2013 14:56:28 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: On 14 August 2013 13:57, PJ Eby wrote: > > Thoughts? > > Writing the script.py file means the current user needs write access > to a program installation directory, which is probably not a good > idea. Also, what if two instances are running, or you overwrite an > existing script while it's being read by Python in another process? > Good point.
No, if you're taking the embedding route, it's got to be either a > zipfile, or else you have to use -c and give Python an offset to seek > to in the file. > Again, agreed - we have executable zipfiles for Python, and a combined exe/zipfile is a perfectly viable format (it's used by most self-extracting zip formats, as well as wininst formats). In any case, it'd probably be a good idea to offer some command line > tools for manipulating such .exes, to e.g. show/change what Python > it's set to use, extract/dump/replace the zip, etc. > I'd say tools supporting the format are essential. exe/zip formats will never be as user friendly as a pure text file script - we need to make the extra effort as minimal as possible. In particular, see my other post - I don't want to have one format (exe) for installed commands packaged with setuptools, and a separate format for one-file scripts I write myself. Actually, this sounds like a very good solution. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 14 15:58:59 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 14 Aug 2013 14:58:59 +0100 (BST) Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: Message-ID: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> > > Better suggestion: just append a PEP 441 .pyz to the .exe, and no > IIUC PEP 441 is about tooling to create archives; don't we just need a Python-compatible .zip (i.e. with a __main__.py)? > For bonus points, you can actually stick a compatibly-built wheel on > the end of the .exe instead, and embed the entire relevant project. > ;-) This is less helpful; one might have N scripts per project, no need to stick the whole project in with each one, or am I misunderstanding? > Writing the script.py file means the current user needs write access > to a program installation directory, which is probably not a good True. > idea.
Also, what if two instances are running, or you overwrite an > existing script while it's being read by Python in another process? That could probably be taken care of with a bit of footwork. > No, if you're taking the embedding route, it's got to be either a > zipfile, or else you have to use -c and give Python an offset to seek > to in the file. How would such an offset be used? Are you saying the -c scriptlet would use that offset to extract the script? Or do you mean something else? > (As for ctypes, if that's needed for this approach (which I somewhat > doubt), there are official Windows binaries available for 2.3 and > 2.4.) It's needed only if you use the specific approach I used - which was to use the Windows UpdateResource API to embed the script as a pukka Windows resource. Of course, if you're just appending something to the .exe, you don't need ctypes. Regards, Vinay Sajip From ncoghlan at gmail.com Wed Aug 14 16:33:37 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 10:33:37 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 14 August 2013 09:58, Vinay Sajip wrote: >> > >> Better suggestion: just append a PEP 441 .pyz to the .exe, and no >> > > IIUC PEP 441 is about tooling to create archives; don't we just need a Python-compatible .zip (i.e. with a __main__.py)? > >> For bonus points, you can actually stick a compatibly-built wheel on >> the end of the .exe instead, and embed the entire relevant project. >> ;-) > > This is less helpful; one might have N scripts per project, no need to stick the whole project in with each one, or am I misunderstanding? 
I believe PJE's suggestion is to expand the scope of PEP 441 a bit to include the ability to generate a valid Windows executable with the archive appended, instead of just the *nix/Python launcher shebang line. If you prepend the shebang line, you get a ".pya" file (Note: I suggested to Daniel that the extension be changed to "pya" for "Python application", but the PEP hasn't been updated yet), if you prepend the executable, you get an actual Windows executable. That would also let us avoid the need for a separate ".pyaw" extension - you would just prepend a Windows GUI executable to handle that case, with .pya only handling console applications. The "script wrapper" case would then just be a particular use of the PEP 441 executable generation features: prepend the console executable for console scripts and the GUI executable for GUI wrappers, with the wrapper itself being a __main__.py file in the attached archive. Given that Vinay already gave the Python launcher the ability to do this (when built with different options), this sounds quite feasible to me. Aside from the lack of embedded C extension support (which could likely be fixed if zipimport was migrated to Python code for 3.5), you'd have the essentials of py2exe right there in the standard library :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Wed Aug 14 17:20:42 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 14 Aug 2013 16:20:42 +0100 (BST) Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <1376493642.31118.YahooMailNeo@web171403.mail.ir2.yahoo.com> > From: Nick Coghlan > I believe PJE's suggestion is to expand the scope of PEP 441 a bit to > include the ability to generate a valid Windows executable with the > archive appended, instead of just the *nix/Python launcher shebang > line. 
I got that; I was commenting on the "embed the entire relevant project" part as applied to wrapped scripts. > The "script wrapper" case would then just be a particular use of the > PEP 441 executable generation features: prepend the console > executable for console scripts and the GUI executable for GUI > wrappers, with the wrapper itself being a __main__.py file in the > attached archive. > > Given that Vinay already gave the Python launcher the ability to do > this (when built with different options), this sounds quite feasible > to me. It's certainly feasible - I'm looking at the implementation now. It's not quite there yet - I need to extract the shebang from the end of the stock executable/beginning of the appended archive to determine the Python to use, and then ensure that the executable is passed to that Python as the script argument. Regards, Vinay Sajip From ncoghlan at gmail.com Wed Aug 14 17:36:32 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 11:36:32 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 Message-ID: I spent last weekend at "Flock to Fedora", mostly due to my day job working on the Beaker integration testing system (http://beaker-project.org) for Red Hat, but also to talk to folks about Fedora and Python interactions. A completely unexpected discovery over the weekend was that some of the RPM folks are exploring the idea of switching the *user* facing format for the packaging system away from spec files and towards directly executable Python code. Thus, you'd get away from the painful mess that is RPM conditionals and macros and have a real programming language to define what your built packages *should* look like, while still *producing* static metadata for consumption by installers and other software distribution tools. Hmm, does that approach sound familiar to anyone?
:) Anyway, we were talking about how they're considering approaching the install hook problem, and their approach gave me an idea for a better solution in PEP 426. Currently, PEP 426 allows a distribution to define "install hooks": hooks that will execute after the distribution is installed and before it is uninstalled. I'm now planning to change that to allowing distributions to define "export hooks", based on the cleaned up notion of "export groups" in the latest version of PEP 426. An export hook definition consists of the following fields: * group - name of the export group to hook * preupdate - export to call prior to installing/updating/removing a distribution that exports this export group * postupdate - export to call after installing/updating/removing a distribution that exports this export group * refresh - export to call to resynchronise any caches with the system state. This will be invoked for every distribution on the system that exports this export group any time the distribution defining the export hook is itself installed or upgraded If a distribution exports groups that it also defines hooks for, it will exhibit the following behaviours: Fresh install: * preupdate NOT called (hook not yet registered) * postupdate called * refresh called Upgrade: * preupdate called (old version) * postupdate called (new version) * refresh called (new version) Complete removal: * preupdate called * postupdate NOT called (hook no longer registered) * refresh NOT called (hook no longer registered) This behaviour follows naturally from *not* special casing self-exports: prior to installation, the export hooks won't be registered, so they won't be called, and the same applies following complete removal. 
The hooks would have the following signatures: def preupdate(current_meta, next_meta): # current_meta==None indicates fresh install # next_meta==None indicates complete removal def postupdate(previous_meta, current_meta): # previous_meta==None indicates fresh install # current_meta==None indicates complete removal def refresh(current_meta): # Used to ensure any caches are consistent with system state # Allows handling of previously installed distributions Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From oscar.j.benjamin at gmail.com Wed Aug 14 17:49:49 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 14 Aug 2013 16:49:49 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 14 August 2013 14:48, Paul Moore wrote: > > But I do see your point regarding things like subprocess. It's a shame, but > anything other than exes do seem to be second class citizens on Windows. > BTW, you mention bat files - it bugs me endlessly that bat files seem to > have a more privileged status than "other" script formats whether that's .py > or .ps1 or whatever. I've never managed to 100% convince myself that they > are special in a way that you can't replicate with suitable settings > (PATHEXT, etc, etc). I think it's that .bat is hard-coded in the OS search > algorithm or something, though. I think it is hard-coded into CreateProcess (at least on some versions of Windows). It certainly isn't a documented feature, but as demonstrated in my previous post it does work on XP. > The docs are not easy to locate on the > various aspects of matter. 
I just tried to find documentation but all I found was this (with dead-links to MS): http://blog.kalmbachnet.de/?postid=34 > (If bat files didn't have their horrible nesting > and ctrl-C handling behaviours, they'd be a viable solution...) You were right to cry about these previously. To give an example of where these subprocess issues might matter. sphinx auto-generates Makefiles that call 'sphinx-build' with no extension. The sphinx-build command has a setuptools .exe wrapper so that it will be picked up. I wouldn't confidently assume that for all combinations of Windows version and 'make' implementation that 'make' would know how to find sphinx-build for anything other than an .exe. A quick experiment shows that my own make handles shebangs if present and then falls back to just calling CreateProcess which handles .exe files and (via the undocumented hack above) .bat files . It does not respect PATHEXT and the error when the extension is provided but no shebang is given clearly shows it using the same sys-call as used by Python's subprocess module: Q:\tmp>show main 'show' is not recognized as an internal or external command, operable program or batch file. Q:\tmp>type Makefile all: mycmd.py Q:\tmp>type mycmd.py print 'hello' Q:\tmp>make mycmd.py process_begin: CreateProcess(Q:\tmp\mycmd.py, mycmd.py, ...) failed. make (e=193): Error 193 make: *** [all] Error 193 Q:\tmp>mycmd.py hello Oscar From erik.m.bray at gmail.com Wed Aug 14 17:55:43 2013 From: erik.m.bray at gmail.com (Erik Bray) Date: Wed, 14 Aug 2013 11:55:43 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Wed, Aug 14, 2013 at 11:36 AM, Nick Coghlan wrote: > I spent last weekend at "Flock to Fedora", mostly due to my day job > working on the Beaker integration testing system > (http://beaker-project.org) for Red Hat, but also to talk to folks > about Fedora and Python interactions. 
> > A completely unexpected discovery over the weekend, was that some of > the RPM folks are exploring the idea of switching the *user* facing > format for the packaging system away from spec files and towards > directly executable Python code. Thus, you'd get away from the painful > mess that is RPM conditionals and macros and have a real programming > language to define what your built packages *should* look like, while > still *producing* static metadata for consumption by installers and > other software distribution tools. > > Hmm, does that approach sound familiar to anyone? :) > > Anyway, we were talking about how they're considering approaching the > install hook problem, and their approach gave me an idea for a better > solution in PEP 426. > > Currently, PEP 426 allows a distribution to define "install hooks": > hooks that will execute after the distribution is installed and before > it is uninstalled. > > I'm now planning to change that to allowing distributions to define > "export hooks", based on the cleaned up notion of "export groups" in > the latest version of PEP 426. An export hook definition consists of > the following fields: > > * group - name of the export group to hook > * preupdate - export to call prior to installing/updating/removing a > distribution that exports this export group > * postupdate - export to call after installing/updating/removing a > distribution that exports this export group > * refresh - export to call to resynchronise any caches with the system > state. 
This will be invoked for every distribution on the system that > exports this export group any time the distribution defining the > export hook is itself installed or upgraded > > If a distribution exports groups that it also defines hooks for, it > will exhibit the following behaviours: > > Fresh install: > * preupdate NOT called (hook not yet registered) > * postupdate called > * refresh called > > Upgrade: > * preupdate called (old version) > * postupdate called (new version) > * refresh called (new version) > > Complete removal: > * preupdate called > * postupdate NOT called (hook no longer registered) > * refresh NOT called (hook no longer registered) > > This behaviour follows naturally from *not* special casing > self-exports: prior to installation, the export hooks won't be > registered, so they won't be called, and the same applies following > complete removal. > > The hooks would have the following signatures: > > def preupdate(current_meta, next_meta): > # current_meta==None indicates fresh install > # next_meta==None indicates complete removal > > def postupdate(previous_meta, current_meta): > # previous_meta==None indicates fresh install > # current_meta==None indicates complete removal > > def refresh(current_meta): > # Used to ensure any caches are consistent with system state > # Allows handling of previously installed distributions > I think I'm okay with this so long as it remains optional. I'm not crazy about executable build specs where they're not necessary. For most cases, especially in pure Python packages, it's frequently overkill and asking for trouble. So I would still want to see a well-accepted static build spec for Python packages too (sort of a la setup.cfg as parsed by d2to1, only better), though I realize that's a separate issue from PEP 426. 
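The behaviour table Nick gives can be condensed into a few lines. This is only a toy illustration of the quoted semantics (the function name is mine, and no real PEP 426 implementation is implied): whether each hook runs falls out of whether the hook was registered before or after the change.

```python
def hook_calls(old_meta, new_meta):
    # Simulate which export hooks run for a distribution that hooks its
    # own export group. Per the quoted signatures, old_meta is None for
    # a fresh install and new_meta is None for complete removal.
    calls = []
    if old_meta is not None:
        # preupdate runs only if the hook was registered *before* the
        # change, i.e. some old version was installed.
        calls.append(("preupdate", old_meta, new_meta))
    if new_meta is not None:
        # postupdate and refresh run only if the hook is registered
        # *after* the change, i.e. a new version is installed.
        calls.append(("postupdate", old_meta, new_meta))
        calls.append(("refresh", new_meta))
    return calls


print([c[0] for c in hook_calls(None, "v1")])  # ['postupdate', 'refresh']
print([c[0] for c in hook_calls("v1", "v2")])  # ['preupdate', 'postupdate', 'refresh']
print([c[0] for c in hook_calls("v2", None)])  # ['preupdate']
```

The three calls reproduce the fresh-install, upgrade, and complete-removal rows of the behaviour table without any special-casing of self-exports.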
Erik From ncoghlan at gmail.com Wed Aug 14 18:07:43 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 12:07:43 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On 14 August 2013 11:55, Erik Bray wrote: > I think I'm okay with this so long as it remains optional. I'm not > crazy about executable build specs where they're not necessary. For > most cases, especially in pure Python packages, it's frequently > overkill and asking for trouble. So I would still want to see a > well-accepted static build spec for Python packages too (sort of a la > setup.cfg as parsed by d2to1, only better), though I realize that's a > separate issue from PEP 426. Sure, the main point of PEP 426 is to make it so the packaging ecosystem doesn't need to *care* about the user facing formats. YAML, ini, Python, doesn't matter :) My current plan is to focus on formalising pydist.json as the main vehicle for communicating between build tools and installers. I had previously been thinking we could postpone defining the build system hooks, but I now think it makes more sense to formalise that as well before declaring metadata 2.0 ready for general use. In the meantime, we'll continue getting by with setup.py and the setuptools metadata formats :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pje at telecommunity.com Wed Aug 14 19:29:08 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Aug 2013 13:29:08 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Wed, Aug 14, 2013 at 9:58 AM, Vinay Sajip wrote: > IIUC PEP 441 is about tooling to create archives; don't we just need a Python-compatible .zip (i.e. with a __main__.py)? I meant that it has a #! 
line before the zip part, so that the launcher knows what Python to invoke. There are also some challenges for older Pythons to invoke __main__, since the normal Python import machinery frowns on reloading __main__. I expect the zip would need an extra __main.py stub to bootstrap the loading of __main__, and then invoke python with something like '-c "__import__('sys').path[0:0]=['/path/to','/path/to/exe']; __import__('__main').go()"'. (It can't have the import run the app as a side effect, because otherwise the import lock will be held, leading to Bad Things in multi-threaded apps.) > This is less helpful; one might have N scripts per project, no need to stick the whole project in with each one, or am I misunderstanding? I just meant that for cases where there's only one script, or where you are doing a custom-built application. This also becomes The One Obvious Way to do py2exe-like things. > How would such an offset be used? Are you saying the -c scriptlet would use that offset to extract the script? Or do you mean something else? Extract the script by seeking to the offset and reading it. It's far from ideal, though; the .zip is much better since everything back to 2.3 can support it in some fashion.
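[Editorial note: the shebang-plus-zip layout described above can be produced in a few lines. This is a sketch with a hypothetical `make_executable_zip` helper, not code from any of the tools under discussion; it relies on the fact that CPython's zipimport locates the archive from the end of the file, so data prepended before the zip is ignored.]

```python
import os
import stat
import zipfile

def make_executable_zip(target, main_source, interpreter="/usr/bin/env python"):
    """Write a '#!' line followed by a zip archive containing __main__.py."""
    with open(target, "wb") as f:
        f.write(("#!%s\n" % interpreter).encode("utf-8"))
        # ZipFile can append to a file object that is not at offset 0; the
        # interpreter later finds the archive via its end-of-central-directory
        # record, so the shebang prefix does not confuse it.
        with zipfile.ZipFile(f, "w") as zf:
            zf.writestr("__main__.py", main_source)
    # Mark the file executable so the shebang is honoured on Unix.
    os.chmod(target, os.stat(target).st_mode | stat.S_IEXEC)
```

Running `python target` on such a file executes the embedded `__main__.py`, and on Unix the shebang makes the file directly executable as well.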
The wrappers, I think, should also be updated to support the .deleteme protocol I described earlier, so that if, upon exiting, the program finds it is named .deleteme, it should respawn Python with a -c scriptlet to delete the .deleteme file, and immediately exit without waiting for the result. That way, pip and other installation tools can update their own .exe files, without leaving any garbage behind. With this overall approach, .exe's can remain the default choice of script wrapping, with .pya's available as an option for those who want them. PyLauncher should include the .pya association and PATHEXT, and people who want to write or edit scripts by hand can use that extension. (Of course, using .pya won't work with subprocess.call and other CreateProcess-y things, which incidentally reminds me that I've had some royal pains in the past trying to get other applications to invoke Python scripts or indeed *any* language scripts. For example the Calibre Windows app's "Open With..." plugin requires an .exe... and it's *written* in Python, for heaven's sakes! So .exe headers and extensions for .pya files really ought to be the default option on Windows, for the sake of users' sanity. Plain-text .pya files are a developer/quick-and-dirty feature that you don't use for scripts on Windows that are invoked by anything besides other scripts.) > That would also let > us avoid the need for a separate ".pyaw" extension - you would just > prepend a Windows GUI executable to handle that case, with .pya only > handling console applications. Shouldn't naming the file .pyw already work today for that case? Certainly, the .pyw extension is already suitable for manually creating GUI scripts in a text editor. Unless there's something special about how the 'pythonw' executable processes the command line, it should work just as well for a zipped archive. So, probably no need to have a separate extension in either case. 
(But maybe somebody should verify that 2.6+ on Windows does indeed run .zip files named with .pyw and double-clicked on.) From qwcode at gmail.com Wed Aug 14 19:53:49 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 14 Aug 2013 10:53:49 -0700 Subject: [Distutils] PEP440 and fork versioning Message-ID: I'm wondering if PEP440 should recommend how to version forks? It's fairly common to fork dependencies temporarily until the change can be released upstream. Ideally, you want to version a fork (and keep the same name) so that it fulfills the requirement, but be obvious that it's a fork. Although pip allows overriding requirement consistency, consistency is preferred, and needed in cases where a `pkg_resources.require` enforces it in a console script. As it is now, the "post-release" scheme works for this, but it's not the intended use case. Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Wed Aug 14 20:00:35 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Aug 2013 14:00:35 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Wed, Aug 14, 2013 at 11:36 AM, Nick Coghlan wrote: > * group - name of the export group to hook > * preupdate - export to call prior to installing/updating/removing a > distribution that exports this export group > * postupdate - export to call after installing/updating/removing a > distribution that exports this export group > * refresh - export to call to resynchronise any caches with the system > state. This will be invoked for every distribution on the system that > exports this export group any time the distribution defining the > export hook is itself installed or upgraded I've reread your post a few times and I'm not sure I understand it. 
Let me try and spell out a scenario to see if I've got it: * Distribution A defines a refresh hook for group 'foo.bar' -- but doesn't export anything in that group * Distribution B defines an *export* (fka "entry point") -- any export -- in export group 'foo.bar' -- but doesn't define any hooks * Distribution A's refresh hook will be notified when B is installed, updated, or removed Is that what this is for? If so, my confusion is probably because of overloading of the term "export" in this context; among other things, it's unclear whether this is a separate data structure from exports themselves... and if so, why? If I were doing something like this in the existing entry point system, I'd do something like: [mebs.refresh] foo.bar = my.hook.module:refresh i.e., just list the hooks in an export group, using the export name to designate what export group is being monitored. This approach leverages the fact that exports already need to be indexed, so why create a whole new sort of metadata just for the hooks? (But of course if I have misunderstood what you're trying to do in the first place, this and my other thoughts may be moot.) (Oh, and btw, if a distribution has hooks for itself, then how are you going to invoke two different versions of the code? Rollback sys.modules and reload? Spawn another process?) From qwcode at gmail.com Wed Aug 14 20:06:33 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 14 Aug 2013 11:06:33 -0700 Subject: [Distutils] PEP440 and fork versioning In-Reply-To: References: Message-ID: I'm noticing the mention of forks in PEP426 for "provides". so theoretically, `pkg_resources.WorkingSet.resolve` would be updated at some point to account for "provides" in PEP426, and this feature would be surfaced as a setup keyword for users to use in their fork projects. On Wed, Aug 14, 2013 at 10:53 AM, Marcus Smith wrote: > I'm wondering if PEP440 should recommend how to version forks? 
It's fairly > common to fork dependencies temporarily until the change can be released > upstream. > > Ideally, you want to version a fork (and keep the same name) so that it > fulfills the requirement, but be obvious that it's a fork. > > Although pip allows overriding requirement consistency, consistency is > preferred, and needed in cases where a `pkg_resources.require` enforces it > in a console script. > > As it is now, the "post-release" scheme works for this, but it's not the > intended use case. > > Marcus > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 14 20:09:07 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 14 Aug 2013 19:09:07 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <32c95cca297f427fb6e76e80dbf4f4b5@BLUPR06MB003.namprd06.prod.outlook.com> <61faf23630ca4e80ab32cffc63b2f2c6@BLUPR06MB003.namprd06.prod.outlook.com> <74e37c6924594ed4a193157a15596891@BLUPR03MB199.namprd03.prod.outlook.com> <91ee15395d3d46eab2855c65d30ab82f@BLUPR06MB003.namprd06.prod.outlook.com> Message-ID: On 14 August 2013 16:49, Oscar Benjamin wrote: > To give an example of where these subprocess issues might matter. > sphinx auto-generates Makefiles that call 'sphinx-build' with no > extension. The sphinx-build command has a setuptools .exe wrapper so > that it will be picked up. I wouldn't confidently assume that for all > combinations of Windows version and 'make' implementation that 'make' > would know how to find sphinx-build for anything other than an .exe. > OK, that's a pretty solid use case, and pretty clearly demonstrates that there will be issues with anything other than an exe. So we come full circle again - I'm pretty sure the last time this came up a month or so ago, someone came up with a scenario that convinced me to give up on executable script files. 
I definitely will at some point write up *some* sort of document on best practices for wrapping Python code (scripts, apps, whatever) as OS commands, in a cross-platform way. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 14 20:14:09 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 14 Aug 2013 19:14:09 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 14 August 2013 18:46, PJ Eby wrote: > Shouldn't naming the file .pyw already work today for that case? > Certainly, the .pyw extension is already suitable for manually > creating GUI scripts in a text editor. Unless there's something > special about how the 'pythonw' executable processes the command line, > it should work just as well for a zipped archive. > .pyw files can be imported as modules, just like .py, so you hit the issue of scripts named the same as modules that they import. Naming a zipped archive .pyw is no better or worse than naming it .py - both work most of the time, but are disconcerting to users who think the filetype implies "text" and both have the problems associated with being both executable and importable. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 14 20:23:48 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 14:23:48 -0400 Subject: [Distutils] PEP440 and fork versioning In-Reply-To: References: Message-ID: Distros actually need to do this fairly regularly for security patches and packaging tweaks, so it may be a good idea. I think local updates were one of the intended uses for post-releases, but that doesn't work if upstream is also using that suffix (and we know some projects do). 
*If* this was added to the PEP, I would add it as a new optional ".localN" suffix, with a recommendation that public index servers MUST disallow use of local numbering, since it is intended for downstream integrators to indicate the inclusion of additional changes relative to the upstream version. Outside a given integrators environment, the local numbering is no longer valid. Currently, the PEP assumes the use of an external numbering system to track that kind of local change (e.g. with RPM, one common tactic is to increase the release level rather than the version, so the version continues to match the upstream base). The rationale for adding it directly to the PEP would be to let people accurately track versions for local modifications, even if they're using virtualenv as their integration environment. "Provides" is a bit different, in that it covers more permanent forks and name changes, rather than the "fix things locally for immediate use, submit upstream patch as a good open source citizen" and "backport selected bug fixes from later versions" workflows that are pretty common in larger integration projects like Linux distros. Cheers, Nick. From qwcode at gmail.com Wed Aug 14 21:07:03 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 14 Aug 2013 12:07:03 -0700 Subject: [Distutils] PEP440 and fork versioning In-Reply-To: References: Message-ID: > *If* this was added to the PEP, I would add it as a new optional > ".localN" suffix, with a recommendation that public index servers MUST > disallow use of local numbering, since it is intended for downstream > integrators to indicate the inclusion of additional changes relative > to the upstream version. Outside a given integrators environment, the > local numbering is no longer valid. > Seems like you have to add something like this, if you're not wanting "provides" to cover it. Like you mention, ".post" won't work in all cases. 
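[Editorial note: the ordering Nick sketches for a ".localN" suffix — sorting after the exact upstream release but before any later one — can be illustrated with a toy key function. The suffix is only a proposal at this point, and this is not a real PEP 440 implementation; the regex and helper name are invented for illustration.]

```python
import re

def version_key(version):
    """Toy sort key: '1.2.local3' sorts after '1.2' but before '1.2.1'."""
    m = re.match(r"^(\d+(?:\.\d+)*?)(?:\.local(\d+))?$", version)
    if m is None:
        raise ValueError("unsupported version: %r" % version)
    release = tuple(int(part) for part in m.group(1).split("."))
    # An absent local suffix sorts below any present one for the same release.
    local = int(m.group(2)) if m.group(2) else 0
    return (release, local)
```

Because the local counter is compared only after the full release tuple, downstream rebuilds stay ordered between the upstream release they patch and the next upstream release.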
Otherwise people are left not changing the version, or changing the version in a way that could break consistency. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 14 21:14:36 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 15:14:36 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On 14 August 2013 14:00, PJ Eby wrote: > On Wed, Aug 14, 2013 at 11:36 AM, Nick Coghlan wrote: >> * group - name of the export group to hook >> * preupdate - export to call prior to installing/updating/removing a >> distribution that exports this export group >> * postupdate - export to call after installing/updating/removing a >> distribution that exports this export group >> * refresh - export to call to resynchronise any caches with the system >> state. This will be invoked for every distribution on the system that >> exports this export group any time the distribution defining the >> export hook is itself installed or upgraded > > I've reread your post a few times and I'm not sure I understand it. > Let me try and spell out a scenario to see if I've got it: > > * Distribution A defines a refresh hook for group 'foo.bar' -- but > doesn't export anything in that group > * Distribution B defines an *export* (fka "entry point") -- any export > -- in export group 'foo.bar' -- but doesn't define any hooks > * Distribution A's refresh hook will be notified when B is installed, > updated, or removed No, A's preupdate and postupdate hooks would fire when B (or any other distro exporting the "foo.bar" group) is installed/updated/removed. refresh() would fire only when A was installed or updated. I realised that my proposed signature for the refresh() hook is wrong, though, since it doesn't deal with catching up on *removed* distributions properly. 
Rather than being called multiple times, refresh() instead needs to be called with an iterable providing the metadata for all currently installed distributions that export that group. > Is that what this is for? > > If so, my confusion is probably because of overloading of the term > "export" in this context; among other things, it's unclear whether > this is a separate data structure from exports themselves... and if > so, why? Where "exports" is about publishing entries into an export group, the new "export_hooks" field would be about *subscribing* to an export group and being told about changes to it. While you could use a naming convention to define these hooks directly in "exports" without colliding with the export of the group itself, I think it's better to separate them out so you can do stricter validation on the permitted keys and values (the rationale is similar to that for separating out commands from more general exports, and exports from arbitrary metadata extensions). You're right the name should be a key in a mapping (like "exports") rather than a subfield in a list, though. That means I'm envisioning something like the following: Distribution "foo": "export_hooks" : { "foo.bar": { "preupdate": "foo.bar:exporter_update_started", "postupdate": "foo.bar:exporter_update_completed", "refresh": "foo.bar:resync_cache" } } When "foo" is installed or updated, then the installer would invoke "foo.bar:resync_cache" with an iterable of all currently installed distributions that export the "foo.bar" export group. Distribution "notfoo": "exports" : { "foo.bar": {} } When "notfoo" is installed, updated or removed, then "foo.bar:exporter_update_started" would be called prior to changing anything, and "foo.bar:exporter_update_completed" would be called when the changes had been made.
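[Editorial note: the installer-side dispatch implied by this scheme can be sketched as follows. All names here are hypothetical; real metadata would carry "module:attr" export strings, which are simplified to a `resolve` callable that maps them to functions. The fresh-install behaviour matches the lifecycle described earlier in the thread: a distribution's own preupdate hook cannot fire because it is not yet registered, while its postupdate and refresh hooks do.]

```python
def install_distribution(installed, new_dist, resolve):
    """Install `new_dist` (a metadata dict), firing export hooks.

    `installed` is the list of already-installed metadata dicts and
    `resolve` turns an export string like "foo.bar:resync_cache" into
    a callable. Returns the updated list of installed distributions.
    """
    exported = set(new_dist.get("exports", {}))
    # preupdate: only already-registered hooks can fire, so the new
    # distribution's own hooks are skipped on a fresh install.
    for dist in installed:
        for group, hooks in dist.get("export_hooks", {}).items():
            if group in exported and "preupdate" in hooks:
                resolve(hooks["preupdate"])(None, new_dist)
    installed = installed + [new_dist]
    # postupdate: the new distribution's hooks are now registered too.
    for dist in installed:
        for group, hooks in dist.get("export_hooks", {}).items():
            if group in exported and "postupdate" in hooks:
                resolve(hooks["postupdate"])(None, new_dist)
    # refresh: the new distribution catches up on every current exporter
    # of the groups it subscribes to.
    for group, hooks in new_dist.get("export_hooks", {}).items():
        if "refresh" in hooks:
            exporters = [d for d in installed if group in d.get("exports", {})]
            resolve(hooks["refresh"])(exporters)
    return installed
```

Upgrades and removals would follow the same pattern with the old metadata passed as the first hook argument.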
> If I were doing something like this in the existing entry point > system, I'd do something like: > > [mebs.refresh] > foo.bar = my.hook.module:refresh > > i.e., just list the hooks in an export group, using the export name to > designate what export group is being monitored. This approach > leverages the fact that exports already need to be indexed, so why > create a whole new sort of metadata just for the hooks? Mostly so you can validate them and display them differently, and avoid reserving any part of the shared namespace. I find documentation is also easier when the core use cases aren't wedged into the extension mechanisms (even if they share implementation details under the hood). > (But of course if I have misunderstood what you're trying to do in the > first place, this and my other thoughts may be moot.) I think you mostly understood it, I'm just not explaining it very well yet. > (Oh, and btw, if a distribution has hooks for itself, then how are you > going to invoke two different versions of the code? Rollback > sys.modules and reload? Spawn another process?) Requiring that all hook invocations happen in a subprocess sounds like the best plan to me. The arguments all serialise nicely to JSON so that shouldn't be too hard to arrange. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Aug 14 21:28:33 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Aug 2013 15:28:33 -0400 Subject: [Distutils] PEP440 and fork versioning In-Reply-To: References: Message-ID: On 14 August 2013 15:07, Marcus Smith wrote: > >> *If* this was added to the PEP, I would add it as a new optional >> ".localN" suffix, with a recommendation that public index servers MUST >> disallow use of local numbering, since it is intended for downstream >> integrators to indicate the inclusion of additional changes relative >> to the upstream version. 
Outside a given integrators environment, the >> local numbering is no longer valid. > > > Seems like you have to add something like this, if you're not wanting > "provides" to cover it. > Like you mention, ".post" won't work in all cases. > Otherwise people are left not changing the version, or changing the version > in a way that could break consistency. Yeah, I agree. To help keep track of this stuff (and to make pull requests a possibility), I created a "PyPI Metadata Formats" repo on BitBucket and filed this as the first issue: https://bitbucket.org/pypa/pypi-metadata-formats/issue/1/add-local-numbering-to-pep-440 Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From pje at telecommunity.com Thu Aug 15 00:32:14 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Aug 2013 18:32:14 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Wed, Aug 14, 2013 at 3:14 PM, Nick Coghlan wrote: > On 14 August 2013 14:00, PJ Eby wrote: >> On Wed, Aug 14, 2013 at 11:36 AM, Nick Coghlan wrote: >>> * group - name of the export group to hook >>> * preupdate - export to call prior to installing/updating/removing a >>> distribution that exports this export group >>> * postupdate - export to call after installing/updating/removing a >>> distribution that exports this export group >>> * refresh - export to call to resynchronise any caches with the system >>> state. This will be invoked for every distribution on the system that >>> exports this export group any time the distribution defining the >>> export hook is itself installed or upgraded >> >> I've reread your post a few times and I'm not sure I understand it. 
>> Let me try and spell out a scenario to see if I've got it: >> >> * Distribution A defines a refresh hook for group 'foo.bar' -- but >> doesn't export anything in that group >> * Distribution B defines an *export* (fka "entry point") -- any export >> -- in export group 'foo.bar' -- but doesn't define any hooks >> * Distribution A's refresh hook will be notified when B is installed, >> updated, or removed > > No, A's preupdate and postupdate hooks would fire when B (or any other > distro exporting the "foo.bar" group) is installed/updated/removed. > refresh() would fire only when A was installed or updated. Huh? So refresh is only relevant to the package itself? I guess I don't understand the point of that, since you get the same info from postupdate then, no? > I realised that my proposed signature for the refresh() hook is wrong, > though, since it doesn't deal with catching up on *removed* > distributions properly. Rather than being called multiple times, > refresh() instead needs to be called with an iterable providing the > metadata for all currently installed distributions that export that > group. Ah. But then why is it called for A, instead of.. oh, I think I see now. Gotcha. This is the sort of thing that examples are really made for, so you can see the use cases for the different hooks. >> If so, my confusion is probably because of overloading of the term >> "export" in this context; among other things, it's unclear whether >> this is a separate data structure from exports themselves... and if >> so, why? > > Where "exports" is about publishing entries into an export group, the > new "export_hooks" field would be about *subscribing* to an export > group and being told about changes to it. That's not actually a justification for not using exports. 
> While you could use a naming convention to define these hooks > directly in "exports" without colliding with the export of the group > itself, I think it's better to separate them out so you can do > stricter validation on the permitted keys and values (the rationale is > similar to that for separating out commands from more general exports, > and exports from arbitrary metadata extensions). The separation of commands is (just barely) justifiable because it's not a runtime use, it's installer use. Stricter validation, OTOH, is a completely bogus justification for not using exports, otherwise nobody would ever have any reason to use exports, everybody would have to define their own extensions so they could have stricter validation. ;-) The solution to providing more validation is to use *more* export groups, e.g.: [mebs.export_validators] mebs.refresh = module.that.validates.keys.in.the.refresh.group:somefunc (In other words, define hooks for validating export groups, the way setuptools uses an entry point group for validating setup keywords.) Of course, even without that possibility, the stricter validation concept is kind of bogus here: the only thing you can really validate is that syntactically valid group names are being used as export names, which isn't much of a validation. You can't *semantically* validate them, since there is no global registry of group names. So what's the point? The build system *should* reserve at least one (subdivisible) namespace for itself, and use that mechanism for its own extension, for two reasons: 1. Entities should not be multiplied beyond necessity, 2. It serves as an example of how exports are to be used, and 3. The API is reusable... No, three reasons! Wait, I'll come in again... the API is reusable, it serves as an example, no duplication, and namespaces are a good idea, let's do more of them... no, four reasons... chief amongst the reasonry...
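[Editorial note: the validator-group idea above — validation hooks registered as ordinary exports — might look like this in miniature. The group name `mebs.export_validators` comes from the example in the mail; the lookup plumbing and function names are invented.]

```python
def validate_export_groups(exports, validator_exports, resolve):
    """Check each export group against a validator registered for it.

    `validator_exports` holds the contents of a group such as
    "mebs.export_validators": the export *names* are the group names
    being validated, and the values resolve (via `resolve`) to callables
    that return an error string or None per entry.
    """
    problems = []
    for group, validator_ref in validator_exports.items():
        check = resolve(validator_ref)
        for name, value in exports.get(group, {}).items():
            error = check(name, value)
            if error:
                problems.append((group, name, error))
    return problems
```

The point of the pattern is that the build system needs no new metadata structure: registering a validator is just publishing one more export.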
Seriously: I can *sort of* see a reason to keep commands separate, but that's a "meh". I admittedly just grabbed it as a handy way to shoehorn that functionality into setuptools. But keeping extensions to the build system itself in a separate place? No, a thousand times no. This sort of extensibility is *precisely* what the darn things are *for*. If the build system doesn't use them, what's the point? > Mostly so you can validate them and display them differently, and > avoid reserving any part of the shared namespace. I find documentation > is also easier when the core use cases aren't wedged into the > extension mechanisms (even if they share implementation details under > the hood). How is the documentation easier in this case? Can you given an example? I personally don't see the problem here, in part because this hook mechanism can (and perhaps should) be described as a separate PEP. In fact, the more that you use extension facilities to implement features of this sort, the *easier* it is to perform this separation of docs, because you can write individual docs assuming the extension mechanism is understood. Already, PEP 426 is bigger than PEP 333 (!), and it might be wise to break it into sub-PEPs anyway. For example: 1. Main PEP, describes the format and core types' syntax, references the other PEPs 2. Dependency and versioning PEP 3. Exports PEP (including an API proposal) 4. Build system extensions PEP (covering the hooks discussed in this thread, referencing #3) This would make the overall thing a lot more comprehensible, and the audiences for #3 and #4 would be limited compared to #1 and #2. Build system developers and extenders would need to read all 4, but there would be a well-gradated learning curve from "here's the rough concept and JSON schema" to "now you are a Jedi, my Padawan". 
;-) In particular, this arrangement means that the language of each PEP can simply reference assumed-to-be-understood terms from prior PEPs, and sufficient space can be allotted to explaining the use cases and providing examples relevant to that layer of the system. (Also, it ought to make community review and consensus of the various PEPs easier, too.) IOW, another value of reusing the existing export mechanism is that if you are going to implement something related to PEP #4, you *need* to be sufficiently versed in the concepts of PEP #3 anwyay -- introducing a different data structure and API for the metadata is just duplication of entities and an extra thing to learn, instead of reusing the One Obvious Way to handle importable hooks. From pje at telecommunity.com Thu Aug 15 00:37:39 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 14 Aug 2013 18:37:39 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On Wed, Aug 14, 2013 at 2:14 PM, Paul Moore wrote: > .pyw files can be imported as modules, just like .py, Darn. Okay, so another extension *is* needed if you want to be able to make non-console apps runnable-but-not-importable. IIUC it should by '.pywa' rather than '.pya', though, because of the issue with only the first three characters of an extension working in PowerShell, which means it would be executed by the wrong PyLauncher under some circumstances. (Honestly, though, I'm not sure whether anybody cares about PATH/PATHEXT in relation to GUI apps; ISTM they'd mostly be invoked by shortcuts, and there's also a registry mechanism that's supposed to be used for these things nowadays, rather than PATH... but I think it only works for .exe's, so there we go again back to the land of .exe's just plain Work Better On Windows.) 
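[Editorial note: the PATH/PATHEXT behaviour discussed in the launcher-script thread amounts to roughly the following lookup. This is a simplified emulation for illustration, not what cmd.exe or PowerShell literally do; in particular it ignores case-insensitivity and the registry-based app-path mechanism mentioned above.]

```python
import os

def resolve_command(name, path_dirs, pathext):
    """Find `name` roughly the way Windows shells do: for each directory
    on the search path, try the bare name, then each PATHEXT suffix in
    order. Returns the first existing file, or None."""
    for directory in path_dirs:
        for suffix in [""] + list(pathext):
            candidate = os.path.join(directory, name + suffix)
            if os.path.isfile(candidate):
                return candidate
    return None
```

This is why adding a `.pya` entry to PATHEXT would let `sphinx-build`-style invocations find a `sphinx-build.pya` wrapper, and why callers that hard-code `.exe` would still miss it.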
From jess.austin at gmail.com Thu Aug 15 05:12:03 2013 From: jess.austin at gmail.com (Jess Austin) Date: Wed, 14 Aug 2013 22:12:03 -0500 Subject: [Distutils] help requested getting existing project "gv" into PyPI In-Reply-To: References: Message-ID: On Fri, Aug 9, 2013 at 12:46 PM, PJ Eby wrote: > update the download links whenever a new version is released. Note, > too, that unless there is a specific download for the Python binding > that uses setup.py to invoke its build process, a PyPI listing won't > make the binding pip-installable. > Yes it appears setup.py is not used. Thanks, and nevermind! -------------- next part -------------- An HTML attachment was scrubbed... URL: From tk47 at students.poly.edu Thu Aug 15 05:57:18 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 14 Aug 2013 23:57:18 -0400 Subject: [Distutils] Realistic PyPI, pip and TUF demo Message-ID: <520C519E.3090905@students.poly.edu> Hello everyone, We now have a demonstration of pip that securely and efficiently downloads with TUF any package from a PyPI mirror: https://github.com/theupdateframework/pip/wiki/pip-over-TUF We hope that you will try our demonstration with your favourite packages and tell us about any issue that you find. TUF does not yet work on Microsoft Windows and Apple OS X. This is because it depends for cryptography on a custom Python library (evpy) which binds with OpenSSL. We are planning to fix this by moving to the cross-platform Mozilla Network Security Services (NSS) library. We also welcome your thoughts on features and enhancements that you would like to see. Our next demo will show security flaws in package managers such as pip that do not use TUF. We will then see how pip with TUF addresses those security attacks. 
-The TUF team From vinay_sajip at yahoo.co.uk Thu Aug 15 07:38:29 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 15 Aug 2013 05:38:29 +0000 (UTC) Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 References: Message-ID: PJ Eby telecommunity.com> writes: > The build system *should* reserve at least one (subdivisible) > namespace for itself, and use that mechanism for its own extension, +1 - dog-food :-) Regards, Vinay Sajip From ncoghlan at gmail.com Thu Aug 15 15:21:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 15 Aug 2013 09:21:19 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On 15 Aug 2013 00:39, "Vinay Sajip" wrote: > > PJ Eby telecommunity.com> writes: > > > The build system *should* reserve at least one (subdivisible) > > namespace for itself, and use that mechanism for its own extension, > > +1 - dog-food :-) Sounds fair - let's use "pydist", since we want these definitions to be somewhat independent of their reference implementation in distlib :) Based on PJE's feedback, I'm also starting to think that the exports/extensions split is artificial and we should drop it. Instead, there should be a "validate" export hook that build tools can call to check for export validity, and the contents of an export group be permitted to be arbitrary JSON. So we would have "pydist.commands" and "pydist.export_hooks" as export groups, with "distlib" used as an example of how to define handlers for them. The installers are still going to have to be export_hooks aware, though, since the registered handlers are how the whole export system will be bootstrapped. Something else I'm wondering: should the metabuild system be separate, or is it just some more export hooks and you define the appropriate export group to say which build system to invoke? 
And rather than each installer having to define their own fallback, we'd just implement the appropriate hooks in setuptools to call setup.py. (Installers would still need an explicit fallback for legacy metadata). Cheers, Nick. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Thu Aug 15 16:38:04 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 15 Aug 2013 14:38:04 +0000 Subject: [Distutils] devpi-1.0: improved PyPI-server with upload/test/staging tool Message-ID: <20130815143804.GC21633@merlinux.eu> devpi-1.0: PyPI server and packaging/testing/release tool ========================================================= devpi-1.0 brings an improved PyPI caching and internal index server as well as new abilities for tox-testing and staging your Python release packages. For a (long) list of changes, see the below CHANGELOG. Documentation got revamped and extended and now contains three quickstart scenarios. First the Quickstart tutorial for pypi-mirroring on your laptop:: http://doc.devpi.net/1.0/quickstart-pypimirror.html And if you want to manage your releases or implement staging as an individual or within an organisation:: http://doc.devpi.net/1.0/quickstart-releaseprocess.html If you want to permanently install devpi-server and potentially access it from many clients:: http://doc.devpi.net/1.0/quickstart-server.html More documentation and the beginning of an exhaustive user manual:: http://doc.devpi.net/latest/ Note that devpi-1.0 is not data-compatible with the previous 0.9.4 release: You need to start with a fresh devpi-1.0 installation and upload your packages again. Future releases of devpi should support data migration more directly.
best and have fun, holger krekel Changelog 1.0 (-0.9.4) ---------------------------- devpi-server: - rename "--datadir" to "--serverdir" to better match the also picked up DEVPI_SERVERDIR environment variable. - fix a strange effect in that sometimes tools ask to receive a package url with a "#md5=..." arriving at the server side. We now strip that part out before trying to serve the file. - on startup don't create any initial indexes other than the "root/pypi" pypi caching mirror. - introduce ``--start``, ``--stop`` and ``--log`` commands for controlling a background devpi-server run. (these commands previously were implemented with the devpi-client and the "server" sub command) - fix issue27: provide full list of pypi names in root/pypi's simple view (and simple pages from inheriting indices) - default to "eventlet" server when creating deployment with --gendeploy - fix issue25: return 403 Forbidden when trying to delete the root user. - fix name mangling issue for pypi-cache: "project_name*" is now matched correctly when a lookup for "project-name" happens. - fix issue22: don't bypass CDN by default, rather provide an "--bypass-cdn" option to do it (in case you have cache-invalidation troubles) - fix issue20 and fix issue23: normalize index specs internally ("/root/dev" -> "root/dev") and check if base indices exist. - add Jenkins build job triggering for running the tests for a package through tox. - inheritance cleanup: inherited versions for a project are now shadowed and not shown anymore with getreleaselinks() or in +simple pages if the "basename" is exactly shadowed.
- fix issue16: enrich projectconfig json with a "+shadow" file which lists shadowed "versions" - initial wheel support: accept "whl" uploads and support caching of whl files from pypi.python.org - implemented internal push operation between devpi indexes - show "docs" link if documentation has been uploaded - pushing releases to pypi.python.org will now correctly report the filetype/pyversion in the metadata. - add setting of acl_upload for indexes. Only the owning user and acl_upload users may upload releases, files or documentation to an index. - add --passwd USER option for setting a user's password server-side - don't require email setting for creating users devpi-client: - removed ``server`` subcommand and options for controlling background devpi-server processes to become options of ``devpi-server`` itself. - fix issue14: lookup "python" from PATH for upload/packaging activities instead of using "sys.executable" which comes from the interpreter executing the "devpi" script. This allows aliasing "devpi" to come from a virtualenv which is separate from the one used to perform packaging. - fix issue35: "devpi index" cleanly errors out if no index is specified or in use. - remember authentication on a per-root basis and cleanup "devpi use" interactions. This makes switching between multiple devpi instances more seamless. - fix issue17: better reporting when "devpi use" does not operate on a valid URL - test result upload and access: - "devpi test" invokes "tox --result-json ..." and uploads the test result log to devpi-server. - "devpi list [-f] PKG" shows test result information. - add "uploadtrigger_jenkins" configuration option through "devpi index". - fix issue19: devpi use now memorizes --venv setting properly. Thanks Laurent. - fix issue16: show files from shadowed versions - initial wheel support: "devpi upload --format=bdist_wheel" now uploads a wheel format file to the index. (XXX "devpi install" will trigger pip commands with option "--use-wheels".)
- fix issue15: docs will now be built via "setup.py build_sphinx" using an internal build dir so that the upload succeeds if conf.py would otherwise specify a non-standard location. - implement and refine "devpi push" command. It now accepts two forms "user/name" for specifying an internal devpi index and "pypi:REPONAME" for specifying a repository which must be defined in a .pypirc file. - remove spurious pdb.set_trace() in devpi install command when no pip can be found. - show and allow setting "acl_upload" for uploading privileges - add longer descriptions to each sub command, shown with "devpi COMMAND -h". - removed pytestplugin support for now (pytest reporting directly to devpi-server) From holger at merlinux.eu Thu Aug 15 16:39:06 2013 From: holger at merlinux.eu (holger krekel) Date: Thu, 15 Aug 2013 14:39:06 +0000 Subject: [Distutils] tox-1.6: install_command, develop, py25 support, json-reporting ... Message-ID: <20130815143906.GD21633@merlinux.eu> tox-1.6: support for install_command, develop, json-reporting ============================================================= Welcome to a new release of tox, the virtualenv-based test automation manager. This release brings some new major features: - install_command: you can customize the command used for installing packages and dependencies. Thanks Carl Meyer. - usedevelop: you can use "develop" mode ("pip install -e") either by configuring it in your tox.ini or through the new "--develop" option. Thanks Monty Taylor. - python2.5: tox internally ships virtualenv-1.9.1 and can thus create virtualenvs and run tests against python2.5 even if you have a newer virtualenv version installed. While tox-1.6 should otherwise be compatible with tox-1.5, the new $HOME-isolation ($HOME is set to a temporary directory when installing packages) might trigger problems if your tests relied on $HOME configuration files -- which they shouldn't if you want repeatability. If that causes problems, please file an issue.
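For concreteness, the new options can be combined in a tox.ini along these lines -- an illustrative sketch only, with a hypothetical test dependency and command rather than anything taken from the announcement:

```ini
# Illustrative tox.ini sketch for the tox-1.6 features described above;
# deps and commands are hypothetical examples.
[tox]
envlist = py25,py27

[testenv]
# EXPERIMENTAL in 1.6: customize how deps/packages get installed.
# {opts} and {packages} are substituted by tox at install time.
install_command = pip install --pre {opts} {packages}
# New in 1.6: install the package-under-test via "pip install -e".
usedevelop = True
deps = pytest
commands = py.test
```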
Docs and more information at: http://tox.testrun.org/tox/latest/ have fun, holger 1.6 Changelog -------------- - fix issue35: add new EXPERIMENTAL "install_command" testenv-option to configure the installation command with options for dep/pkg install. Thanks Carl Meyer for the PR and docs. - fix issue91: python2.5 support by vendoring the virtualenv-1.9.1 script and forcing pip<1.4. Also the default [py25] environment modifies the default install_command (new config option) to use pip without the "--pre" option which was introduced with pip-1.4 and is now required if you want to install non-stable releases. (tox defaults to install with "--pre" everywhere). - during installation of dependencies HOME is now set to a pseudo location ({envtmpdir}/pseudo-home). If an index url was specified a .pydistutils.cfg file will be written with an index_url setting so that packages defining ``setup_requires`` dependencies will not silently use your HOME-directory settings or https://pypi.python.org. - fix issue1: empty setup files are properly detected, thanks Anthon van der Neut - remove toxbootstrap.py for now because it is broken. - fix issue109 and fix issue111: multiple "-e" options are now combined (previously the last one would win). Thanks Anthon van der Neut. - add --result-json option to write out detailed per-venv information into a json report file to be used by upstream tools. - add new config options ``usedevelop`` and ``skipsdist`` as well as a command line option ``--develop`` to install the package-under-test in develop mode. thanks Monty Taylor for the PR. - always unset PYTHONDONTWRITEBYTECODE because newer setuptools doesn't like it - if a HOMEDIR cannot be determined, use the toxinidir. - refactor interpreter information detection to live in new tox/interpreters.py file, tests in tests/test_interpreters.py.
From vinay_sajip at yahoo.co.uk Thu Aug 15 17:21:28 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 15 Aug 2013 15:21:28 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: PJ Eby telecommunity.com> writes: > used for these things nowadays, rather than PATH... but I think it > only works for .exe's, so there we go again back to the land of .exe's > just plain Work Better On Windows.) In that vein, I've updated distlib to install only a single .exe per script, where the script is appended to a stock launcher .exe as a zip with the script as a single __main__.py in it. I've not produced a new release, but the BitBucket repos for both distlib [1] and the launcher [2] are up to date. Note that the launcher is still not the PEP 397 launcher, but the simpler one which I developed when working on PEP 405 and whose .exes have been shipping with distlib. To try it out, you can download the latest distil.py from [3]. My smoke tests with both generated and pre-built scripts pass, but I'd be grateful if people could try it out and provide feedback. Wheel support should be up to date in terms of PEP 426 and the discussions about launchers: wheel builds should have no platform-specific files other than for binary extensions, and when installing from a wheel, wrapped scripts declared in the metadata should be installed. 
Regards, Vinay Sajip [1] https://bitbucket.org/pypa/distlib [2] https://bitbucket.org/vinay.sajip/simple_launcher [3] https://bitbucket.org/vinay.sajip/docs-distil/downloads/distil.py From pje at telecommunity.com Thu Aug 15 17:39:16 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 15 Aug 2013 11:39:16 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Thu, Aug 15, 2013 at 9:21 AM, Nick Coghlan wrote: > > On 15 Aug 2013 00:39, "Vinay Sajip" wrote: >> >> PJ Eby telecommunity.com> writes: >> >> > The build system *should* reserve at least one (subdivisible) >> > namespace for itself, and use that mechanism for its own extension, >> >> +1 - dog-food :-) > > Sounds fair - let's use "pydist", since we want these definitions to be > somewhat independent of their reference implementation in distlib :) I think that as part of the spec, we should either reserve multiple prefixes for Python/stdlib use, or have a single, always-reserved top-level prefix like 'py.' that can be subdivided in the future. Extensions are a honking great idea, so the stdlib will probably do more of them in the future. Likewise, future standards and informational PEPs will likely document specific extension protocols of general and specialized interest. (Notice, for example, that extensions could be used to publicize what database drivers are installed and available on a system.) > Based on PJE's feedback, I'm also starting to think that the > exports/extensions split is artificial and we should drop it. Instead, there > should be a "validate" export hook that build tools can call to check for > export validity, and the contents of an export group be permitted to be > arbitrary JSON. I think there is still something to be said for STASCTAP: simple things are simple, complex things are possible. (Also, flat is better than nested.) 
So I would suggest that an export can either be an import identifier string, *or* a JSON object with arbitrary contents. That would make it easier, I think, to implement both a full-featured replacement for the setuptools entry point API, and allow simple extensions to be simple. It means, too, that simple exports can be defined with a flatter syntax (a la setuptools' ini format) in tools that generate the JSON. Given how many use cases are already met today by providing import-based exports, ISTM that they are the 20% that provides 80% of the value; arbitrary JSON is the 80% that only provides 20%, and so should not be the entry point (no pun intended) for people dealing with extensions. Removing the extension/export split also raises a somewhat different question, which is what to *call* them. I'm sort of leaning towards "extensions" as the general category, with "exports" being extensions that consist of an importable object, and "JSON extensions" for ones that are a JSON mapping object. So the terminology would be:

Extension group - package-like names, subdivisible as a namespace, should have a prefix associated with a project that defines the semantics of the extension group; analogous to Eclipse's notion of an "extension point"

Extension name - arbitrary string, unique per distribution for a given group, but not required to be globally unique even for the group. Specific names or specific syntax for names may be specified by the creators of the group, and may optionally be validated.

Extension object - either an "export string" specifying an importable object, or a JSON object. If a string, it must be syntactically valid as an export; it is not, however, required to reference a module in the distribution that exports it; it *should* be in that distribution or one of its dependencies, however.

So, an extension is machine-usable metadata published by a distribution in order to be (optionally) consumed by other distributions.
It can be either static JSON metadata, or an importable object. The semantics of an extension are defined by its group, and other extensions can be used to validate those semantics. Any project that wants to be able to use plugins or extensions of some kind, can define its own groups, and publish extensions for validating them. Python itself will reserve and define a group namespace for extending the build and installation system, including a sub-namespace where the validators can be declared. > So we would have "pydist.commands" and "pydist.export_hooks" as export > groups, with "distlib" used as an example of how to define handlers for > them. Is 'commands' for scripts, or something else? Following "flat is better than nested", I would suggest not using arbitrary JSON for these when it's easy to define new dotted groups. (Keeping to such a style will make it easier for humans to define this stuff in the first place, before it's turned into JSON.) (Note, btw, that having more dots in a name does not necessarily equal "nested", whereas replacing those dots with nested JSON structures most definitely *is* "nested"!) Similarly, I'd just as soon see e.g. pydist.hooks.* subgroups, rather than a dedicated data structure. A 'pydist.validators' group would of course also be needed for syntax validation, with extension names in that group possibly allowing trailing '*' or '**' wildcards. (There will of course need to be a validation API, which is why I think that a separate PEP for the "extensions" system is probably going to be needed, followed by a PEP for the specific extensions used by the build system.) > Something else I'm wondering: should the metabuild system be separate, or is > it just some more export hooks and you define the appropriate export group > to say which build system to invoke? It's just extensions, IMO. What else *is* there? 
You *could* define a core metadata field that says, "this is the distribution I depend on for building", and then look for the right extension there. (Or you could define a builder name, and then look that up in an extension group, to do a sort of provides-requires approach.) But the actual build process can be purely extension-driven, ISTM. From vinay_sajip at yahoo.co.uk Thu Aug 15 18:06:53 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 15 Aug 2013 16:06:53 +0000 (UTC) Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 References: Message-ID: Nick Coghlan gmail.com> writes: > Sounds fair - let's use "pydist", since we want these definitions to be > somewhat independent of their reference implementation in distlib :) Seems reasonable. > Based on PJE's feedback, I'm also starting to think that the > exports/extensions split is artificial and we should drop it. Instead, > there should be a "validate" export hook that build tools can call to > check for export validity, and the contents of an export group be > permitted to be arbitrary JSON. I don't know that we should allow arbitrary JSON here: I would wait to see what it is we need, and keep it restricted for now until the more detailed understanding of those needs becomes more apparent. Arbitrary JSON is likely to be needed for *implementations* of things, but not necessarily for *interfaces* between things. The PEP 426 scope should be mainly focused on dependency resolution, other installer requirements and interactions between installed distributions (exports). > The installers are still going to have to be export_hooks aware, though, > since the registered handlers are how the whole export system will be > bootstrapped. Distil currently supports the preuninstall/postinstall hooks, and I expect to extend this to other types of hook. 
> Something else I'm wondering: should the metabuild system be separate, I think it should be separate, though of course there will be a role for exports. The JSON metadata needed for source packaging and building can be quite large (example at [1]), and IMO doesn't really belong with the PEP 426 metadata. Currently, the extended metadata used by distil for building contains the whole PEP 426 metadata as an "index-metadata" sub-dictionary. It's already a fairly generic build system - though simple, it can build e.g. C/C++/Fortran extensions, handle Cython, SWIG and so on, without using any of distutils. However, there's still lots of work to be done to generalise the interfaces between different parts of the system so that building can be plug and play - it's a bit opaque at the moment, but I expect that will improve. Regards, Vinay Sajip [1] http://red-dove.com/pypi/projects/A/Assimulo/package-2.2.json From vinay_sajip at yahoo.co.uk Thu Aug 15 18:36:34 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 15 Aug 2013 16:36:34 +0000 (UTC) Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 References: Message-ID: PJ Eby telecommunity.com> writes: > I think that as part of the spec, we should either reserve multiple > prefixes for Python/stdlib use, or have a single, always-reserved > top-level prefix like 'py.' that can be subdivided in the future. +1 There's quite a lot of stuff in your post that I haven't digested yet, but one thing confused me early on: > than nested.) So I would suggest that an export can either be an > import identifier string, *or* a JSON object with arbitrary contents. [snip] > Given how many use cases are already met today by providing > import-based exports, ISTM that they are the 20% that provides 80% of > the value; arbitrary JSON is the 80% that only provides 20%, and so > should not be the entry point (no pun intended) for people dealing > with extensions. 
The above two statements seem to be contradictory as to the value of arbitrary JSON. I think the metadata format is a communication tool between developers as much as anything else (though intended to be primarily consumed by software), so I think KISS and YAGNI should be our watch-words (in terms of what the PEP allows), until specific uses have been identified. > That would make it easier, I think, to implement both a full-featured > replacement for setuptools entry point API, and allow simple What do you feel is missing in terms of functionality? > It's just extensions, IMO. What else *is* there? You *could* define > a core metadata field that says, "this is the distribution I depend on I think the thing here is to identify what the components in the build system would be (as an abstraction), how they would interact etc. If we look at how the build side of distutils works, it's all pretty much hardcoded once you specify the inputs, without doing a lot of work to subclass, monkey-patch etc. all over the place. It's unnecessarily hard to do even simple stuff like "use this set of compilation flags for only this specific set of sources in my extension". In any realistic build pipeline you'd need to be able to insert components into the pipeline, sometimes to augment the work of other components, sometimes to replace it etc. and ISTM we don't really know how any of that would work (at a meta level, I mean). Regards, Vinay Sajip From pje at telecommunity.com Thu Aug 15 19:23:23 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 15 Aug 2013 13:23:23 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Thu, Aug 15, 2013 at 12:36 PM, Vinay Sajip wrote: > PJ Eby telecommunity.com> writes: >> than nested.) So I would suggest that an export can either be an >> import identifier string, *or* a JSON object with arbitrary contents. 
> [snip] >> Given how many use cases are already met today by providing >> import-based exports, ISTM that they are the 20% that provides 80% of >> the value; arbitrary JSON is the 80% that only provides 20%, and so >> should not be the entry point (no pun intended) for people dealing >> with extensions. > > The above two statements seem to be contradictory as to the value of > arbitrary JSON. I don't see a contradiction. I said that the majority of use cases (the figurative 80% of value) can be met with just a string (20% of complexity), and that a minority of use cases (20% of value) would be met by JSON (80% of complexity). This is consistent with STASCTAP, i.e., simple things are simple, complex things are possible. To be clear: I am *against* arbitrary JSON as the core protocol; it should be only for "complex things are possible" and only used when absolutely required. I think we are in agreement on this. > I think the metadata format is a communication tool between > developers as much as anything else (though intended to be primarily > consumed by software), so I think KISS and YAGNI should be our watch-words > (in terms of what the PEP allows), until specific uses have been identified. +100. >> That would make it easier, I think, to implement both a full-featured >> replacement for setuptools entry point API, and allow simple > > What do you feel is missing in terms of functionality? What I was saying is that starting from a base of arbitrary JSON (as Nick seemed to be proposing) would make it *harder* to provide the simple functionality. Not that adding JSON is needed to support setuptools functionality. Setuptools does just fine with plain export strings! I don't want to lose that simplicity; the "export string or JSON" suggestion was a compromise counterproposal to Nick's "let's just use arbitrary JSON structures". > I think the thing here is to identify what the components in the build > system would be (as an abstraction), how they would interact etc. 
If we look > at how the build side of distutils works, it's all pretty much hardcoded > once you specify the inputs, without doing a lot of work to subclass, > monkey-patch etc. all over the place. It's unnecessarily hard to do even > simple stuff like "use this set of compilation flags for only this specific > set of sources in my extension". In any realistic build pipeline you'd need > to be able to insert components into the pipeline, sometimes to augment the > work of other components, sometimes to replace it etc. and ISTM we don't > really know how any of that would work (at a meta level, I mean). I was assuming that we leave build tools to build tool developers. If somebody wants to create a pipelined or meta-tool system, then projects that want to use that can just say, "I use the foobar metabuild system". For installer-tool purposes, it suffices to say what system will be responsible, and have a standard for how to invoke build systems and get wheels or the raw materials from which the wheel should be created. *How* this build system gets the raw materials and does the build is its own business. It might use extensions, or it might be setup.py based, or Makefile based, or who knows whatever else. That's none of the metadata PEP's business, really. Just how to invoke the builder and get the outputs. From ncoghlan at gmail.com Fri Aug 16 01:16:19 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 15 Aug 2013 18:16:19 -0500 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On 15 Aug 2013 12:27, "PJ Eby" wrote: > > On Thu, Aug 15, 2013 at 12:36 PM, Vinay Sajip wrote: > > PJ Eby telecommunity.com> writes: > >> than nested.) So I would suggest that an export can either be an > >> import identifier string, *or* a JSON object with arbitrary contents. 
> > [snip] > >> Given how many use cases are already met today by providing > >> import-based exports, ISTM that they are the 20% that provides 80% of > >> the value; arbitrary JSON is the 80% that only provides 20%, and so > >> should not be the entry point (no pun intended) for people dealing > >> with extensions. > > > > The above two statements seem to be contradictory as to the value of > > arbitrary JSON. > > I don't see a contradiction. I said that the majority of use cases > (the figurative 80% of value) can be met with just a string (20% of > complexity), and that a minority of use cases (20% of value) would be > met by JSON (80% of complexity). > > This is consistent with STASCTAP, i.e., simple things are simple, > complex things are possible. > > To be clear: I am *against* arbitrary JSON as the core protocol; it > should be only for "complex things are possible" and only used when > absolutely required. I think we are in agreement on this. But if we're only going to validate it via hooks, why not have the "mapping of names to export specifiers" just be a recommended convention for extensions rather than a separate exports field? pydist.install_hooks, pydist.console_scripts, pydist.gui_scripts would then all be conventional export groups. pydist.prebuilt_commands would be non-conventional, since the values would be relative file paths rather than export specifiers. As an extension, pydist.extension_hooks would also be non-conventional, since it would define a new namespace, where extension names map to an export group of hooks. A separate export group per hook would be utterly unreadable. That's why I'm still inclined to make this one a separate top level field: *installers* have to know how to bootstrap the hook system, and I like the symmetry of separate, relatively flat, publication and subscription interfaces. Cheers, Nick. 
> > > > I think the metadata format is a communication tool between > > developers as much as anything else (though intended to be primarily > > consumed by software), so I think KISS and YAGNI should be our watch-words > > (in terms of what the PEP allows), until specific uses have been identified. > > +100. > > > >> That would make it easier, I think, to implement both a full-featured > >> replacement for setuptools entry point API, and allow simple > > > > What do you feel is missing in terms of functionality? > > What I was saying is that starting from a base of arbitrary JSON (as > Nick seemed to be proposing) would make it *harder* to provide the > simple functionality. Not that adding JSON is needed to support > setuptools functionality. Setuptools does just fine with plain export > strings! > > I don't want to lose that simplicity; the "export string or JSON" > suggestion was a compromise counterproposal to Nick's "let's just use > arbitrary JSON structures". > > > > I think the thing here is to identify what the components in the build > > system would be (as an abstraction), how they would interact etc. If we look > > at how the build side of distutils works, it's all pretty much hardcoded > > once you specify the inputs, without doing a lot of work to subclass, > > monkey-patch etc. all over the place. It's unnecessarily hard to do even > > simple stuff like "use this set of compilation flags for only this specific > > set of sources in my extension". In any realistic build pipeline you'd need > > to be able to insert components into the pipeline, sometimes to augment the > > work of other components, sometimes to replace it etc. and ISTM we don't > > really know how any of that would work (at a meta level, I mean). > > I was assuming that we leave build tools to build tool developers. If > somebody wants to create a pipelined or meta-tool system, then > projects that want to use that can just say, "I use the foobar > metabuild system". 
For installer-tool purposes, it suffices to say > what system will be responsible, and have a standard for how to invoke > build systems and get wheels or the raw materials from which the wheel > should be created. > > *How* this build system gets the raw materials and does the build is > its own business. It might use extensions, or it might be setup.py > based, or Makefile based, or who knows whatever else. That's none of > the metadata PEP's business, really. Just how to invoke the builder > and get the outputs. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Fri Aug 16 05:31:24 2013 From: pje at telecommunity.com (PJ Eby) Date: Thu, 15 Aug 2013 23:31:24 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Thu, Aug 15, 2013 at 7:16 PM, Nick Coghlan wrote: > But if we're only going to validate it via hooks, why not have the "mapping > of names to export specifiers" just be a recommended convention for > extensions rather than a separate exports field? I guess I didn't explain it very well, because that's roughly what I meant: a single namespace for all "extensions", structured as a mapping from group names to submappings whose keys can be arbitrary values, but whose values must then be either a string or a JSON object, and if it's a string, then it should be an export specifier. To put it another way, I'm saying something slightly stronger than a recommended convention: making it a requirement that strings at that level be import specifiers, and only allowing mappings as an alternative. In that way, there is a minimum level of validation possible for the majority of extensions *by default*, without needing an explicit validator declared. 
To put it another way, it ensures that there's a kind of lingua franca or lowest common denominator that lets somebody understand what's going on in most extensions, without having to understand a new *structural* schema for every extension group. (Just a *syntactical* one.) > As an extension, pydist.extension_hooks would also be non-conventional, > since it would define a new namespace, where extension names map to an > export group of hooks. A separate export group per hook would be utterly > unreadable. If you already know what keys go in an entry point group, there's a good chance you're doing it wrong. Normally, the whole point of the group is that the keys are defined by the publisher, not the consumer. The normal pattern is that the consumer names the group (representing a hook), and the publishers name the extensions (representing implementations for the hook). I don't see how it makes it unreadable, but then I think in terms of the ini syntax or setup.py syntax for defining entry points, which is all very flat. IMO, creating a second-level data structure for this doesn't make a whole lot of sense, because now you're nesting something. I'm not even clear why you need separate registrations for the different hooks anyway; ISTM a single hook with an event parameter is sufficient. Even if it weren't, I'd be inclined to just make the information part of the key in that case, e.g.:

[pydist.extension_listeners]
preinstall:foo.bar = some.module:hook

This sort of thing is very flat and easy to express in a simple configuration syntax, which we really shouldn't lose sight of. It's just as easy to write a syntax validator as a structure validator, but if you start with structures then you have to back-figure a syntax. I'd very much like it to be easy to define a simple flat syntax that's usable for 90%+ of extension use cases... which means I'd rather not see the PEP make up its own data structures when they're not actually needed.
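PJE's flat key/export-string pattern is as easy to validate as it is to resolve. The following is an illustrative sketch only -- not distlib's or setuptools' actual API -- and the `json:dumps` target merely stands in for a real hook function:

```python
# Illustrative sketch only -- not distlib's or setuptools' real API.
# Parses the flat "export string" form ("dotted.module:dotted.object")
# and resolves it lazily, importing only when the hook is needed.
import importlib
import re

EXPORT_RE = re.compile(
    r"^(?P<module>[A-Za-z_]\w*(?:\.[A-Za-z_]\w*)*)"
    r"(?::(?P<object>[A-Za-z_]\w*(?:\.[A-Za-z_]\w*)*))?$"
)

def parse_export(spec):
    """Syntax-validate a specifier and split it into (module, object path)."""
    m = EXPORT_RE.match(spec)
    if m is None:
        raise ValueError("invalid export specifier: %r" % spec)
    return m.group("module"), m.group("object")

def resolve_export(spec):
    """Import the module and walk the dotted attribute path, if any."""
    module_name, obj_path = parse_export(spec)
    obj = importlib.import_module(module_name)
    if obj_path:
        for attr in obj_path.split("."):
            obj = getattr(obj, attr)
    return obj

# A consumer-defined group as a flat mapping, with the event name folded
# into the key as in the [pydist.extension_listeners] example; the
# "json:dumps" target is just a stand-in for a real hook function.
listeners = {"preinstall:foo.bar": "json:dumps"}
hook = resolve_export(listeners["preinstall:foo.bar"])
```

Note that `parse_export` gives the "minimum level of validation by default" without importing anything; the import cost is only paid inside `resolve_export` when a hook actually fires.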
Don't get me wrong, I'm okay with allowing JSON structures for extensions in place of export strings, but I don't think there's been a single use case proposed as yet that actually *works better* as a data structure. If you need to do something like have a bunch of i18n/l10n resource definitions with locales and subpaths and stuff like that... awesome. That's something that might make a lot of sense for JSON. But when the ultimate point of the data structure is to define an importable entry point, and the information needed to identify it can be put into a relatively short human readable string, ISTM that the One Obvious Way to do it is something like a setuptools entry point -- i.e. a basic key-value pair in a consumer-defined namespace, mapping a semantically-valued name to an importable object. And *most* use cases for extensions, that I'm aware of, fit that bill. You have to be doing something pretty complex to need anything more complicated, *and* there has to be a possibility that you're going to avoid importing the related code or putting it on sys.path, or else you don't actually *save* anything by putting it in the metadata. IOW, if you're going to have to import it anyway, there is no point to putting it in the metadata; you might as well import it. The only things that make sense to put in metadata for these things are data that tells you whether or not you need to import it. Generally, this means keys, not values, in other words. (Which is why l10n and scripts make sense to not be entry points: at the time you use them, you're not importing 'em.) >That's why I'm still inclined to make this one a separate top > level field: *installers* have to know how to bootstrap the hook system, and > I like the symmetry of separate, relatively flat, publication and > subscription interfaces.
I don't really see the value of a separate top-level field, but then that's because I don't see anything at all special about these hooks that demands something more sophisticated than common entry points. AFAICT it's a YAGNI. From vinay_sajip at yahoo.co.uk Fri Aug 16 12:21:34 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 16 Aug 2013 10:21:34 +0000 (UTC) Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 References: Message-ID: PJ Eby telecommunity.com> writes: > I guess I didn't explain it very well, because that's roughly what I > meant: a single namespace for all "extensions", structured as a > mapping from group names to submappings whose keys can be arbitrary > values, but whose values must then be either a string or a JSON > object, and if it's a string, then it should be an export specifier. Why should the keys be completely arbitrary? I can't see a reason for this; the current constraint of the "prefixed name" seems sufficient. What would relaxing this constraint make possible which otherwise isn't? On the values: an export specifier is just a more human-friendly version of a dict with module/content/extra keys. While of course the use of importables in this area is well established, what specific use cases are we enabling by allowing arbitrary JSON? It certainly would clutter the metadata and render it less human-readable, and the only thing it provides is a dict which could be expressed in an importable form "mypackage.mymodule:path.to.a.dict". Of course this incurs an import penalty for access to the data, but if the publisher is concerned about this, they can minimise that overhead by arranging their package hierarchy appropriately. What kinds of use cases would require that the data be fully expressed in the metadata to avoid import overhead?
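The equivalence claimed above (an export specifier string versus a dict with module/content/extra keys) can be made concrete. This is a sketch only: the `[extra1,extra2]` suffix handling is an assumption modeled on the setuptools entry-point syntax, and the function name is made up.

```python
# Sketch: an export specifier string carries the same information as a
# dict with module/content/extra keys. The extras-suffix parsing is an
# assumption modeled on setuptools entry-point syntax, not a spec.

def specifier_to_dict(spec):
    """'pkg.mod:dotted.path [x,y]' -> {'module': ..., 'content': ..., 'extra': [...]}"""
    head, _, extras = spec.partition("[")
    module, _, content = head.strip().partition(":")
    extra = [e.strip() for e in extras.rstrip("]").split(",") if e.strip()]
    return {"module": module, "content": content or None, "extra": extra}

print(specifier_to_dict("mypackage.mymodule:path.to.a.dict"))
```

Either form round-trips to the other, which is why the argument above comes down to readability and import cost rather than expressive power.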
> To put it another way, I'm saying something slightly stronger than a > recommended convention: making it a requirement that strings at that > level be import specifiers, and only allowing mappings as an > alternative. I'd like to understand the use cases which allowing mappings here would facilitate, that need to avoid importing to access the mapping. > If you already know what keys go in an entry point group, there's a > good chance you're doing it wrong. Normally, the whole point of the > group is that the keys are defined by the publisher, not the consumer. > The normal pattern is that the consumer names the group (representing > a hook), and the publishers name the extensions (representing > implementations for the hook). But the general form of the keys needs to be agreed to some extent between the consumer and publisher. Otherwise, the consumer doesn't know how to interpret the values of those keys. Of course a consumer can get all of the key/value entries exported by a dist or all dists for a specific group, but then what do they do with it if they don't know what the individual entries mean? > which means I'd rather not see the PEP make up its own data structures > when they're not actually needed. +1 - identification of the needs should come before specific proposals to address them. > Don't get me wrong, I'm okay with allowing JSON structures for > extensions in place of export strings, but I don't think there's been > a single use case proposed as yet that actually *works better* as a > data structure. The forms are equivalent, modulo an import penalty which only occurs for actual use and not for just scanning to see what's available. > Way to do it is something like a setuptools entry point -- i.e. a > basic key-value pair in a consumer-defined namespace, mapping a > semantically-valued name to an importable object. > > And *most* use cases for extensions, that I'm aware of, fit that bill. 
> You have to be doing something pretty complex to need anything more > complicated, *and* there has to be a possibility that you're going to > avoid importing the related code or putting in on sys.path, or else > you don't actually *save* anything by putting it in the metadata. > IOW, if you're going to have to import it anyway, there is no point to > putting it in the metadata; you might as well import it. Agreed, so I would say that we need to identify these use cases before saying that arbitrary mappings should be allowed as values. Regards, Vinay Sajip From ncoghlan at gmail.com Fri Aug 16 14:04:53 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 16 Aug 2013 07:04:53 -0500 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: Concrete extension use cases I have in mind that don't fit in the exports/entry-point data model: - the mapping of prebuilt executable names to wheel contents - platform specific external dependencies and other hints for conversion to platform specific formats (e.g. Windows GUIDs) - metadata for build purposes (e.g. the working directory to use when building from a VCS checkout, so you can have multiple projects in the same repo) - project specific metadata (e.g. who actually did the release) - security metadata (e.g. security contact address and email GPG fingerprint) This is why extensions/exports were originally separate, and may still remain that way. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... 
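Nick's list of use cases that don't fit the entry-point data model can be pictured as a structured extensions mapping. This is purely illustrative: every key name below is invented for the sketch, and none of it is defined by PEP 426.

```python
import json

# Purely illustrative: a metadata "extensions" mapping covering the
# kinds of entries Nick lists. All key names are invented; PEP 426
# does not define them, and the values are placeholders.
extensions = {
    "exe_wrappers": {"mytool": "scripts/mytool-script.py"},
    "windows_hints": {"upgrade_guid": "00000000-0000-0000-0000-000000000000"},
    "build": {"vcs_subdirectory": "projectA"},
    "release": {"released_by": "Jane Doe"},
    "security": {"contact": "security@example.com"},
}

# The point of allowing structured values rather than only export
# strings: such data survives a JSON round-trip unchanged and never
# requires importing any project code to read.
assert json.loads(json.dumps(extensions)) == extensions
```

None of these entries names an importable object, which is why they sit awkwardly in an exports/entry-point model.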
URL: From pje at telecommunity.com Fri Aug 16 16:25:45 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 16 Aug 2013 10:25:45 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Fri, Aug 16, 2013 at 6:21 AM, Vinay Sajip wrote: > PJ Eby telecommunity.com> writes: > >> I guess I didn't explain it very well, because that's roughly what I >> meant: a single namespace for all "extensions", structured as a >> mapping from group names to submappings whose keys can be arbitrary >> values, but whose values must then be either a string or a JSON >> object, and if it's a string, then it should be an export specifier. > > Why should the keys be completely arbitrary? By "arbitrary" I mean only that the PEP doesn't place syntactical restrictions on them. > I can't see a reason for this; > the current constraint of the "prefixed name" seems sufficient. What would > relaxing this constraint make possible which otherwise isn't? I think you're thinking I'm describing a single level namespace; I'm referring to something like this: {group1: {anykey: export_or_mapping}} "anykey" is not validated by the spec, only by registered validators for "group1". Of course it has to have some meaning that is interpretable by consumers of group1. The point is that the *spec* only defines the syntax of group names and export strings, and it's left to specific groups to define the syntax/semantics of the keys. > On the values: an export specifier is just a more human-friendly version of > a dict with module/content/extra keys. While of course the uses of > importables in this area is well established, what specific use cases are we > enabling by allowing arbitrary JSON? 
It certainly would clutter the metadata > and render it less human-readable, and the only thing it provides is a dict > which could be expressed in an importable form I gave one example already: i18n/l10n information that's about files contained in the distribution's data. It's quite possible to have distributions without any code, only data of that kind. A requirement to create code, just to specify the data seems rather pointless. In Nick's reply, he's listed other use cases. The main question is, should exports and extensions be treated separately? Nick originally proposed merging the concepts and using arbitrary JSON. My counterproposal was to say, let's distinguish exports and extensions by restricting the spec to something which spells out the distinction. After this further discussion, I think that the use cases we're discussing really boil back down to exports vs. metadata extensions, and that maybe we should stick to them being separate. From pje at telecommunity.com Fri Aug 16 16:56:16 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 16 Aug 2013 10:56:16 -0400 Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 In-Reply-To: References: Message-ID: On Fri, Aug 16, 2013 at 8:04 AM, Nick Coghlan wrote: > Concrete extension use cases I have in mind that don't fit in the > exports/entry-point data model: > > - the mapping of prebuilt executable names to wheel contents > - platform specific external dependencies and other hints for conversion to > platform specific formats (e.g. Windows GUIDs) > - metadata for build purposes (e.g. the working directory to use when > building from a VCS checkout, so you can have multiple projects in the same > repo) > - project specific metadata (e.g. who actually did the release) > - security metadata (e.g. security contact address and email GPG > fingerprint) > > This is why extensions/exports were originally separate, and may still > remain that way. 
But exports should still be used for hooks defined by the spec; they are the Obvious Way to specify importable hooks, and *not* dogfooding there is still a bad idea. (To be clear, I was never exactly *enthused* about the idea of merging extensions and exports, just *unenthused* about the idea of the spec using extensions or custom metadata to do things that could be perfectly well expressed as exports from a reserved namespace.) I'm kind of toying with the idea that the core metadata itself could be carved into extension namespaces, with the core itself being just another extension, rather than nesting extensions and exports inside the core, so that the entire spec is just a relatively-flat collection of namespaces, in more human-digestible groups. There are some conceptual and practical advantages to that, at least in principle, but until I've played around with actually expressing some concepts from the PEP that way, I won't know whether it would actually pay off in practice. From vinay_sajip at yahoo.co.uk Fri Aug 16 17:51:09 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 16 Aug 2013 15:51:09 +0000 (UTC) Subject: [Distutils] Changing the "install hooks" mechanism for PEP 426 References: Message-ID: PJ Eby telecommunity.com> writes: > I think you're thinking I'm describing a single level namespace; I'm > referring to something like this: > > {group1: {anykey: export_or_mapping}} I got that :-) > "anykey" is not validated by the spec, only by registered validators > for "group1". Of course it has to have some meaning that is > interpretable by consumers of group1. The point is that the *spec* > only defines the syntax of group names and export strings, and it's > left to specific groups to define the syntax/semantics of the keys. Ok, I get what you mean now. But an appropriate validator would be one defined by the definer of the group, right? That code may not be present on the system.
Any validators registered by third parties could disagree; what then? It may be better to say that a consumer either asks for an entry against a specific key which it understands, or iterates over all keys and ignores those whose meaning it doesn't recognise. > I gave one example already: i18n/l10n information that's about files > contained in the distribution's data. It's quite possible to have > distributions without any code, only data of that kind. A requirement > to create code, just to specify the data seems rather pointless. In > Nick's reply, he's listed other use cases. Right, but your case here and the cases Nick mentions belong, I think, outside what I understand the scope of PEP 426 to be. That is to say, metadata which describes elements of an installed distribution and dependency information used when installing or uninstalling it. I don't see any reason why we can't have an "extensions" subdict which is free-format in the PEP 426 dict for holding other information not described further in the spec, but that's just a grab-bag for ancillary data. I've probably been thinking at cross purposes when discussing the term "extension". It's a somewhat overloaded term, what with C extensions and all :-) > After this further discussion, I think that the use cases we're > discussing really boil back down to exports vs. metadata extensions, > and that maybe we should stick to them being separate. I'm OK with that, but there is a lot of packaging metadata which properly lives outside PEP 426, and which one wouldn't want to see ending up in the metadata extensions just because they're there. For example, the information conventionally passed in setuptools.setup(package_data=...), which is often used to specify i18n/l10n data.
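The validation scheme being debated here (strings must look like export specifiers by default, with optional group-specific validators that may or may not be installed) can be sketched directly. The registry, the regex, and all names below are assumptions for illustration, not anything PEP 426 defines.

```python
import re

# Sketch of the discussed scheme: by default, string values must look
# like export specifiers and dict values pass through; a group may
# register a stricter validator, but validation still works (at the
# default level) when no group validator is present on the system.
# All names here are invented for the sketch.

EXPORT_RE = re.compile(r"^[\w.]+:[\w.]+$")
validators = {}  # group name -> callable(mapping) -> bool

def validate_group(group, mapping):
    if group in validators:  # group-specific validator, if installed
        return validators[group](mapping)
    # Default rule: strings must be export specifiers; mappings are
    # opaque to the default validator.
    return all(isinstance(v, dict) or bool(EXPORT_RE.match(v))
               for v in mapping.values())

assert validate_group("some.group", {"name": "pkg.mod:obj"})
assert not validate_group("some.group", {"name": "not an export"})
```

This captures both positions: a minimum level of validation exists by default, and a missing group validator degrades gracefully instead of failing.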
Regards, Vinay Sajip From ncoghlan at gmail.com Sat Aug 17 04:31:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 16 Aug 2013 21:31:45 -0500 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: Heh, this got a "Usenet nod" from me. I wanted to mention this in a talk tomorrow, so I posted the updated draft in the PEPs repo. Hopefully the February timeframe is generous enough for Christian to either talk to the PSF about a cert for f.pypi.python.org, or else migrate to a different domain name. I think the exact date is the main open question now - I believe the PEP makes a solid case for why the status quo can't continue indefinitely. Cheers, Nick. From donald at stufft.io Sat Aug 17 04:45:47 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 16 Aug 2013 22:45:47 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: On Aug 16, 2013, at 10:31 PM, Nick Coghlan wrote: > Heh, this got a "Usenet nod" from me. I wanted to mention this in a > talk tomorrow, so I posted the updated draft in the PEPs repo. > > Hopefully the February timeframe is generous enough for Christian to > either talk to the PSF about a cert for f.pypi.python.org, or else > migrate to a different domain name. I think the exact date is the main > open question now - I believe the PEP makes a solid case for why the > status quo can't continue indefinitely. > > Cheers, > Nick. I forgot that still needed updated, thanks! I know one person has said to me privately they plan on reviewing it still, but I haven't seen anything from Christian. 
Other than that I've not seen a lot of response so either people haven't noticed it or they were usenet nodding as well :D ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Aug 17 04:46:31 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 16 Aug 2013 22:46:31 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: <7FEEA19E-7291-4722-9212-64B6A7DFA0F4@stufft.io> Oh FWIW pip got a CVE assigned earlier today for --use-mirrors which is effectively an implementation of what this PEP is removing. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ct at gocept.com Sat Aug 17 07:55:37 2013 From: ct at gocept.com (Christian Theune) Date: Sat, 17 Aug 2013 07:55:37 +0200 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: On 17. Aug 2013, at 4:45 AM, Donald Stufft wrote: > > On Aug 16, 2013, at 10:31 PM, Nick Coghlan wrote: > >> Heh, this got a "Usenet nod" from me. I wanted to mention this in a >> talk tomorrow, so I posted the updated draft in the PEPs repo. >> >> Hopefully the February timeframe is generous enough for Christian to >> either talk to the PSF about a cert for f.pypi.python.org, or else >> migrate to a different domain name.
I think the exact date is the main >> open question now - I believe the PEP makes a solid case for why the >> status quo can't continue indefinitely. >> >> Cheers, >> Nick. > > I forgot that still needed updated, thanks! > > I know one person has said to me privately they plan on reviewing it still, > but I haven't seen anything from Christian. > > Other then that I've not seen a lot of response so either people haven't > noticed it or they were usenet nodding as well :D Sorry for the stealth mode. The long thread has been sitting in my Inbox while the last week was very busy and we're hosting a sprint at our offices right now. I hope I will get around to reading and responding early next week. Christian -- Christian Theune · ct at gocept.com gocept gmbh & co. kg · Forsterstraße 29 · 06112 Halle (Saale) · Germany http://gocept.com · Tel +49 345 1229889-7 Python, Pyramid, Plone, Zope · consulting, development, hosting, operations -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From samuel.ferencik at barclays.com Fri Aug 16 11:18:05 2013 From: samuel.ferencik at barclays.com (samuel.ferencik at barclays.com) Date: Fri, 16 Aug 2013 10:18:05 +0100 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows Message-ID: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Hi, It seems distutils.util.get_platform() semantically differs on Windows and Linux. Windows: the return value is derived from the architecture of the *interpreter*, hence for 32-bit Python running on 64-bit Windows get_platform() = 'win32' (32-bit). Linux: the return value is derived from the architecture of the *OS*, hence for 32-bit Python running on 64-bit Linux get_platform() = 'linux-x86_64' (64-bit). Is this intentional?
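The asymmetry described above comes down to two different questions being asked on the two platforms. A small check, not distutils' implementation, that asks both questions directly:

```python
import platform
import struct
import sysconfig

# Not distutils' implementation -- just the two questions the report
# distinguishes, asked directly. On a 32-bit Python running on a 64-bit
# OS, the first answer is 32 (the interpreter), while the second reports
# the OS machine (e.g. 'x86_64' or 'AMD64'): exactly the asymmetry that
# get_platform() exposes differently on Windows and Linux.
interpreter_bits = struct.calcsize("P") * 8  # pointer size of *this* build
os_machine = platform.machine()              # what the *OS* reports
plat = sysconfig.get_platform()              # the distutils-style tag

print(interpreter_bits, os_machine, plat)
```

Running this under a 32-bit interpreter on 64-bit Linux reproduces the mismatch: `interpreter_bits` is 32 while the platform tag still contains `x86_64`.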
My context (where this hit me): I was trying to install the 32-bit version of the Perforce API (compiled module) on 64-bit Windows and on 64-bit Linux. My command-line was python3.3-32 setup.py install --root FOO --install-platlib=lib.$PLAT (note the '-32' and the '$PLAT') On Windows, this installed the 32-bit version of the API into "FOO\lib.win32". On Linux, this installed the 64-bit version of the API into "FOO/lib.linux-x86_64". I'm not sure which is more correct behavior but the asymmetry seems suspicious. Sam From patelnaitik30 at gmail.com Sat Aug 17 07:09:23 2013 From: patelnaitik30 at gmail.com (NAITIK PATEL) Date: Sat, 17 Aug 2013 10:39:23 +0530 Subject: [Distutils] python version problem during install distutils Message-ID: Hi I want to install the distutils module on Python 2.5; for that I downloaded Distutils-1.0.2. But during the installation it gives an error.
File "setup.py", line 30, in packages = ['distutils', 'distutils.command'], File "/tmp/Distutils-1.0.2/distutils/core.py", line 101, in setup _setup_distribution = dist = klass(attrs) File "/tmp/Distutils-1.0.2/distutils/dist.py", line 130, in __init__ setattr(self, method_name, getattr(self.metadata, method_name)) AttributeError: DistributionMetadata instance has no attribute 'get___doc__' please help me for that Regards, Naitik Patel -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 19 13:31:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 12:31:46 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? Message-ID: While debate on PEP 439 has died down, I've been thinking about what it actually means to end users for Python to bundle pip. If we have an explicit bootstrapping command like get_pip, how much practical benefit is there for that to be bundled vs having a well-known location for such a script (which is more or less what we have now)? Also, there is the whole question of how much we want to "lock down" what pip currently provides, as official functionality. (Particularly given the renewed energy around projects like distlib/distil, etc). I'd like to propose that as a first step, we agree and document a *user interface* that will be officially supported going forward. It would be based on (a subset of) the pip interface, so that a "bundle pip" strategy is still the immediate way forward, but by standardising the interface rather than the implementation, we retain the option to change things behind the scenes if, for example, distil suddenly takes over the world. Users get the benefit of a properly sanctioned "one true way" to manage packages. And we reserve enough flexibility that we don't accidentally accept constraints that are stronger than we'd like and stifle innovation. To be specific, what I propose is that we agree on the following: 1. 
Python installations provide a bootstrap command (tentatively "python -m get_pip", I'm not sure this needs to be directly executable, the -m interface is probably fine). This command may be a no-op if for example we opt for full bundling (and it will always be a no-op after the first run). 2. Once bootstrapped, Python will provide a "pip" command (or pip3?) accessible whenever the "python" command is available (worded this way because the Windows installer doesn't put "python" on PATH by default). 3. A specific subset of the current pip interface is provided (to be documented). I'd suggest we need a minimum of pip install, pip list, pip uninstall, pip install -U (for upgrades). I don't know what proportion of the rest of the pip API (various options, requirement files, environment variables and ini files) we want to lock down - I suggest we start minimal and add items as needed. 4. We may want to add a separate note that "python -m pip" will do the same as the "pip" command, and may be needed to self-upgrade the pip command ("python -m pip -U pip"). We can note for 3.4 that the pip command will be implemented using the current pip project, and so all of the other features of pip will be available, but with the proviso that any interfaces not explicitly mentioned in the standard documentation may be changed or removed as pip changes, and are not protected by Python's compatibility rules. That gives users a reasonable level of stability, while still allowing us the room to innovate. Does this sound reasonable? It may be obvious to everyone but me that this is what we're expecting - or I may even be proposing *more* than people expect. But I think something along the lines of an "interface over implementation" definition like this would be reasonable. If this is a useful proposal and no-one else is already working on something similar, I'm willing to write this up as a PEP. Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
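Point 1 of the proposal above, a bootstrap that "may be a no-op" and "will always be a no-op after the first run", amounts to an idempotence check. A hypothetical sketch, assuming an invented `ensure_pip` helper; this is not the real get_pip interface:

```python
import importlib.util

# Hypothetical sketch of proposal point 1: a bootstrap command that is
# a no-op once pip is importable. 'ensure_pip', its return values, and
# the injectable 'bootstrap' callable are invented for illustration;
# they are not a real get_pip API.
def ensure_pip(bootstrap=lambda: None):
    if importlib.util.find_spec("pip") is not None:
        return "already installed"  # no-op on every run after the first
    bootstrap()                     # e.g. install a bundled copy
    return "bootstrapped"

first = ensure_pip()
second = ensure_pip()
```

Whether the bundled copy is installed eagerly or on first invocation, repeated runs are harmless, which is what makes bundling versus explicit bootstrapping an implementation detail behind a stable user interface.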
URL: From donald at stufft.io Mon Aug 19 14:54:29 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 19 Aug 2013 08:54:29 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: I'm only briefly checking my email before going back to sleep but just thought I'd mention I have a successor to PEP439 mostly written and I was planning on finishing it up based on some feedback i'd gotten and publishing it this week. More specific response when I'm not half asleep. On Aug 19, 2013, at 7:31 AM, Paul Moore wrote: > While debate on PEP 439 has died down, I've been thinking about what it actually means to end users for Python to bundle pip. If we have an explicit bootstrapping command like get_pip, how much practical benefit is there for that to be bundled vs having a well-known location for such a script (which is more or less what we have now)? > > Also, there is the whole question of how much we want to "lock down" what pip currently provides, as official functionality. (Particularly given the renewed energy around projects like distlib/distil, etc). > > I'd like to propose that as a first step, we agree and document a *user interface* that will be officially supported going forward. It would be based on (a subset of) the pip interface, so that a "bundle pip" strategy is still the immediate way forward, but by standardising the interface rather than the implementation, we retain the option to change things behind the scenes if, for example, distil suddenly takes over the world. > > Users get the benefit of a properly sanctioned "one true way" to manage packages. And we reserve enough flexibility that we don't accidentally accept constraints that are stronger than we'd like and stifle innovation. > > To be specific, what I propose is that we agree on the following: > > 1. 
Python installations provide a bootstrap command (tentatively "python -m get_pip", I'm not sure this needs to be directly executable, the -m interface is probably fine). This command may be a no-op if for example we opt for full bundling (and it will always be a no-op after the first run). > 2. Once bootstrapped, Python will provide a "pip" command (or pip3?) accessible whenever the "python" command is available (worded this way because the Windows installer doesn't put "python" on PATH by default). > 3. A specific subset of the current pip interface is provided (to be documented). I'd suggest we need a minimum of pip install, pip list, pip uninstall, pip install -U (for upgrades). I don't know what proportion of the rest of the pip API (various options, requirement files, environment variables and ini files) we want to lock down - I suggest we start minimal and add items as needed. > 4. We may want to add a separate note that "python -m pip" will do the same as the "pip" command, and may be needed to self-upgrade the pip command ("python -m pip -U pip"). > > We can note for 3.4 that the pip command will be implemented using the current pip project, and so all of the other features of pip will be available, but with the proviso that any interfaces not explicitly mentioned in the standard documentation may be changed or removed as pip changes, and are not protected by Python's compatibility rules. That gives users a reasonable level of stability, while still allowing us the room to innovate. > > Does this sound reasonable? It may be obvious to everyone but me that this is what we're expecting - or I may even be proposing *more* than people expect. But I think something along the lines of an "interface over implementation" definition like this would be reasonable. > > If this is a useful proposal and no-one else is already working on something similar, I'm willing to write this up as a PEP. > > Paul. 
> _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Mon Aug 19 16:28:57 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Aug 2013 09:28:57 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 Aug 2013 07:54, "Donald Stufft" wrote: > > I'm only briefly checking my email before going back to sleep but just thought I'd mention I have a successor to PEP439 mostly written and I was planning on finishing it up based on some feedback i'd gotten and publishing it this week. > > More specific response when I'm not half asleep. Donald sent me the preliminary draft last week, so I can share the short version of the proposal: * Bootstrap included in the standard library, executable as "python -m getpip". This ensures a clear distinction between files managed by the system installer (i.e. 
getpip) and those managed by pip (including pip itself) * Installs the bundled copy of pip by default, but has an option to do a network install of the latest from PyPI * Updates of the installed version use "pip install --upgrade pip" as normal * Bundled version inside getpip will be updated when pip is updated (so CPython maintenance releases may have newer versions of pip) * If you build from source, you need to run the bootstrap explicitly to get pip * Windows and Mac OS X installers will bootstrap the bundled pip by default (opt-out) with a network update option (opt-in) The following two points aren't in Donald's PEP yet, and are the main reason the initial draft wasn't published more widely: * We'll ask the Linux distros to define a circular runtime dependency between python and python-pip (I spoke to Toshio about that at Flock last weekend and Fedora is amenable, but we haven't asked any Debian or Ubuntu folks yet) * pyvenv will install pip by default (with an option to opt-out) The main open question is how we uninstall Python without leaving pip managed files around. If an appropriate command doesn't already exist, pip may need to grow a "python -m pip uninstall --all --force" command that the Python uninstallers can invoke. The bootstrap will also need to be able to control whether or not script wrappers are installed and the name they use and/or key them off sys.implementation somehow. Cheers, Nick. > > On Aug 19, 2013, at 7:31 AM, Paul Moore wrote: > > > While debate on PEP 439 has died down, I've been thinking about what it actually means to end users for Python to bundle pip. If we have an explicit bootstrapping command like get_pip, how much practical benefit is there for that to be bundled vs having a well-known location for such a script (which is more or less what we have now)? > > > > Also, there is the whole question of how much we want to "lock down" what pip currently provides, as official functionality. 
(Particularly given the renewed energy around projects like distlib/distil, etc). > > > > I'd like to propose that as a first step, we agree and document a *user interface* that will be officially supported going forward. It would be based on (a subset of) the pip interface, so that a "bundle pip" strategy is still the immediate way forward, but by standardising the interface rather than the implementation, we retain the option to change things behind the scenes if, for example, distil suddenly takes over the world. > > > > Users get the benefit of a properly sanctioned "one true way" to manage packages. And we reserve enough flexibility that we don't accidentally accept constraints that are stronger than we'd like and stifle innovation. > > > > To be specific, what I propose is that we agree on the following: > > > > 1. Python installations provide a bootstrap command (tentatively "python -m get_pip", I'm not sure this needs to be directly executable, the -m interface is probably fine). This command may be a no-op if for example we opt for full bundling (and it will always be a no-op after the first run). > > 2. Once bootstrapped, Python will provide a "pip" command (or pip3?) accessible whenever the "python" command is available (worded this way because the Windows installer doesn't put "python" on PATH by default). > > 3. A specific subset of the current pip interface is provided (to be documented). I'd suggest we need a minimum of pip install, pip list, pip uninstall, pip install -U (for upgrades). I don't know what proportion of the rest of the pip API (various options, requirement files, environment variables and ini files) we want to lock down - I suggest we start minimal and add items as needed. > > 4. We may want to add a separate note that "python -m pip" will do the same as the "pip" command, and may be needed to self-upgrade the pip command ("python -m pip -U pip"). 
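[Ed. note: taken together, points 1-4 above describe a complete end-user workflow. As a purely hypothetical transcript — "get_pip" is only the tentative name from point 1, and none of this was a shipped interface at the time — the proposal amounts to:

```
# One-time bootstrap; may be a no-op, and is always a no-op after
# the first run (point 1):
python -m get_pip

# The guaranteed core commands once bootstrapped (points 2 and 3):
pip install foo       # install a distribution
pip list              # list what is installed
pip install -U foo    # upgrade
pip uninstall foo     # remove

# Module form, equivalent to the "pip" command and usable for
# self-upgrade (point 4):
python -m pip install -U pip
```

Here "foo" stands for any distribution name; only the commands spelled out in the numbered points would be part of the guaranteed interface.]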
> > > > We can note for 3.4 that the pip command will be implemented using the current pip project, and so all of the other features of pip will be available, but with the proviso that any interfaces not explicitly mentioned in the standard documentation may be changed or removed as pip changes, and are not protected by Python's compatibility rules. That gives users a reasonable level of stability, while still allowing us the room to innovate. > > > > Does this sound reasonable? It may be obvious to everyone but me that this is what we're expecting - or I may even be proposing *more* than people expect. But I think something along the lines of an "interface over implementation" definition like this would be reasonable. > > > > If this is a useful proposal and no-one else is already working on something similar, I'm willing to write this up as a PEP. > > > > Paul. > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 19 17:08:48 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 16:08:48 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 August 2013 15:28, Nick Coghlan wrote: > > More specific response when I'm not half asleep. > > Donald sent me the preliminary draft last week, so I can share the short > version of the proposal: > > * Bootstrap included in the standard library, executable as "python -m > getpip". 
This ensures a clear distinction between files managed by the > system installer (i.e. getpip) and those managed by pip (including pip > itself) > * Installs the bundled copy of pip by default, but has an option to do a > network install of the latest from PyPI > * Updates of the installed version use "pip install --upgrade pip" as > normal > * Bundled version inside getpip will be updated when pip is updated (so > CPython maintenance releases may have newer versions of pip) > * If you build from source, you need to run the bootstrap explicitly to > get pip > * Windows and Mac OS X installers will bootstrap the bundled pip by > default (opt-out) with a network update option (opt-in) > > The following two points aren't in Donald's PEP yet, and are the main > reason the initial draft wasn't published more widely: > Thanks for the update, Nick. I'll wait for Donald's comments and the published version of the PEP before commenting in detail, but the following points come to mind for me at least: 1. I would still like to see *some* clear statement of what the "guaranteed interface" users can rely on is. Maybe I'm suggesting more than anyone else is willing to commit to, but I *do* think that if we're suggesting that there is now a "standard Python installer", we should give users some indication of what it actually is. For example, I think it's reasonable as a user to assume that "pip install foo" will work for the foreseeable future - but not that (for example) "pip bundle" will. 2. We need to be careful to define exactly when and how the "pip" command is present. Don't forget that on Windows, the "python" command is not on PATH by default (and the existence of the launcher means that it really doesn't need to be). I would suggest that we say something like "The pip command will be installed alongside the python command, and will be available when python is"[1]. 
We should also probably note that versioned variants of pip will be provided matching the versioned copies of the python command that are available (e.g., pip3/pip3.4 on Unix, maybe none at all on Windows...). Unless of course we want to use a different scheme for pip, in which case we need to agree on what that will be. 3. This also begs the question of whether pip.exe gets installed in the "Scripts" subdirectory on Windows, as at present - if it does, we'll have to be very careful indeed over how we word the instructions, as it's *easy* for users to have python.exe on PATH but not have Scripts\pip.exe on there :-( Paul. [1] I don't like that wording, but I can't find a better way of putting it that covers all cases but isn't too wordy :-( -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 19 17:27:10 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 16:27:10 +0100 Subject: [Distutils] Looks like pypi is down, I'm getting 500 errors... Message-ID: The CDN seems OK (pip install works fine) but the backend (search, etc) looks like it has problems... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Aug 19 19:13:08 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 19 Aug 2013 10:13:08 -0700 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: On Fri, Aug 16, 2013 at 2:18 AM, wrote: > It seems distutils.util.get_platform() semantically differs on Windows and Linux. > > Windows: the return value is derived from the architecture of the *interpreter*, hence for 32-bit Python running on 64-bit Windows get_platform() = 'win32' (32-bit). 
> > Linux: the return value is derived from the architecture of the *OS*, hence for 32-bit Python running on 64-bit Linux get_platform() = 'linux-x86_64' (64-bit). > > Is this intentional? This seems just plain wrong to me. For the record, running a 32-bit Python on a 64-bit OS X box: In [5]: distutils.util.get_platform() Out[5]: 'macosx-10.6-i386' which is the answer I want. -Chris > My context (where this hit me): I was trying to install the 32-bit version of the Perforce API (compiled module) on 64-bit Windows and on 64-bit Linux. My command-line was > > python3.3-32 setup.py install --root FOO --install-platlib=lib.$PLAT > > (note the '-32' and the '$PLAT') > > On Windows, this installed the 32-bit version of the API into "FOO\lib.win32". On Linux, this installed the 64-bit version of the API into "FOO/lib.linux-x86_64". I'm not sure which is more correct behavior but the asymmetry seems suspicious. > > Sam > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From vinay_sajip at yahoo.co.uk Mon Aug 19 21:47:27 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 19 Aug 2013 19:47:27 +0000 (UTC) Subject: [Distutils] What does it mean for Python to "bundle pip"? References: Message-ID: Paul Moore gmail.com> writes: > I'd like to propose that as a first step, we agree and document a *user interface* that will be officially supported going forward. It would be based on (a subset of) the pip interface, so that a "bundle pip" strategy is still the immediate way forward, but by standardising the interface rather than the implementation, we retain the option to change things behind the scenes if, for example, distil suddenly takes over the world. Or even if it doesn't, we still want the flexibility to change things under the covers based on changing needs and what's available to meet them. > 4. We may want to add a separate note that "python -m pip" will do the same as the "pip" command, and may be needed to self-upgrade the pip command ("python -m pip -U pip"). Have we resolved the "unfortunate importability of scripts" issue? > Does this sound reasonable? It may be obvious to everyone but me that this is what we're expecting - or I may even be proposing *more* than people expect. But I think something along the lines of an "interface over implementation" definition like this would be reasonable. I think this is vital to allow us to manage future changes effectively. I'm not sure exactly how the bundling implementation is supposed to work, but in my view user programs should not be able to import pip, pkg_resources, setuptools etc. from the bundled locations - only from site-packages if they have actually been installed there. The interface you're describing should be at a CLI level only.
Regards, Vinay Sajip From p.f.moore at gmail.com Mon Aug 19 22:44:31 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 21:44:31 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 August 2013 20:47, Vinay Sajip wrote: > > 4. We may want to add a separate note that "python -m pip" will do the > same as the "pip" command, and may be needed to self-upgrade the pip > command > ("python -m pip -U pip"). > > Have we resolved the "unfortunate importability of scripts" issue? > Only in the sense that exe wrappers have (once again) proved to be the only really good solution. I promise I *will* write up the discussion so we can stop having it. But I'm not promising anything before 2018 ;-) But while there's a technique for implementing self-replacing exes, I don't know if anyone has actually implemented it yet (hence my reservation that we may need to suggest python -m pip to upgrade itself). > Does this sound reasonable? It may be obvious to everyone but me that this > is what we're expecting - or I may even be proposing *more* than people > expect. But I think something along the lines of an "interface over > implementation" definition like this would be reasonable. > > I think this is vital to allow use to manage future changes effectively. > I'm > not sure exactly how the bundling implementation is supposed to work, but > in > my view user programs should not be able to import pip, pkg_resources, > setuptools etc. from the bundled locations - only from site-packages if > they > have actually been installed there. The interface you're describing should > be at a CLI level only. I don't think that's anyone's current plan (and I don't actually think it's necessary, either). I am happy to go with a solution that provides a guaranteed command line interface, documented in a pep and preferably also in the Python docs. 
That CLI will be couched in terms of a "pip" command - any future contenders will have to be renamed to match the contract, but I see that as an essential aspect of "blessing" the pip command (not the project, just the command) as the Python installer. Under the hood, we can happily admit that *currently* the functionality is provided by installing the pip distribution. But if you use anything outside of the CLI in the PEP (whether it's additional pip commands or flags to existing commands not mentioned in the PEP, or if you import pip directly) then you accept that you rely on pip and so work to their release management and compatibility practices, not Python's, *and* you accept that should an alternative project be selected to replace pip, you will no longer have access to those features unless you manually install the pip project (assuming pip still exists at such a time, and supports installing side by side with its successor). Hiding the internal implementation is extra effort for little reward, if we take the "consenting adults" view of people using undocumented details. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Aug 19 23:21:53 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 19 Aug 2013 17:21:53 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Aug 19, 2013, at 4:44 PM, Paul Moore wrote: > > On 19 August 2013 20:47, Vinay Sajip wrote: > > 4. We may want to add a separate note that "python -m pip" will do the > same as the "pip" command, and may be needed to self-upgrade the pip command > ("python -m pip -U pip"). > > Have we resolved the "unfortunate importability of scripts" issue? > > Only in the sense that exe wrappers have (once again) proved to be the only really good solution. I promise I *will* write up the discussion so we can stop having it. 
But I'm not promising anything before 2018 ;-) > > But while there's a technique for implementing self-replacing exes, I don't know if anyone has actually implemented it yet (hence my reservation that we may need to suggest python -m pip to upgrade itself). > > > Does this sound reasonable? It may be obvious to everyone but me that this > is what we're expecting - or I may even be proposing *more* than people > expect. But I think something along the lines of an "interface over > implementation" definition like this would be reasonable. > > I think this is vital to allow use to manage future changes effectively. I'm > not sure exactly how the bundling implementation is supposed to work, but in > my view user programs should not be able to import pip, pkg_resources, > setuptools etc. from the bundled locations - only from site-packages if they > have actually been installed there. The interface you're describing should > be at a CLI level only. > > I don't think that's anyone's current plan (and I don't actually think it's necessary, either). > > I am happy to go with a solution that provides a guaranteed command line interface, documented in a pep and preferably also in the Python docs. That CLI will be couched in terms of a "pip" command - any future contenders will have to be renamed to match the contract, but I see that as an essential aspect of "blessing" the pip command (not the project, just the command) as the Python installer. > > Under the hood, we can happily admit that *currently* the functionality is provided by installing the pip distribution. 
But if you use anything outside of the CLI in the PEP (whether it's additional pip commands or flags to existing commands not mentioned in the PEP, or if you import pip directly) then you accept that you rely on pip and so work to their release management and compatibility practices, not Python's, *and* you accept that should an alternative project be selected to replace pip, you will no longer have access to those features unless you manually install the pip project (assuming pip still exists at such a time, and supports installing side by side with its successor). > > Hiding the internal implementation is extra effort for little reward, if we take the "consenting adults" view of people using undocumented details. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig My current draft explicitly ensures that the end result is the same as if someone installed pip manually. There will be no importable pip from the standard library. It also specifically excludes pip from the backwards compat and other governance related issues of CPython. This is an outdated draft; it needs updating, and once I have time in the next few days I'll update it and post it, but this is my draft: https://github.com/dstufft/peps/blob/master/pip-bundling.rst ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Mon Aug 19 23:37:34 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Aug 2013 16:37:34 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: On 19 August 2013 10:08, Paul Moore wrote: > 2. We need to be careful to define exactly when and how the "pip" command is > present. Don't forget that on Windows, the "python" command is not on PATH > by default (and the existence of the launcher means that it really doesn't > need to be). I would suggest that we say something like "The pip command > will be installed alongside the python command, and will be available when > python is"[1]. We should also probably note that versioned variants of pip > will be provided matching the versioned copies of the python command that > are available (e.g., pip3/pip3.4 on Unix, maybe none at all on Windows...). > Unless of course we want to use a different scheme for pip, in which case we > need to agree on what that will be. In 3.3+, I believe the Windows installer does PATH modification by default. In 3.4+ it will likely do PATHEXT modification, too. > 3. This also begs the question of whether pip.exe gets installed in the > "Scripts" subdirectory on Windows, as at present - if it does, we'll have to > be very careful indeed over how we word the instructions, as it's *easy* for > users to have python.exe on PATH but not have Scripts\pip.exe on there :-( We should just add Scripts to the PATH in the installer as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Mon Aug 19 23:43:13 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 22:43:13 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 August 2013 22:21, Donald Stufft wrote: > My current draft explicitly ensures that the end result is the same as if > someone installed pip manually. There will be no importable pip from the > standard library. > OK, in which case I think you need to explain in the PEP why this is a benefit compared to users just manually installing pip by hand. 
In particular "why doesn't pip just supply getpip.py for download?" You also need to address the fact that installing pip (whether by hand or via the bootstrap) on Windows does not normally make the pip command available (because Python34\Scripts is not on %PATH%). Benefits: the install process matches the current one; disadvantages: all user documentation needs to add some level of verbiage about how to add pip to %PATH%. It also specifically excludes pip from the backwards compat and other > governance related issues of CPython. > I agree with this, but I think we should offer some core stability guarantees - even if it's only that "python -m pip install X" will install X... At the moment, the PEP doesn't really offer any tangible benefits for the end user. I'm not sure if it's the place of this PEP, but *something* needs to document the reasons why we are even proposing that pip gets bundled with Python. From an end user perspective, the immediate impression is that everything's going to be hugely better, because we'll have a package manager that can install, upgrade, uninstall, etc, built into Python. From there on, however, the picture is one of a series of little disappointments - "I need to explicitly bootstrap. Oh, OK.", "And now I have to set PATH. Yech, but fair enough". "And wait, the interface isn't stable? What?"[1] The various technical issues have started to obscure the original user-facing reasons why we want to do this at all. If we don't document those reasons, we risk ending up with a technical solution that doesn't actually address the original issue. Which would be a shame. I can understand if Donald feels that the original rationale isn't something he wants to cover (I don't know if he was involved in the discussions - IIRC, they were at PyCon, and I know for example that *I* don't have a clear picture of all the details, as I wasn't there...) Maybe Nick would be better placed to add a "Why are we doing this?" section.
This is an outdated draft; it needs updating, and once I have time in the next few days I'll update it and post it, but this is my draft > https://github.com/dstufft/peps/blob/master/pip-bundling.rst > I hope the above is useful, and not out of date with where you plan on going with your updates. Apologies if I'm jumping the gun here. > > Paul. [1] Yes, I know it is stable - but the docs will say "not covered by Python stability guarantees" which gives the *impression* "not stable" :-( From p.f.moore at gmail.com Tue Aug 20 00:07:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 23:07:22 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 August 2013 22:37, Nick Coghlan wrote: > On 19 August 2013 10:08, Paul Moore wrote: > > 2. We need to be careful to define exactly when and how the "pip" > command is > > present. Don't forget that on Windows, the "python" command is not on > PATH > > by default (and the existence of the launcher means that it really > doesn't > > need to be). I would suggest that we say something like "The pip command > > will be installed alongside the python command, and will be available > when > > python is"[1]. We should also probably note that versioned variants of > pip > > will be provided matching the versioned copies of the python command that > > are available (e.g., pip3/pip3.4 on Unix, maybe none at all on > Windows...). > > Unless of course we want to use a different scheme for pip, in which > case we > > need to agree on what that will be. > > In 3.3+, I believe the Windows installer does PATH modification by > default. In 3.4+ it will likely do PATHEXT modification, too. >
PATHEXT modification is handled by the code for "Register Python extensions" which is on by default. But that's separate from "add python to PATH". Given that the installer includes the py.exe launcher, if you leave the defaults, then at a command prompt "python" doesn't work. But that's fine, because "py" does. And if you have multiple versions of Python installed, you don't *want* python on PATH, because then you have to manage your PATH. Why bother when "py -2.7" or "py -3.3" does what you want with no path management? Once you want any *other* executables, though, you have to deal with PATH (especially in the multiple Pythons case). That is a new issue, and one that hasn't been thought through yet, and we don't have a good solution. There's a reason that python.exe was not on PATH by default for all these years. The launcher let us avoid most of the issues, and having it optional and off by default solved most of the remaining issues. But all of the problems still apply equally to pip.exe - and we don't have workarounds/solutions yet for that. At a minimum, if you want to set PATH to include C:\Python33;C:\Python33\Scripts by default, this should be proposed on python-dev. And be prepared for questions like "what if you install multiple Pythons?", "how will uninstall work?", etc. They have been raised before and never answered satisfactorily. TBH, I doubt it'll be resolved in time for beta 1. > 3. This also begs the question of whether pip.exe gets installed in the > > "Scripts" subdirectory on Windows, as at present - if it does, we'll > have to > > be very careful indeed over how we word the instructions, as it's *easy* > for > > users to have python.exe on PATH but not have Scripts\pip.exe on there > :-( > > We should just add Scripts to the PATH in the installer as well. > See above. I'm -1 on changing any of the current installer PATH handling for Windows as a side-effect of this proposal. 
I think we need to devise a solution that works with the current behaviour (which might mean installing a pip.exe alongside python.exe, and documenting the command as "pip, if you have python on PATH, otherwise py -m pip"). OTOH, if you want to raise the issue of "fixing" the current Python install layout and process for Windows, then I'd be fine with that. The current layout is messy, and annoyingly different than Unix. But (a) it's an issue for python-dev, and (b) I don't think you should expect that debate to be resolved in time to hold this proposal up for it. Paul. PS I'm carefully *not* mentioning the differing layout used by virtualenv/venv in the above, because it will muddy the water. But it is probably relevant. And if the system Python is on PATH and you activate a virtualenv, there could be some difficult interactions there - that's why I explicitly don't add my system Python to PATH, or install packages into it. I think we need some end user best practice advice, if we're going to start advocating using a pip installed in the system Python (maybe "always use --user" is enough). Hmm, I'm starting to find more questions than answers here. Time to stop for a while... -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Aug 20 00:14:41 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 19 Aug 2013 18:14:41 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Aug 19, 2013, at 5:43 PM, Paul Moore wrote: > On 19 August 2013 22:21, Donald Stufft wrote: > My current draft explicitly ensures that the end result is the same as if someone installed pip manually. There will be no importable pip from the standard library. > > OK, in which case I think you need to explain in the PEP why this is a benefit compared to users just manually installing pip by hand. In particular "why doesn't pip just supply getpip.py for download?" 
> > You also need to address the fact that installing pip (whether by hand or via the bootstrap) on Windows does not normally make the pip command available (because Python34\Scripts is not on %PATH%). Benefits: the install process matches the current one; disadvantages: all user documentation needs to add some level of verbiage about how to add pip to %PATH%. > > It also specifically excludes pip from the backwards compat and other governance related issues of CPython. > > I agree with this, but I think we should offer some core stability guarantees - even if it's only that "python -m pip install X" will install X... At the moment, the PEP doesn't really offer any tangible benefits for the end user. The benefit is that the installers will run the bootstrap during the install process, so ``pip`` works out of the box. The typical end user will not be running ``python -m getpip``. > > I'm not sure if it's the place of this PEP, but *something* needs to document the reasons why we are even proposing that pip gets bundled with Python. From an end user perspective, the immediate impression is that everything's going to be hugely better, because we'll have a package manager that can install, upgrade, uninstall, etc, built into Python. From there on, however, the picture is one of a series of little disappointments - "I need to explicitly bootstrap. Oh, OK.", "And now I have to set PATH. Yech, but fair enough". "And wait, the interface isn't stable? What?"[1] As said above, they typically won't have to manually bootstrap it. I specifically excluded the PATH stuff from the PEP because in my mind that's a separate issue. I haven't used Windows in long enough that I don't feel qualified to speak on it (and it's not an issue under OS X, the other platform for which Python distributes an installer). However that problem affects all installed distributions with command line scripts on Windows and probably should have a more generic solution.
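[Ed. note: the Windows PATH gap discussed here can at least be detected from Python itself. A diagnostic sketch (standard library only; the variable names are invented here, not part of any proposal) that checks whether this interpreter's script directory - the place a pip wrapper would be installed - is actually on PATH:

```python
import os
import sysconfig

# Directory where script wrappers are installed for this interpreter:
# "Scripts" next to python.exe on Windows, "bin" on POSIX systems.
scripts_dir = os.path.normcase(os.path.normpath(sysconfig.get_path("scripts")))

# Compare against the entries of PATH, normalized the same way.
path_entries = [
    os.path.normcase(os.path.normpath(entry))
    for entry in os.environ.get("PATH", "").split(os.pathsep)
    if entry
]
on_path = scripts_dir in path_entries

print("scripts dir:", scripts_dir)
print("on PATH:", on_path)
```

If this prints False under a default Windows install of the era, a pip installed there will not be found from a plain command prompt even when python itself is.]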
> > The various technical issues have started to obscure the original user-facing reasons why we want to do this at all. If we don't document those reasons, we risk ending up with a technical solution that doesn't actually address the original issue. Which would be a shame. > > I can understand if Donald feels that the original rationale isn't something he wants to cover (I don't know if he was involved in the discussions - IIRC, they were at PyCon, and I know for example that *I* don't have a clear picture of all the details, as I wasn't there...) Maybe Nick would be better placed to ad a "Why are we doing this?" section. I have 7 paragraphs of rationale. I"m not sure what additional information should be added to them in order to cover it? Part of what I hope to do before official release is reword things to be a bit more clear. > > This is an outdated draft, it needs updated which once I have time in the next few days i'll update it and post it but this is my draft https://github.com/dstufft/peps/blob/master/pip-bundling.rst > > I hope the above is useful, and not out of date with where you plan on going with your updates. Apologies if I'm jumping the gun here. > > Paul. > > [1] Yes, I know it is stable - but the docs will say "not covered by Python stability guarantees" which gives the *impression* "not stable" :-( Perhaps the wording should explicitly mention that it is not covered by Python's stability guarantees but is instead covered by pip's backwards compat policy (and then we need to actually close the loop on that :V). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue Aug 20 00:20:24 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Aug 2013 23:20:24 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 19 August 2013 23:14, Donald Stufft wrote: > I have 7 paragraphs of rationale. I'm not sure what additional information > should be added to them in order to cover it? Part of what I hope to do > before official release is reword things to be a bit more clear. Whoops, my apologies. It's getting too late at night here. I seem to have skimmed that without my brain being connected to my eyes. I think I may still have some comments I'd like to make, but I'll leave them until I'm awake enough to avoid humiliating myself in public a second time :-) Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From merwok at netwok.org Tue Aug 20 00:58:34 2013 From: merwok at netwok.org (Éric Araujo) Date: Mon, 19 Aug 2013 18:58:34 -0400 Subject: [Distutils] python version problem during install distutils In-Reply-To: References: Message-ID: <5212A31A.8080203@netwok.org> Hi, You don't have to install distutils if you're using Python 2.5: it is in the standard library that ships with Python. The error you're seeing probably comes from Python mixing the standard library distutils with the one being installed. Hope this helps From vinay_sajip at yahoo.co.uk Tue Aug 20 01:29:10 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 20 Aug 2013 00:29:10 +0100 (BST) Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: <1376954950.48186.YahooMailNeo@web171405.mail.ir2.yahoo.com> > From: Paul Moore >But while there's a technique for implementing self-replacing exes, I don't know if anyone has actually implemented it yet (hence my reservation that we may need to suggest python -m pip to upgrade itself). The ".deleteme" dance is implemented in distlib. It's not in any released version but it is in the repo. >Hiding the internal implementation is extra effort for little reward, if we take the "consenting adults" view of people using undocumented details. Only if it's hard to hide - I'm more concerned that people won't understand the distinction between stdlib and bundled code if there appears to be no difference between the two. Regards, Vinay Sajip From samuel.ferencik at barclays.com Tue Aug 20 08:15:46 2013 From: samuel.ferencik at barclays.com (samuel.ferencik at barclays.com) Date: Tue, 20 Aug 2013 07:15:46 +0100 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> >-----Original Message----- >From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] >Sent: Monday, August 19, 2013 7:13 PM >To: Ferencik, Samuel: Markets (PRG) >Cc: distutils-sig at python.org >Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows > >On Fri, Aug 16, 2013 at 2:18 AM, wrote: >> It seems distutils.util.get_platform() semantically differs on Windows and >> Linux. >> >> Windows: the return value is derived from the architecture of the >> *interpreter*, hence for 32-bit Python running on 64-bit Windows >> get_platform() = 'win32' (32-bit). >> >> Linux: the return value is derived from the architecture of the *OS*, hence >> for 32-bit Python running on 64-bit Linux get_platform() = 'linux-x86_64' >> (64-bit). 
>> >> Is this intentional? > >This seems just plain wrong to me. > >For the record, running a 32 bit Python on a 64 bit OS_X box: > >In [5]: distutils.util.get_platform() >Out[5]: 'macosx-10.6-i386' > >which is the answer I want. > >-Chris Chris, What does your 'uname -m' return? Is it possible you're really running a 32-bit Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195] Basically, you're saying that the return value is wrong on Linux and correct on Windows, right? That get_platform() should return "32-bit" for a 32-bit process running on a 64-bit system. TBH, I was expecting the opposite; to me, "platform" means the OS, which would mean that Linux does well to derive the return value from the OS's architecture. Sam _______________________________________________ This message is for information purposes only, it is not a recommendation, advice, offer or solicitation to buy or sell a product or service nor an official confirmation of any transaction. It is directed at persons who are professionals and is not intended for retail customer use. Intended for recipient only. This message is subject to the terms at: www.barclays.com/emaildisclaimer. For important disclosures, please see: www.barclays.com/salesandtradingdisclaimer regarding market commentary from Barclays Sales and/or Trading, who are active market participants; and in respect of Barclays Research, including disclosures relating to specific issuers, please see http://publicresearch.barclays.com. 
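The interpreter-versus-OS distinction Sam is asking about can be observed directly from a running Python; a quick sketch (the variable names are mine, output varies by machine, and neither value is claimed to be what get_platform() should return):

```python
import platform
import struct

# Pointer width of *this interpreter*: 32 on a 32-bit build even when the
# underlying OS and kernel are 64-bit.
interpreter_bits = struct.calcsize("P") * 8

# Architecture reported by the *OS* (roughly what `uname -m` prints).
machine = platform.machine()

print("interpreter: %d-bit" % interpreter_bits)
print("machine type:", machine)
```

Comparing these two values on a 32-bit Python running on 64-bit Linux shows exactly the mismatch under discussion: the first reports 32, the second reports x86_64.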
_______________________________________________ From donald at stufft.io Tue Aug 20 08:54:10 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 20 Aug 2013 02:54:10 -0400 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: <0DE7ED79-C97F-4260-AFA9-2AB227E88241@stufft.io> AFAIK Wheel is using that to determine binary compatibility, so if a 32bit Python on a 64bit Linux needs to be linux_32 that could be a problem. On Aug 20, 2013, at 2:15 AM, wrote: >> -----Original Message----- >> From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] >> Sent: Monday, August 19, 2013 7:13 PM >> To: Ferencik, Samuel: Markets (PRG) >> Cc: distutils-sig at python.org >> Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows >> >> On Fri, Aug 16, 2013 at 2:18 AM, wrote: >>> It seems distutils.util.get_platform() semantically differs on Windows and >>> Linux. >>> >>> Windows: the return value is derived from the architecture of the >>> *interpreter*, hence for 32-bit Python running on 64-bit Windows >>> get_platform() = 'win32' (32-bit). >>> >>> Linux: the return value is derived from the architecture of the *OS*, hence >>> for 32-bit Python running on 64-bit Linux get_platform() = 'linux-x86_64' >>> (64-bit). >>> >>> Is this intentional? >> >> This seems just plain wrong to me. >> >> For the record, running a 32 bit Python on a 64 bit OS_X box: >> >> In [5]: distutils.util.get_platform() >> Out[5]: 'macosx-10.6-i386' >> >> which is the answer I want. >> >> -Chris > > Chris, > > What does your 'uname -m' return? Is it possible you're really running a 32-bit > Python on a *32-bit* OS X kernel? 
[http://superuser.com/q/161195] > > Basically, you're saying that the return value is wrong on Linux and correct on > Windows, right? That get_platform() should return "32-bit" for a 32-bit process > running on a 64-bit system. TBH, I was expecting the opposite; to me, "platform" > means the OS, which would mean that Linux does well to derive the return value > from the OS's architecture. > > Sam > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oscar.j.benjamin at gmail.com Tue Aug 20 10:13:26 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 20 Aug 2013 09:13:26 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"?
Message-ID: Paul wrote: > Given that the installer includes the py.exe launcher, if you leave the defaults, then at a command prompt "python" doesn't work. But that's fine, because "py" does. And if you have multiple versions of Python installed, you don't *want* python on PATH, because then you have to manage your PATH. Why bother when "py -2.7" or "py -3.3" does what you want with no path management? Once you want any *other* executables, though, you have to deal with PATH (especially in the multiple Pythons case). That is a new issue, and one that hasn't been thought through yet, and we don't have a good solution. From a user perspective I think that 'py -3.4 -m pip ...'
is an > improvement as it means I can easily install or upgrade for a particular > python installation (I tend to have a few). There's no need to put Scripts > on PATH just to run pip. I think this should be the recommended invocation > for Windows users. > Thanks - I agree with you (IMO it would be nice to have pip.exe in PATH, but it's not important enough to change the install process). Some other points on how the various bundling approaches fit with the Windows python installer: 1. Will the bundled pip go into the system site-packages or the user site-packages? Does this depend on whether the user selects "install for all users" or "install for just me"? 2. If pip goes into system site-packages, what happens with the uninstaller? It doesn't know about pip, so it won't uninstall Python cleanly. (Not a major point, you can delete the directory manually after uninstalling, but it's untidy). Maybe the uninstaller should just unconditionally delete all of site-packages as well as whatever files it knows were installed. This is a "normal" issue when installing into the system Python, but for people who avoid that and use virtualenvs (e.g. me :-)) it's new (and annoying, as we'll never use the system pip in any case...) This raises another point - to an extent, I don't care about any of this, as I routinely use virtualenvs. But if using pip to manage the system python is becoming the recommended approach, I'd like to understand what precisely the recommendation is so that I can see if it's better than what I currently do - for instance, I've never used --user so I don't know if it will be of benefit to me. I assume that this will go in the packaging user guide in due course, but I don't know who will write it (does anyone have the relevant experience? most people I know recommend virtualenv...) 
Maybe the whole automatically bootstrapping on install should be optional (albeit likely on by default) for people who don't want to install stuff into their system python anyway? Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Tue Aug 20 11:22:51 2013 From: theller at ctypes.org (Thomas Heller) Date: Tue, 20 Aug 2013 11:22:51 +0200 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Back from holidays, I read this very interesting discussion... Am 14.08.2013 16:33, schrieb Nick Coghlan: > Aside from the lack of embedded C extension support (which could > likely be fixed if zipimport was migrated to Python code for 3.5), ...but I don't understand what you mean by this. Can you please explain? Thanks, Thomas From oscar.j.benjamin at gmail.com Tue Aug 20 12:25:33 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 20 Aug 2013 11:25:33 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 August 2013 09:51, Paul Moore wrote: > 1. Will the bundled pip go into the system site-packages or the user > site-packages? Does this depend on whether the user selects "install for all > users" or "install for just me"? If you have get-pip then why not choose at that point whether you want to install for the system or for all users e.g.: 'py -3.4 -m get-pip --user' (or perhaps reverse the default)? > 2. If pip goes into system site-packages, what happens with the uninstaller? > It doesn't know about pip, so it won't uninstall Python cleanly. (Not a > major point, you can delete the directory manually after uninstalling, but > it's untidy). Maybe the uninstaller should just unconditionally delete all > of site-packages as well as whatever files it knows were installed. 
This is > a "normal" issue when installing into the system Python, but for people who > avoid that and use virtualenvs (e.g. me :-)) it's new (and annoying, as > we'll never use the system pip in any case...) Can you not just teach the Python installer to check for pip and remove it if found? > This raises another point - to an extent, I don't care about any of this, as > I routinely use virtualenvs. But if using pip to manage the system python is > becoming the recommended approach, I'd like to understand what precisely the > recommendation is so that I can see if it's better than what I currently do > - for instance, I've never used --user so I don't know if it will be of > benefit to me. I assume that this will go in the packaging user guide in due > course, but I don't know who will write it (does anyone have the relevant > experience? most people I know recommend virtualenv...) If I could install everything I wanted with pip then virtualenvs would be more practical. Maybe when wheel distribution becomes commonplace I'll start doing that. I basically always want to install a large number of third party packages before I do anything though. So for me the procedure on ubuntu is something like: 1) install ubuntu 2) sudo apt-get install python-numpy python-scipy python-matplotlib ipython python-sympy python-dev cython python-pygraph python-tables python-wxgtk2.8 python-pywt python-sphinx ... On Windows the procedure is: 1) Install Python 2) Get MSIs for numpy, scipy, wxPython, matplotlib, PyQt, numexpr, ... 3) Setup PATH or create a shell/batch script called 'python' that does the right thing. 4) Run ez_setup.py and Install pip 5) Patch distutils (http://bugs.python.org/issue12641) 6) Use pip for cython, sympy, ipython, pyreadline, spyder, sphinx, docutils, line_profiler, coverage, ... 7) Build and install my own commonly used private packages. 8) Get more prebuilt binaries for other awkward packages when necessary: pytables, numexpr, mayavi, ... 
(You can see why some people just install Python(x, y) or EPD right?) It takes quite a while to do all this and then I have basically all the packages I want minus a few pippable ones. At this point I don't really see the point in creating a virtualenv except to test something that I'm personally developing. Or am I missing something? Oscar From p.f.moore at gmail.com Tue Aug 20 12:51:08 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Aug 2013 11:51:08 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 August 2013 11:25, Oscar Benjamin wrote: > On 20 August 2013 09:51, Paul Moore wrote: > > 1. Will the bundled pip go into the system site-packages or the user > > site-packages? Does this depend on whether the user selects "install for > all > > users" or "install for just me"? > > If you have get-pip then why not choose at that point whether you want > to install for the system or for all users e.g.: 'py -3.4 -m get-pip > --user' (or perhaps reverse the default)? > That's not how get-pip is being proposed. It will run automatically on installation of Python. If it were manually run, *and* if a --user flag was included, then you would be correct. > 2. If pip goes into system site-packages, what happens with the > uninstaller? > > It doesn't know about pip, so it won't uninstall Python cleanly. (Not a > > major point, you can delete the directory manually after uninstalling, > but > > it's untidy). Maybe the uninstaller should just unconditionally delete > all > > of site-packages as well as whatever files it knows were installed. This > is > > a "normal" issue when installing into the system Python, but for people > who > > avoid that and use virtualenvs (e.g. me :-)) it's new (and annoying, as > > we'll never use the system pip in any case...) > > Can you not just teach the Python installer to check for pip and > remove it if found? > I'm not sure. That's my point, essentially. 
Who knows enough about the Windows installer to do this correctly? If the answer is "nobody", then is a best-efforts approach with some issues that we don't have anyone who knows how to solve, an acceptable approach? Maybe it is, I'm not claiming that this is a major issue, but we should note it in the PEP as a limitation. > This raises another point - to an extent, I don't care about any of this, > as > > I routinely use virtualenvs. But if using pip to manage the system > python is > > becoming the recommended approach, I'd like to understand what precisely > the > > recommendation is so that I can see if it's better than what I currently > do > > - for instance, I've never used --user so I don't know if it will be of > > benefit to me. I assume that this will go in the packaging user guide in > due > > course, but I don't know who will write it (does anyone have the relevant > > experience? most people I know recommend virtualenv...) > > If I could install everything I wanted with pip then virtualenvs would > be more practical. Maybe when wheel distribution becomes commonplace > I'll start doing that. I basically always want to install a large > number of third party packages before I do anything though. > > So for me the procedure on ubuntu is something like: > 1) install ubuntu > 2) sudo apt-get install python-numpy python-scipy python-matplotlib > ipython python-sympy python-dev cython python-pygraph python-tables > python-wxgtk2.8 python-pywt python-sphinx ... > > On Windows the procedure is: > 1) Install Python > 2) Get MSIs for numpy, scipy, wxPython, matplotlib, PyQt, numexpr, ... > 3) Setup PATH or create a shell/batch script called 'python' that does > the right thing. > 4) Run ez_setup.py and Install pip > 5) Patch distutils (http://bugs.python.org/issue12641) > 6) Use pip for cython, sympy, ipython, pyreadline, spyder, sphinx, > docutils, line_profiler, coverage, ... > 7) Build and install my own commonly used private packages. 
> 8) Get more prebuilt binaries for other awkward packages when > necessary: pytables, numexpr, mayavi, ... > > (You can see why some people just install Python(x, y) or EPD right?) > > It takes quite a while to do all this and then I have basically all > the packages I want minus a few pippable ones. At this point I don't > really see the point in creating a virtualenv except to test something > that I'm personally developing. Or am I missing something? > :-) Yes, it's a pain. My experience is better because I don't use that many packages that need binaries. For those that I do, I maintain a local cache of wheels that I convert from bdist_wininst installers using "wheel convert" and then pip install works for everything. This is a really slick experience, but it relies on me maintaining my local repo, and having an appropriate -i flag added to pip (or I put it in pip.ini). As I work on multiple PCs, it's still fiddly (I put the cache on my skydrive for ease). But yes, if I made extensive use of binary extensions, I'd hate this approach. That's why I keep saying that the biggest win for wheels will be when they become the common means of distributing Windows binaries on PyPI, in place of wininst/msi. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From antoine at python.org Tue Aug 20 15:25:07 2013 From: antoine at python.org (Antoine Pitrou) Date: Tue, 20 Aug 2013 13:25:07 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 Message-ID: Hello, Some comments about PEP 426: > The information defined in this PEP is serialised to pydist.json files for some > use cases. These are files containing UTF-8 encoded JSON metadata. Perhaps add that on-disk pydist.json files may/should be generated in printed form with sorted keys, to ease direct inspection by users and developers? > Source labels MUST be unique within each project and MUST NOT match any > defined version for the project. 
Is there a motivation for the "not matching any defined version"? AFAICT it makes it necessary to have two different representation schemes, e.g. "X.Y.Z" for source labels and "vX.Y.Z" for versions. > For source archive references, an expected hash value may be specified by > including a ``=`` entry as part of the URL > fragment. Why only source archive references (and not e.g. binary)? > "project_urls": { > "Documentation": "https://distlib.readthedocs.org" > "Home": "https://bitbucket.org/pypa/distlib" > "Repository": "https://bitbucket.org/pypa/distlib/src" > "Tracker": "https://bitbucket.org/pypa/distlib/issues" > } This example lacks commas. > An abbreviation of "metadistribution requires". This is a list of > subdistributions that can easily be installed and used together by > depending on this metadistribution. I don't understand what it means :-) Care to explain and/or clarify the purpose? (for me, "meta-requires" sounds like something that setup.py depends on for its own operation, but that the installed software doesn't need) (edit: I now see this is clarified in Appendix C. The section ordering in the PEP makes it look like "meta_requires" are the primary type of requires, though, while according to that appendix they're a rather exotic use case. Would be nice to spell that out *before* the appendices :-)). > * MAY allow direct references What is a direct reference? > Automated tools MUST NOT allow strict version matching clauses or direct > references in this field - if permitted at all, such clauses should appear > in ``meta_requires`` instead. Why so? [test requires] > Public index servers SHOULD NOT allow strict version matching clauses or > direct references in this field. Again, why? Is it important for public index servers that test dependencies be not pinned? > Note that while these are build dependencies for the distribution being > built, the installation is a *deployment* scenario for the dependencies. 
But there are no deployment requires, right? :) (or is what "meta requires" are for?) > For example, multiple projects might supply > PostgreSQL bindings for use with SQL Alchemy: each project might declare > that it provides ``sqlalchemy-postgresql-bindings``, allowing other > projects to depend only on having at least one of them installed. But the automated installer wouldn't be able to suggest the various packages providing ``sqlalchemy-postgresql-bindings`` if none is installed, which should IMO discourage such a scheme. > To handle this case in a way that doesn't allow for name hijacking, the > authors of the distribution that first defines the virtual dependency > should > create a project on the public index server with the corresponding name, > and > depend on the specific distribution that should be used if no other > provider > is already installed. This also has the benefit of publishing the default > provider in a way that automated tools will understand. But then the alternatives needn't provide the "virtual dependency". They can just provide the "default provider", which saves the time and hassle of defining a well-known virtual dependency for all similar projects. > A string that indicates that this project is no longer being developed. > The > named project provides a substitute or replacement. How about a project that is no longer being developed but has no direct substitution? :) Can it use an empty string (or null / None perhaps?) > Examples indicating supported operating systems:: > > # Windows only > "supports_environments": ["sys_platform == 'win32'"] Hmm, which syntax is it exactly? In a previous section, you used the following example: > "environment": "sys.platform == 'win32'" (note dot vs. underscore) > "modules": ["chair", "chair.cushions", (...)] The example is a bit intriguing. Is it expected that both "chair" and "chair.cushions" be specified there, or is "chair" sufficient? 
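As an aside, the sorted-keys serialisation suggested near the top of this mail, applied to a hypothetical payload using the ``modules`` example, looks like this (a sketch; the field values are invented and not from any real project):

```python
import json

# Hypothetical, minimal pydist.json-style payload; field names follow the
# PEP 426 draft loosely, values are invented for illustration.
metadata = {
    "metadata_version": "2.0",
    "name": "chair",
    "version": "1.0",
    "exports": {
        "modules": ["chair", "chair.cushions"],
    },
}

# Serialising with sorted keys and indentation gives reproducible output
# that is easy to diff and to inspect by hand, per the suggestion above.
serialised = json.dumps(metadata, indent=2, sort_keys=True)
print(serialised)
```

The round trip is lossless, so tools are free to emit the human-friendly form on disk while consuming it as ordinary JSON.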
> When installing from an sdist, source archive or VCS checkout, > installation > tools SHOULD create a binary archive using ``setup.py bdist_wheel`` and > then install binary archive normally (including invocation of any install > hooks). Installation tools SHOULD NOT invoke ``setup.py install`` > directly. Interesting. Is "setup.py install" meant to die, or will it be redefined as "bdist_wheel + install_wheel"? (also, why is this mentioned in the postinstall hooks section, or even in a metadata-related PEP?) > Installation tools SHOULD treat an exception thrown by a preuninstall > hook as an indication the removal of the distribution should be aborted. I hope a "--force" option will be provided by such tools. Failure to uninstall because of buggy uninstall tools is a frustrating experience. > Extras are additional dependencies that enable an optional aspect > of the distribution I am confused. To me, extras look like additional provides, not additional dependencies. I.e. in: "requires": ["ComfyChair[warmup]"] -> requires ``ComfyChair`` and ``SoftCushions`` "warmup" is an additional provide of ComfyChair, and it depends on SoftCushions. > "requires": ["ComfyChair[*]"] > -> requires ``ComfyChair`` and ``SoftCushions``, but will also > pick up any new extras defined in later versions This one confuses me (again :-)). What does "pick up" mean? Accept? Require? > pip install ComfyChair[-,:*:,*] > -> installs the full set of development dependencies, but avoids > installing ComfyChair itself Are all these possibilities ("-", ":*:", "*") useful in real life? > Environment markers > =================== In this section, there still are inconsistencies in the format examples ("sys.platform" vs. "sys_platform"). 
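Whichever spelling wins, the underscore form is trivially machine-evaluable; a toy sketch (the helper, the variable subset, and the two supported operators are mine, not the PEP's actual grammar):

```python
import platform
import sys

# Illustrative marker variables only; the real marker set is larger.
MARKER_VALUES = {
    "sys_platform": sys.platform,
    "platform_python_implementation": platform.python_implementation(),
}

def evaluate_marker(marker):
    """Evaluate a single "name == 'literal'" or "name != 'literal'" clause."""
    for op in ("==", "!="):
        if op in marker:
            name, _, literal = marker.partition(op)
            left = MARKER_VALUES[name.strip()]
            right = literal.strip().strip("'\"")
            return (left == right) if op == "==" else (left != right)
    raise ValueError("unsupported marker: %r" % marker)

# True on Windows, False elsewhere.
print(evaluate_marker("sys_platform == 'win32'"))
```

The dotted spelling ("sys.platform") would force an evaluator to either parse attribute access or maintain an alias table, which is presumably why the examples that use underscores are the easier ones to standardise on.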
> * ``platform_python_implementation``: ``platform.python_implementation()`` > * ``implementation_name``: ``sys.implementation.name`` Why two different ways to spell nearly the same thing: >>> platform.python_implementation() 'CPython' >>> sys.implementation.name 'cpython' (also, look at how platform.python_implementation() is implemented :-)) Also, do the ordering operators ("<=", ">=", etc.) operate logically or lexicographically on version values? > Build labels > ------------ > > See PEP 440 for the rationale behind the addition of this field. I can't see anything named "Build label" in PEP 426. Did you mean "source label"? > This version of the metadata specification continues to use ``setup.py`` and the distutils command syntax to invoke build and test related operations on a source archive or VCS checkout. I don't really understand how Metadata 2.0 is dependent on the distutils command scheme. Can you elaborate? Regards Antoine. From ncoghlan at gmail.com Tue Aug 20 15:42:08 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Aug 2013 08:42:08 -0500 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 20 Aug 2013 04:22, "Thomas Heller" wrote: > > Back from holidays, I read this very interesting discussion... > > Am 14.08.2013 16:33, schrieb Nick Coghlan: > >> Aside from the lack of embedded C extension support (which could >> likely be fixed if zipimport was migrated to Python code for 3.5), > > > ...but I don't understand what you mean by this. Can you please explain? Importing C extensions requires extracting them to a temp directory and loading them from there. Trivial in Python, a pain in C. zipimport is currently still written in C. Cheers, Nick.
> > Thanks, > Thomas > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Aug 20 15:49:58 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Aug 2013 08:49:58 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 Aug 2013 05:51, "Paul Moore" wrote: > > On 20 August 2013 11:25, Oscar Benjamin wrote: >> >> On 20 August 2013 09:51, Paul Moore wrote: >> > 1. Will the bundled pip go into the system site-packages or the user >> > site-packages? Does this depend on whether the user selects "install for all >> > users" or "install for just me"? >> >> If you have get-pip then why not choose at that point whether you want >> to install for the system or for all users e.g.: 'py -3.4 -m get-pip >> --user' (or perhaps reverse the default)? > > > That's not how get-pip is being proposed. It will run automatically on installation of Python. If it were manually run, *and* if a --user flag was included, then you would be correct. > >> > 2. If pip goes into system site-packages, what happens with the uninstaller? >> > It doesn't know about pip, so it won't uninstall Python cleanly. (Not a >> > major point, you can delete the directory manually after uninstalling, but >> > it's untidy). Maybe the uninstaller should just unconditionally delete all >> > of site-packages as well as whatever files it knows were installed. This is >> > a "normal" issue when installing into the system Python, but for people who >> > avoid that and use virtualenvs (e.g. me :-)) it's new (and annoying, as >> > we'll never use the system pip in any case...) >> >> Can you not just teach the Python installer to check for pip and >> remove it if found? > > > I'm not sure. That's my point, essentially. 
Who knows enough about the Windows installer to do this correctly? If the answer is "nobody", then is a best-efforts approach with some issues that we don't have anyone who knows how to solve, an acceptable approach? Maybe it is, I'm not claiming that this is a major issue, but we should note it in the PEP as a limitation. > >> > This raises another point - to an extent, I don't care about any of this, as >> > I routinely use virtualenvs. But if using pip to manage the system python is >> > becoming the recommended approach, I'd like to understand what precisely the >> > recommendation is so that I can see if it's better than what I currently do >> > - for instance, I've never used --user so I don't know if it will be of >> > benefit to me. I assume that this will go in the packaging user guide in due >> > course, but I don't know who will write it (does anyone have the relevant >> > experience? most people I know recommend virtualenv...) >> >> If I could install everything I wanted with pip then virtualenvs would >> be more practical. Maybe when wheel distribution becomes commonplace >> I'll start doing that. I basically always want to install a large >> number of third party packages before I do anything though. >> >> So for me the procedure on ubuntu is something like: >> 1) install ubuntu >> 2) sudo apt-get install python-numpy python-scipy python-matplotlib >> ipython python-sympy python-dev cython python-pygraph python-tables >> python-wxgtk2.8 python-pywt python-sphinx ... >> >> On Windows the procedure is: >> 1) Install Python >> 2) Get MSIs for numpy, scipy, wxPython, matplotlib, PyQt, numexpr, ... >> 3) Setup PATH or create a shell/batch script called 'python' that does >> the right thing. >> 4) Run ez_setup.py and Install pip >> 5) Patch distutils (http://bugs.python.org/issue12641) >> 6) Use pip for cython, sympy, ipython, pyreadline, spyder, sphinx, >> docutils, line_profiler, coverage, ... >> 7) Build and install my own commonly used private packages. 
>> 8) Get more prebuilt binaries for other awkward packages when >> necessary: pytables, numexpr, mayavi, ... >> >> (You can see why some people just install Python(x, y) or EPD right?) >> >> It takes quite a while to do all this and then I have basically all >> the packages I want minus a few pippable ones. At this point I don't >> really see the point in creating a virtualenv except to test something >> that I'm personally developing. Or am I missing something? > > > :-) > > Yes, it's a pain. My experience is better because I don't use that many packages that need binaries. For those that I do, I maintain a local cache of wheels that I convert from bdist_wininst installers using "wheel convert" and then pip install works for everything. This is a really slick experience, but it relies on me maintaining my local repo, and having an appropriate -i flag added to pip (or I put it in pip.ini). As I work on multiple PCs, it's still fiddly (I put the cache on my skydrive for ease). > > But yes, if I made extensive use of binary extensions, I'd hate this approach. That's why I keep saying that the biggest win for wheels will be when they become the common means of distributing Windows binaries on PyPI, in place of wininst/msi. Scientific users will always be better off with something like hashdist/conda, since that ignores platform interoperability and easy security updates in favour of hash based reproducibility. Continuum Analytics also already take care of providing the prebuilt binary versions. The pip ecosystem is more appropriate for pure Python code and relatively simple C extensions (including cffi bindings). Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Tue Aug 20 16:18:10 2013 From: theller at ctypes.org (Thomas Heller) Date: Tue, 20 Aug 2013 16:18:10 +0200 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Am 20.08.2013 15:42, schrieb Nick Coghlan: > Importing C extensions requires extracting them to a temp directory and > loading them from there. Trivial in Python, a pain in C. zipimport is > currently still written in C. So what - zipimport is a builtin module (on Windows at least). From antoine at python.org Tue Aug 20 16:23:40 2013 From: antoine at python.org (Antoine Pitrou) Date: Tue, 20 Aug 2013 14:23:40 +0000 (UTC) Subject: [Distutils] What does it mean for Python to "bundle pip"? References: Message-ID: Paul Moore gmail.com> writes: > > That's not how get-pip is being proposed. It will run automatically on > installation of Python. If it were manually run, *and* if a --user flag was > included, then you would be correct. +1 for providing a "--user" flag to "python -m getpip". It is important to be able to install things without root access, and without having to create a venv. Regards Antoine. From oscar.j.benjamin at gmail.com Tue Aug 20 17:09:11 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 20 Aug 2013 16:09:11 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 August 2013 14:49, Nick Coghlan wrote: > > On 20 Aug 2013 05:51, "Paul Moore" wrote: >> >> But yes, if I made extensive use of binary extensions, I'd hate this >> approach. That's why I keep saying that the biggest win for wheels will be >> when they become the common means of distributing Windows binaries on PyPI, >> in place of wininst/msi. > > Scientific users will always be better off with something like > hashdist/conda, since that ignores platform interoperability and easy > security updates in favour of hash based reproducibility. Continuum > Analytics also already take care of providing the prebuilt binary versions. 
Hashdist looks useful but it's for people who will build everything from source (as is basically required in the HPC environments for which it is designed). This is still problematic on Windows (which is never used for HPC). Conda looks interesting though, I'll give that a try soon. > The pip ecosystem is more appropriate for pure Python code and relatively > simple C extensions (including cffi bindings). The core extensions that I would want to put into each and every virtualenv are things like numpy and matplotlib. These projects have been reliably providing binary installers for Windows for years and I'm sure that they will soon be distributing wheels. The current PyPI binaries for numpy are here: https://pypi.python.org/pypi/numpy Is it not a fairly simple change to make it so that they're also uploading wheels? BTW is there any reason for numpy et al not to start distributing wheels now? Is any part of the wheel specification/tooling/infrastructure not complete yet? Oscar From Steve.Dower at microsoft.com Tue Aug 20 17:10:14 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 20 Aug 2013 15:10:14 +0000 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: <1c2461b6c9b74313a8d52c1e81b23581@BLUPR03MB199.namprd03.prod.outlook.com> Oscar Benjamin wrote: > Paul wrote: >> Given that the installer includes the py.exe launcher, if you leave the >> defaults, then at a command prompt "python" doesn't work. But that's fine, >> because "py" does. And if you have multiple versions of Python installed, you >> don't *want* python on PATH, because then you have to manage your PATH. Why >> bother when "py -2.7" or "py -3.3" does what you want with no path management? >> Once you want any *other* executables, though, you have to deal with PATH >> (especially in the multiple Pythons case). That is a new issue, and one that >> hasn't been thought through yet, and we don't have a good solution. 
> > From a user perspective I think that 'py -3.4 -m pip ...' is an improvement as > it means I can easily install or upgrade for a particular python installation (I > tend to have a few). There's no need to put Scripts on PATH just to run pip. I > think this should be the recommended invocation for Windows users. Crazy idea: py install (or 'py --install ...', but I think 'py install' is currently invalid and could be used?) which the launcher executes identically to: py -m pip install (Implicitly extended to other relevant commands... I'm not proposing a complete list.) Pros: * allows implicit bootstrapping on first use (from a bundled pip, IMO, in case no network is available) * multiple Python versions are handled nicely and consistently ('py -3.3 install ...') * can minimize officially supported API surface (as Paul described at the start of this thread) * pip becomes an internal implementation detail that can be entirely replaced * one less character of typing (slightly tongue-in-cheek, but some people count :) ) Cons: * doesn't apply on *nix (or does/could it?) * requires the most new code of any of the options * more difficult to update the launcher than a user-installed package * others that I can't think of because I'm suffering from confirmation bias? Thoughts? Cheers, Steve From p.f.moore at gmail.com Tue Aug 20 17:21:27 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Aug 2013 16:21:27 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 August 2013 16:09, Oscar Benjamin wrote: > BTW is there any reason for numpy et al not to start distributing > wheels now? Is any part of the wheel > specification/tooling/infrastructure not complete yet? > Not really. It's up to them to do so, though. 
Maybe their toolset makes that more difficult, I don't believe they use setuptools, for example - that's their problem, but it may not be one they are interested in solving :-( The biggest issues outstanding are: 1. Script handling, which is a bit flaky still (but I don't think that affects numpy) 2. Tags (not in general, but AIUI numpy distribute a fancy installer that decides what compiled code to use depending on whether you have certain CPU features - they may want to retain that, and to do so may prefer to have more fine-grained tags, which in turn may or may not be possible to support). I don't think that's a critical issue though. Getting numpy et al on board would be a huge win - if wheels can satisfy their needs, we could be pretty sure we haven't missed anything. And it gets a big group of users involved, giving us a lot more real world use cases. Feel like sounding the numpy community out on the idea? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Tue Aug 20 17:45:01 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Tue, 20 Aug 2013 16:45:01 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 20 August 2013 16:21, Paul Moore wrote: > On 20 August 2013 16:09, Oscar Benjamin wrote: >> >> BTW is there any reason for numpy et al not to start distributing >> wheels now? Is any part of the wheel >> specification/tooling/infrastructure not complete yet? > > Not really. It's up to them to do so, though. Maybe their toolset makes that > more difficult, I don't believe they use setuptools, for example - that's > their problem, but it may not be one they are interested in solving :-( They seem to be using setuptools commands here: https://github.com/numpy/numpy/blob/master/numpy/distutils/core.py#L48 https://github.com/numpy/numpy/blob/master/setupegg.py > The biggest issues outstanding are: > > 1. 
Script handling, which is a bit flaky still (but I don't think that > affects numpy) I think that they are responsible for installing the f2py script in each of my Scripts directories. I never use this script and I don't know what numpy wants with it (my understanding is that the Fortran parts of numpy were all shifted over to scipy). > 2. Tags (not in general, but AIUI numpy distribute a fancy installer that > decides what compiled code to use depending on whether you have certain CPU > features - they may want to retain that, and to do so may prefer to have > more fine-grained tags, which in turn may or may not be possible to > support). I don't think that's a critical issue though. I guess this is what you mean: https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/test.c Is there no way for them to run a post-install script when pip installing wheels from PyPI? > Getting numpy et al on board would be a huge win - if wheels can satisfy > their needs, we could be pretty sure we haven't missed anything. And it gets > a big group of users involved, giving us a lot more real world use cases. > Feel like sounding the numpy community out on the idea? Maybe. I'm not usually on their mailing lists but I'd be willing to find out what they think. First I need to be clear that I know what I'm talking about though! Oscar From brett at python.org Tue Aug 20 17:48:15 2013 From: brett at python.org (Brett Cannon) Date: Tue, 20 Aug 2013 11:48:15 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: <1c2461b6c9b74313a8d52c1e81b23581@BLUPR03MB199.namprd03.prod.outlook.com> References: <1c2461b6c9b74313a8d52c1e81b23581@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: On Tue, Aug 20, 2013 at 11:10 AM, Steve Dower wrote: > Oscar Benjamin wrote: > > Paul wrote: > >> Given that the installer includes the py.exe launcher, if you leave the > >> defaults, then at a command prompt "python" doesn't work. 
But that's > fine, > >> because "py" does. And if you have multiple versions of Python > installed, you > >> don't *want* python on PATH, because then you have to manage your PATH. > Why > >> bother when "py -2.7" or "py -3.3" does what you want with no path > management? > >> Once you want any *other* executables, though, you have to deal with > PATH > >> (especially in the multiple Pythons case). That is a new issue, and one > that > >> hasn't been thought through yet, and we don't have a good solution. > > > > From a user perspective I think that 'py -3.4 -m pip ...' is an > improvement as > > it means I can easily install or upgrade for a particular python > installation (I > > tend to have a few). There's no need to put Scripts on PATH just to run > pip. I > > think this should be the recommended invocation for Windows users. > > Crazy idea: > > py install > (or 'py --install ...', but I think 'py install' is currently invalid and > could be used?) > > which the launcher executes identically to: > > py -m pip install > > (Implicitly extended to other relevant commands... I'm not proposing a > complete list.) > > Pros: > * allows implicit bootstrapping on first use (from a bundled pip, IMO, in > case no network is available) > Nick already killed this idea. Richard's original PEP proposed this and the idea eventually was shot down. -Brett > * multiple Python versions are handled nicely and consistently ('py -3.3 > install ...') > * can minimize officially supported API surface (as Paul described at the > start of this thread) > * pip becomes an internal implementation detail that can be entirely > replaced > * one less character of typing (slightly tongue-in-cheek, but some people > count :) ) > > Cons: > * doesn't apply on *nix (or does/could it?) > * requires the most new code of any of the options > * more difficult to update the launcher than a user-installed package > * others that I can't think of because I'm suffering from confirmation > bias? 
> > Thoughts? > > Cheers, > Steve > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Steve.Dower at microsoft.com Tue Aug 20 18:02:08 2013 From: Steve.Dower at microsoft.com (Steve Dower) Date: Tue, 20 Aug 2013 16:02:08 +0000 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: <1c2461b6c9b74313a8d52c1e81b23581@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: <8cec992e5ddc4967a83cf57a067e7879@BLUPR03MB199.namprd03.prod.outlook.com> Brett Cannon wrote: > Steve Dower wrote: >> Oscar Benjamin wrote: >>> Paul wrote: >>>> Given that the installer includes the py.exe launcher, if you leave the >>>> defaults, then at a command prompt "python" doesn't work. But that's fine, >>>> because "py" does. And if you have multiple versions of Python installed, you >>>> don't *want* python on PATH, because then you have to manage your PATH. Why >>>> bother when "py -2.7" or "py -3.3" does what you want with no path management? >>>> Once you want any *other* executables, though, you have to deal with PATH >>>> (especially in the multiple Pythons case). That is a new issue, and one that >>>> hasn't been thought through yet, and we don't have a good solution. >>> >>> From a user perspective I think that 'py -3.4 -m pip ...' is an improvement as >>> it means I can easily install or upgrade for a particular python installation (I >>> tend to have a few). There's no need to put Scripts on PATH just to run pip. I >>> think this should be the recommended invocation for Windows users. >> Crazy idea: >> >> py install >> (or 'py --install ...', but I think 'py install' is currently invalid and could be used?) >> >> which the launcher executes identically to: >> >> py -m pip install >> >> (Implicitly extended to other relevant commands... 
I'm not proposing a complete list.) >> >> Pros: >> * allows implicit bootstrapping on first use (from a bundled pip, IMO, in case no network is available) > > Nick already killed this idea. Richard's original PEP proposed this and the idea eventually was shot down. Are you referring to the whole idea or just the implicit bootstrap? I followed the discussions about the latter, and it seemed that the problem was trying to bootstrap pip using pip. This is different (bootstrap pip from py) and I believe it does not have the same issues. > -Brett From ncoghlan at gmail.com Tue Aug 20 18:22:27 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Aug 2013 11:22:27 -0500 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 20 Aug 2013 09:18, "Thomas Heller" wrote: > > Am 20.08.2013 15:42, schrieb Nick Coghlan: > >> Importing C extensions requires extracting them to a temp directory and >> loading them from there. Trivial in Python, a pain in C. zipimport is >> currently still written in C. > > > So what - zipimport is a builtin module (on Windows at least). Huh? That's irrelevant to the fact that doing the tempdir creation, file extraction and subsequent import entirely in C code would be painfully tedious. Cheers, Nick. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Aug 20 18:31:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Aug 2013 11:31:23 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"? 
In-Reply-To: <8cec992e5ddc4967a83cf57a067e7879@BLUPR03MB199.namprd03.prod.outlook.com> References: <1c2461b6c9b74313a8d52c1e81b23581@BLUPR03MB199.namprd03.prod.outlook.com> <8cec992e5ddc4967a83cf57a067e7879@BLUPR03MB199.namprd03.prod.outlook.com> Message-ID: On 20 Aug 2013 11:02, "Steve Dower" wrote: > > Brett Cannon wrote: > > Steve Dower wrote: > >> Oscar Benjamin wrote: > >>> Paul wrote: > >>>> Given that the installer includes the py.exe launcher, if you leave the > >>>> defaults, then at a command prompt "python" doesn't work. But that's fine, > >>>> because "py" does. And if you have multiple versions of Python installed, you > >>>> don't *want* python on PATH, because then you have to manage your PATH. Why > >>>> bother when "py -2.7" or "py -3.3" does what you want with no path management? > >>>> Once you want any *other* executables, though, you have to deal with PATH > >>>> (especially in the multiple Pythons case). That is a new issue, and one that > >>>> hasn't been thought through yet, and we don't have a good solution. > >>> > >>> From a user perspective I think that 'py -3.4 -m pip ...' is an improvement as > >>> it means I can easily install or upgrade for a particular python installation (I > >>> tend to have a few). There's no need to put Scripts on PATH just to run pip. I > >>> think this should be the recommended invocation for Windows users. > >> Crazy idea: > >> > >> py install > >> (or 'py --install ...', but I think 'py install' is currently invalid and could be used?) > >> > >> which the launcher executes identically to: > >> > >> py -m pip install > >> > >> (Implicitly extended to other relevant commands... I'm not proposing a complete list.) > >> > >> Pros: > >> * allows implicit bootstrapping on first use (from a bundled pip, IMO, in case no network is available) > > > > Nick already killed this idea. Richard's original PEP proposed this and the idea eventually was shot down. 
> > Are you referring to the whole idea or just the implicit bootstrap? I followed the discussions about the latter, and it seemed that the problem was trying to bootstrap pip using pip. That and potentially relying on network access at runtime, as well as potentially running into permissions issues depending on where Python was installed. Implicit bootstrap just seems like a recipe for hard to debug failure modes. > This is different (bootstrap pip from py) and I believe it does not have the same issues. It certainly avoids some of them, but not all. I think Paul Moore has the right idea: treat "scripts on Windows" as a distinct problem deserving of a separate PEP. That general solution will then apply to pip as well. In the meantime "py -m pip" feels like a tolerable Windows specific workaround. Cheers, Nick. > > > -Brett > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From theller at ctypes.org Tue Aug 20 18:39:27 2013 From: theller at ctypes.org (Thomas Heller) Date: Tue, 20 Aug 2013 18:39:27 +0200 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Am 20.08.2013 18:22, schrieb Nick Coghlan: > > On 20 Aug 2013 09:18, "Thomas Heller" > wrote: > > > > Am 20.08.2013 15:42, schrieb Nick Coghlan: > > > >> Importing C extensions requires extracting them to a temp directory and > >> loading them from there. Trivial in Python, a pain in C. zipimport is > >> currently still written in C. > > > > > > So what - zipimport is a builtin module (on Windows at least). > > Huh? That's irrelevant to the fact that doing the tempdir creation, file > extraction and subsequent import entirely in C code would be painfully > tedious. Ok, now I understand. 
But the zipfile could contain a loader-module for each extension which does
something like this (this example extracts and loads 'bz2.pyd'):

def __load():
    import imp, os
    path = os.path.join(__loader__.archive, "--EXTENSIONS--", 'bz2.pyd')
    data = __loader__.get_data(path)
    dstpath = os.path.join(TEMPDIR, 'bz2.pyd')
    with open(dstpath, "wb") as dll:
        dll.write(data)
    imp.load_dynamic(__name__, dstpath)

__load()
del __load

(py2exe for Python 3, which is work in progress, uses this approach)

Thomas

From chris.barker at noaa.gov  Tue Aug 20 18:37:38 2013
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 20 Aug 2013 09:37:38 -0700
Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows
In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
	<66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
	<66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
Message-ID: 

On Tue, Aug 20, 2013 at 9:00 AM,  wrote:
> That's strange. I'm on Python 3.3.1, and it seems to me that get_platform()
> derives the value from uname for OS X, similar to Linux.
>
>     (osname, host, release, version, machine) = os.uname()
>     ...
>     elif osname[:6] == "darwin":
>         import _osx_support, distutils.sysconfig
>         osname, release, machine = _osx_support.get_platform_osx(
>             distutils.sysconfig.get_config_vars(),
>             osname, release, machine)
>     return "%s-%s-%s" % (osname, release, machine)
>
> so I would expect "uname -m" to be in line with get_platform(). But maybe I'm
> misreading that... Also, I don't have access to the _osx_support source code.

yup -- _osx_support.get_platform_osx() does special magic....

>>> return value is wrong on Linux and correct on
>>> Windows, right?
>>
>> no -- I'm saying that it's right on Windows (and OS-X), but wrong on Linux.
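[Editorial note: the 32-bit-interpreter-on-a-64-bit-kernel situation Chris describes above can be probed without get_platform() at all; a small sketch, standard library only:]

```python
import os
import struct
import sys

# The kernel's architecture: this is what `uname -m` reports, and what
# distutils.util.get_platform() derives its answer from on Linux.
if hasattr(os, "uname"):
    print(os.uname()[4])  # e.g. 'x86_64' even under a 32-bit Python

# The running interpreter's own word size, which is what actually
# determines binary compatibility for extensions and wheels:
bits = 8 * struct.calcsize("P")  # size of a C pointer: 32 or 64
print(bits)

# sys.maxsize gives the same answer another way:
assert (sys.maxsize > 2**32) == (bits == 64)
```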
> > I think you have misread my sentence, and we actually agree here.

duh! yes, we do.

> What's the next action? Report a Python bug? (That's a cultural question; I'm
> new to Python.)

not sure -- this seems like the right place to report it, but an official
bug report may make sense.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From chris.barker at noaa.gov  Tue Aug 20 17:47:14 2013
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Tue, 20 Aug 2013 08:47:14 -0700
Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows
In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
	<66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM>
Message-ID: 

On Mon, Aug 19, 2013 at 11:15 PM,  wrote:
> What does your 'uname -m' return?

x86_64

> Is it possible you're really running a 32-bit
> Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195]

nope -- I am quite deliberately running a 32 bit Python on my 64 bit OS
(I have some custom C++ code I'm using that is not yet 64 bit safe).

> return value is wrong on Linux and correct on
> Windows, right?

no -- I'm saying that it's right on Windows (and OS-X), but wrong on Linux.

> That get_platform() should return "32-bit" for a 32-bit process
> running on a 64-bit system.

yes, it should.

> TBH, I was expecting the opposite; to me, "platform"
> means the OS, which would mean that Linux does well to derive the return value
> from the OS's architecture.

except what would be the utility of that?
this is a call made within python, and it's part of distutils, so what
the caller wants to know is the platform that this particular python was
built for, NOT the platform it happens to be running on. i.e. what
platform do I want to build binary extensions for, and/or what platform
do I want to download binary wheels for.

So I'm pretty sure that currently Windows and OS-X have it right, and
Linux is broken. I'm guessing running 32 bit python on a 64 bit Linux is
not that common, however. (and it's less common to download binaries...)

To add complexity, if I run the Apple-supplied python2.7.1 (which is
32_64 bit universal, but runs 64 bit on my machine), I get:

>>> distutils.util.get_platform()
'macosx-10.7-intel'

Which is more useful than it may look at first -- "intel" means "both
intel platforms", i.e. i386 and x86_64. and 10.7 means -- built for OS-X
10.7 and above. so I think it's doing the right thing.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From pje at telecommunity.com  Tue Aug 20 19:39:12 2013
From: pje at telecommunity.com (PJ Eby)
Date: Tue, 20 Aug 2013 13:39:12 -0400
Subject: [Distutils] How to handle launcher script importability?
In-Reply-To: 
References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com>
Message-ID: 

On Tue, Aug 20, 2013 at 12:39 PM, Thomas Heller wrote:
> Ok, now I understand. But the zipfile could contain a loader-module
> for each extension which does something like this (this example extracts
> and loads 'bz2.pyd'):
> ...
>
> (py2exe for Python 3, which is work in progress, uses this approach)

Setuptools has also done this since the egg format was developed, but
it has some well-known problems, which unfortunately your example has
worse versions of.
;-) Setuptools takes the approach of keeping a per-user cache directory (so that cleanup isn't required, and so there are no security issues where somebody can replace a tempfile between you writing it and importing it), and it uses a unique subdirectory per egg so that different (say) bz2.pyd files can't conflict with each other. Even with these adjustments, Unix users frequently run into issues where the user a process is running as doesn't have access to a suitable cache directory, and so it's a common complaint about the use of zipped eggs. I thought that at one point you (Thomas) had come up with a way to load modules into memory from a zipfile without needing to extract them. Was that you? If so, how did that work out? (ISTR that there was some sort of licensing issue, too.) From theller at ctypes.org Tue Aug 20 20:31:16 2013 From: theller at ctypes.org (Thomas Heller) Date: Tue, 20 Aug 2013 20:31:16 +0200 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Am 20.08.2013 19:39, schrieb PJ Eby: > On Tue, Aug 20, 2013 at 12:39 PM, Thomas Heller wrote: >> Ok, now I understand. But the zipfile could contain a loader-module >> for each extension which does something like this (this example extracts >> and loads 'bz2.pyd'): >> ... >> >> (py2exe for Python 3, which is work in progress, uses this approach) > > Setuptools has also done this since the egg format was developed, but > it has some well-known problems, which unfortunately your example has > worse versions of. ;-) The code I posted was some 'pseudocode' how to extract and import an extension from a zip-file without coding it in C ;-). For example, TEMPDIR is not the usual TEMP directory, instead py2exe will use a per-process temp directory and cleanup after process exit. So, at least some of the problems you list below should be solved or solvable. 
> Setuptools takes the approach of keeping a per-user cache directory > (so that cleanup isn't required, and so there are no security issues > where somebody can replace a tempfile between you writing it and > importing it), and it uses a unique subdirectory per egg so that > different (say) bz2.pyd files can't conflict with each other. Even > with these adjustments, Unix users frequently run into issues where > the user a process is running as doesn't have access to a suitable > cache directory, and so it's a common complaint about the use of > zipped eggs. > > I thought that at one point you (Thomas) had come up with a way to > load modules into memory from a zipfile without needing to extract > them. Was that you? If so, how did that work out? (ISTR that there > was some sort of licensing issue, too.) Yes, that was me. It worked out so-so, fine for extensions from wx-python and numpy, for example, not so good for others. Until recently it did only work for win32, not win64, but this will be fixed. And, of course, it is windows only! The code it is based on is MPL2.0 licensed: https://github.com/fancycode/MemoryModule Thomas From vinay_sajip at yahoo.co.uk Wed Aug 21 02:57:45 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 00:57:45 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: Thomas Heller ctypes.org> writes: > Ok, now I understand. But the zipfile could contain a loader-module > for each extension which does something like this (this example extracts > and loads 'bz2.pyd'): In distlib, I've built on top of the zipfile support to allow C extensions to be available. Ordinary zipfiles contain no metadata indicating the extensions available, but wheels built with distil do. The wheel handling code in distlib takes advantage of this. 
The Wheel.mount() API [1] takes care of this (adding the wheel to sys.path, extracting the extensions to a user-specific directory and using an import hook to call imp.load_dynamic when required, so that both Python modules and extensions are available for import). It seems to work, though when I introduced the Wheel.mount() API I was told that it was very dangerous, and the sky would fall, or something :-) Regards, Vinay Sajip [1] http://distlib.readthedocs.org/en/latest/tutorial.html#mounting-wheels From donald at stufft.io Wed Aug 21 06:17:25 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 00:17:25 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> On Aug 20, 2013, at 8:57 PM, Vinay Sajip wrote: > Thomas Heller ctypes.org> writes: > >> Ok, now I understand. But the zipfile could contain a loader-module >> for each extension which does something like this (this example extracts >> and loads 'bz2.pyd'): > > In distlib, I've built on top of the zipfile support to allow C extensions > to be available. Ordinary zipfiles contain no metadata indicating the > extensions available, but wheels built with distil do. The wheel handling > code in distlib takes advantage of this. The Wheel.mount() API [1] takes > care of this (adding the wheel to sys.path, extracting the extensions to a > user-specific directory and using an import hook to call imp.load_dynamic > when required, so that both Python modules and extensions are available for > import). 
It seems to work, though when I introduced the Wheel.mount() API I > was told that it was very dangerous, and the sky would fall, or something :-) > > Regards, > > Vinay Sajip > > [1] http://distlib.readthedocs.org/en/latest/tutorial.html#mounting-wheels > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Mounting Wheels seems like a bad idea, it was one of the things Daniel explicitly removed (since Wheels are basically cleaned-up eggs). Adding it back in ex post facto seems like it's an idea that's going down the wrong track. If Wheels are to be importable it should be enshrined in the PEP, not an ad hoc feature of one possible implementation. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 08:36:04 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 06:36:04 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > Mounting Wheels seems like a bad idea, it was one of the things Daniel > explicitly removed (since Wheels are basically cleaned-up eggs). Adding > it back in ex post facto seems like it's an idea that's going down the wrong > track. Like I said, the sky will fall. Isn't importing from zips what's being discussed in this part of the thread? Unless something is expressly verboten, I can't see a reason why particular implementations of a PEP can't provide additional functionality, as long as they implement the contents of the standard. 
And in Python's consenting-adults world, I can't recall seeing any such express proscriptions. Regards, Vinay Sajip From p.f.moore at gmail.com Wed Aug 21 08:50:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 07:50:56 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> Message-ID: On 21 August 2013 07:36, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > > > Mounting Wheels seems like a bad idea, it was one of the things Daniel > > explicitly removed (since Wheels are basically cleaned up eggs). Adding > > it back in ex post facto seems like it's an idea that's going down the > wrong > > track. > > Like I said, the sky will fall. Isn't importing from zips what's being > discussed in this part of the thread? > > Unless something is expressly verboten, I can't see a reason why particular > implementations of a PEP can't provide additional functionality, as long as > they implement the contents of the standard. And in Python's > consenting-adults world, I can't recall seeing any such express > proscriptions. > I'm concerned that you need extra metadata (not described in the wheel spec) to do this. It means that there are in effect two subtly different types of wheel. To be specific, if I create a wheel for (say) pyzmq using distil, and mount it, everything works. But if I create the same wheel with bdist_wheel or pip, it doesn't. That, to my mind, is very bad as it damages the credibility of wheel as a standardised format. Can I suggest that if you need to add features like this, you need to get the wheel spec updated to mandate them, so that *all* wheels will follow the same spec. Essentially, I am -1 on any feature that uses information that is not documented in the wheel spec. Pip in particular resisted adding support for wheels until they were standardised in a PEP. 
It's frustrating if that PEP *still* doesn't mean that the wheel format is the same for all tools. (Note that another area where this is an issue is script wrappers, as the spec is silent about the fact that they are specified using entry-points.txt in metadata 1.x/setuptools. I've sent a proposed update to the spec to Daniel for his consideration). Paul PS Variances like this make me want to go back to the original idea that wheel support functions should be implemented in the stdlib, rather than having competing implementations "in the wild". Or at least, that one implementation should be considered the reference implementation that all others need to be compatible with :-( -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 09:04:43 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 07:04:43 +0000 (UTC) Subject: [Distutils] What does it mean for Python to "bundle pip"? References: Message-ID: Oscar Benjamin gmail.com> writes: > I think that they are responsible for installing the f2py script in > each of my Scripts directories. I never use this script and I don't > know what numpy wants with it (my understanding is that the Fortran > parts of numpy were all shifted over to scipy). IIUC, if a third-party extension wants to use Fortran, the build process converts it using f2py into a Python-importable extension. It may be a feature for distributions that use numpy, even if numpy doesn't use Fortran itself. > > 2. Tags (not in general, but AIUI numpy distribute a fancy installer that > > decides what compiled code to use depending on whether you have certain CPU > > features - they may want to retain that, and to do so may prefer to have > > more fine-grained tags, which in turn may or may not be possible to > > support). I don't think that's a critical issue though. 
> > I guess this is what you mean: > https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/test.c > > Is there no way for them to run a post-install script when pip > installing wheels from PyPI? I'm not sure that would be enough. The numpy installation checks for various features available at build time, and then writes numpy source code which is then installed. When building and installing on the same machine, perhaps no problem - but there could be problems when installation happens on a different machine, since the sources written to the wheel at build time would encode information about the build environment which may not be valid in the installation environment. ISTM for numpy to work with wheels, all of this logic would need to move from build time to run time, but I don't know how pervasive the source-writing approach is and how much work would be entailed in switching over to run-time adaptation to the environment. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Wed Aug 21 09:32:53 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 07:32:53 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> Message-ID: Paul Moore gmail.com> writes: > I'm concerned that you need extra metadata (not described in the wheel spec) to do this. It means that there are in effect two subtly different types of wheel. To be specific, if I create a wheel for (say) pyzmq using distil, and mount it, everything works. But if I create the same wheel with bdist_wheel or pip, it doesn't. That, to my mind, is very bad as it damages the credibility of wheel as a standardised format. If the additional metadata isn't there, then distlib just doesn't do anything additional - it just makes the Python modules importable (by adding the wheel to sys.path, which AFAIK is uncontroversial). 
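The uncontroversial half of this is easy to show with the stdlib alone: a wheel is a zip archive laid out like site-packages, so pure-Python code in it becomes importable just by putting the wheel on sys.path. The wheel and module names below are made up for illustration:

```python
# Pure-Python import from a wheel via sys.path -- the part that needs
# no extra metadata (zipimport handles it).  The wheel name and the
# module are made-up examples; it is C extensions that need the
# additional metadata discussed in this thread.
import os
import sys
import tempfile
import zipfile

wheel = os.path.join(tempfile.mkdtemp(), "demo-1.0-py2.py3-none-any.whl")
with zipfile.ZipFile(wheel, "w") as zf:
    zf.writestr("demo_mod.py", "ANSWER = 42\n")

sys.path.append(wheel)  # zipimport takes over from here
import demo_mod

print(demo_mod.ANSWER)  # prints 42
```

Everything beyond this point -- extracting shared libraries and hooking the import machinery so compiled extensions work too -- is where the extra, currently non-standard metadata comes in.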
> Can I suggest that if you need to add features like this, you need to get the wheel spec updated to mandate them, so that *all* wheels will follow the same spec. > Essentially, I am -1 on any feature that uses information that is not documented in the wheel spec. Pip in particular resisted adding support for wheels until they were standardised in a PEP. It's frustrating if that PEP *still* doesn't mean that the wheel format is the same for all tools. (Note that another area where this is an issue is script wrappers, as the spec is silent about the fact that they are specified using entry-points.txt in metadata 1.x/setuptools. I've sent a proposed update to the spec to Daniel for his consideration). Well, you don't really want to stifle innovation, do you? ;-) As far as I can tell, Daniel's wheel implementation allows files that are not specifically mentioned in the PEP to be installed into a distribution's .dist-info. This is also allowed in distlib - ISTM this is one way in which different packaging tools can add features which are special to them, and hold state relevant to distributions they build and/or install. If you accept that multiple competing implementations of a PEP are a good thing, then they can't all be functionally identical, though they must all implement a common set of functions described in the PEP they're implementing. As far as the advocacy for C-extension import support in wheel mounting goes, I see opposition to the idea on the basis that some people have shot themselves in the foot with a similar feature in pkg_resources. However, I've not seen any analysis that indicates (with specifics) why the feature is inherently bad (if there are problems with a specific implementation of the idea, then those could perhaps be resolved, but you can't really argue against "you're going to have a bad time, and you should feel bad"). 
Regards, Vinay Sajip From donald at stufft.io Wed Aug 21 09:45:43 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 03:45:43 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> Message-ID: <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> On Aug 21, 2013, at 3:32 AM, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> I'm concerned that you need extra metadata (not described in the wheel > spec) to do this. It means that there are in effect two subtly different > types of wheel. To be specific, if I create a wheel for (say) pyzmq using > distil, and mount it, everything works. But if I create the same wheel with > bdist_wheel or pip, it doesn't. That, to my mind, is very bad as it damages > the credibility of wheel as a standardised format. > > If the additional metadata isn't there, then distlib just doesn't do > anything additional - it just makes the Python modules importable (by adding > the wheel to sys.path, which AFAIK is uncontroversial). > >> Can I suggest that if you need to add features like this, you need to get > the wheel spec updated to mandate them, so that *all* wheels will follow the > same spec. >> Essentially, I am -1 on any feature that uses information that is not > documented in the wheel spec. Pip in particular resisted adding support for > wheels until they were standardised in a PEP. It's frustrating if that PEP > *still* doesn't mean that the wheel format is the same for all tools. (Note > that another area where this is an issue is script wrappers, as the spec is > silent about the fact that they are specified using entry-points.txt in > metadata 1.x/setuptools. I've sent a proposed update to the spec to Daniel > for his consideration). > > Well, you don't really want to stifle innovation, do you? 
;-) > > As far as I can tell, Daniel's wheel implementation allows files that are > not specifically mentioned in the PEP to be installed into a distribution's > .dist-info. This is also allowed in distlib - ISTM this is one way in which > different packaging tools can add features which are special to them, and > hold state relevant to distributions they build and/or install. If you > accept that multiple competing implementations of a PEP are a good thing, > then they can't all be functionally identical, though they must all > implement a common set of functions described in the PEP they're implementing. I was one of the advocates for extension support in the new metadata, I want tools to be able to try things out and innovate. However what I don't really want is to be using someone's personal testbed for features they think is cool. There's nothing *wrong* with you trying new ideas out in distlib, it just means that distlib isn't the library I want to build tooling around. My basic problem is if the library we're pointing at to be the reference implementation of all of these things is adding new features it's confusing what is standard and what are just distlib's extensions. So basically I want people to innovate, that's something I feel very strongly is a good thing, I just don't want innovations to happen in the reference library. Maybe we need a smaller reference library which is strictly the PEPs to allow distlib to experiment. If its experimentation turns out to be good and useful we can make PEPs for them and add them to the reference library. > > As far as the advocacy for C-extension import support in wheel mounting > goes, I see opposition to the idea on the basis that some people have shot > themselves in the foot with a similar feature in pkg_resources. 
However, > I've not seen any analysis that indicates (with specifics) why the feature > is inherently bad (if there are problems with a specific implementation of > the idea, then those could perhaps be resolved, but you can't really argue > against "you're going to have a bad time, and you should feel bad"). > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 10:07:09 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 09:07:09 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 21 August 2013 08:45, Donald Stufft wrote: > My basic problem is if the library we're pointing at to be the reference > implementation of all of these things is adding new features it's > confusing what > is standard and what are just distlib's extensions. > > So basically I want people to innovate, that's something I feel very > strongly > is a good thing, I just don't want innovations to happen in the reference > library. Maybe we need a smaller reference library which is strictly the > PEPs > to allow distlib to experiment. If it's experimentations turns out to be > good and > useful we can make PEPs for them and add them to the reference library. 
> +1 My problem is that as someone who wants to implement code that uses the new features like wheels, I want a usable reference implementation that covers the (agreed) standards. I don't particularly want my application to incorporate support for extensions to the standard, nor do I want to have to implement my own support all the time. In particular, at the present time there are two tools that can generate wheels (bdist_wheel and distlib) and three that can install them (wheel, distlib and pip). They have subtly different behaviours outside of the standard definitions, which means that they are not completely interoperable. I am not happy at all about that - and if that counts as "being against innovation", then I'm afraid that yes, I am... (I don't think it does, by the way, but you may differ). At the moment the wheel PEP is lagging a little behind some of the ongoing discussions, in particular in terms of script generation. That's fine, it's a work in progress. I hope it will be updated soon so that the spec matches what's been agreed. But I think we have a reasonable consensus on how scripts should work, and I think that should be reflected in the spec and the various tools be brought up to date with the spec before we move on to other areas and forget to tidy up around this one. Pip resisted including wheel support until we had a standard. I'm pretty unhappy that now we do have a standard, we still have situations where a wheel generated by one tool can have problems when installed with another - try "pip install wheel --use-wheel" on Windows to see what I mean, the exe wrappers are missing (this uses the wheel from PyPI, not a home-built one). OK, so here's a concrete question for distutils-sig. If I want to use wheels in my app (build them, install them, whatever) what should I use as my "reference implementation". 
I don't want to implement the code myself, I just want to produce lowest-common-denominator wheels that can be used anywhere, and consume wheels that conform to the spec correctly. This is not a hypothetical question - in the first instance I'm looking to add support for loading setuptools/pip from wheels in virtualenv, and I need to know what code to bundle to make that happen. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 10:09:08 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 04:09:08 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On Aug 21, 2013, at 4:07 AM, Paul Moore wrote: > OK, so here's a concrete question for distutils-sig. If I want to use wheels in my app (built them, install them, whatever) what should I use as my "reference implementation". I don't want to implement the code myself, I just want to produce lowest-common-denominator wheels that can be used anywhere, and consume wheels that conform to the spec correctly. This is not a hypothetical question - in the first instance I'm looking to add support for loading setuptools/pip from wheels in virtualenv, and I need to know what code to bundle to make that happen. Probably Wheel at this point. There's just the problem with the scripts which we need to actually get into the PEP and implemented. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 10:23:04 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 09:23:04 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 21 August 2013 09:09, Donald Stufft wrote: > > On Aug 21, 2013, at 4:07 AM, Paul Moore wrote: > > OK, so here's a concrete question for distutils-sig. If I want to use > wheels in my app (built them, install them, whatever) what should I use as > my "reference implementation". I don't want to implement the code myself, I > just want to produce lowest-common-denominator wheels that can be used > anywhere, and consume wheels that conform to the spec correctly. This is > not a hypothetical question - in the first instance I'm looking to add > support for loading setuptools/pip from wheels in virtualenv, and I need to > know what code to bundle to make that happen. > > > Probably Wheel at this point. There's just the problem with the scripts > which we need to actually get into the PEP and implemented. > I thought someone would say that. Wheel implies a dependency on setuptools (pkg_resources) which is viable for the virtualenv use case, but makes me somewhat sad in the more general case (because depending on setuptools at runtime feels wrong to me and there's no standalone pkg_resources). But nevertheless I think you're right. Thanks. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 10:23:25 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 08:23:25 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? 
References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > However what I don't really want is to be using someones personal testbed > for features they think is cool. There's nothing *wrong* with you trying > new ideas out in distlib, it just means that distib isn't the library I > want to build tooling around. "Someone's personal test-bed"? How do you think all open source software starts out? What innovation isn't based on a feature someone thinks "is cool"? >From our interactions, I don't get a feeling that you're particularly interested in building tooling around distlib anyway, for reasons best known to you. This specific feature could be easily pulled, and distlib is still only at version 0.1.2. Nick has indicated he doesn't think it's appropriate to consider distlib ready for endorsement in a 3.4 time-frame, so there's plenty of time to make changes which are deemed necessary. How on earth do you expect people to try things out, see where the problems are and fix them, if you don't release such features? It would certainly help if you describe specific problems with this feature, rather than just "I don't like it". > My basic problem is if the library we're pointing at to be the reference > implementation of all of these things is adding new features it's > confusing what is standard and what are just distlib's extensions. There are already features in distlib which aren't mentioned in any PEP - e.g. the ability to calculate dependency graphs without downloading any archives, or the ability to install scripts as single executables on Windows, or the ability to give better feedback when uninstalling. > So basically I want people to innovate, that's something I feel very > strongly is a good thing, I just don't want innovations to happen in the > reference library. 
Maybe we need a smaller reference library which is ISTM distlib is not yet that reference library - it's just another library for most people, judging from the low level of feedback I've had overall. There's plenty of time to try things out and make changes, so I find your approach a bit of an over-reaction, and feel that it doesn't square with what you're saying about innovation. You know that feeling you mentioned that you get, where you think everyone is just out to block you and stop you getting stuff done? Are you trying to spread that feeling around? ;-) Regards, Vinay Sajip From donald at stufft.io Wed Aug 21 10:56:05 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 04:56:05 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> On Aug 21, 2013, at 4:23 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> However what I don't really want is to be using someones personal testbed >> for features they think is cool. There's nothing *wrong* with you trying >> new ideas out in distlib, it just means that distib isn't the library I >> want to build tooling around. > > "Someone's personal test-bed"? How do you think all open source software > starts out? What innovation isn't based on a feature someone thinks "is cool"? > > From our interactions, I don't get a feeling that you're particularly > interested in building tooling around distlib anyway, for reasons best known > to you. This specific feature could be easily pulled, and distlib is still > only at version 0.1.2. Nick has indicated he doesn't think it's appropriate > to consider distlib ready for endorsement in a 3.4 time-frame, so there's > plenty of time to make changes which are deemed necessary. 
How on earth do > you expect people to try things out, see where the problems are and fix > them, if you don't release such features? It would certainly help if you > describe specific problems with this feature, rather than just "I don't like > it". I don't particularly care about this feature. I care about having a simple reference library. That is what I thought distlib was supposed to be (and I'm not alone). However if you want to innovate and experiment inside it then great, that's fine. I just won't tell people that distlib is the reference library. > >> My basic problem is if the library we're pointing at to be the reference >> implementation of all of these things is adding new features it's >> confusing what is standard and what are just distlib's extensions. > > There are already features in distlib which aren't mentioned in any PEP - > e.g. the ability to calculate dependency graphs without downloading any > archives, or the ability to install scripts as single executables on > Windows, or the ability to give better feedback when uninstalling. As mentioned above, I'm not trying to argue against this particular feature I'm just trying to make sure we have a simple reference library for people to be pointed at. > >> So basically I want people to innovate, that's something I feel very >> strongly is a good thing, I just don't want innovations to happen in the >> reference library. Maybe we need a smaller reference library which is > > ISTM distlib is not yet that reference library - it's just another library > for most people, judging from the low level of feedback I've had overall. That's totally fine. We just need to be clear that it's not the reference library and is instead one implementation. > > There's plenty of time to try things out and make changes, so I find your > approach a bit of an over-reaction, and feel that it doesn't square with > what you're saying about innovation. 
You know that feeling you mentioned > that you get, where you think everyone is just out to block you and stop you > getting stuff done? Are you trying to spread that feeling around? ;-) I think the way you view distlib and the way others are viewing distlib are different (and that's OK). We just need to know what distlib is so we can have reasonable expectations of it. What I'm getting from you is that, at least right now, distlib isn't what I thought it was and I (and others) should stop treating it as *the* reference implementation of the new standards. I'm not trying to stop you from innovating, I'm just trying to make sure everyone has reasonable expectations all around. > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 11:27:27 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 10:27:27 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> Message-ID: On 21 August 2013 09:56, Donald Stufft wrote: > > ISTM distlib is not yet that reference library - it's just another > library > > for most people, judging from the low level of feedback I've had overall. > > That's totally fine. 
We just need to be clear that it's not the reference > library and is instead one implementation. Yep, there's certainly been a perception that distlib is the reference implementation. Apologies if I perpetuated that. We do have a slightly different issue then, in that there *isn't* a reference implementation for a lot of this stuff... (I guess wheel counts as the reference implementation for wheel, doh, so that part's covered). People are starting to write code to use these new facilities, so having an actual reference implementation is important (IMO, that's one area where packaging/distutils2 got in a mess, so I'm concerned we don't fall into the same trap). We need people using the new stuff to help us identify potential issues. Paul PS Apologies if I appear to be a little irritable on this subject. I have a series of scripts that maintain a local cache of wheels for various projects. I'm starting to hit cases where the wheels aren't usable because of subtle differences in how the spec's being implemented, and it feels like I'm trying to hit a moving target, which is what I thought having the wheel 1.0 PEP accepted was designed to avoid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 11:29:56 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 05:29:56 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> Message-ID: <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> On Aug 21, 2013, at 5:27 AM, Paul Moore wrote: > On 21 August 2013 09:56, Donald Stufft wrote: > > ISTM distlib is not yet that reference library - it's just another library > > for most people, judging from the low level of feedback I've had overall. 
> > That's totally fine. We just need to be clear that it's not the reference > library and is instead one implementation. > > Yep, there's certainly been a perception that distlib is the reference implementation. Apologies if I perpetuated that. > > We do have a slightly different issue then, in that there *isn't* a reference implementation for a lot of this stuff... (I guess wheel counts as the reference implementation for wheel, doh, so that part's covered). People are starting to write code to use these new facilities, so having an actual reference implementation is important (IMO, that's one area where packaging/distutils2 got in a mess, so I'm concerned we don't fall into the same trap). We need people using the new stuff to help us identify potential issues. I might see about making a stripped down library which only implements the PEPs and nothing else. > > Paul > > PS Apologies if I appear to be a little irritable on this subject. I have a series of scripts that maintain a local cache of wheels for various projects. I'm starting to hit cases where the wheels aren't usable because of subtle differences in how the spec's being implemented, and it feels like I'm trying to hit a moving target, which is what I thought having the wheel 1.0 PEP accepted was designed to avoid. Can you send me a list (or post them here) of what issues you've hit? The biggest one I'm aware of is the scripts problem which is a fundamental problem with the 1.0 Wheel (or rather that any library with console entry points cannot be universal). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 11:46:05 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 10:46:05 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: On 21 August 2013 10:29, Donald Stufft wrote: > Can you send me a list (or post them here) of what issues you've hit? The > biggest one i'm aware of is the scripts problem which is a fundamental > problem with the 1.0 Wheel (or rather that any library with console entry > points cannot be universal). The scripts one is the key one (and yes, that needs to be fixed by updating the spec to confirm the consensus and then updating the tools to match). Another one IIRC was that distlib didn't put entry-points.txt in the .dist-info directory in the wheel (which breaks entry points). I think that's fixed now (and again, the Wheel spec is silent on what is correct behaviour here). I'll try to dig out the others, but they weren't anywhere near as major (some were also script-related such as inconsistent rewriting of shebang lines, which are superseded by the current state of play on scripts anyway). Generally, though, they were all areas where the 1.0 spec is silent (or just a bit vague) on details, and I was hitting implementation-defined differences. 
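One concrete way to see differences like the missing entry-points.txt is to open a wheel with just the stdlib and list what its .dist-info directory actually contains. A quick sketch (the wheel here is a tiny fake built in memory purely for illustration; a real .whl from PyPI opens the same way, since a wheel is just a zip archive):

```python
# Sketch: list what a wheel actually ships in its .dist-info directory,
# using only the stdlib. The wheel below is a tiny fake built in memory
# purely for illustration; a real .whl from PyPI opens the same way.
import io
import zipfile

def dist_info_files(wheel_file):
    """Map .dist-info member names to their text contents."""
    found = {}
    with zipfile.ZipFile(wheel_file) as zf:
        for name in zf.namelist():
            dirname, _, basename = name.partition("/")
            if dirname.endswith(".dist-info") and basename:
                found[basename] = zf.read(name).decode("utf-8")
    return found

# Build the fake wheel.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("example-1.0.dist-info/METADATA",
                "Metadata-Version: 1.2\nName: example\nVersion: 1.0\n")
    zf.writestr("example-1.0.dist-info/WHEEL",
                "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")
buf.seek(0)

files = dist_info_files(buf)
print(sorted(files))                # -> ['METADATA', 'WHEEL']
print("entry_points.txt" in files)  # -> False for this wheel
```

Comparing the output of this across wheels produced by bdist_wheel and by distlib is one way to pin down exactly which files differ.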
(For example, I was surprised to find that nowhere in the Wheel 1.0 spec does it actually state the metadata version that wheels should contain internally, just that it should be stored in a file called METADATA and that it should follow PKG-INFO which sort of implies that it's a 1.X format). Without an implementation that matches the spec precisely, though, it's often hard to know where to report bugs. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 11:48:57 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 05:48:57 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> On Aug 21, 2013, at 5:46 AM, Paul Moore wrote: > On 21 August 2013 10:29, Donald Stufft wrote: > Can you send me a list (or post them here) of what issues you've hit? The biggest one i'm aware of is the scripts problem which is a fundamental problem with the 1.0 Wheel (or rather that any library with console entry points cannot be universal). > > The scripts one is the key one (and yes, that needs to be fixed by updating the spec to confirm the consensus and then updating the tools to match). +1 > > Another one IIRC was that distlib didn't put entry-points.txt in the .dist-info directory in the wheel (which breaks entry points). I think that's fixed now (and again, the Wheel spec is silent on what is correct behaviour here). Ah yes, This one is a bit harder because it's a non standard file added by setuptools, but yea it probably should be adding that file. 
> > I'll try to dig out the others, but they weren't anywhere near as major (some were also script-related such as inconsistent rewriting of shebang lines, which are superseded by the current state of play on scripts anyway). Generally, though, they were all areas where the 1.0 spec is silent (or just a bit vague) on details, and I was hitting implementation-defined differences. (For example, I was surprised to find that nowhere in the Wheel 1.0 spec does it actually state the metadata version that wheels should contain internally, just that it should be stored in a file called METADATA and that it should follow PKG-INFO which sort of implies that it's a 1.X format). Without an implementation that matches the spec precisely, though, it's often hard to know where to report bugs. > > Paul. I think Wheel files are (and should be) independent of the particular metadata version used. That file should contain the required information in order to know what version of the metadata is included with the Wheel. This means that as metadata evolves Wheels can just start using the new meta data version without requiring an update to the spec. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 12:17:29 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 11:17:29 +0100 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> Message-ID: On 21 August 2013 10:48, Donald Stufft wrote: > I think Wheel files are (and should be) independent of the particular > metadata version used. That file should contain the required information in > order to know what version of the metadata is included with the Wheel. This > means that as metadata evolves Wheels can just start using the new meta > data version without requiring an update to the spec. That implies that any wheel reference implementation needs to expose APIs for reading and writing the metadata to/from the wheel. I don't have a problem with that, but I don't think the existing implementations do[1]... (And it could be a bit of a beast to design such an interface in a sufficiently future-proof manner, unless we also standardise a metadata object type...) Specifically, if I have a wheel and want to introspect it to find out the author email address, how do I do this? ("Doing so is not supported" is a valid answer, of course, but should also be documented...) Paul [1] Just checked. Distlib doesn't. Wheel doesn't (and wheel doesn't even have a fully documented API). Pip's wheel support is purely internal so doesn't count. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 12:29:18 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 06:29:18 -0400 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> Message-ID: <944910E3-05AA-4BA4-8C82-0976FA32384E@stufft.io> On Aug 21, 2013, at 6:17 AM, Paul Moore wrote: > On 21 August 2013 10:48, Donald Stufft wrote: > I think Wheel files are (and should be) independent of the particular metadata version used. That file should contain the required information in order to know what version of the metadata is included with the Wheel. This means that as metadata evolves Wheels can just start using the new meta data version without requiring an update to the spec. > > That implies that any wheel reference implementation needs to expose APIs for reading and writing the metadata to/from the wheel. I don't have a problem with that, but I don't think the existing implementations do[1]... (And it could be a bit of a beast to design such an interface in a sufficiently future-proof manner, unless we also standardise a metadata object type...) Specifically, if I have a wheel and want to introspect it to find out the author email address, how do I do this? ("Doing so is not supported" is a valid answer, of course, but should also be documented...) > > Paul > > [1] Just checked. Distlib doesn't. Wheel doesn't (and wheel doesn't even have a fully documented API). Pip's wheel support is purely internal so doesn't count. Yes I believe there should be an API for reading/writing to the dist-info directory and that's how a Wheel API should handle exposing that. It means you can compose api's so you have 1 API for reading/writing metadata which can be used for Wheels, Sdist 2.0, The on disk installed database format etc. 
Quick off the cuff design of an API for introspecting the author email address (Please realize this is totally off the top of my head and has not been thought through or played with or explored or anything that would make it reasonable to actually expect it to be sane for more than this simple example).

>>> from ref import Metadata
>>> from ref import Wheel, Sdist2
>>> whl = Wheel.from_file("something-something.whl")
>>> print(whl.dist_info)
{"METADATA": "...", "entry_points.txt": "..."}
>>> meta = Metadata.from_mapping(whl.dist_info)
>>> print(meta["summary"])
"A Something or Other Library"
>>> sdist2 = Sdist2.from_file("something-something.sdist2")
>>> print(sdist2.dist_info)
{"pydist.json": "...", "README": "..."}
>>> meta = Metadata.from_mapping(sdist2.dist_info)
>>> print(meta["summary"])
"A Something or Other Library"

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oscar.j.benjamin at gmail.com Wed Aug 21 12:29:19 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 11:29:19 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 August 2013 08:04, Vinay Sajip wrote: > Oscar Benjamin gmail.com> writes: > >> I think that they are responsible for installing the f2py script in >> each of my Scripts directories. I never use this script and I don't >> know what numpy wants with it (my understanding is that the Fortran >> parts of numpy were all shifted over to scipy). > > IIUC, if a third-party extension wants to use Fortran, the build process > converts it using f2py into a Python-importable extension.
It may be a > feature for distributions that use numpy, even if numpy doesn't use Fortran > itself. Okay, that makes sense. I'm sure that's not a big problem. It won't work very well on Windows (the case where wheels are really needed) anyway since it doesn't have a wrapper script and won't get picked up by make etc. >> > 2. Tags (not in general, but AIUI numpy distribute a fancy installer that >> > decides what compiled code to use depending on whether you have certain CPU >> > features - they may want to retain that, and to do so may prefer to have >> > more fine-grained tags, which in turn may or may not be possible to >> > support). I don't think that's a critical issue though. >> >> I guess this is what you mean: >> https://github.com/numpy/numpy/blob/master/tools/win32build/cpuid/test.c >> >> Is there no way for them to run a post-install script when pip >> installing wheels from PyPI? > > I'm not sure that would be enough. The numpy installation checks for various > features available at build time, and then writes numpy source code which is > then installed. When building and installing on the same machine, perhaps no > problem - but there could be problems when installation happens on a > different machine, since the sources written to the wheel at build time > would encode information about the build environment which may not be valid > in the installation environment. > > ISTM for numpy to work with wheels, all of this logic would need to move > from build time to run time, but I don't know how pervasive the > source-writing approach is and how much work would be entailed in switching > over to run-time adaptation to the environment. I may have misunderstood it but looking at this https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147 I think that the installer ships variants for each architecture and decides at install time which to place on the target system. 
If that's the case then would it be possible for a wheel to ship all variants so that a post-install script could sort it out (rename/delete) after the wheel is installed? Oscar From donald at stufft.io Wed Aug 21 12:30:26 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 06:30:26 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <944910E3-05AA-4BA4-8C82-0976FA32384E@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> <944910E3-05AA-4BA4-8C82-0976FA32384E@stufft.io> Message-ID: <3545A08E-D7D0-4FF5-8E80-10AB44214E2E@stufft.io> On Aug 21, 2013, at 6:29 AM, Donald Stufft wrote: > introspecting the author email address Of course I wrote that and then did summary because the location of the author email address changed between Metadata 1.x and 2.x and I didn't feel like looking up the exact difference. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 12:36:27 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 11:36:27 +0100 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <3545A08E-D7D0-4FF5-8E80-10AB44214E2E@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> <944910E3-05AA-4BA4-8C82-0976FA32384E@stufft.io> <3545A08E-D7D0-4FF5-8E80-10AB44214E2E@stufft.io> Message-ID: On 21 August 2013 11:30, Donald Stufft wrote: > On Aug 21, 2013, at 6:29 AM, Donald Stufft wrote: > > introspecting the author email address > > > Of course I wrote that and then did summary because the location of the > author email address changed between Metadata 1.x and 2.x and I didn't feel > like looking up the exact difference. > :-) So would you see the API for author being

if meta.version < (2, 0):
    author = meta["whatever_it_is_in_1.x"]
else:
    author = meta["whatever_it_is_in_2.x"]

(i.e., the metadata version would be available)? Sounds like a sensible approach. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 12:39:26 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 06:39:26 -0400 Subject: [Distutils] How to handle launcher script importability?
In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> <944910E3-05AA-4BA4-8C82-0976FA32384E@stufft.io> <3545A08E-D7D0-4FF5-8E80-10AB44214E2E@stufft.io> Message-ID: <54E00335-4F6F-4149-A78B-C818A0303FFA@stufft.io> On Aug 21, 2013, at 6:36 AM, Paul Moore wrote: > On 21 August 2013 11:30, Donald Stufft wrote: > On Aug 21, 2013, at 6:29 AM, Donald Stufft wrote: > >> introspecting the author email address > > Of course I wrote that and then did summary because the location of the author email address changed between Metadata 1.x and 2.x and I didn't feel like looking up the exact difference. > > :-) > > So would you see the API for author being > > if meta.version < (2, 0): > author = meta["whatever_it_is_in_1.x"] > else: > author = meta["whatever_it_is_in_2.x"] > > (i.e., the metadata version would be available)? > > Sounds like a sensible approach. > > Paul. Yes, and possibly some sort of compat shim could be made to smooth over the differences if it was felt to be warranted. But in general I'm very much in favor of composed APIs instead of having things like WheelMetadata, Sdist2Metadata, InstalledDBMetadata etc. Less code and easier to switch out pieces (e.g. in your own code you wouldn't need to change the metadata handling based on if it was a Wheel or a Sdist2, just load the appropriate file type and pass the mapping into the Metadata api). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 12:39:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 11:39:39 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 August 2013 11:29, Oscar Benjamin wrote: > I may have misunderstood it but looking at this > > https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147 > I think that the installer ships variants for each architecture and > decides at install time which to place on the target system. If that's > the case then would it be possible for a wheel to ship all variants so > that a post-install script could sort it out (rename/delete) after the > wheel is installed? > Wheel 1.0 does not have the ability to bundle multiple versions (and I don't think tags are fine-grained enough to cover the differences numpy need, which are at the "do you have the SSE instruction set?" level AIUI). Multi-version wheels are a possible future extension, but I don't know if anyone has thought about fine-grained tags. This is precisely the sort of input that the numpy people could provide to make sure that the wheel design covers their needs. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Aug 21 12:46:20 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 11:46:20 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? 
In-Reply-To: References: Message-ID: On 21 August 2013 11:39, Paul Moore wrote: > On 21 August 2013 11:29, Oscar Benjamin wrote: >> >> I may have misunderstood it but looking at this >> >> https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147 >> I think that the installer ships variants for each architecture and >> decides at install time which to place on the target system. If that's >> the case then would it be possible for a wheel to ship all variants so >> that a post-install script could sort it out (rename/delete) after the >> wheel is installed? > > Wheel 1.0 does not have the ability to bundle multiple versions (and I don't > think tags are fine-grained enough to cover the differences numpy need, > which are at the "do you have the SSE instruction set?" level AIUI). > Multi-version wheels are a possible future extension, but I don't know if > anyone has thought about fine-grained tags. No, but the wheel could do like the current numpy installer and ship

_numpy.pyd.nosse
_numpy.pyd.sse1
_numpy.pyd.sse2
_numpy.pyd.sse3

as platlib files and then a post-install script can check for SSE support, rename the appropriate file to _numpy.pyd and delete the other _numpy.pyd.* files. > This is precisely the sort of input that the numpy people could provide to > make sure that the wheel design covers their needs. Am I right in guessing (since the question keeps being evaded :) ) that a post-install script is not possible with pip+wheel+PyPI? Oscar From donald at stufft.io Wed Aug 21 12:47:56 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 06:47:56 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: On Aug 21, 2013, at 6:46 AM, Oscar Benjamin wrote: > On 21 August 2013 11:39, Paul Moore wrote: >> On 21 August 2013 11:29, Oscar Benjamin wrote: >>> >>> I may have misunderstood it but looking at this >>> >>> https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147 >>> I think that the installer ships variants for each architecture and >>> decides at install time which to place on the target system. If that's >>> the case then would it be possible for a wheel to ship all variants so >>> that a post-install script could sort it out (rename/delete) after the >>> wheel is installed? >> >> Wheel 1.0 does not have the ability to bundle multiple versions (and I don't >> think tags are fine-grained enough to cover the differences numpy need, >> which are at the "do you have the SSE instruction set?" level AIUI). >> Multi-version wheels are a possible future extension, but I don't know if >> anyone has thought about fine-grained tags. > > No, but the wheel could do like the current numpy installer and ship > _numpy.pyd.nosse > _numpy.pyd.sse1 > _numpy.pyd.sse2 > _numpy.pyd.sse3 > as platlib files and then a post-install script can check for SSE > support, rename the appropriate file to _numpy.pyd and delete the > other _numpy.pyd.* files. > >> This is precisely the sort of input that the numpy people could provide to >> make sure that the wheel design covers their needs. > > I'm I right in guessing (since the question keeps being evaded :) ) > that a post-install script is not possible with pip+wheel+PyPI?. > > > Oscar > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig Metadata 2.0 includes the ability to have a post install script, but Wheel is not yet using Metadata 2.0 (and it's not yet finalized). 
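The rename/delete step quoted above could be sketched as a post-install hook along the following lines. This is only a sketch: CPU detection is deliberately stubbed out via a `supported` set, since real detection (e.g. numpy's cpuid check) is not shown here, and the `_numpy.pyd.*` file names just mirror the example in the quoted message:

```python
# Sketch of the rename/delete step described above, as a post-install
# hook might do it. CPU detection is deliberately stubbed out via the
# `supported` argument; real detection (e.g. numpy's cpuid check) is
# not shown here.
import os
import tempfile

VARIANTS = ["sse3", "sse2", "sse1", "nosse"]  # best first

def select_variant(libdir, stem, supported):
    """Keep the best supported <stem>.<variant> as <stem>, drop the rest."""
    chosen = next(v for v in VARIANTS if v in supported)
    for v in VARIANTS:
        path = os.path.join(libdir, "%s.%s" % (stem, v))
        if not os.path.exists(path):
            continue
        if v == chosen:
            os.rename(path, os.path.join(libdir, stem))
        else:
            os.remove(path)

# Demonstration against dummy files in a temporary directory.
tmp = tempfile.mkdtemp()
for v in VARIANTS:
    open(os.path.join(tmp, "_numpy.pyd." + v), "w").close()

select_variant(tmp, "_numpy.pyd", supported={"nosse", "sse1", "sse2"})
print(sorted(os.listdir(tmp)))  # -> ['_numpy.pyd']
```

On a machine reporting SSE2 but not SSE3, the hook keeps the sse2 build under the real name and removes the other three variants.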
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 12:56:58 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 10:56:58 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Paul Moore gmail.com> writes: > My problem is that as someone who wants to implement code that uses the > new features like wheels, I want a usable reference implementation that > covers the (agreed) standards. I don't particularly want my application > to incorporate support for extensions to the standard, nor do I want to > have to implement my own support all the time. The wheel implementation in distlib conforms to the existing PEPs, as far as I know. It covers the agreed standards and, as far as I know, in a reasonable way. Why should you care if there are additional methods in a Wheel class, if you never call on them? It seems a step too far for a spec of this type to dictate precisely what specific methods should or should not be present in a class. If you find that distlib's behaviour somehow violates the agreed standards, please tell me how and I'll fix it. I think I've been pretty responsive on issues raised, but please don't raise vague fears if they have no basis in fact. > In particular, at the present time there are two tools that can generate > wheels (bdist_wheel and distlib) and 3 that can install them (wheel, > distlib and pip). 
They have subtly different behaviours outside of the > standard definitions, which means that they are not completely > interoperable. I am not happy at all about that - and if that counts as > "being against innovation", then I'm afraid that yes, I am... (I don't > think it does, by the way, but you may differ). As I understand it, any interoperability issues are an effect of everything being a work in progress and some implementations lagging behind others in terms of PEP compliance. I can't speak for Daniel, but I've certainly made efforts to ensure that distlib can work with wheel-produced wheels, even where it means using old METADATA and entry_points.txt formats. I find it always helps to focus on specifics - since the implementations can't be identical, what specifically are the differences that make you unhappy, and why do they make you unhappy? If some of those can be laid at distlib's door, fine - I think I'm generally quite responsive to specific issues raised. Also, note that some features in an implementation might be useful to some people, even if not enshrined in a PEP. For example, distlib's wheel installation provides an option to install only the site-packages parts - no scripts or headers - a feature which you specifically requested, and which I readily added (as I can see its usefulness) without playing a "it's not in the PEP" card. Is that a "harmless innovation", or a "dangerous deviation from the standard"? > At the moment the wheel PEP is lagging a little behind some of the > ongoing discussions, in particular in terms of script generation. That's > fine, it's a work in progress. I hope it will be updated soon so that the > spec matches what's been agreed. Well, let's not forget that distlib is a work in progress, too. > But I think we have a reasonable consensus on how scripts should work, Do we? 
Someone might tell me tomorrow that the "cool" one-executable solution I've recently implemented as an option on Windows (append shebang / archive to a stock executable) is an offence against all that's holy ;-) > had a standard. I'm pretty unhappy that now we do have a standard, we > still have situations where a wheel generated by one tool can have > problems when installed with another - try "pip install wheel > --use-wheel" on Windows to see what I mean, the exe wrappers are missing > (this uses the wheel from PyPI, not a home-built one). If distlib is proving to be a problem either as a producer or consumer of wheels, please raise issues. > OK, so here's a concrete question for distutils-sig. If I want to use > wheels in my app (built them, install them, whatever) what should I use > as my "reference implementation". I don't want to implement the code > myself, I just want to produce lowest-common-denominator wheels that can > be used anywhere, and consume wheels that conform to the spec correctly. > This is not a hypothetical question - in the first instance I'm looking > to add support for loading setuptools/pip from wheels in virtualenv, and > I need to know what code to bundle to make that happen. Some will say "wheel", even though it doesn't fully implement the spec, apparently because the wheel mount code (which, naturally, doesn't do anything unless it's called) is offensive to the point that some sort of fork is warranted ;-) Seriously - I already install setuptools and pip from wheels into venvs using distil (which of course uses distlib) - have you tried it? You're welcome to try with wheels built using any project, and if you run into any issues I will do my best to fix them. Have you had a better offer? :-) Regards, Vinay Sajip From p.f.moore at gmail.com Wed Aug 21 12:58:25 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 11:58:25 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: On 21 August 2013 11:47, Donald Stufft wrote: > Metadata 2.0 includes the ability to have a post install script, but Wheel > is not yet using Metadata 2.0 (and it's not yet finalized). But when Metadata 2.0 support is available, what you (Oscar) suggest does sound like a reasonable approach. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed Aug 21 13:16:24 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 07:16:24 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: <92897E3F-FDB5-423C-86D8-638DD5570E35@stufft.io> On Aug 21, 2013, at 6:56 AM, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > >> My problem is that as someone who wants to implement code that uses the >> new features like wheels, I want a usable reference implementation that >> covers the (agreed) standards. I don't particularly want my application >> to incorporate support for extensions to the standard, nor do I want to >> have to implement my own support all the time. > > The wheel implementation in distlib conforms to the existing PEPs, as far as > I know. It covers the agreed standards and, as far as I know, in a > reasonable way. Why should you care if there are additional methods in a > Wheel class, if you never call on them? It seems a step too far for a spec > of this type to dictate precisely what specific methods should or should not > be present in a class. If you find that distlib's behaviour somehow violates > the agreed standards, please tell me how and I'll fix it. I think I've been > pretty responsive on issues raised, but please don't raise vague fears if > they have no basis in fact. 
The spec does not and will not dictate what particular feature any one implementation has or does not have other than the base set outlined in the spec. > >> In particular, at the present time there are two tools that can generate >> wheels (bdist_wheel and distlib) and 3 that can install them (wheel, >> distlib and pip). They have subtly different behaviours outside of the >> standard definitions, which means that they are not completely >> interoperable. I am not happy at all about that - and if that counts as >> "being against innovation", then I'm afraid that yes, I am... (I don't >> think it does, by the way, but you may differ). > > As I understand it, any interoperability issues are an effect of everything > being a work in progress and some implementations lagging behind others in > terms of PEP compliance. I can't speak for Daniel, but I've certainly made > efforts to ensure that distlib can work with wheel-produced wheels, even > where it means using old METADATA and entry_points.txt formats. I find it > always helps to focus on specifics - since the implementations can't be > identical, what specifically are the differences that make you unhappy, and > why do they make you unhappy? If some of those can be laid at distlib's > door, fine - I think I'm generally quite responsive to specific issues raised. > > Also, note that some features in an implementation might be useful to some > people, even if not enshrined in a PEP. For example, distlib's wheel > installation provides an option to install only the site-packages parts - no > scripts or headers - a feature which you specifically requested, and which I > readily added (as I can see its usefulness) without playing a "it's not in > the PEP" card. Is that a "harmless innovation", or a "dangerous deviation > from the standard"? Nobody is saying that the only useful features are the ones enshrined in a PEP. 
> >> At the moment the wheel PEP is lagging a little behind some of the >> ongoing discussions, in particular in terms of script generation. That's >> fine, it's a work in progress. I hope it will be updated soon so that the >> spec matches what's been agreed. > > Well, let's not forget that distlib is a work in progress, too. > >> But I think we have a reasonable consensus on how scripts should work, > > Do we? Someone might tell me tomorrow that the "cool" one-executable > solution I've recently implemented as an option on Windows (append shebang / > archive to a stock executable) is an offence against all that's holy ;-) > >> had a standard. I'm pretty unhappy that now we do have a standard, we >> still have situations where a wheel generated by one tool can have >> problems when installed with another - try "pip install wheel >> --use-wheel" on Windows to see what I mean, the exe wrappers are missing >> (this uses the wheel from PyPI, not a home-built one). > > If distlib is proving to be a problem either as a producer or consumer of > wheels, please raise issues. > >> OK, so here's a concrete question for distutils-sig. If I want to use >> wheels in my app (built them, install them, whatever) what should I use >> as my "reference implementation". I don't want to implement the code >> myself, I just want to produce lowest-common-denominator wheels that can >> be used anywhere, and consume wheels that conform to the spec correctly. >> This is not a hypothetical question - in the first instance I'm looking >> to add support for loading setuptools/pip from wheels in virtualenv, and >> I need to know what code to bundle to make that happen. > > Some will say "wheel", even though it doesn't fully implement the spec, > apparently because the wheel mount code (which, naturally, doesn't do > anything unless it's called) is offensive to the point that some sort of > fork is warranted ;-) It has nothing to do with the Wheel mount code in particular.
A reference implementation is extremely useful, but a reference implementation cannot contain random features that somebody thought were useful or cool. It's 100% absolutely and positively fine for *your* library to include features not inside the spec. What that means is *your* library is not the reference library that myself, Paul, and others thought it was trying to be. I do not believe it's reasonable to expect the same library to be both innovator and reference, the expectations and desires are just mutually exclusive. Again there is absolutely nothing wrong with you experimenting in distlib, it just means distlib isn't what I want to use right now. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 13:22:42 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 12:22:42 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 21 August 2013 11:56, Vinay Sajip wrote: > > But I think we have a reasonable consensus on how scripts should work, > > Do we? To the level of "wheel builders should write metadata that defines the scripts and wheel installers should generate the necessary wrappers" then yes. We may or may not (depending on your viewpoint) have agreement on the format of that metadata. Personally, I think that Metadata 2.0 where the wheel uses that format, and entry-points.txt for older versions, is the only realistic option here. I would like to see some clarity on the status of the wheel 1.0 spec.
Where there are areas like this where the spec is either silent or missing sufficient detail to allow us to implement a common approach, should we be updating the 1.0 spec or should we be creating a 1.1 spec (I'd prefer the former)? Who does those updates? Must they go through Daniel as the author (who's been quite quiet, possibly he's hiding somewhere rather than being drawn into the fray :-))? > Someone might tell me tomorrow that the "cool" one-executable > solution I've recently implemented as an option on Windows (append shebang > / > archive to a stock executable) is an offence against all that's holy ;-) > That won't be me. I'm more interested in discussions about interoperability issues than about how the functionality gets implemented. > had a standard. I'm pretty unhappy that now we do have a standard, we > > still have situations where a wheel generated by one tool can have > > problems when installed with another - try "pip install wheel > > --use-wheel" on Windows to see what I mean, the exe wrappers are missing > > (this uses the wheel from PyPI, not a home-built one). > > If distlib is proving to be a problem either as a producer or consumer of > wheels, please raise issues. > I don't believe that distlib creates scripts based on entry-points.txt for pre-2.0 metadata. I have raised an issue for that. That's the only significant issue I have with distlib from the POV of what it can solve. There may be other interoperability issues where other tools can't consume things that distlib produces - I haven't tested anything like all of the combinations. We currently have 2 producers and 3 consumers. That's already 6 interactions to test. And if there *is* an issue, it's not always at all clear who to report the problem to. I'm not complaining about *any* of the implementations here. 
And I apologise if you in particular feel "picked on" (there's probably a distutils-sig "victim of the week" rota somewhere for this ;-)) My real concern is that we're drifting away from standards-based design towards implementation-based. And I think that updating the wheel PEP to be tighter on some of the details is what's really needed (hey, look, it's Daniel's turn to be picked on! :-)) I'd be willing to author an updated version of the wheel spec if all of the interested parties are OK with that (in particular, Nick and Daniel). If nothing else, that means that everyone can pick on me so I wouldn't feel quite so much like a troublemaker :-) Paul PS I really, really don't want anyone here to feel like anything they do is not valued. This is about tidying up the documentation, not about stifling anyone's enthusiasm or blocking any experimentation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 13:22:35 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 11:22:35 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: Paul Moore gmail.com> writes: > Another one IIRC was that distlib didn't put entry-points.txt in the > .dist-info directory in the wheel (which breaks entry points). I think > that's fixed now (and again, the Wheel spec is silent on what is correct > behaviour here). Right. The recent PEP 426 updates with the "commands" key supersede the entry points stuff, so I'm not sure what exactly should be done here. The distlib code uses the latest PEP information.
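(Editorial aside, for readers following the thread: the setuptools-era script metadata being discussed here is an `entry_points.txt` INI file shipped inside the wheel's `.dist-info` directory, which installers read to generate wrapper scripts; the draft PEP 426 moves the same information into `pydist.json` under the "commands" key mentioned above. The `spam = spam:main` mapping below is an illustrative assumption, not an example taken from the thread.)

```ini
; entry_points.txt inside spam-1.0.dist-info/
; each entry maps a wrapper-script name to "module:callable"
[console_scripts]
spam = spam:main
```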
Since wheel is moving to pydist.json, I assume that (when it gets around to it) it will have the relevant scripts info in pydist.json, so I haven't implemented using scripts declared in entry_points.txt in distlib for that reason. Regards, Vinay Sajip From p.f.moore at gmail.com Wed Aug 21 14:01:22 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 13:01:22 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: On 21 August 2013 12:22, Vinay Sajip wrote: > Paul Moore gmail.com> writes: > > > Another one IIRC was that distlib didn't put entry-points.txt in the > > .dist-info directory in the wheel (which breaks entry points). I think > > that's fixed now (and again, the Wheel spec is silent on what is correct > > behaviour here). > > Right. The recent PEP 426 updates with the "commands" key supersedes the > entry points stuff, so I'm not sure what exactly should be done here. The > distlib code uses the latest PEP information. Since wheel is moving to > pydist.json, I assume that (when it gets around to it) it will have the > relevant scripts info in pydist.json, so I haven't implemented using > scripts > declared in entry_points.txt in distlib for that reason. OK, I see what you're saying here. But the Wheel 1.0 spec says that metadata is in the METADATA file (and comes from PKG-INFO). So my reading of that means that Metadata 1.x will remain valid for the foreseeable future (sure, Metadata 2.0 may become acceptable *as well* but the point of having a Wheel 1.0 spec is that we won't stop supporting it for some time yet). 
So you need to have a pre-2.0 solution in place, and while entry-points.txt isn't explicitly stated in the wheel PEP, it's the obvious equivalent. If you want to say distlib won't support pre-Metadata 2.0 specifications of script metadata, then that's your choice - it's not contrary to the standards but I'd view it as a quality of implementation choice. I view the underspecification in the Wheel 1.0 spec as similarly a quality of detail issue, and I'd expect to fix it in either an update to Wheel 1.0, or a Wheel 1.1 which does not make the jump to pure Metadata 2.0. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 21 14:15:51 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 13:15:51 +0100 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) Message-ID: On 21 August 2013 13:01, Paul Moore wrote: > If you want to say distlib won't support pre-Metadata 2.0 specifications > of script metadata, then that's your choice - it's not contrary to the > standards but I'd view it as a quality of implementation choice. I view the > underspecification in the Wheel 1.0 spec as similarly a quality of detail > issue, and I'd expect to fix it in either an update to Wheel 1.0, or a > Wheel 1.1 which does not make the jump to pure Metadata 2.0. Ignore this (to an extent). I've just seen your comments on the distlib issue. For those not following along, both wheel and distlib currently process a pydist.json file in the wheel dist-info directory. The wheel project doesn't put entry-points.txt data in there yet, but distlib expects to find script metadata there. So the incompatibility here is merely one of two implementations being out of sync - no big deal. BUT, this means that there is no spec of the current behaviour, and no implementation of the Wheel 1.0 spec anywhere. 
Either there is movement behind the scenes that I'm not aware of, and I'm jumping the gun in trying to make sense of the current state of play, or the wheel spec needs a review reasonably soon. Whichever is the case, I don't think that my current investigations are adding anything productive here. So I'm going to take a step back from all of this for a week or two and see what develops. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 14:41:37 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 12:41:37 +0000 (UTC) Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) References: Message-ID: Paul Moore gmail.com> writes: > BUT, this means that there is no spec of the current behaviour, and no implementation of the Wheel 1.0 spec anywhere. [snip] > or the wheel spec needs a review reasonably soon. I think it's this. I'm not sure to what extent wheels are being used in anger out there, but it would make sense to review the spec in light of PEP 426 developments and release a 1.1. Regards, Vinay Sajip From oscar.j.benjamin at gmail.com Wed Aug 21 14:59:30 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 13:59:30 +0100 Subject: [Distutils] Installing from a wheel Message-ID: This is the first time that I've tested using wheels and I have a couple of questions. Here's what I did (is this right?): $ cat spam.py # spam.py print('running spam from:', __file__) $ cat setup.py from setuptools import setup setup(name='spam', version='1.0', py_modules=['spam']) $ python setup.py bdist_wheel running bdist_wheel ... creating build\bdist.win32\wheel\spam-1.0.dist-info\WHEEL $ ls build dist setup.py spam.egg-info spam.py $ ls dist/ spam-1.0-py27-none-any.whl Okay, so far so good. I have the wheel and everything makes sense. 
Now I want to test installing it: $ wheel install --wheel-dir=./dist/ spam The line above gives no output. I expect something like 'installing spam... installed.'. It also ran so quickly that I thought that nothing had happened. A quick check reveals that the module was installed: $ cd ~ $ python -m spam ('running spam from:', 'q:\\tools\\Python27\\lib\\site-packages\\spam.py') $ pip list | grep spam spam (1.0) So now how do I uninstall it? $ pip uninstall spam Can't uninstall 'spam'. No files were found to uninstall. The wheel command doesn't seem to have an uninstall option either. Oscar From vinay_sajip at yahoo.co.uk Wed Aug 21 15:02:41 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 13:02:41 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I think the way you view distlib and the way other are viewing distlib are > different (and that's ok). We just need to know what distlib is so we can > have reasonable expectations of it. What i'm getting from you is that, at > least right now, distlib isn't what I thought it was and I (and others) should > stop treating it as *the* reference implementation of the new standards. > > I'm not trying to stop you from innovating, I'm just trying to make sure everyone > has reasonable expectations all around. I don't see how anyone is treating it as a reference implementation, other than just talking about it being such. 
I have had very good feedback from one or two people who are using it, but for the most part I see no evidence that people on this list are using it to the extent they would if they really thought "this might be a good candidate for a reference implementation - I see it's early days, and there's work to do, but it seems usable, so let me check it out and see that my use cases are covered, and if anything's been overlooked". In my view, nothing deserves to be a considered a "reference implementation" other than through merit, or perhaps by "being number one in a field of one". Merit isn't earned unless the software is used and refined based on real-world experience. The time to try distlib is now (not in production environments, obviously), to allow any problems with it to be identified early in its life. That would be more helpful than any sniping from the sidelines about whether it does more than what it needs to for PEP conformance. Regards, Vinay Sajip From p.f.moore at gmail.com Wed Aug 21 15:08:18 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 14:08:18 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 13:59, Oscar Benjamin wrote: > This is the first time that I've tested using wheels and I have a > couple of questions. > > Here's what I did (is this right?): > > $ cat spam.py > # spam.py > print('running spam from:', __file__) > $ cat setup.py > from setuptools import setup > > setup(name='spam', > version='1.0', > py_modules=['spam']) > > $ python setup.py bdist_wheel > running bdist_wheel > ... > creating build\bdist.win32\wheel\spam-1.0.dist-info\WHEEL > $ ls > build dist setup.py spam.egg-info spam.py > $ ls dist/ > spam-1.0-py27-none-any.whl > > Okay, so far so good. I have the wheel and everything makes sense. Looks good. You might want to add the (undocumented) universal flag to setup.cfg, as your wheel is Python only and works for Python 2 and 3, and so not version-specific. 
setup.cfg: [wheel] universal=1 > Now > I want to test installing it: > > $ wheel install --wheel-dir=./dist/ spam > > The line above gives no output. I expect something like 'installing > spam... installed.'. It also ran so quickly that I thought that > nothing had happened. > > A quick check reveals that the module was installed: > > $ cd ~ > $ python -m spam > ('running spam from:', 'q:\\tools\\Python27\\lib\\site-packages\\spam.py') > $ pip list | grep spam > spam (1.0) > Looks good. I thought wheel install gave some progress output, but it's a long time since I used it and I may be misremembering. You can also use pip install --use-wheel if you prefer (assuming you have pip 1.4+) So now how do I uninstall it? > > $ pip uninstall spam > Can't uninstall 'spam'. No files were found to uninstall. > > The wheel command doesn't seem to have an uninstall option either > Odd. pip uninstall should work. Can you confirm your version of pip and wheel? And can you list the contents of the spam-1.0.dist-info directory in your site-packages? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed Aug 21 15:24:14 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 21 Aug 2013 09:24:14 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: On Wed, Aug 21, 2013 at 8:01 AM, Paul Moore wrote: > On 21 August 2013 12:22, Vinay Sajip wrote: >> >> Paul Moore gmail.com> writes: >> >> > Another one IIRC was that distlib didn't put entry-points.txt in the >> > .dist-info directory in the wheel (which breaks entry points). 
I think >> > that's fixed now (and again, the Wheel spec is silent on what is correct >> > behaviour here). >> >> Right. The recent PEP 426 updates with the "commands" key supersedes the >> entry points stuff, so I'm not sure what exactly should be done here. The >> distlib code uses the latest PEP information. Since wheel is moving to >> pydist.json, I assume that (when it gets around to it) it will have the >> relevant scripts info in pydist.json, so I haven't implemented using >> scripts >> declared in entry_points.txt in distlib for that reason. > > > OK, I see what you're saying here. But the Wheel 1.0 spec says that metadata > is in the METADATA file (and comes from PKG-INFO). So my reading of that > means that Metadata 1.x will remain valid for the foreseeable future (sure, > Metadata 2.0 may become acceptable *as well* but the point of having a Wheel > 1.0 spec is that we won't stop supporting it for some time yet). So you need > to have a pre-2.0 solution in place, and while entry-points.txt isn't > explicitly stated in the wheel PEP, it's the obvious equivalent. > > If you want to say distlib won't support pre-Metadata 2.0 specifications of > script metadata, then that's your choice - it's not contrary to the > standards but I'd view it as a quality of implementation choice. I view the > underspecification in the Wheel 1.0 spec as similarly a quality of detail > issue, and I'd expect to fix it in either an update to Wheel 1.0, or a Wheel > 1.1 which does not make the jump to pure Metadata 2.0. > > Paul The wheel spec should probably say that its .dist-info directory should conform to a .dist-info PEP rather than saying it contains any particular files like METADATA. The only files that belong to wheel itself are the manifest and the WHEEL file that contains the version of the wheel format itself. Before the (useful) scripts wrapper feature, the wheel installer did not need to look at the PEP 426 or setuptools metadata at all. 
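(Editorial aside: Daniel's point can be made concrete. Because a pure-Python wheel's layout mirrors site-packages, "installing" one can in principle be a bare extraction. The sketch below is a toy under that assumption, not how pip or distlib actually behave; a real installer must also choose the right site-packages directory, regenerate RECORD, and create script wrappers. The file name echoes Oscar's spam example elsewhere in the thread.)

```python
import zipfile

def unpack_wheel(wheel_path, target_dir):
    """Toy 'installer' for a pure-Python wheel.

    A wheel is a plain zip whose layout mirrors site-packages, so
    extracting it places both the module and its .dist-info metadata.
    """
    with zipfile.ZipFile(wheel_path) as wf:
        wf.extractall(target_dir)

# e.g. unpack_wheel('spam-1.0-py27-none-any.whl', site_packages_dir)
```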
You can install them with unzip. The wheel command line tool lately has a "wheel install-scripts [package]" command that uses setuptools to rewrite entry-points script wrappers for an installed package. It is a proof of concept but it comes in handy. When pip is updated to be able to generate script wrappers at install time then a simultaneously released bdist_wheel will be able to update its default behavior to defer script generation and produce the newer metadata. In the meantime scripts are imperfect. Egg's controversial features like being a valid sys.path entry were not really left out of wheel. The format could have been a .tar file after all. Instead, wheel's documentation emphasizes what turned out to be the least problematic use pattern as an installation format. Egg got there from the other direction starting as a plugin container. From donald at stufft.io Wed Aug 21 15:24:49 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 09:24:49 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> Message-ID: <87DB8E6B-CE3C-489E-9769-2B908E45ECF6@stufft.io> On Aug 21, 2013, at 9:02 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I think the way you view distlib and the way other are viewing distlib are >> different (and that's ok). We just need to know what distlib is so we can >> have reasonable expectations of it. What i'm getting from you is that, at >> least right now, distlib isn't what I thought it was and I (and others) should >> stop treating it as *the* reference implementation of the new standards. >> >> I'm not trying to stop you from innovating, I'm just trying to make sure > everyone >> has reasonable expectations all around. 
> > I don't see how anyone is treating it as a reference implementation, other > than just talking about it being such. I have had very good feedback from > one or two people who are using it, but for the most part I see no evidence > that people on this list are using it to the extent they would if they > really thought "this might be a good candidate for a reference > implementation - I see it's early days, and there's work to do, but it seems > usable, so let me check it out and see that my use cases are covered, and if > anything's been overlooked". > > In my view, nothing deserves to be a considered a "reference implementation" > other than through merit, or perhaps by "being number one in a field of > one". Merit isn't earned unless the software is used and refined based on > real-world experience. The time to try distlib is now (not in production > environments, obviously), to allow any problems with it to be identified > early in its life. That would be more helpful than any sniping from the > sidelines about whether it does more than what it needs to for PEP conformance. > > Regards, > > Vinay Sajip > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I think you're using a completely different definition of "reference implementation" than I've ever seen used. A reference implementation by definition cannot contain customizations or additions or extensions from the spec. The entire *point* of a reference implementation is to act as programatic reference to the spec. Something being the reference implementation does not speak to the quality of the implementation and as such it may not be the *best* implementation. It becomes extremely useful however when you want to test conformance against the spec because it gives you a baseline with which to test against. 
Instead of needing to test against N different implementations people wanting to work with Wheel would only need to test against the reference implementation, and if that works then they can assume that their code will work against any other implementation of Wheel that properly implements the standard. If you don't have a reference implementation then people need to interpret the standards on their own and hopefully get it right. An example is the wsgiref from the standard library. Very few projects actively use wsgiref for much at all if they use it at all. However it's existence means that web servers like gunicorn, mod_wsgi etc can simply test against it instead of needing to test against every implementation of WSGI. Which implementation is used (and ultimately possibly enshrined in the standard library) is decided through merit. Which implementation is used as the reference implementation is typically decided by the standards body (in this case, distutils-sig or Nick or whoever). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oscar.j.benjamin at gmail.com Wed Aug 21 15:28:57 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 14:28:57 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 14:08, Paul Moore wrote: > On 21 August 2013 13:59, Oscar Benjamin wrote: >> >> $ cat spam.py >> # spam.py >> print('running spam from:', __file__) [snip] > > Looks good. You might want to add the (undocumented) universal flag to > setup.cfg, as your wheel is Python only and works for Python 2 and 3, and so > not version-specific. 
Really I need to import print_function for that universality to be true but we'll overlook that :) > setup.cfg: > > [wheel] > universal=1 Okay so I need setup.cfg as well as setup.py. [snip] > > Looks good. I thought wheel install gave some progress output, but it's a > long time since I used it and I may be misremembering. You can also use pip > install --use-wheel if you prefer (assuming you have pip 1.4+) Okay, that's good. I'd rather just use the pip command than use wheel directly. >> So now how do I uninstall it? >> >> $ pip uninstall spam >> Can't uninstall 'spam'. No files were found to uninstall. >> >> The wheel command doesn't seem to have an uninstall option either > > Odd. pip uninstall should work. Can you confirm your version of pip and > wheel? And can you list the contents of the spam-1.0.dist-info directory in > your site-packages? $ pip --version pip 1.3.1 from q:\tools\python27\lib\site-packages\pip-1.3.1-py2.7.egg (python 2.7) $ wheel --version # This gives a usage message >>> import wheel >>> wheel.__version__ '0.21.0' $ ls /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/ DESCRIPTION.rst METADATA RECORD WHEEL pydist.json top_level.txt $ cat /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/* UNKNOWN Metadata-Version: 2.0 Name: spam Version: 1.0 Summary: UNKNOWN Home-page: UNKNOWN Author: UNKNOWN Author-email: UNKNOWN License: UNKNOWN Platform: UNKNOWN UNKNOWN spam-1.0.dist-info\DESCRIPTION.rst,sha256=OCTuuN6LcWulhHS3d5rfjdsQtW22n7HENFRh6jC6ego,10 spam-1.0.dist-info\METADATA,sha256=N7NDv-twCNGywvm1HXdz67MoFL4xIUoT5p39--tGGB8,179 spam-1.0.dist-info\WHEEL,sha256=ceN1GNMAiWCEADx3_5pdpmZwt4A_AtSxSxYSCyHhhPw,98 spam-1.0.dist-info\pydist.json,sha256=rptnmxTtRo0YZfBQZbIxMdHWDAg48f0UhCDmdymzHbk,174 spam-1.0.dist-info\top_level.txt,sha256=KE4wKczjrl7gsFhmEA4wAEY1n1OuTHf-azTAWqenLO4,5 spam.py,sha256=_5V9b8A2xHt-590km2JzJniHeWIiXbdU_wVHONhTzms,48 spam-1.0.dist-info/RECORD,, Wheel-Version: 1.0 Generator: bdist_wheel (0.21.0) Root-Is-Purelib: 
true Tag: py27-none-any {"document_names": {"description": "DESCRIPTION.rst"}, "name": "spam", "metadata_version": "2.0", "generator": "bdist_wheel (0.21.0)", "summary": "UNKNOWN", "version": "1.0"}spam So I tried updating everything e.g.: $ pip install -U wheel pip setuptools Requirement already up-to-date: wheel in q:\tools\python27\lib\site-packages Downloading/unpacking pip from https://pypi.python.org/packages/source/p/pip/pip-1.4.1.tar.gz#md5=6afbb46aeb48abac658d4df742bff714 Downloading pip-1.4.1.tar.gz (445kB): 445kB downloaded Running setup.py egg_info for package pip warning: no files found matching '*.html' under directory 'docs' warning: no previously-included files matching '*.rst' found under directory 'docs\_build' no previously-included directories found matching 'docs\_build\_sources' Downloading/unpacking distribute from https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a Downloading distribute-0.7.3.zip (145kB): 145kB downloaded Running setup.py egg_info for package distribute Downloading/unpacking setuptools>=0.7 from https://pypi.python.org/packages/source/s/setuptools/setuptools-1.0.tar.gz#md5=3d196ffb6e5e4425daddbb4fe42a4a74 (from distribute) Downloading setuptools-1.0.tar.gz (679kB): 679kB downloaded Running setup.py egg_info for package setuptools Installing collected packages: pip, distribute, setuptools Found existing installation: pip 1.3.1 Uninstalling pip: Successfully uninstalled pip Running setup.py install for pip warning: no files found matching '*.html' under directory 'docs' warning: no previously-included files matching '*.rst' found under directory 'docs\_build' no previously-included directories found matching 'docs\_build\_sources' Installing pip-script.py script to q:\tools\Python27\Scripts Installing pip.exe script to q:\tools\Python27\Scripts Installing pip.exe.manifest script to q:\tools\Python27\Scripts Installing pip-2.7-script.py script to 
q:\tools\Python27\Scripts
Installing pip-2.7.exe script to q:\tools\Python27\Scripts
Installing pip-2.7.exe.manifest script to q:\tools\Python27\Scripts
Exception:
Traceback (most recent call last):
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\basecommand.py", line 139, in main
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\commands\install.py", line 271, in run
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\req.py", line 1193, in install
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\req.py", line 507, in commit_uninstall
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\req.py", line 1542, in commit
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\util.py", line 41, in rmtree
  File "q:\tools\Python27\lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "q:\tools\Python27\lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "q:\tools\Python27\lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "q:\tools\Python27\lib\shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "q:\tools\Python27\lib\site-packages\pip-1.3.1-py2.7.egg\pip\util.py", line 60, in rmtree_errorhandler
WindowsError: [Error 5] Access is denied: 'c:\\docume~1\\enojb\\locals~1\\temp\\pip-6echt4-uninstall\\tools\\python27\\scripts\\pip.exe'
Storing complete log in c:/Documents and Settings/enojb\pip\pip.log

Does that mean that pip is broken now? Or is it just that the .exe wasn't replaced?
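One way to answer that question without relying on the launcher at all is to run pip as a module under the interpreter, which sidesteps a stale or locked pip.exe entirely. A minimal sketch (written for modern Python; the thread itself is on Python 2.7, where the same idea applies via `python -m pip`):

```python
# If this reports a version, the pip *package* is intact even when the
# pip.exe launcher couldn't be replaced during the upgrade.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())
```

Because this goes through `sys.executable`, it also tells you which interpreter's pip you are exercising, which matters when several Pythons are on PATH.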
After this I deleted the site-packages/spam* files, rebuilt the wheel, used wheel to install again and now I get:

$ pip list
Cython (0.19.1)
distribute (0.6.40)
docutils (0.11)
ipython (0.13.2)
Jinja2 (2.7.1)
line-profiler (1.0b3)
MarkupSafe (0.18)
matplotlib (1.2.1)
mpmath (0.17)
numpy (1.7.1)
pip (1.4.1)
Pygments (1.6)
pyreadline (2.0)
scipy (0.12.0)
Exception:
Traceback (most recent call last):
  File "q:\tools\Python27\lib\site-packages\pip\basecommand.py", line 134, in main
    status = self.run(options, args)
  File "q:\tools\Python27\lib\site-packages\pip\commands\list.py", line 80, in run
    self.run_listing(options)
  File "q:\tools\Python27\lib\site-packages\pip\commands\list.py", line 127, in run_listing
    self.output_package_listing(installed_packages)
  File "q:\tools\Python27\lib\site-packages\pip\commands\list.py", line 136, in output_package_listing
    if dist_is_editable(dist):
  File "q:\tools\Python27\lib\site-packages\pip\util.py", line 347, in dist_is_editable
    req = FrozenRequirement.from_dist(dist, [])
  File "q:\tools\Python27\lib\site-packages\pip\__init__.py", line 194, in from_dist
    assert len(specs) == 1 and specs[0][0] == '=='
AssertionError
Storing complete log in c:/Documents and Settings/enojb\pip\pip.log

Is it because of distribute? I didn't think I'd installed that.
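The "is distribute actually installed?" question can be settled without going through pip at all, by listing installed distributions directly. A sketch using the stdlib `importlib.metadata` (Python 3.8+; in the 2013-era Python of this thread the equivalent was iterating `pkg_resources.working_set`):

```python
# Enumerate installed distributions straight from the metadata on disk,
# bypassing pip entirely -- useful when pip itself is misbehaving.
from importlib import metadata

names = {(dist.metadata["Name"] or "") for dist in metadata.distributions()}
names.discard("")  # skip any distribution with unreadable metadata
installed = sorted(n.lower() for n in names)

print(installed)
print("distribute installed:", "distribute" in installed)
```

This reads the same .dist-info / .egg-info directories that pip consults, so a stray distribute egg left in site-packages shows up here even when `pip list` crashes.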
Here's how dist-info looks now: $ ls /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/ DESCRIPTION.rst METADATA RECORD WHEEL pydist.json top_level.txt $ cat /q/tools/Python27/Lib/site-packages/spam-1.0.dist-info/* UNKNOWN Metadata-Version: 2.0 Name: spam Version: 1.0 Summary: UNKNOWN Home-page: UNKNOWN Author: UNKNOWN Author-email: UNKNOWN License: UNKNOWN Platform: UNKNOWN UNKNOWN spam-1.0.dist-info\DESCRIPTION.rst,sha256=OCTuuN6LcWulhHS3d5rfjdsQtW22n7HENFRh6jC6ego,10 spam-1.0.dist-info\METADATA,sha256=N7NDv-twCNGywvm1HXdz67MoFL4xIUoT5p39--tGGB8,179 spam-1.0.dist-info\WHEEL,sha256=ceN1GNMAiWCEADx3_5pdpmZwt4A_AtSxSxYSCyHhhPw,98 spam-1.0.dist-info\pydist.json,sha256=rptnmxTtRo0YZfBQZbIxMdHWDAg48f0UhCDmdymzHbk,174 spam-1.0.dist-info\top_level.txt,sha256=KE4wKczjrl7gsFhmEA4wAEY1n1OuTHf-azTAWqenLO4,5 spam.py,sha256=_5V9b8A2xHt-590km2JzJniHeWIiXbdU_wVHONhTzms,48 spam-1.0.dist-info/RECORD,, Wheel-Version: 1.0 Generator: bdist_wheel (0.21.0) Root-Is-Purelib: true Tag: py27-none-any {"document_names": {"description": "DESCRIPTION.rst"}, "name": "spam", "metadata_version": "2.0", "generator": "bdist_wheel (0.21.0)", "summary": "UNKNOWN", "version": "1.0"}spam Oscar From p.f.moore at gmail.com Wed Aug 21 15:56:53 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 14:56:53 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 14:28, Oscar Benjamin wrote: > So I tried updating everything e.g.: > > $ pip install -U wheel pip setuptools > [lots omitted for brevity] Some thoughts. pip 1.3.1 predates pip's wheel support so you wouldn't have had pip install --use-wheel there. The upgrade error may have been because pip install -U pip tries to install a new pip.exe while pip.exe is in use. The error might not be too bad (pip.exe doesn't actually need to change). For safety, "python -m pip install -U pip --force-reinstall" might be worth doing. 
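As an aside on the RECORD listings quoted above: the `sha256=...` values are SHA-256 digests encoded with urlsafe base64 and the trailing `=` padding stripped, per the wheel format. A minimal sketch of how those entries are computed:

```python
# Compute a RECORD-style file hash: urlsafe-base64(sha256(contents))
# with '=' padding removed, as used in wheel RECORD files.
import base64
import hashlib

def record_hash(data):
    digest = hashlib.sha256(data).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

print(record_hash(b""))  # hash of an empty file
```

The urlsafe alphabet (`-` and `_` instead of `+` and `/`) is why these strings are safe to embed in the comma-separated RECORD format and in filenames.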
You quite probably shouldn't have upgraded setuptools like you did. It looks like you had a pre-merge version, and upgrading across the distribute merge appears to be fun (I have personally never encountered that particular flavour of fun, but that's what I'm led to believe). For safety you should check your site-packages for setuptools and distribute installations. Maybe manually remove distribute if present, and then "python -m pip install -U setuptools --force-reinstall" (don't do a combined run of pip and setuptools together, that's one of the scary failure modes IIRC). pip 1.4.1 should be able to pip uninstall a distribution installed from a wheel (but TBH, I would have expected 1.3.1 to be able to, as well. The installed data looked OK). For people watching at home, upgrading pip really isn't this scary :-) I'm just making it sound scary (a) because I don't know the precise upgrade instructions for setuptools and (b) because you need to do setuptools and pip separately, and use python -m pip on Windows to upgrade pip (neither of which are immediately obvious). Let me know how you get on. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From oscar.j.benjamin at gmail.com Wed Aug 21 16:48:45 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 15:48:45 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 14:56, Paul Moore wrote: > On 21 August 2013 14:28, Oscar Benjamin wrote: >> >> So I tried updating everything e.g.: >> >> $ pip install -U wheel pip setuptools > > [lots omitted for brevity] > > Some thoughts. > > pip 1.3.1 predates pip's wheel support so you wouldn't have had pip install > --use-wheel there. > > The upgrade error may have been because pip install -U pip tries to install > a new pip.exe while pip.exe is in use. The error might not be too bad > (pip.exe doesn't actually need to change). 
Maybe, although the path c:\\docume~1\\enojb\\locals~1\\temp\\pip-6echt4-uninstall\\tools\\python27\\scripts\\pip.exe is not the location of the pip I used and is not on PATH. > For safety, "python -m pip install -U pip --force-reinstall" might be worth > doing. Okay done. Seems to work fine. > You quite probably shouldn't have upgraded setuptools like you did. It looks > like you had a pre-merge version, and upgrading across the distribute merge > appears to be fun (I have personally never encountered that particular > flavour of fun, but that's what I'm led to believe). This is not an old Python installation. I installed this as a clean installation to test the patches I uploaded for issue12641 2-3 months ago. I wouldn't have deliberately installed distribute (I know it's obsoleted) so I don't know how it got there. > For safety you should > check your site-packages for setuptools and distribute installations. Maybe > manually remove distribute if present, I got this far: $ rm -r /q/tools/Python27/Lib/site-packages/distribute-0.6.40-py2.7.egg/ and then $ pip Traceback (most recent call last): File "q:\tools\Python27\Scripts\pip-script.py", line 5, in from pkg_resources import load_entry_point ImportError: No module named pkg_resources > and then "python -m pip install -U setuptools --force-reinstall" Alas, this one doesn't work any more either: $ python -m pip q:\tools\Python27\python.exe: No module named pkg_resources; 'pip' is a package and cannot be directly executed > (don't do a combined run of pip and setuptools > together, that's one of the scary failure modes IIRC). Okay so I manually deleted everything for pip/setuptools/distribute/easy_install from Scripts and site-packages and started again. Following the instructions for install pip and setuptools for Windows I downloaded and ran ez_setup.py followed by get-pip.py. 
Then I got this error:

$ py -2.7 get-pip.py
Downloading/unpacking pip
  pip can't proceed with requirement 'pip' due to a pre-existing build directory.
  location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip
  This is likely due to a previous installation that failed.
  pip is being responsible and not assuming it can delete this.
  Please delete it and try again.
Cleaning up...
Exception:
Traceback (most recent call last):
  File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\basecommand.py", line 134, in main
    status = self.run(options, args)
  File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\commands\install.py", line 236, in run
    requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
  File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\req.py", line 1071, in prepare_files
    raise e
PreviousBuildDirError: pip can't proceed with requirement 'pip' due to a pre-existing build directory.
location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.

The path it refers to doesn't exist but deleting a similar directory gets it working:

$ rm -r ~/Local\ Settings/temp/pip-6echt4-uninstall/
enojb at ENM-OB:/q$ py -2.7 get-pip.py
Downloading/unpacking pip
  Downloading pip-1.4.1.tar.gz (445kB): 445kB downloaded
  Running setup.py egg_info for package pip
...

Okay so now I'm back in business ('pip list' works etc.).
Yes it can: $ pip uninstall spam Uninstalling spam: q:\tools\python27\lib\site-packages\spam-1.0.dist-info\description.rst q:\tools\python27\lib\site-packages\spam-1.0.dist-info\metadata q:\tools\python27\lib\site-packages\spam-1.0.dist-info\pydist.json q:\tools\python27\lib\site-packages\spam-1.0.dist-info\record q:\tools\python27\lib\site-packages\spam-1.0.dist-info\top_level.txt q:\tools\python27\lib\site-packages\spam-1.0.dist-info\wheel q:\tools\python27\lib\site-packages\spam.py Proceed (y/n)? y Successfully uninstalled spam Also 'pip install --use-wheel' has a more intuitive interface than wheel: $ pip install --use-wheel dist/spam-1.0-py27-none-any.whl Unpacking .\dist\spam-1.0-py27-none-any.whl Installing collected packages: spam Successfully installed spam Cleaning up... > For people watching at home, upgrading pip really isn't this scary :-) I'm > just making it sound scary (a) because I don't know the precise upgrade > instructions for setuptools and (b) because you need to do setuptools and > pip separately, and use python -m pip on Windows to upgrade pip (neither of > which are immediately obvious). Is it perhaps safer to suggest the following? a) uninstall pip/setuptools/distribute b) run ez_setup.py c) run get-pip.py That's what I just did and it worked fine. Perhaps there could be one script that does steps a), b) and c) to smooth the path for anyone upgrading? Oscar From dholth at gmail.com Wed Aug 21 16:57:08 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 21 Aug 2013 10:57:08 -0400 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: A fresh virtualenv would have been the humane way to get a working 'pip install wheel'. Wheel's built in installer isn't intended to replace or be better than pip in any way. It's just for reference or bootstrapping. FYI if you point pip directly at the .whl file you can omit --use-wheel. PS I didn't mean to release the wheel 0.21.0 .whl sans scripts. 
Sorry Paul if that's caused confusion. It is just a bug in the released archive. On Wed, Aug 21, 2013 at 10:48 AM, Oscar Benjamin wrote: > On 21 August 2013 14:56, Paul Moore wrote: >> On 21 August 2013 14:28, Oscar Benjamin wrote: >>> >>> So I tried updating everything e.g.: >>> >>> $ pip install -U wheel pip setuptools >> >> [lots omitted for brevity] >> >> Some thoughts. >> >> pip 1.3.1 predates pip's wheel support so you wouldn't have had pip install >> --use-wheel there. >> >> The upgrade error may have been because pip install -U pip tries to install >> a new pip.exe while pip.exe is in use. The error might not be too bad >> (pip.exe doesn't actually need to change). > > Maybe, although the path > c:\\docume~1\\enojb\\locals~1\\temp\\pip-6echt4-uninstall\\tools\\python27\\scripts\\pip.exe > is not the location of the pip I used and is not on PATH. > >> For safety, "python -m pip install -U pip --force-reinstall" might be worth >> doing. > > Okay done. Seems to work fine. > >> You quite probably shouldn't have upgraded setuptools like you did. It looks >> like you had a pre-merge version, and upgrading across the distribute merge >> appears to be fun (I have personally never encountered that particular >> flavour of fun, but that's what I'm led to believe). > > This is not an old Python installation. I installed this as a clean > installation to test the patches I uploaded for issue12641 2-3 months > ago. I wouldn't have deliberately installed distribute (I know it's > obsoleted) so I don't know how it got there. > >> For safety you should >> check your site-packages for setuptools and distribute installations. 
Maybe >> manually remove distribute if present, > > I got this far: > $ rm -r /q/tools/Python27/Lib/site-packages/distribute-0.6.40-py2.7.egg/ > > and then > $ pip > Traceback (most recent call last): > File "q:\tools\Python27\Scripts\pip-script.py", line 5, in > from pkg_resources import load_entry_point > ImportError: No module named pkg_resources > >> and then "python -m pip install -U setuptools --force-reinstall" > > Alas, this one doesn't work any more either: > $ python -m pip > q:\tools\Python27\python.exe: No module named pkg_resources; 'pip' is > a package and cannot be directly executed > >> (don't do a combined run of pip and setuptools >> together, that's one of the scary failure modes IIRC). > > Okay so I manually deleted everything for > pip/setuptools/distribute/easy_install from Scripts and site-packages > and started again. Following the instructions for install pip and > setuptools for Windows I downloaded and ran ez_setup.py followed by > get-pip.py. Then I got this error: > > $ py -2.7 get-pip.py > Downloading/unpacking pip > > pip can't proceed with requirement 'pip' due to a pre-existing build directory. > location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip > This is likely due to a previous installation that failed. > pip is being responsible and not assuming it can delete this. > Please delete it and try again. > > Cleaning up... 
> Exception: > Traceback (most recent call last): > File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\basecommand.py", > line 134, in main > status = self.run(options, args) > File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\commands\install.py", > line 236, in run > requirement_set.prepare_files(finder, > force_root_egg_info=self.bundle, bundle=self.bundle) > File "c:\docume~1\enojb\locals~1\temp\unpacker-c3e81q-scratchdir\pip\req.py", > line 1071, in prepare_files > raise e > PreviousBuildDirError: > pip can't proceed with requirement 'pip' due to a pre-existing build directory. > location: c:\docume~1\enojb\locals~1\temp\pip-build-enojb\pip > This is likely due to a previous installation that failed. > pip is being responsible and not assuming it can delete this. > Please delete it and try again. > > The path it refers to doesn't exist but deleting a similar directory > gets it working: > > $ rm -r ~/Local\ Settings/temp/pip-6echt4-uninstall/ > enojb at ENM-OB:/q$ py -2.7 get-pip.py > Downloading/unpacking pip > Downloading pip-1.4.1.tar.gz (445kB): 445kB downloaded > Running setup.py egg_info for package pip > ... > > Okay so now I'm back in business ('pip list' works etc.). > >> pip 1.4.1 should be able to pip uninstall a distribution installed from a >> wheel (but TBH, I would have expected 1.3.1 to be able to, as well. The >> installed data looked OK). > > Yes it can: > > $ pip uninstall spam > Uninstalling spam: > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\description.rst > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\metadata > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\pydist.json > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\record > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\top_level.txt > q:\tools\python27\lib\site-packages\spam-1.0.dist-info\wheel > q:\tools\python27\lib\site-packages\spam.py > Proceed (y/n)? 
y > Successfully uninstalled spam > > Also 'pip install --use-wheel' has a more intuitive interface than wheel: > > $ pip install --use-wheel dist/spam-1.0-py27-none-any.whl > Unpacking .\dist\spam-1.0-py27-none-any.whl > Installing collected packages: spam > Successfully installed spam > Cleaning up... > >> For people watching at home, upgrading pip really isn't this scary :-) I'm >> just making it sound scary (a) because I don't know the precise upgrade >> instructions for setuptools and (b) because you need to do setuptools and >> pip separately, and use python -m pip on Windows to upgrade pip (neither of >> which are immediately obvious). > > Is it perhaps safer to suggest the following? > a) uninstall pip/setuptools/distribute > b) run ez_setup.py > c) run get-pip.py > > That's what I just did and it worked fine. Perhaps there could be one > script that does steps a), b) and c) to smooth the path for anyone > upgrading? > > > Oscar > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Wed Aug 21 16:57:54 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 15:57:54 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 15:48, Oscar Benjamin wrote: > > For people watching at home, upgrading pip really isn't this scary :-) > I'm > > just making it sound scary (a) because I don't know the precise upgrade > > instructions for setuptools and (b) because you need to do setuptools and > > pip separately, and use python -m pip on Windows to upgrade pip (neither > of > > which are immediately obvious). > > Is it perhaps safer to suggest the following? > a) uninstall pip/setuptools/distribute > b) run ez_setup.py > c) run get-pip.py > > That's what I just did and it worked fine. 
Perhaps there could be one > script that does steps a), b) and c) to smooth the path for anyone > upgrading? It probably is. I've heard concerns that people want to avoid suggesting manual uninstalls and having to download the setup scripts. But it seems simple enough to me. (What would I know, I just run virtualenv and leave it at that :-)) Glad it worked in the end, anyway, and sorry if my instructions made it harder than it needed to be. As regards distribute, I suspect that the reason you hit issues is that if you have a setuptools that's older than 0.7 (or whatever the first merged version was) then an upgrade can end up jumping through some hoops and going through a "dummy" distribute version that's there to handle the fork/re-merge somehow. I honestly don't know how it all works, I'm just going off what I saw on some of the discussions on pypa-dev at the time. It all sounded very clever to me, but a bit fragile. I'm a simple soul, and prefer to just wipe it out and reinstall, so I zoned out after a while:-) I doubt the details matter to you now, though... Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 17:30:03 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 15:30:03 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <87DB8E6B-CE3C-489E-9769-2B908E45ECF6@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > I think you're using a completely different definition of "reference > implementation" than I've ever seen used. A reference implementation Quite possibly, but I feel justified in this case ... I'll say why below. > by definition cannot contain customizations or additions or extensions > from the spec. 
The entire *point* of a reference implementation is to > act as programmatic reference to the spec. Something being the reference > implementation does not speak to the quality of the implementation and > as such it may not be the *best* implementation. The packaging PEPs don't go down to programmatic detail in their specification. In the stricter definition of "reference implementation" (RI) you're talking about, a spec comes with a reference implementation but also a test suite (provided by the spec author) which conforming implementations must pass. While this is de rigueur in the Java world, it's not common in the Python world. Without such a test conformance suite, I think it's reasonable to use the looser definition of RI that I did. > An example is the wsgiref from the standard library. Very few projects > actively use wsgiref for much at all if they use it at all. However its > existence means that web servers like gunicorn, mod_wsgi etc can simply > test against it instead of needing to test against every implementation > of WSGI. While you can certainly use wsgiref for interoperability testing (much as I use wheel's wheel implementation for interoperability testing with distlib) I don't think WSGI implementations run a suite of tests provided in wsgiref before claiming conformance. I'm happy to be corrected if I'm wrong about that :-) In my experience of implementing PEPs (282, 391, 397, 405) all implementations have had test suites, but none have had the tests implemented by any central authority. In discussions the relevant implementations have been referred to as RIs and that's the way I'm using it now. > Which implementation is used (and ultimately possibly enshrined in the > standard library) is decided through merit. Which implementation is > used as the reference implementation is typically decided by the standards > body (in this case, distutils-sig or Nick or whoever).
If Nick or whoever is planning to write a test suite which all implementations must pass, great. But that would (I suppose) mean tying things down to a specific Python interface at the module, class and function level - not something I've seen imposed externally before. But in the absence of such, I don't see any problem with my interpretation of RI, since the stricter interpretation only makes sense in the presence of accompanying tests generated by the spec originators. Regards, Vinay Sajip From donald at stufft.io Wed Aug 21 17:44:21 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 11:44:21 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <87DB8E6B-CE3C-489E- 9769-2B908E45ECF6@stufft.io> Message-ID: <817A73FE-384A-4A29-A1EC-E0D849A90B90@stufft.io> On Aug 21, 2013, at 11:30 AM, Vinay Sajip wrote: > Donald Stufft stufft.io> writes: > >> I think you're using a completely different definition of "reference >> implementation" than I've ever seen used. A reference implementation > > Quite possibly, but I feel justified in this case ... I'll say why below. > >> by definition cannot contain customizations or additions or extensions >> from the spec. The entire *point* of a reference implementation is to >> act as programatic reference to the spec. Something being the reference >> implementation does not speak to the quality of the implementation and >> as such it may not be the *best* implementation. > > The packaging PEPs don't do down to programmatic detail in their > specification. 
In the stricter definition of "reference implementation" (RI) > you're talking about, a spec comes with a reference implementation but also a > test suite (provided by the spec author) which conforming > implementations must pass. While this is de rigueur in the Java world, it's > not common in the Python world. Without such a test conformance suite, I > think it's reasonable to use the looser definition of RI that I did. > >> An example is the wsgiref from the standard library. Very few projects >> actively use wsgiref for much at all if they use it at all. However it's >> existence means that web servers like gunicorn, mod_wsgi etc can simply >> test against it instead of needing to test against every implementation >> of WSGI. > > While you can certainly use wsgiref for interoperability testing (much as I > use wheel's wheel implementation for interoperability testing with distlib) I > don't think WSGI implementations run a suite of tests provided in wsgiref > before claiming conformance. I'm happy to be corrected > if I'm wrong about that :-) I have no idea what the WSGI servers actually do, it was just an example. > > In my experience of implementing PEPs (282, 391, 397, 405) all > implementations have had test suites, but none have had the tests implemented > by any central authority. In discussions the relevant implementations have > been referred to as RIs and that's the way I'm using it now. None of those peps are defining a standard with a primary goal of being able to be completely replaced by a different module and still work. They are for adding a single implementation to the standard library. So it would make sense for those not to have a reference implementation because there are not any expected other implementations besides the one that got added to the stdlib. > >> Which implementation is used (and ultimately possibly enshrined in the >> standard library) is decided through merit. 
Which implementation is >> used as the reference implementation is typically decided by the standards >> body (in this case, distutils-sig or Nick or whoever). > > If Nick or whoever is planning to write a test suite which all > implementations must pass, great. But that would (I suppose) mean tying > things down to a specific Python interface at the module, class and function > level - not something I've seen imposed externally before. But in > the absence of such, I don't see any problem with my interpretation of RI, > since the stricter interpretation only makes sense in the presence of > accompanying tests generated by the spec originators. I would expect that the reference implementation would include acceptance tests that test the various specs yes. For Wheel this wouldn't (in my mind) be at the Python level (so it wouldn't encode a particular python level API) other than a very minimal interface (which may simply be a CLI script?) needed to actually power the testing but the primary use of the acceptance tests would be to verify that the implementation can create and install Wheel files to the spec. It's purpose would not be to set a specific Pythonic API as the spec because the spec is defining a file format not a Python API. Not having behavior defined by an implementation is very important to me, hence why I'm being such a stickler for wanting something that other tools can use to test against. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From carl at oddbird.net Wed Aug 21 17:52:36 2013 From: carl at oddbird.net (Carl Meyer) Date: Wed, 21 Aug 2013 09:52:36 -0600 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> Message-ID: <5214E244.1020203@oddbird.net> On 08/21/2013 03:29 AM, Donald Stufft wrote: > Can you send me a list (or post them here) of what issues you've hit? > The biggest one i'm aware of is the scripts problem which is a > fundamental problem with the 1.0 Wheel (or rather that any library with > console entry points cannot be universal). Since you asked, I'll mention the two that I've hit (though I think you're also aware of these already): 1) Wheel's conversion of - to _ in version strings embedded in filenames, which breaks with setuptools precedent; see https://github.com/pypa/pip/issues/1150 and https://bitbucket.org/dholth/wheel/issue/78/wheel-rewrites-versions-preventing 2) Wheel's decision to follow distutils' documentation rather than distutils' behavior when it comes to the location for installing data_files with relative paths; see https://bitbucket.org/dholth/wheel/issue/80/wheel-does-not-install-data_files-in-site Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From oscar.j.benjamin at gmail.com Wed Aug 21 18:21:02 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 17:21:02 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 15:57, Daniel Holth wrote: > > A fresh virtualenv would have been the humane way to get a working > 'pip install wheel'. Good point. I think I learned an important point going through that upgrade mess though: uninstall/reinstall is safer than upgrade. 
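The "fresh virtualenv" approach Daniel recommends keeps experiments like this out of the system site-packages entirely. A sketch using the stdlib `venv` module (available since Python 3.3; the 2013 thread used the third-party virtualenv script, but the idea is the same):

```python
# Create a throwaway environment for testing wheel installs; if an
# upgrade experiment goes wrong, you delete the directory and start over
# instead of repairing the system Python.
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "wheel-test-env")
venv.create(env_dir)  # add with_pip=True (Python 3.4+) to bootstrap pip
print(sorted(os.listdir(env_dir)))
```

The created directory contains its own interpreter and site-packages plus a pyvenv.cfg marker, so installs inside it never touch the Python that created it.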
> Wheel's built in installer isn't intended to replace or be better than > pip in any way. It's just for reference or bootstrapping. Fair enough. Can I suggest that it have a --version option (since it is traditional)? > FYI if you point pip directly at the .whl file you can omit --use-wheel. Okay I've just tried that and that's definitely the way I want to use it. So basically: $ python setup.py bdist_wheel # Makes wheels and $ pip install foo.whl # Installs wheels If someone wants to import the bdist_wheel command and use it outside of setuptools setup() (in the way that numpy does) where should they import it from? I'm thinking of something like this: https://github.com/numpy/numpy/blob/master/numpy/distutils/command/bdist_rpm.py Is the following appropriate? from wheel.bdist_wheel import bdist_wheel class mybdist_wheel(bdist_wheel): ... (the wheel API docs don't describe using bdist_wheel from Python code.) Oscar From dholth at gmail.com Wed Aug 21 18:26:46 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 21 Aug 2013 12:26:46 -0400 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On Wed, Aug 21, 2013 at 12:21 PM, Oscar Benjamin wrote: > On 21 August 2013 15:57, Daniel Holth wrote: >> >> A fresh virtualenv would have been the humane way to get a working >> 'pip install wheel'. > > Good point. I think I learned an important point going through that > upgrade mess though: uninstall/reinstall is safer than upgrade. > >> Wheel's built in installer isn't intended to replace or be better than >> pip in any way. It's just for reference or bootstrapping. > > Fair enough. Can I suggest that it have a --version option (since it > is traditional)? There is a nearly-done PR in wheel's https://bitbucket.org/dholth/wheel >> FYI if you point pip directly at the .whl file you can omit --use-wheel. > > Okay I've just tried that and that's definitely the way I want to use it. 
> > So basically: > $ python setup.py bdist_wheel # Makes wheels > and > $ pip install foo.whl # Installs wheels > > If someone wants to import the bdist_wheel command and use it outside > of setuptools setup() (in the way that numpy does) where should they > import it from? I'm thinking of something like this: > https://github.com/numpy/numpy/blob/master/numpy/distutils/command/bdist_rpm.py > > Is the following appropriate? > > from wheel.bdist_wheel import bdist_wheel > > class mybdist_wheel(bdist_wheel): > ... > > (the wheel API docs don't describe using bdist_wheel from Python code.) It should be about as appropriate as any distutils subclassing or extension exercise... a lot of debugging work and probably a bad idea... but if you must, that's how you would do it. > > Oscar From oscar.j.benjamin at gmail.com Wed Aug 21 18:28:02 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 21 Aug 2013 17:28:02 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 15:57, Paul Moore wrote: > On 21 August 2013 15:48, Oscar Benjamin wrote: >> >> Is it perhaps safer to suggest the following? >> a) uninstall pip/setuptools/distribute >> b) run ez_setup.py >> c) run get-pip.py > > It probably is. I've heard concerns that people want to avoid suggesting > manual uninstalls and having to download the setup scripts. But it seems > simple enough to me. (What would I know, I just run virtualenv and leave it > at that :-)) I walked right into that one: I definitely could have used a virtualenv for this. However I couldn't have used a virtualenv to update my system pip so it would support wheel installation. > Glad it worked in the end, anyway, and sorry if my instructions made it > harder than it needed to be. No they didn't. There was no point when I didn't know how to revert everything with 'rm -r'. 
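The "fresh virtualenv" approach recommended above — install into a disposable environment rather than upgrading the system Python's packages in place — can be sketched with the stdlib venv module in modern Python (3.3+); the directory names here are illustrative:

```python
import os
import tempfile
import venv

# A throwaway environment: installing into it leaves the system Python
# untouched, so reverting is just deleting the directory.
env_dir = os.path.join(tempfile.mkdtemp(), "fresh-env")
venv.create(env_dir, with_pip=False)  # with_pip=False keeps this offline

# The environment gets its own scripts directory ("bin" on POSIX,
# "Scripts" on Windows) and its own site-packages.
created = (os.path.isdir(os.path.join(env_dir, "bin"))
           or os.path.isdir(os.path.join(env_dir, "Scripts")))
print(created)
```

Reverting is then just removing env_dir — the same 'rm -r' escape hatch mentioned above, without touching the system install.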
> As regards distribute, I suspect that the reason you hit issues is that if > you have a setuptools that's older than 0.7 (or whatever the first merged > version was) then an upgrade can end up jumping through some hoops and going > through a "dummy" distribute version that's there to handle the > fork/re-merge somehow. I honestly don't know how it all works, I'm just > going off what I saw on some of the discussions on pypa-dev at the time. It > all sounded very clever to me, but a bit fragile. I'm a simple soul, and > prefer to just wipe it out and reinstall, so I zoned out after a while:-) I > doubt the details matter to you now, though... No they don't. But I'm with you on the uninstall/reinstall thing. That would be my recommendation to anyone who needs to upgrade. Oscar From pje at telecommunity.com Wed Aug 21 18:31:59 2013 From: pje at telecommunity.com (PJ Eby) Date: Wed, 21 Aug 2013 12:31:59 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <87DB8E6B-CE3C-489E-9769-2B908E45ECF6@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <87DB8E6B-CE3C-489E-9769-2B908E45ECF6@stufft.io> Message-ID: On Wed, Aug 21, 2013 at 9:24 AM, Donald Stufft wrote: > An example is the wsgiref from the standard library. It's an example, alright, but not for your side. ;-) The wsgiref library doesn't just implement the spec, it implements a ton of utility classes for use with the spec. The validator was almost an afterthought grafted on later, borrowed from another project. It implements a framework with all sorts of features that are not technically part of the spec, but are just useful if you want to implement the spec. Very few of the classes, methods, etc. 
in the entire package are specified by the spec, except in the sense that many of them match a calling signature defined in the PEP. (The PEP doesn't specify any method names, except for things like read() on file-like objects.) IOW, wsgiref is a collection of generally useful tools for anybody doing things with the spec, as a combination of "examples of how to do this" and "ready-to-use code for working with the spec". Personally, I'm very happy to see Vinay's extensions, because they are IMO important validations of whether the new specs are likely to be useful for replacing all of setuptools' functionality. There are people who need to mount eggs and have their extensions run, so if it wasn't possible to build tools that support them under the new specs (whether that support is required by the spec or not), that would still be a reason to use setuptools -- meaning, IMO, that the new spec effort is failing to create a unified packaging world. From donald at stufft.io Wed Aug 21 18:35:15 2013 From: donald at stufft.io (Donald Stufft) Date: Wed, 21 Aug 2013 12:35:15 -0400 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <87DB8E6B-CE3C-489E-9769-2B908E45ECF6@stufft.io> Message-ID: <93E5AA80-5773-46C6-8109-36ED05A11DCA@stufft.io> On Aug 21, 2013, at 12:31 PM, PJ Eby wrote: > Personally, I'm very happy to see Vinay's extensions, because they are > IMO important validations of whether the new specs are likely to be
There are > people who need to mount eggs and have their extensions run, so if it > wasn't possible to build tools that support them under the new specs > (whether that support is required by the spec or not), that would > still be a reason to use setuptools -- meaning, IMO, that the new spec > effort is failing to create a unified packaging world. I'm perfectly happy that he was able (and did) write those extensions, I was one of the people who wanted metadata 2.0 to be extensible for exactly that purpose. None of my arguments were against any particular feature. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Wed Aug 21 18:44:29 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 17:44:29 +0100 Subject: [Distutils] Installing from a wheel In-Reply-To: References: Message-ID: On 21 August 2013 17:21, Oscar Benjamin wrote: > Okay I've just tried that and that's definitely the way I want to use it. > > So basically: > $ python setup.py bdist_wheel # Makes wheels > With pip and wheel installed pip wheel . will also build a wheel from the current directory (to wheelhouse\proj-ver.whl). You can use -w to put the wheel somewhere else if you prefer. Or if you want to build a wheel from a sdist on PyPI, something like "pip wheel projectname" will download it and build a wheel. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at oddbird.net Wed Aug 21 18:48:15 2013 From: carl at oddbird.net (Carl Meyer) Date: Wed, 21 Aug 2013 10:48:15 -0600 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <5214E244.1020203@oddbird.net> Message-ID: <5214EF4F.2040404@oddbird.net> On 08/21/2013 10:32 AM, Daniel Holth wrote: >> 2) Wheel's decision to follow distutils' documentation rather than >> distutils' behavior when it comes to the location for installing >> data_files with relative paths; see >> https://bitbucket.org/dholth/wheel/issue/80/wheel-does-not-install-data_files-in-site > > Django has fixed it by using package_data appropriately: > https://code.djangoproject.com/ticket/19252 . The problem isn't unique > to wheel, the same data_files mishap happens with bdist_wininst. > > "Regardless, comment 5 is correct that we jump through way too many > hoops in our setup.py in order to try to trick distutils into handling > data_files as if they were package_data, and that is the root cause of > this bug. Instead we should just use package_data and solve the > problem properly." Yup, that's my comment you're quoting :-) I do think from the packager end using package_data is the right solution. But given the existence of distributions using data_files this way (and the likelihood that not all of them will be fixed), is there a good argument for wheel to not maintain compatibility with sdists and python setup.py install? Are there distributions out there relying on the bdist_wininst behavior? Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From dholth at gmail.com Wed Aug 21 18:32:59 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 21 Aug 2013 12:32:59 -0400 Subject: [Distutils] How to handle launcher script importability? 
In-Reply-To: <5214E244.1020203@oddbird.net> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <5214E244.1020203@oddbird.net> Message-ID: On Wed, Aug 21, 2013 at 11:52 AM, Carl Meyer wrote: > On 08/21/2013 03:29 AM, Donald Stufft wrote: >> Can you send me a list (or post them here) of what issues you've hit? >> The biggest one i'm aware of is the scripts problem which is a >> fundamental problem with the 1.0 Wheel (or rather that any library with >> console entry points cannot be universal). > > Since you asked, I'll mention the two that I've hit (though I think > you're also aware of these already): > > 1) Wheel's conversion of - to _ in version strings embedded in > filenames, which breaks with setuptools precedent; see > https://github.com/pypa/pip/issues/1150 and > https://bitbucket.org/dholth/wheel/issue/78/wheel-rewrites-versions-preventing No good solution to this one just yet. > 2) Wheel's decision to follow distutils' documentation rather than > distutils' behavior when it comes to the location for installing > data_files with relative paths; see > https://bitbucket.org/dholth/wheel/issue/80/wheel-does-not-install-data_files-in-site Django has fixed it by using package_data appropriately: https://code.djangoproject.com/ticket/19252 . The problem isn't unique to wheel, the same data_files mishap happens with bdist_wininst. "Regardless, comment 5 is correct that we jump through way too many hoops in our setup.py in order to try to trick distutils into handling data_files as if they were package_data, and that is the root cause of this bug. Instead we should just use package_data and solve the problem properly." 
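The dash-to-underscore rewriting discussed above comes from the filename escaping rule in PEP 427, which replaces each run of characters outside [A-Za-z0-9._] in a filename component with a single underscore. A minimal sketch (the `escape` helper name is ours, not part of any tool):

```python
import re

def escape(component):
    # PEP 427: compress each run of illegal filename characters to one "_"
    return re.sub(r"[^\w\d.]+", "_", component)

print(escape("1.0-beta"))
print(escape("1.0_1") == escape("1.0-1"))  # the escaping is lossy
```

Because "-" is rewritten to "_" while a literal "_" passes through unchanged, an installer reading the filename cannot tell which one the original version string contained — which is the ambiguity behind issue 78 above.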
From qwcode at gmail.com Wed Aug 21 19:42:02 2013 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 21 Aug 2013 10:42:02 -0700 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <5214E244.1020203@oddbird.net> Message-ID: > > > > 1) Wheel's conversion of - to _ in version strings embedded in > > filenames, which breaks with setuptools precedent; see > > https://github.com/pypa/pip/issues/1150 and > > > https://bitbucket.org/dholth/wheel/issue/78/wheel-rewrites-versions-preventing > > No good solution to this one just yet. > not a great solution, but wheel can just declare it doesn't support "_" in versions, and not build wheels in that case. then installers know they can assume "-" was meant. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Wed Aug 21 21:03:06 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 19:03:06 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> <857F948B-38C9-4026-BCA2-2F0B00790398@stufft.io> <778D0155-F527-4864-BE6B-D02265C4BB89@stufft.io> <1F6083CA-DC70-4EB5-B4B6-8BF791F9F30E@stufft.io> Message-ID: Paul Moore gmail.com> writes: > That implies that any wheel reference implementation needs to expose APIs > for reading and writing the metadata to/from the wheel. Not necessarily. For example, distlib's approach side-steps the need for such a write API: you tell Wheel.build which directories contain purelib, platlib, scripts, headers etc. 
and it puts all the stuff that it finds in those directories into appropriate locations in the wheel, then updates RECORD, WHEEL etc. This way, it doesn't need to worry about what custom files installers put in .dist-info, and reduces the coupling between the wheel code and users of it. There's no need for a special read API either, since wheels are just zip files and you can use the zipfile API to look at anything inside a wheel. As a convenience, distlib's Wheel instances have a metadata property, which returns as a dict the pydist.json from the wheel's .dist-info. Why make it more complicated than that? Regards, Vinay Sajip From ncoghlan at gmail.com Wed Aug 21 22:35:13 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Aug 2013 15:35:13 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 Aug 2013 20:40, "Paul Moore" wrote: > > On 21 August 2013 11:29, Oscar Benjamin wrote: >> >> I may have misunderstood it but looking at this >> https://github.com/numpy/numpy/blob/master/tools/win32build/nsis_scripts/numpy-superinstaller.nsi.in#L147 >> I think that the installer ships variants for each architecture and >> decides at install time which to place on the target system. If that's >> the case then would it be possible for a wheel to ship all variants so >> that a post-install script could sort it out (rename/delete) after the >> wheel is installed? > > > Wheel 1.0 does not have the ability to bundle multiple versions (and I don't think tags are fine-grained enough to cover the differences numpy need, which are at the "do you have the SSE instruction set?" level AIUI). Multi-version wheels are a possible future extension, but I don't know if anyone has thought about fine-grained tags. > > This is precisely the sort of input that the numpy people could provide to make sure that the wheel design covers their needs. 
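Vinay's point a few messages up — that wheels are plain zip files, so reading them needs no wheel-specific API beyond the stdlib zipfile module — can be sketched as follows (the archive built here is a fabricated minimal stand-in, not real distlib output):

```python
import io
import zipfile

# Build a minimal wheel-shaped archive in memory; a real wheel would also
# carry the package's code and a RECORD file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("demo-1.0.dist-info/WHEEL",
                "Wheel-Version: 1.0\nRoot-Is-Purelib: true\n")
    zf.writestr("demo-1.0.dist-info/METADATA",
                "Metadata-Version: 1.2\nName: demo\nVersion: 1.0\n")

# Reading it back needs nothing but zipfile.
with zipfile.ZipFile(buf) as zf:
    wheel_version = zf.read("demo-1.0.dist-info/WHEEL").decode().splitlines()[0]

print(wheel_version)
```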
I'm reasonably confident the wheel format *doesn't* meet the scientific community's needs in the general case, and can't be made to do so without a lot of additional complexity. That's why I explicitly support the hashdist/conda approach which abandons some of the goals of pip and wheel (notably, easier handling of security updates and easier conversion to Linux distro packages) in order to better handle complex binary dependencies. From the pip/wheel point of view, the conda ecosystem then becomes just another downstream "system" package manager to interoperate with. Cheers, Nick. > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 21 22:40:02 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Aug 2013 15:40:02 -0500 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) In-Reply-To: References: Message-ID: On 21 Aug 2013 22:42, "Vinay Sajip" wrote: > > Paul Moore gmail.com> writes: > > > BUT, this means that there is no spec of the current behaviour, and no > implementation of the Wheel 1.0 spec anywhere. > [snip] > > or the wheel spec needs a review reasonably soon. > > I think it's this. I'm not sure to what extent wheels are being used in > anger out there, but it would make sense to review the spec in light of PEP > 426 developments and release a 1.1. Um, the current wheel spec uses PEP 345 + setuptools metadata only. If distlib is expecting PEP 426 metadata in wheel files, it is not compliant with the spec. There won't be a new version of the wheel spec until after PEP 426 is done, and that's likely months away. Cheers, Nick.
> > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 21 22:57:35 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 21:57:35 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 August 2013 21:35, Nick Coghlan wrote: > I'm reasonably confident the wheel format *doesn't* meet the scientific > community's needs in the general case, and can't be made to do so without a > lot of additional complexity. That's why I explicitly support the > hashdist/conda approach which abandons some of the goals of pip and wheel > (notably, easier handling of security updates and easier conversion to > Linux distro packages) in order to better handle complex binary > dependencies. While "the general case" may include some specialised situations, in my view, if the wheel format isn't a suitable replacement for bdist_wininst (and by implication, cannot be used by the numpy, scipy and similar projects to deliver Windows binary distributions) - maybe not for specialised use, but certainly for casual users like myself - then it will be essentially a failure. The only reason I am interested in wheels *at all* is as a format that allows me to "pip install" all those projects that currently provide bdist_wininst installers. In the first instance, via "wheel convert", but ultimately by the projects themselves switching from wininst format to wheel (or via some form of build farm mechanism, it doesn't matter to me, as long as the wheels are on PyPI). Note that "wheel convert" is proof that this is the case right now, so this is not setting the bar unreasonably high. Nor am I saying that there's even a problem here at the moment. 
But if distutils-sig is sending a message that they don't think wheels are a suitable distribution format for (say) numpy or scipy, then I will be very disappointed. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 21 22:59:56 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 21:59:56 +0100 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) In-Reply-To: References: Message-ID: On 21 August 2013 21:40, Nick Coghlan wrote: > Um, the current wheel spec uses PEP 345 + setuptools metadata only. If > distlib is expecting PEP 426 metadata in wheel files, it is not compliant > with the spec. > > There won't be a new version of the wheel spec until after PEP 426 is > done, and that's likely months away. > OK, thanks. That's pretty clear. Paul. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 21 23:01:40 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Aug 2013 16:01:40 -0500 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 21 Aug 2013 17:46, "Donald Stufft" wrote: > > > On Aug 21, 2013, at 3:32 AM, Vinay Sajip wrote: > > > Paul Moore gmail.com> writes: > > > >> I'm concerned that you need extra metadata (not described in the wheel > > spec) to do this. It means that there are in effect two subtly different > > types of wheel. To be specific, if I create a wheel for (say) pyzmq using > > distil, and mount it, everything works. But if I create the same wheel with > > bdist_wheel or pip, it doesn't. That, to my mind, is very bad as it damages > > the credibility of wheel as a standardised format. 
> > > > If the additional metadata isn't there, then distlib just doesn't do > > anything additional - it just makes the Python modules importable (by adding > > the wheel to sys.path, which AFAIK is uncontroversial). > > > >> Can I suggest that if you need to add features like this, you need to get > > the wheel spec updated to mandate them, so that *all* wheels will follow the > > same spec. > >> Essentially, I am -1 on any feature that uses information that is not > > documented in the wheel spec. Pip in particular resisted adding support for > > wheels until they were standardised in a PEP. It's frustrating if that PEP > > *still* doesn't mean that the wheel format is the same for all tools. (Note > > that another area where this is an issue is script wrappers, as the spec is > > silent about the fact that they are specified using entry-points.txt in > > metadata 1.x/setuptools. I've sent a proposed update to the spec to Daniel > > for his consideration). > > > > Well, you don't really want to stifle innovation, do you? ;-) > > > > As far as I can tell, Daniel's wheel implementation allows files that are > > not specifically mentioned in the PEP to be installed into a distribution's > > .dist-info. This is also allowed in distlib - ISTM this is one way in which > > different packaging tools can add features which are special to them, and > > hold state relevant to distributions they build and/or install. If you > > accept that multiple competing implementations if a PEP are a good thing, > > then they can't all be functionally identical, though they must all > > implement a common set of functions described in the PEP they're implementing. > > I was one of the advocates for extension support in the new metadata, I want > tools to be able to try things out and innovate. > > However what I don't really want is to be using someones personal testbed > for features they think is cool. 
There's nothing *wrong* with you trying new > ideas out in distlib, it just means that distlib isn't the library I want to build > tooling around. Right. I wasn't really aware Vinay was adding experimental ideas to distlib, I thought it was just the proven stable core from distutils2, plus support for the draft PEPs, with the experimental stuff entirely in distil rather than in distlib. If Vinay wants to do experimental extensions, they either need to happen somewhere other than distlib, or else we need a new library which is just the candidate for standard library inclusion with *nothing* that hasn't been discussed and agreed to through the PEP process. As I said, I *thought* distlib was that library, but it appears Vinay doesn't currently see it that way :( > > My basic problem is if the library we're pointing at to be the reference > implementation of all of these things is adding new features it's confusing what > is standard and what are just distlib's extensions. > > So basically I want people to innovate, that's something I feel very strongly > is a good thing, I just don't want innovations to happen in the reference > library. Maybe we need a smaller reference library which is strictly the PEPs > to allow distlib to experiment. If its experiments turn out to be good and > useful we can make PEPs for them and add them to the reference library. Right. distlib can be a candidate for stdlib inclusion, or it can be a vehicle for distributing experimental features not covered by any PEP, it can't be both at the same time. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 21 23:13:40 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Aug 2013 16:13:40 -0500 Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: On 22 Aug 2013 06:57, "Paul Moore" wrote: > > On 21 August 2013 21:35, Nick Coghlan wrote: >> >> I'm reasonably confident the wheel format *doesn't* meet the scientific community's needs in the general case, and can't be made to do so without a lot of additional complexity. That's why I explicitly support the hashdist/conda approach which abandons some of the goals of pip and wheel (notably, easier handling of security updates and easier conversion to Linux distro packages) in order to better handle complex binary dependencies. > > > While "the general case" may include some specialised situations, in my view, if the wheel format isn't a suitable replacement for bdist_wininst (and by implication, cannot be used by the numpy, scipy and similar projects to deliver Windows binary distributions) - maybe not for specialised use, but certainly for casual users like myself - then it will be essentially a failure. Sure, it can replace bdist_wininst. What it *can't* do is tag an arbitrarily complex dependency tree the way hashdist can. So you're either going to need fat wheels which figure out the right binary extensions to use in a post-install hook, or else you're going to have to bless a default configuration that is published as a wheel. Since wheel 1.0 has no support for post-install hooks, that option is not yet available, and I don't expect the idea of blessing a default binary version to be popular either. > > The only reason I am interested in wheels *at all* is as a format that allows me to "pip install" all those projects that currently provide bdist_wininst installers. In the first instance, via "wheel convert", but ultimately by the projects themselves switching from wininst format to wheel (or via some form of build farm mechanism, it doesn't matter to me, as long as the wheels are on PyPI). > > Note that "wheel convert" is proof that this is the case right now, so this is not setting the bar unreasonably high. 
Nor am I saying that there's even a problem here at the moment. But if distutils-sig is sending a message that they don't think wheels are a suitable distribution format for (say) numpy or scipy, then I will be very disappointed. Wheel is a suitable replacement for bdist_wininst (although anything that needs install hooks will have to wait for wheel 1.1, which will support metadata 2.0). It's just not a replacement for what hashdist and conda let you do when you care more about reproducibility than you do about security updates. Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Aug 21 23:22:42 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 21 Aug 2013 22:22:42 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 August 2013 22:13, Nick Coghlan wrote: > Wheel is a suitable replacement for bdist_wininst (although anything that > needs install hooks will have to wait for wheel 1.1, which will support > metadata 2.0). It's just not a replacement for what hashdist and conda let > you do when you care more about reproducibility than you do about security > updates. OK, that's a good statement - wheels as a better bdist_wininst is all I want to be able to promote (and yes, if you need post-install hooks, wait for wheel 1.1). Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Thu Aug 22 00:12:27 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 22:12:27 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > Right. 
I wasn't really aware Vinay was adding experimental ideas to > distlib, I thought it was just the proven stable core from distutils2, > plus support for the draft PEPs, with the experimental stuff entirely in > distil rather than in distlib. I've been up front about what's in distlib all along - check the overview page in the distlib docs. Above all, I want the stuff I do to be *useful*, rather than tick boxes here and there. For example, there's no PEP covering distlib's functionality whereby dependency resolution happens *without* downloading/unpacking archives and running egg_info. We seem to take this sort of dependency resolution for granted in Linux distros, but for some reason Python packaging has to make do with a clunkier approach? Initially my work in this area was experimental, but it seems to work well (though some areas still need more work), and at some point I would expect to propose a PEP to cover it. However, lots of things are in flux and not in my direct control, so directing effort there now would likely involve rework when e.g. existing PEPs change, so it wouldn't be productive to do it yet. > If Vinay wants to do experimental extensions, they either need to happen > somewhere other than distlib, or else we need a new library which is just > the candidate for standard library inclusion with *nothing* that hasn't been > discussed and agreed to through the PEP process. Distlib aims to conform to the PEPs as they evolve, and anyone can raise an issue if they find non-conformance. In my view, one of the failures of distutils2 was that it did not include functionality that was actually useful to people and used by them, such as exports and package resources. I implemented exports in distlib before you added them to PEP 426, but that doesn't mean that I'm some kind of heretic for doing so. I've implemented package resources in distlib too, and there's no PEP yet covering it. 
You've specifically told me that you don't see distlib as a candidate for inclusion in Python in the 3.4 time frame, so what's the big hurry with getting the pitchforks out now? > As I said, I *thought* distlib was that library, but it appears Vinay doesn't > currently see it that way :( Nick, you haven't discussed this with me at all, and I don't see how you can come up with that interpretation from anything I've said in my posts here (or anywhere else). As I see it, you've explicitly told me that there's a very long time to go before distlib is even considered as a possible candidate for standardisation, and I can't see any valid reason why I can't keep adding useful features to it for now, because the experience gained from using them will inform any future PEPs relating to those features. When the time to consider standardisation is nearer, decisions can be made about what might need to come out and what can stay in, etc. and PEPs written to propose any things that aren't already covered. I've written distlib in a modular fashion and I don't expect such a process to be painful, and I would certainly expect to produce PEPs where needed. > Right. distlib can be a candidate for stdlib inclusion, or it can be a > vehicle for distributing experimental features not covered by any PEP, it > can't be both at the same time. How is it that you're ready to call time on it now, already, even though you've told me standardisation is not something to be even considered until Python 3.5? Of the two options quoted above, you had already decided that it can't be the former in the short to medium term, and now you're saying you don't want it to be the latter either? I'm confused. I'd certainly like to keep making distlib and distil more useful. 
Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Aug 22 00:20:41 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 21 Aug 2013 22:20:41 +0000 (UTC) Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) References: Message-ID: Nick Coghlan gmail.com> writes: > Um, the current wheel spec uses PEP 345 + setuptools metadata only. If distlib is expecting PEP 426 metadata in wheel files, it is not compliant with the spec. I can certainly rectify that - I was possibly confused by the fact that the latest wheel implementation writes pydist.json to the wheel (though the Wheel-Version in WHEEL is still 1.0). Regards, Vinay Sajip From ncoghlan at gmail.com Thu Aug 22 04:33:50 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Aug 2013 12:33:50 +1000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 22 August 2013 08:12, Vinay Sajip wrote: > I've been up front about what's in distlib all along - check the overview > page in the distlib docs. Above all, I want the stuff I do to be *useful*, > rather than tick boxes here and there. Right, you didn't do anything wrong, I just wasn't really paying attention because promoting distlib adoption has been firmly in the "later" bucket for me, after the setuptools rehabilitation, pip bootstrapping, etc. I can't fault you for not realising I believe something I had never really written down anywhere :) > Nick Coghlan gmail.com> writes: >> Right. distlib can be a candidate for stdlib inclusion, or it can be a >> vehicle for distributing experimental features not covered by any PEP, it >> can't be both at the same time.
> > How is it that you're ready to call time on it now, already, even though > you've told me standardisation is not something to be even considered until > Python 3.5? Of the two options quoted above, you had already decided that it > can't be the former in the short to medium term, and now you're saying you > don't want it to be the latter either? I'm confused. I'd certainly like to > keep making distlib and distil more useful. I previously thought distlib was going to be the repository for the agreed, stable, "this is going to happen" stuff. It's OK that I was wrong - I think you're right that somewhere is needed as an experimental location to show some of the *possibilities* of the new metadata, and to seed ideas for making it into the eventual standard base that people can assume is readily available. What that means though, is we need *something else* that indicates the common core that people can assume will always be available. It's this common core which pip will need to factor out to remove their dependency on setuptools, rather than adopting distlib wholesale, experimental features and all. I'm actually OK with that, since it means we can aim for a self-contained pip that serves as the basis for the rest of the ecosystem, and is upgraded on the *pip* update cycle, rather than CPython's. I'd been wondering how we'd avoid the "stale standard library support" issue we ran into with distutils, and I think the pip bootstrapping proposals give us the answer: we don't make the core distribution infrastructure part of the standard library, we make it part of *pip*. One of the key concepts of the bootstrapping idea is that CPython maintenance releases will bundle newer versions of pip, and also that pip will be able to upgrade itself in place, so if people need newer distribution infrastructure "upgrade pip" is a much lower risk proposition than "upgrade to a newer version of Python". Currently, pip doesn't expose a programmatic API. 
I suggested to Donald that it may make sense to start exposing one as "piplib". The bootstrapping would then provide the pip CLI and the common utilities in piplib, and that would be the more conservative core, leaving distlib and distil free to push the boundaries with experimental features that go beyond what the agreed standards currently support. As ideas from distlib make their way into the relevant PEPs and hence into piplib, then they will become available by default, but in the meantime, people would be able to do a build dependency on distlib to get the experimental features. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Aug 22 04:39:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Aug 2013 12:39:30 +1000 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) In-Reply-To: References: Message-ID: On 22 August 2013 08:20, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> Um, the current wheel spec uses PEP 345 + setuptools metadata only. If > distlib is expecting PEP 426 metadata in wheel files, it is not compliant with > the spec. > > I can certainly rectify that - I was possibly confused by the fact that the > latest wheel implementation writes pydist.json to the wheel (though the Wheel- > Version in WHEEL is still 1.0). Yeah, they were certainly coupled together originally - that's why PEPs 425, 426 and 427 all happened around the same time. However, Daniel tweaked the wheel format spec in PEP 427 to remove the dependency on the new metadata spec once he realised that most of the features that wheels really needed already existed in the setuptools metadata, and the metadata spec was going to take a *lot* longer to stabilise than he originally thought. 
However, the pydist.json that wheel currently writes is in the category of "arbitrary additional metadata in the dist-info directory", since the metadata 2.0 spec is still far from stable. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From samuel.ferencik at barclays.com Tue Aug 20 18:00:48 2013 From: samuel.ferencik at barclays.com (samuel.ferencik at barclays.com) Date: Tue, 20 Aug 2013 17:00:48 +0100 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> > -----Original Message----- > From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] > Sent: Tuesday, August 20, 2013 5:47 PM > To: Ferencik, Samuel: Markets (PRG) > Cc: distutils-sig at python.org > Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows > > On Mon, Aug 19, 2013 at 11:15 PM, wrote: > >> What does your 'uname -m' return? > > x86_64 > >> Is it possible you're really running a 32-bit >> Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195] > > nope -- I am quite deliberately running a 32 bit Python on my 64 bit > OS (I have some custom C++ code I'm using that is not yet 64-bit > safe). That's strange. I'm on Python 3.3.1, and it seems to me that get_platform() derives the value from uname for OS X, similar to Linux:

    (osname, host, release, version, machine) = os.uname()
    ...
    elif osname[:6] == "darwin":
        import _osx_support, distutils.sysconfig
        osname, release, machine = _osx_support.get_platform_osx(
            distutils.sysconfig.get_config_vars(),
            osname, release, machine)
    return "%s-%s-%s" % (osname, release, machine)

so I would expect "uname -m" to be in line with get_platform(). But maybe I'm misreading that...
Also, I don't have access to the _osx_support source code. > >> return value is wrong on Linux and correct on >> Windows, right? > > no -- I'm saying that it's right on Windows (and OS-X), but wrong on Linux. I think you have misread my sentence, and we actually agree here. What's the next action? Report a Python bug? (That's a cultural question; I'm new to Python.) Regards, Sam > >> That get_platform() should return "32-bit" for a 32-bit process >> running on a 64-bit system. > > yes, it should. > >> TBH, I was expecting the opposite; to me, "platform" >> means the OS, which would mean that Linux does well to derive the return value >> from the OS's architecture. > > except what would be the utility of that? this is a call made within > python, and it's part of distutils, so what the caller wants to know > is the platform that this particular python was built for, NOT the > platform it happens to be running on. i.e. what platform do I want to > build binary extensions for, and/or what platform do I want to > download binary wheels for. > > So I'm pretty sure that currently Windows and OS-X have it right, and > Linux is broken. I'm guessing running 32 bit python on a 64 bit Linux > is not that common, however. (and it's less common to download > binaries...) > > To add complexity, if I run the Apple-supplied python2.7.1 (which is > 32_64 bit universal, but runs 64 bit on my machine), I get: > > >>> distutils.util.get_platform() > 'macosx-10.7-intel' > > Which is more useful than it may look at first -- "intel" means "both > intel platforms", i.e. i386 and x86_64. and 10.7 means -- built for > OS-X 10.7 and above. > > so I think it's doing the right thing. > > -Chris _______________________________________________ This message is for information purposes only, it is not a recommendation, advice, offer or solicitation to buy or sell a product or service nor an official confirmation of any transaction.
It is directed at persons who are professionals and is not intended for retail customer use. Intended for recipient only. This message is subject to the terms at: www.barclays.com/emaildisclaimer. For important disclosures, please see: www.barclays.com/salesandtradingdisclaimer regarding market commentary from Barclays Sales and/or Trading, who are active market participants; and in respect of Barclays Research, including disclosures relating to specific issuers, please see http://publicresearch.barclays.com. _______________________________________________ From vinay_sajip at yahoo.co.uk Thu Aug 22 09:22:07 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 22 Aug 2013 07:22:07 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > I previously thought distlib was going to be the repository for the > agreed, stable, "this is going to happen" stuff. It's OK that I was > wrong - I think you're right that somewhere is needed as an > experimental location to show some of the *possibilities* of the new > metadata, and to seed ideas for making it into the eventual standard > base that people can assume is readily available. It's not just about completely new, experimental stuff. For example, the resources functionality isn't completely new territory. The PyPI interfacing is (IMO) a saner API than the one in distutils2. A better Windows story (for launcher support when py.exe can't be used) is also not rocket science. > What that means though, is we need *something else* that indicates the
It's this If that "something else" you're thinking of is something that is supposed to live in the stdlib, then I see no reason why a subset of distlib couldn't be that something else, since stdlib changes are not on the table for 3.4. I certainly have never envisaged that distlib would be adopted wholesale into the stdlib (if at all) without peer review and any changes coming out of that. > common core which pip will need to factor out to remove their > dependency on setuptools, rather than adopting distlib wholesale, > experimental features and all. I honestly think you're making a bit too much of the "experimental" label here, even though it is a label that I use myself. For me, that label is most appropriate for the extended metadata that I collect from PyPI and which is the basis for distlib's smarter dependency resolution. If your concerns are about instability due to experimental features (and I quite understand the importance of stability in packaging), then there's nothing stopping anyone doing a technical review of distlib to see what any actual risks are. Indeed, I've invited such review from day one. > Currently, pip doesn't expose a programmatic API. I suggested to > Donald that it may make sense to start exposing one as "piplib". The I think this would be a mistake, and it seems a little early to make this sort of decision. You've given me to understand that pip could at some future point use (some subset of) distlib under the covers, with compatibility maintained at the CLI level. If that is still the case, then I don't see much value in having two lib layers. Like setuptools, pip has done sterling service, but it's not a codebase I'd like to see become the basis for our long-term packaging infrastructure. I don't mean to offend anybody by saying that - it's just software that has grown organically over time and IMO there will be technical debt to repay if we go down the route of exposing bits of it as Python APIs. 
It certainly feels like you're side-lining distlib, or planning to, whether or not that's the message you're intending to send. No matter :-) Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Thu Aug 22 09:34:28 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 22 Aug 2013 07:34:28 +0000 (UTC) Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) References: Message-ID: Nick Coghlan gmail.com> writes: > However, the pydist.json that wheel currently writes is in the > category of "arbitrary additional metadata in the dist-info > directory", since the metadata 2.0 spec is still far from stable. You can perhaps see why that could cause confusion - was that mentioned somewhere on this list? It certainly seems odd to add a pydist.json there if it's not needed; the natural thing to do is to assume that if it's there, it's usable. Unfortunately, the pydist.json that it currently writes is not conformant to the latest version of the PEP (though it passes schema validation) :-( Regards, Vinay Sajip From p.f.moore at gmail.com Thu Aug 22 10:24:45 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 22 Aug 2013 09:24:45 +0100 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) In-Reply-To: References: Message-ID: On 21 August 2013 23:20, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > > > Um, the current wheel spec uses PEP 345 + setuptools metadata only. If > distlib is expecting PEP 426 metadata in wheel files, it is not compliant > with > the spec. > > I can certainly rectify that - I was possibly confused by the fact that the > latest wheel implementation writes pydist.json to the wheel (though the > Wheel- > Version in WHEEL is still 1.0). Conversely, of course, there's no mention in the wheel spec that setuptools metadata (specifically entry-points.txt) should be present. 
Which is why I mentioned that the wheel spec might need a review/update to clarify this (if we want to ensure that any necessary script metadata is guaranteed to be present in compliant wheels). Paul From theller at ctypes.org Thu Aug 22 10:26:32 2013 From: theller at ctypes.org (Thomas Heller) Date: Thu, 22 Aug 2013 10:26:32 +0200 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> Message-ID: On 20.08.2013 19:39, PJ Eby wrote: > I thought that at one point you (Thomas) had come up with a way to > load modules into memory from a zipfile without needing to extract > them. Was that you? If so, how did that work out? To give a definite answer, after thinking it over: It works, for quite a few extensions. The main problem is this: If it does NOT work (process crashes) there is no way to find out why. It is nearly impossible to debug because you end up with all the machine code from the extensions/dlls mapped into the process and the debugger has no info about it.
Thomas From ronaldoussoren at mac.com Thu Aug 22 12:13:35 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Thu, 22 Aug 2013 12:13:35 +0200 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: <39DAF259-9F87-4169-BAB8-82A4ABCBAC51@mac.com> On 20 Aug, 2013, at 8:15, samuel.ferencik at barclays.com wrote: >> -----Original Message----- >> From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] >> Sent: Monday, August 19, 2013 7:13 PM >> To: Ferencik, Samuel: Markets (PRG) >> Cc: distutils-sig at python.org >> Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows >> >> On Fri, Aug 16, 2013 at 2:18 AM, wrote: >>> It seems distutils.util.get_platform() semantically differs on Windows and >>> Linux. >>> >>> Windows: the return value is derived from the architecture of the >>> *interpreter*, hence for 32-bit Python running on 64-bit Windows >>> get_platform() = 'win32' (32-bit). >>> >>> Linux: the return value is derived from the architecture of the *OS*, hence >>> for 32-bit Python running on 64-bit Linux get_platform() = 'linux-x86_64' >>> (64-bit). >>> >>> Is this intentional? >> >> This seems just plain wrong to me. >> >> For the record, running a 32 bit Python on a 64 bit OS-X box: >> >> In [5]: distutils.util.get_platform() >> Out[5]: 'macosx-10.6-i386' >> >> which is the answer I want. >> >> -Chris > > Chris, > > What does your 'uname -m' return? Is it possible you're really running a 32-bit > Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195] distutils.util.get_platform() on OSX returns the "architecture" supported by the current binary.
I get:

>>> distutils.util.get_platform()
'macosx-10.8-intel'

This means that Python was built for a deployment target of 10.8 (that is, the binary runs on OSX 10.8 or later) and supports the 'intel' set of architectures (i386 and x86_64). Ronald From ronaldoussoren at mac.com Thu Aug 22 12:20:36 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Thu, 22 Aug 2013 12:20:36 +0200 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> Message-ID: <4260010D-A41A-44C1-BB00-11906854DA66@mac.com> On 20 Aug, 2013, at 18:00, samuel.ferencik at barclays.com wrote: >> -----Original Message----- >> From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] >> Sent: Tuesday, August 20, 2013 5:47 PM >> To: Ferencik, Samuel: Markets (PRG) >> Cc: distutils-sig at python.org >> Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows >> >> On Mon, Aug 19, 2013 at 11:15 PM, wrote: >> >>> What does your 'uname -m' return? >> >> x86_64 >> >>> Is it possible you're really running a 32-bit >>> Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195] >> >> nope -- I am quite deliberately running a 32 bit Python on my 64 bit >> OS (I have some custom C++ code I'm using that is not yet 64 bit >> safe). > > That's strange. I'm on Python 3.3.1, and it seems to me that get_platform() > derives the value from uname for OS X, similar to Linux. > > (osname, host, release, version, machine) = os.uname() > ...
> elif osname[:6] == "darwin": > import _osx_support, distutils.sysconfig > osname, release, machine = _osx_support.get_platform_osx( > distutils.sysconfig.get_config_vars(), > osname, release, machine) > return "%s-%s-%s" % (osname, release, machine) > > so I would expect "uname -m" to be in line with get_platform(). But maybe I'm > misreading that... Also, I don't have access to the _osx_support source code. _osx_support is a pure python module in the stdlib, the source is in the usual location. The behavior on OSX is quite intentional and ensures that distutils binary archive names correctly reflect the use of fat binaries and the minimal supported OSX release. The only thing that might need change is the name of the supported architectures, the wheel spec has a better way to indicate multiple executable architectures than making up names for every set of architectures that we care to support, but to be honest I haven't had time yet to fully ingest the spec and work out if it is completely useful for fat binaries on OSX. > >> >>> return value is wrong on Linux and correct on >>> Windows, right? >> >> no -- I'm saying that it's right on Windows (and OS-X), but wrong on Linux. > > I think you have misread my sentence, and we actually agree here. > > What's the next action? Report a Python bug? (That's a cultural question; I'm > new to Python.) http://bugs.python.org/ Ronald From oscar.j.benjamin at gmail.com Thu Aug 22 13:24:01 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 22 Aug 2013 12:24:01 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 21 August 2013 22:22, Paul Moore wrote: > On 21 August 2013 22:13, Nick Coghlan wrote: >> >> Wheel is a suitable replacement for bdist_wininst (although anything that >> needs install hooks will have to wait for wheel 1.1, which will support >> metadata 2.0).
>> It's just not a replacement for what hashdist and conda let >> you do when you care more about reproducibility than you do about security >> updates. > > OK, that's a good statement - wheels as a better bdist_wininst is all I want > to be able to promote (and yes, if you need post-install hooks, wait for > wheel 1.1). Okay, so going back to my earlier question... Oscar asked: > BTW is there any reason for numpy et al not to start distributing > wheels now? Is any part of the wheel > specification/tooling/infrastructure not complete yet? the answer is basically yes to both questions. The pip+PyPI+wheel infrastructure is not yet able to satisfy numpy's needs as the wheel spec doesn't give sufficiently fine-grained architecture information and there's no way to monkey-patch the installation process in order to do what the current installers do. It seems to me that the ideal solution for numpy is not really the post-install script but a way to distribute wheels appropriate to the given CPU. Bundling the different binaries in one installer makes sense for an installer that is manually downloaded by a user but not for one that is automatically downloaded. There's a pure Python script here that seems to be able to obtain the appropriate information: https://code.google.com/p/numexpr/source/browse/numexpr/cpuinfo.py?r=ac92866e7929df669ca5e4e050179cd7448798f0

$ python cpuinfo.py
CPU information: CPUInfoBase__get_nbits=32 getNCPUs=2 has_mmx has_sse2 is_32bit is_Core2 is_Intel is_i686

So perhaps numpy could upload multiple wheels:

numpy-1.7.1-cp27-cp22m-win32.whl
numpy-1.7.1-cp27-cp22m-win32_sse.whl
numpy-1.7.1-cp27-cp22m-win32_sse2.whl
numpy-1.7.1-cp27-cp22m-win32_sse3.whl

Then ordinary pip would just install the win32 wheel but "fancypip" could install the one with the right level of sse2 support. Or is there perhaps a way that a distribution like numpy could depend on another distribution that finds CPU information and informs or hooks into pip etc.
so that pip would be able to gain this support in an extensible way? Oscar From vinay_sajip at yahoo.co.uk Thu Aug 22 13:57:55 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 22 Aug 2013 11:57:55 +0000 (UTC) Subject: [Distutils] What does it mean for Python to "bundle pip"? References: Message-ID: Oscar Benjamin gmail.com> writes: > I think that the installer ships variants for each architecture and > decides at install time which to place on the target system. If that's > the case then would it be possible for a wheel to ship all variants so > that a post-install script could sort it out (rename/delete) after the > wheel is installed? It's not just about the architecture on the target system, it's also about e.g. what libraries are installed on the target system. Files like numpy/__config__.py and numpy/distutils/__config__.py are created at build time, based on local conditions, and those files would then be written to the wheel. On the installation machine, the environment may not be compatible with those configurations computed on the build machine. Those are the things I was talking about which may need moving from build-time to run-time computations. 
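Oscar's idea of CPU-feature-specific wheels could be sketched as a small tag-selection routine. The `win32_sse*` platform tags below are hypothetical, taken from his example, and are not valid tags under the wheel spec; the whole function is an illustration of the idea, not part of any tool:

```python
def pick_wheel(candidate_tags, cpu_features):
    # Prefer the most specific platform tag this CPU supports.
    # The "win32_sse*" tags are hypothetical (from Oscar's example),
    # not real wheel platform tags.
    preference = ["win32_sse3", "win32_sse2", "win32_sse", "win32"]
    supported = {"win32"} | {"win32_" + f for f in cpu_features}
    for tag in preference:
        if tag in supported and tag in candidate_tags:
            return tag
    return None
```

A "fancypip" in Oscar's sense would feed in the features detected by something like cpuinfo.py; a plain installer would only ever match the baseline `win32` tag.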
Regards, Vinay Sajip From samuel.ferencik at barclays.com Thu Aug 22 12:27:52 2013 From: samuel.ferencik at barclays.com (samuel.ferencik at barclays.com) Date: Thu, 22 Aug 2013 11:27:52 +0100 Subject: [Distutils] distutils.util.get_platform() - Linux vs Windows In-Reply-To: <4260010D-A41A-44C1-BB00-11906854DA66@mac.com> References: <66607689AF9BB243B6C00BC05B4AFE6E0E0FDBB2E3@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE061@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <66607689AF9BB243B6C00BC05B4AFE6E0E127CE07F@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> <4260010D-A41A-44C1-BB00-11906854DA66@mac.com> Message-ID: <66607689AF9BB243B6C00BC05B4AFE6E0E127CE09E@LDNPCMMGMB05.INTRANET.BARCAPINT.COM> > -----Original Message----- > From: Ronald Oussoren [mailto:ronaldoussoren at mac.com] > Sent: Thursday, August 22, 2013 12:21 PM > To: Ferencik, Samuel: Markets (PRG) > Cc: chris.barker at noaa.gov; distutils-sig at python.org > Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows > > > On 20 Aug, 2013, at 18:00, samuel.ferencik at barclays.com wrote: > >>> -----Original Message----- >>> From: Chris Barker - NOAA Federal [mailto:chris.barker at noaa.gov] >>> Sent: Tuesday, August 20, 2013 5:47 PM >>> To: Ferencik, Samuel: Markets (PRG) >>> Cc: distutils-sig at python.org >>> Subject: Re: [Distutils] distutils.util.get_platform() - Linux vs Windows >>> >>> On Mon, Aug 19, 2013 at 11:15 PM, wrote: >>> >>>> What does your 'uname -m' return? >>> >>> x86_64 >>> >>>> Is it possible you're really running a 32-bit >>>> Python on a *32-bit* OS X kernel? [http://superuser.com/q/161195] >>> >>> nope -- I am quite deliberately running a 32 bit Python on my 64 bit >>> OS (I have some custom C++ code I'm using that is not yet 64 bit >>> safe). >> >> That's strange. I'm on Python 3.3.1, and it seems to me that get_platform() >> derives the value from uname for OS X, similar to Linux. >> >> (osname, host, release, version, machine) = os.uname() >> ...
>> elif osname[:6] == "darwin": >> import _osx_support, distutils.sysconfig >> osname, release, machine = _osx_support.get_platform_osx( >> distutils.sysconfig.get_config_vars(), >> osname, release, machine) >> return "%s-%s-%s" % (osname, release, machine) >> >> so I would expect "uname -m" to be in line with get_platform(). But maybe I'm >> misreading that... Also, I don't have access to the _osx_support source code. > > _osx_support is a pure python module in the stdlib, the source is in the usual > location. Of course it is. I don't know where I was looking. Basically, get_platform_osx() overrides the value of 'machine' passed in. So in distutils.util.get_platform() it looks like it's doing a similar thing as for Linux (uname) but it then throws it away and lets _osx_support.get_platform_osx() do its own thing. > The behavior on OSX is quite intentional and ensures that distutils binary archive > names correctly reflect the use of fat binaries and the minimal supported OSX release. > > The only thing that might need change is the name of the supported architectures, > the wheel spec has a better way to indicate multiple executable architectures than > making up names for every set of architectures that we care to support, but to be > honest I haven't had time yet to fully ingest the spec and work out if it is completely > useful for fat binaries on OSX. > >> >>> >>>> return value is wrong on Linux and correct on >>>> Windows, right? >>> >>> no -- I'm saying that it's right on Windows (and OS-X), but wrong on Linux. >> >> I think you have misread my sentence, and we actually agree here. >> >> What's the next action? Report a Python bug? (That's a cultural question; I'm >> new to Python.) > > http://bugs.python.org/ Thanks, I'll report one.
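The interpreter-vs-OS distinction at the heart of this get_platform() thread can be probed directly from Python. This is a stdlib-only sketch, not part of the original discussion and not what distutils itself does:

```python
import platform
import struct

def interpreter_bits():
    # Pointer size of the *running interpreter*: a 32-bit Python on a
    # 64-bit OS reports 32 here, which is what a binary installer
    # actually needs to know (the Windows/OS-X behaviour).
    return struct.calcsize("P") * 8

def os_machine():
    # What the *kernel* reports (roughly `uname -m`); on 64-bit Linux
    # this is "x86_64" even under a 32-bit interpreter, which is the
    # behaviour the thread argues get_platform() wrongly inherits.
    return platform.machine()
```

Comparing the two values on a 32-bit interpreter running on a 64-bit Linux host reproduces the mismatch Sam reported.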
Sam _______________________________________________ From dholth at gmail.com Thu Aug 22 15:01:18 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 22 Aug 2013 09:01:18 -0400 Subject: [Distutils] State of the wheel spec (Was: How to handle launcher script importability?) In-Reply-To: References: Message-ID: On Thu, Aug 22, 2013 at 4:24 AM, Paul Moore wrote: > On 21 August 2013 23:20, Vinay Sajip wrote: >> >> Nick Coghlan gmail.com> writes: >> >> > Um, the current wheel spec uses PEP 345 + setuptools metadata only. If >> distlib is expecting PEP 426 metadata in wheel files, it is not compliant >> with >> the spec. >> >> I can certainly rectify that - I was possibly confused by the fact that >> the >> latest wheel implementation writes pydist.json to the wheel (though the >> Wheel- >> Version in WHEEL is still 1.0). > > > Conversely, of course, there's no mention in the wheel spec that setuptools > metadata (specifically entry-points.txt) should be present. Which is why I > mentioned that the wheel spec might need a review/update to clarify this (if > we want to ensure that any necessary script metadata is guaranteed to be > present in compliant wheels).
pydist.json is in there in order to have an implementation / old metadata converter to inform PEP 426 development. I added the "generator" tag to PEP 426 to deal with the problem of detecting pydist.json files that conform to obsolete drafts of the spec. In the meantime the stable metadata is what setuptools supports inside .dist-info directories, which is similar to old-draft key/value PEP 426, including Provides-Extra: etc. Apart from the upcoming wrapper script generation, the basic wheel install step shouldn't need to read any of the setuptools/PEP 345/426 metadata at all. From ncoghlan at gmail.com Thu Aug 22 15:49:59 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Aug 2013 23:49:59 +1000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Disclaimer: everything I say below about pip is ultimately up to the pip devs. I'm just pointing out what I think makes sense, and my reading of Donald's comments means that I expect he would feel the same way. On 22 August 2013 17:22, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: >> What that means though, is we need *something else* that indicates the >> common core that people can assume will always be available. It's this > > If that "something else" you're thinking of is something that is supposed to > live in the stdlib, then I see no reason why a subset of distlib couldn't be > that something else, since stdlib changes are not on the table for 3.4. I > certainly have never envisaged that distlib would be adopted wholesale into > the stdlib (if at all) without peer review and any changes coming out of that.
I *can't* tell people "we're going to migrate to distlib as the reference packaging infrastructure implementation" when I really mean "we're going to migrate to some as yet undefined subset of distlib, so 'import distlib' won't be the long term answer". Incorporating only a subset of an existing published API into the standard library is a mistake - the PyXML debacle shows us that. If the API is different (even if that means a strict subset), then it needs a different name. >> common core which pip will need to factor out to remove their >> dependency on setuptools, rather than adopting distlib wholesale, >> experimental features and all. > > I honestly think you're making a bit too much of the "experimental" label > here, even though it is a label that I use myself. For me, that label is > most appropriate for the extended metadata that I collect from PyPI and > which is the basis for distlib's smarter dependency resolution. > > If your concerns are about instability due to experimental features (and I > quite understand the importance of stability in packaging), then there's > nothing stopping anyone doing a technical review of distlib to see what any > actual risks are. Indeed, I've invited such review from day one. It has nothing to do with code quality, and everything to do with being able to explain the migration plan to people. I *can* say to them "pip is going to cherry pick parts of distlib and potentially other libraries and make them available as 'piplib', which will be installed automatically when you install pip". At the moment, I no longer feel I can say "distlib will become the reference implementation". Note that there's also the bootstrapping issue with having pip depend on an external library: having the core library *in pip* makes that problem go away. >> Currently, pip doesn't expose a programmatic API. I suggested to >> Donald that it may make sense to start exposing one as "piplib". 
The > I think this would be a mistake, and it seems a little early to make this > sort of decision. You've given me to understand that pip could at some > future point use (some subset of) distlib under the covers, with > compatibility maintained at the CLI level. If that is still the case, then I > don't see much value in having two lib layers. When I made that suggestion, I misunderstood your plans for distlib. If pip are only adopting a subset of it, they can't use the same name, or people will get confused. I now think it makes more sense for pip to migrate to a more tightly constrained public library API, that doesn't go beyond the documented metadata standards (except for legacy compatibility reasons). > Like setuptools, pip has done sterling service, but it's not a codebase I'd > like to see become the basis for our long-term packaging infrastructure. I > don't mean to offend anybody by saying that - it's just software that has > grown organically over time and IMO there will be technical debt to repay if > we go down the route of exposing bits of it as Python APIs. I don't expect the contents of piplib to match the contents of the existing pip module. This is about enabling a gradual refactoring over to a cleaner core library with a public API, rather than a big bang migration to an alternative solution like distlib. > It certainly feels like you're side-lining distlib, or planning to, whether > or not that's the message you're intending to send. No matter :-) I don't currently believe your plans for distlib and my plans for the standard library software distribution support (whether directly in the standard library or via the pip bootstrapping) are compatible. If I am correct, then distlib remains extremely valuable in that scenario, but the nature of its role changes. I want a completely barebones "absolutely no features that aren't covered by an Accepted PEP" library as a candidate for future inclusion.
Such a library *cannot* be particularly useful at this point in time, because the metadata 2.0 PEPs are nowhere near complete enough for that. By contrast, you understandably wish for distlib to be useful *now*, which means running ahead of the standardisation process, filling in missing features as needed. Assuming the pip devs are amenable (and given Donald's comments in this thread, I expect they will be), making the decision now that pip *will not* bundle distlib directly, but instead will create its own support library means we have a clear path forward for defining the "suitable subset of distlib (and perhaps other libraries)" that will become the "available by default" core library for installation tools, while leaving you free to make distlib as useful as it can be in the near term, even if it takes the standardisation process a while to catch up. The eventual core API *probably won't support* the legacy metadata formats, instead leaving that to setuptools and/or distlib. It will probably depend on the upgraded PyPI APIs as well (once they're defined). Ideally, this will give us at least two competing implementations on the metadata 2.0 production side (setuptools/bdist_wheel and distlib/distil) and two on the consumption side (piplib/pip and distlib/distil), so the various combinations should help us ensure we've eliminated most of the ambiguity from the specifications and aren't going to end up with excessively high levels of implementation-defined behaviour again. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia On 22 August 2013 17:22, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> I previously thought distlib was going to be the repository for the >> agreed, stable, "this is going to happen" stuff.
It's OK that I was >> wrong - I think you're right that somewhere is needed as an >> experimental location to show some of the *possibilities* of the new >> metadata, and to seed ideas for making it into the eventual standard >> base that people can assume is readily available. > > It's not just about completely new, experimental stuff. For example, the > resources functionality isn't completely new territory. The PyPI interfacing > is (IMO) a saner API than the one in distutils2. A better Windows story (for > when launcher support when py.exe can't be used) is also not rocket science. > >> What that means though, is we need *something else* that indicates the >> common core that people can assume will always be available. It's this > > If that "something else" you're thinking of is something that is supposed to > live in the stdlib, then I see no reason why a subset of distlib couldn't be > that something else, since stdlib changes are not on the table for 3.4. I > certainly have never envisaged that distlib would be adopted wholesale into > the stdlib (if at all) without peer review and any changes coming out of that. > >> common core which pip will need to factor out to remove their >> dependency on setuptools, rather than adopting distlib wholesale, >> experimental features and all. > > I honestly think you're making a bit too much of the "experimental" label > here, even though it is a label that I use myself. For me, that label is > most appropriate for the extended metadata that I collect from PyPI and > which is the basis for distlib's smarter dependency resolution. > > If your concerns are about instability due to experimental features (and I > quite understand the importance of stability in packaging), then there's > nothing stopping anyone doing a technical review of distlib to see what any > actual risks are. Indeed, I've invited such review from day one. > >> Currently, pip doesn't expose a programmatic API. 
I suggested to >> Donald that it may make sense to start exposing one as "piplib". The > > I think this would be a mistake, and it seems a little early to make this > sort of decision. You've given me to understand that pip could at some > future point use (some subset of) distlib under the covers, with > compatibility maintained at the CLI level. If that is still the case, then I > don't see much value in having two lib layers. > > Like setuptools, pip has done sterling service, but it's not a codebase I'd > like to see become the basis for our long-term packaging infrastructure. I > don't mean to offend anybody by saying that - it's just software that has > grown organically over time and IMO there will be technical debt to repay if > we go down the route of exposing bits of it as Python APIs. > > It certainly feels like you're side-lining distlib, or planning to, whether > or not that's the message you're intending to send. No matter :-) > > Regards, > > Vinay Sajip > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From oscar.j.benjamin at gmail.com Thu Aug 22 15:52:52 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 22 Aug 2013 14:52:52 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 22 August 2013 12:57, Vinay Sajip wrote: >> I think that the installer ships variants for each architecture and >> decides at install time which to place on the target system. If that's >> the case then would it be possible for a wheel to ship all variants so >> that a post-install script could sort it out (rename/delete) after the >> wheel is installed? > > It's not just about the architecture on the target system, it's also about > e.g. what libraries are installed on the target system. 
Files like > numpy/__config__.py and numpy/distutils/__config__.py are created at build > time, based on local conditions, and those files would then be written to > the wheel. On the installation machine, the environment may not be > compatible with those configurations computed on the build machine. Those > are the things I was talking about which may need moving from build-time to > run-time computations. I'm pretty sure the current Windows installer just doesn't bother with BLAS/LAPACK libraries. Maybe it will become possible to expose them via a separate wheel-distributed PyPI name one day. That would help since they're currently not very easy to setup/build on Windows but the same sse etc. issues would apply to them as well. For now just leaving out BLAS/LAPACK is probably okay. apt-get doesn't bother to install them for numpy either (on Ubuntu). It will set them up properly if you explicitly ask for them though. Oscar From vinay_sajip at yahoo.co.uk Thu Aug 22 16:19:33 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 22 Aug 2013 14:19:33 +0000 (UTC) Subject: [Distutils] How to handle launcher script importability? References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > standard library is a mistake - the PyXML debacle shows us that. If > the API is different (even if that means a strict subset), then it > needs a different name. I'm not really hung up about a specific name - what's in a name? > It has nothing to do with code quality, and everything to do with > being able to explain the migration plan to people. I *can* say to Code quality is pertinent when it's the subtext behind "experimental". > them "pip is going to cherry pick parts of distlib and potentially > other libraries and make them available as 'piplib', which will be > installed automatically when you install pip". 
At the moment, I no > longer feel I can say "distlib will become the reference > implementation". Note that there's also the bootstrapping issue with > having pip depend on an external library: having the core library *in > pip* makes that problem go away. > When I made that suggestion, I misunderstood your plans for distlib. > If pip are only adopting a subset of it, they can't use the same name, > or people will get confused. I can certainly see that there are ways to avoid confusion. But never mind, I see that you've made your decision. I would have hoped for a more transparent decision process, but that's probably due to my slowness of uptake. Regards, Vinay Sajip From ncoghlan at gmail.com Thu Aug 22 17:04:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 23 Aug 2013 01:04:46 +1000 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 23 August 2013 00:19, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: >> When I made that suggestion, I misunderstood your plans for distlib. >> If pip are only adopting a subset of it, they can't use the same name, >> or people will get confused. > > I can certainly see that there are ways to avoid confusion. But never mind, > I see that you've made your decision. I would have hoped for a more > transparent decision process, but that's probably due to my slowness of uptake. The only decision I've made is to stop saying "distlib is the future", since that is now in doubt, and I certainly don't have the time available or expertise needed to review all the APIs that have been added to it (I thought it was just the four distutils2 interfaces that almost made it into Python 3.3 and that all your experimental interfaces were in distil, not distlib. 
While there was plenty of evidence to indicate I was wrong in that belief, I wasn't paying proper attention to it and it didn't properly register until it came up in this thread). The next step is up to the pip folks - if they think adopting distlib wholesale makes sense for them, fine, I have no direct say in that. If they decide to make a "piplib" instead, to expose a public API for an updated version of pip's own infrastructure (perhaps derived from distlib), that's fine by me, too. The only absolute in this space relates to the default installation toolchain: it *will* be pip. Unlike setuptools as a build system, I consider easy_install irredeemable as an installer (from a social perspective), and there's no way I would ever inflict yet another change of recommended client on the community. Given that, any future changes to the core infrastructure will be heavily influenced by the technical choices of the pip developers. Other tools will exist around that, since packaging is a complex topic where "one size" really doesn't fit all, but pip will be the centrepiece. Cheers, Nick. From p.f.moore at gmail.com Thu Aug 22 17:16:10 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 22 Aug 2013 16:16:10 +0100 Subject: [Distutils] How to handle launcher script importability? In-Reply-To: References: <1376488739.68601.YahooMailNeo@web171402.mail.ir2.yahoo.com> <7AD31EE8-46D9-49C4-89EA-FDE4AA092FF2@stufft.io> <7160AEA5-C772-48D1-8776-707475288B00@stufft.io> Message-ID: On 22 August 2013 16:04, Nick Coghlan wrote: > The next step is up to the pip folks - if they think adopting distlib > wholesale makes sense for them, fine, I have no direct say in that. If > they decide to make a "piplib" instead, to expose a public API for an > updated version of pip's own infrastructure (perhaps derived from > distlib), that's fine by me, too.
> For what it's worth, we currently have a vendored copy of distlib bundled into pip but (a) it's pretty out of date now and (b) we only make minimal use of it - in particular we do not use it for any of the wheel support at the moment. I don't have any feel for what we might do going forward - I suspect we'll wait until the dust settles a bit on the whole issue in distutils-sig before trying to make a decision. For virtualenv, I have a longer-term plan to switch to bundling pip and setuptools wheels instead of sdists, but again I don't plan on rushing into a decision on how I'll do that. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu Aug 22 17:33:40 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 08:33:40 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Thu, Aug 22, 2013 at 6:52 AM, Oscar Benjamin wrote: > I'm pretty sure the current Windows installer just doesn't bother with > BLAS/LAPACK libraries. Maybe it will become possible to expose them > via a separate wheel-distributed PyPI name one day. Well, the rule of thumb with Windows binaries is that you bundle in (usually via static linking) all the libs you need -- numpy could have a semi-optimized LAPACK or not, and the user shouldn't know the difference at install time. But the trick in this case is that numpy is used by itself, but also widely used with external C and Fortran that might want LAPACK. (including scipy, in fact...) But maybe this is all too much to bite off for pip and wheels. If we could get to a state where "pip install numpy" and "pip install scipy" would do something reasonable, if not optimized, I think that would be great! And it's really not a big deal to say: If you want an optimized LAPACK, etc. for your system, you need to do something special/ by hand/ etc... 
"something special" may be as simple as "download numpy_optimized_just_for_this_machine.whl and install it with pip. All that being said -- OS-X has a moderately complex binary set, what with universal binaries, so maybe we can have a bit more meta-data about the architecture supported. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From p.f.moore at gmail.com Thu Aug 22 18:37:50 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 22 Aug 2013 17:37:50 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 22 August 2013 16:33, Chris Barker - NOAA Federal wrote: > But maybe this is all too much to bite off for pip and wheels. If we > could get to a state where "pip install numpy" and "pip install scipy" > would do something reasonable, if not optimized, I think that would be > great! And it's really not a big deal to say: > That is essentially possible now. 1. Go to Christoph Gohlke's website and download his bdist_wininst installers for numpy and scipy. 2. Make sure you have pip 1.4+, setuptools and wheel installed (you only need wheel for the wheel convert step) 3. wheel convert numpy-*.exe; wheel convert scipy-*.exe 4. pip install numpy*.whl scipy*.whl You need to manually choose the right wininst installers for your architecture (or you can grab and wheel convert them all and let the architecture tags choose the right ones). I don't believe that Christoph's installers need a postinstall step, so the architecture/instruction set issues don't apply. But I've only visually checked and done some minimal use tests myself. Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
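[Editor's note: Paul's step 4 above works because pip matches the PEP 425 compatibility tags embedded in each wheel's filename against the running interpreter; "choosing the right wininst installers for your architecture" is exactly what those tags automate once the files are wheels. A minimal sketch of deriving the local tags follows - the helper name is hypothetical and this is not pip's real implementation; it also assumes CPython (the "cp" prefix).]

```python
import sys
import sysconfig

def local_wheel_tags():
    """Sketch: derive the Python and platform tags for this interpreter.

    Hypothetical helper for illustration only, not pip's actual code.
    Assumes CPython, hence the "cp" implementation prefix.
    """
    py_tag = "cp%d%d" % sys.version_info[:2]
    # PEP 425 tags may not contain '-' or '.', so both are replaced by '_'
    plat_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
    return py_tag, plat_tag

print(local_wheel_tags())
```

On a 32-bit Windows Python 2.7 this should yield ("cp27", "win32"), matching wheels converted from the win32 installers.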
URL: From oscar.j.benjamin at gmail.com Thu Aug 22 19:12:27 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 22 Aug 2013 18:12:27 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 22 August 2013 16:33, Chris Barker - NOAA Federal wrote: > On Thu, Aug 22, 2013 at 6:52 AM, Oscar Benjamin > wrote: > >> I'm pretty sure the current Windows installer just doesn't bother with >> BLAS/LAPACK libraries. Maybe it will become possible to expose them >> via a separate wheel-distributed PyPI name one day. > > Well, the rule of thumb with Windows binaries is that you bundle in > (usually via static linking) all the libs you need -- numpy could have > a semi-optimized LAPACK or not, and the user shouldn't know the > difference at install time. But the trick in this case is that numpy > is used by itself, but also widely used with external C and Fortran > that might want LAPACK. (including scipy, in fact...) > > But maybe this is all too much to bite off for pip and wheels. If we > could get to a state where "pip install numpy" and "pip install scipy" > would do something reasonable, if not optimized, I think that would be > great! Agreed. And actually 'pip wheel numpy' works fine for me on Windows with MinGW installed. (I don't even need to patch distutils because numpy.distutils fixed the MinGW bug three years ago!). There's no BLAS/LAPACK support but I assume it has the appropriate SSE2 build which is basically what the win32 superpack gives you. > And it's really not a big deal to say: > > If you want an optimized LAPACK, etc. for your system, you need to do > something special/ by hand/ etc... > > "something special" may be as simple as "download > numpy_optimized_just_for_this_machine.whl and install it with pip. Exactly. 
Speaking personally, I do all my real computation on Linux clusters managed by scientific software professionals who have hand-tuned and pre-built a bewildering array of alternative BLAS/LAPACK setups and numpy etc. to go with. For me having numpy on Windows is just for developing, debugging etc. so hard-core optimisation isn't important. Oscar From chris.barker at noaa.gov Fri Aug 23 00:05:49 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 15:05:49 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: And sent from Paul Moore -- accidentally got off-list (my fault): On 22 August 2013 18:47, Chris Barker - NOAA Federal wrote: > I'm still confused as to the state of all this -- are the tools ready > for projects to start posting wheels so that pip can find them? Basically, the answer is "maybe". Projects that do will still very much be early adopters with all the risks that entails - things could change, users may need help when things don't go quite as expected, etc. C extensions are not ready for use yet anywhere other than on Windows (Linux in particular has architecture/ABI questions to resolve). Projects that use script wrappers probably shouldn't expect wheels to manage these seamlessly yet (things are still in flux in that area). "Advanced" features like post-install scripts don't work yet. The rest is likely fine. If you have a simple pure-Python project, or one with a straightforward C extension, you should be fine. And of course, the more people that start publishing wheels, the more we'll get feedback on how things are working, and the faster things will settle down. Organisations maintaining internal (or even public) package indexes hosting "unofficial" wheels for projects they have an interest in is probably more realistic at this stage.
But I know of no such public indexes (the nearest is Christoph Gohlke's site, but that is not structured to be a PyPI-style index, so it's not usable directly by pip even if he were to switch to wheels). And it's hard to determine if anyone is doing things like that internally. -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From chris.barker at noaa.gov Fri Aug 23 00:08:56 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 15:08:56 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: > C extensions are not ready for use yet anywhere other than on Windows > (Linux in particular has architecture/ABI questions to resolve). > Projects that use script wrappers probably shouldn't expect wheels to > manage these seamlessly yet (things are still in flux in that area). > "Advanced" features like post-install scripts don't work yet. > The rest is likely fine. > > If you have a simple pure-Python project, or one with a > straightforward C extension, you should be fine. but there is little point for a pure-python project. -- pip install works fine from source for those. as for a straightforward C extension -- does that only work for Windows now (as above)? I want to give it a shot for OS-X -- no one seems to want to maintain bdist_mpkg, and it's time to move forward... > And it's hard to determine > if anyone is doing things like that internally. Yup -- I'll probably keep it internal at first as well. -CHB -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From donald at stufft.io Fri Aug 23 00:18:27 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 22 Aug 2013 18:18:27 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Aug 22, 2013, at 6:08 PM, Chris Barker - NOAA Federal wrote: > but there is little point for a pure-python project. -- pip install > works fine from source for those. This isn't true; a pure Python wheel is still great. It's an order of magnitude faster with fewer moving parts and static metadata. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From chris.barker at noaa.gov Fri Aug 23 00:03:55 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 15:03:55 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Thu, Aug 22, 2013 at 9:37 AM, Paul Moore wrote: > That is essentially possible now. > > 1. Go to Christoph Gohlke's website and download his bdist_wininst > installers for numpy and scipy. .... Exactly. And when all this settles down, hopefully Christoph, and others, will put up binary wheels and we'll have one-stop installing that supports virtualenv, and PyPI discovery of "good enough" binary wheels. My point is that it may not be wise to try to support the more complex builds -- they ARE complex, and trying to support it with an auto tool is a bit much. Oscar wrote: > And actually 'pip wheel numpy' works fine for me on Windows with MinGW installed.
Good start, but the bigger issue is that 'pip install' finds that wheel... I'm still confused as to the state of all this -- are the tools ready for projects to start posting wheels so that pip can find them? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From p.f.moore at gmail.com Fri Aug 23 00:52:38 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 22 Aug 2013 23:52:38 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 22 August 2013 23:08, Chris Barker - NOAA Federal wrote: > I want to give it a shot for OS-X -- no one seems to want to maintain > bdist_mpkg, and it's time to move forward... > My impression is that the architecture and "fat binary" stuff on OSX is the bit that may bite you. I know little or nothing about OSX, but I'm sure if you try and report on how you get on, the people on the list will be able to help you get things sorted and we will be able to get any dark corners ironed out. Paul From chris.barker at noaa.gov Fri Aug 23 00:57:12 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 15:57:12 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Thu, Aug 22, 2013 at 3:52 PM, Paul Moore wrote: > On 22 August 2013 23:08, Chris Barker - NOAA Federal > My impression is that the architecture and "fat binary" stuff on OSX is the > bit that may bite you. exactly. > I know little or nothing about OSX, but I'm sure if > you try and report on how you get on, the people on the list will be able to > help you get things sorted and we will be able to get any dark corners > ironed out. yup -- we'll never figure it out if no one uses it...
-Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From ncoghlan at gmail.com Fri Aug 23 05:17:32 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 23 Aug 2013 13:17:32 +1000 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 23 August 2013 08:03, Chris Barker - NOAA Federal wrote: > On Thu, Aug 22, 2013 at 9:37 AM, Paul Moore wrote: > >> That is essentially possible now. >> >> 1. Go to Christoph Gohlke's website and download his bdist_wininst >> installers for numpy and scipy. > > .... > > Exactly. And when all this settles down, hopefully Christoph, and > others, will put up binary wheels and we'll have one stop installing > that supports virtualenv, and pypi discover of "good enough" binary > wheels. Right - this is exactly my ambition. Have pip+wheel+virtualenv provide a "good enough" out-of-the-box experience, but not necessarily support a fully optimised experience for any given platform. There are too many different possible full stack integration technologies for it to make sense for us to try to supersede them all - instead, I'd like to provide an 80% cross-platform solution that "plays well enough with others" to cover the remaining 20%. The "others" then includes things like zc.buildout, conda, Linux distro package managers, Microsoft System Centre, automated configuration management tools, PaaS providers like OpenShift and Heroku, container technologies like Docker and the underpinnings of OpenShift. 
In this space, the goal of the pip+wheel ecosystem will be to make it possible to either use a command like "pip install zc.buildout" or "pip install conda" to bootstrap a cross-platform toolchain, or else to use a platform specific toolchain (like pyp2rpm) to *consume* the upstream packages and produce nice, policy compliant, packages automatically. Managing arbitrary binary dependencies adds a lot of complexity for a capability that I believe the majority of Python projects don't need. Even the ones that can use it (like scientific tools) can often provide a "good enough" fallback option that will fit within the constraints of the draft metadata 2.0 standards. > My point is that is may not be wise to try to support the more complex > builds -- they ARE complex, and trying to suppor it with an auto tool > is a bit much. This is where I think "pip install conda" shows a lot of promise as a good, cross-platform solution, at least in the scientific space. However, the trade-off it makes is that the hash-based packaging system means you *always* pin your dependencies when building a package, with all the downsides that entails (mainly in increasing the complexity of security updates). It's just that in the scientific space, easily reproducing a particular software stack will often trump the desire to make security updates easy to deploy with minimal impact on other components. That said, I'm considering the idea of adding a "variant" field to the compatibility tags for wheel 1.1, along the lines of what Oscar Benjamin suggested earlier. By default, installers would only find wheels without a variant defined, but users could opt in to looking for particular variants. The meaning of the variants field would be entirely distribution specific. 
Numpy, for example, could publish: numpy-1.7.1-cp27-cp22m-win32.whl numpy-1.7.1-cp27-cp22m-win32-sse.whl numpy-1.7.1-cp27-cp22m-win32-sse2.whl numpy-1.7.1-cp27-cp22m-win32-sse3.whl The only restrictions on the variant tags would be: 1. must be ASCII 2. must not contain '.' or '-' characters You could even go to the extent of using hashes as variant tags. > Oscar wrote: >> And actually 'pip wheel numpy' works fine for me on Windows with MinGW > installed. > > Good start, but the bigger issue is that 'pip install' finds that wheel... > > I'm still confused as to the state of all this -- are the tools ready > for project to start posting wheels so that pip can find them? As others have noted: - definite yes for building your own wheels for internal use (including simple build caching to speed up virtualenv creation) - qualified yes for publication on PyPI (i.e. "there may still be rough edges, so don't be too surprised if this still has flaws at this point, especially on OS X and Linux") Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Aug 23 06:00:46 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 23 Aug 2013 00:00:46 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Aug 22, 2013, at 11:17 PM, Nick Coghlan wrote: > - qualified yes for publication on PyPI (i.e. "there may still be > rough edges, so don't be too surprised if this still has flaws at this > point, especially on OS X and Linux") PyPI won't even accept binary Wheels for Linux or OSX at the moment. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From benzolius at yahoo.com Fri Aug 23 06:07:52 2013 From: benzolius at yahoo.com (Benedek Zoltan) Date: Thu, 22 Aug 2013 21:07:52 -0700 (PDT) Subject: [Distutils] buildout bootstrap.py doesn't work on Sabayon Linux with system python Message-ID: <1377230872.43864.YahooMailNeo@web121605.mail.ne1.yahoo.com> Hi, I've downloaded bootstrap.py and tried to initialize with system python:

sabd1 at sab /home/buildout $ wget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py
Warning: wildcards not supported in HTTP.
--2013-08-23 06:40:52--  http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py
Resolving svn.zope.org... 74.84.203.155
Connecting to svn.zope.org|74.84.203.155|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/x-python]
Saving to: 'bootstrap.py'

    [ <=> ] 10,107  41.2KB/s  in 0.2s

2013-08-23 06:40:53 (41.2 KB/s) - 'bootstrap.py' saved [10107]

sabd1 at sab /home/buildout $ python bootstrap.py
Downloading http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
Traceback (most recent call last):
  File "bootstrap.py", line 258, in <module>
    ws.require(requirement)
  File "/tmp/tmpzJN6Tt/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 666, in require
  File "/tmp/tmpzJN6Tt/setuptools-0.6c11-py2.7.egg/pkg_resources.py", line 569, in resolve
pkg_resources.VersionConflict: (setuptools 0.6c11 (/tmp/tmpzJN6Tt/setuptools-0.6c11-py2.7.egg), Requirement.parse('setuptools>=0.7'))

I know, it works with virtualenv, but with system python is this expected behavior? Thanks Zoltan Benedek -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Aug 23 06:47:24 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 22 Aug 2013 21:47:24 -0700 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On Thu, Aug 22, 2013 at 8:17 PM, Nick Coghlan wrote: > numpy-1.7.1-cp27-cp22m-win32.whl > numpy-1.7.1-cp27-cp22m-win32-sse.whl > numpy-1.7.1-cp27-cp22m-win32-sse2.whl > numpy-1.7.1-cp27-cp22m-win32-sse3.whl I'm still confused -- how would "pip install numpy" know which of these to install? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From donald at stufft.io Fri Aug 23 06:50:33 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 23 Aug 2013 00:50:33 -0400 Subject: [Distutils] What does it mean for Python to "bundle pip"?
In-Reply-To: References: Message-ID: <13A294B7-71F1-4977-AFD5-994CA8BFB46A@stufft.io> On Aug 23, 2013, at 12:47 AM, Chris Barker - NOAA Federal wrote: > On Thu, Aug 22, 2013 at 8:17 PM, Nick Coghlan wrote: > >> numpy-1.7.1-cp27-cp22m-win32.whl >> numpy-1.7.1-cp27-cp22m-win32-sse.whl >> numpy-1.7.1-cp27-cp22m-win32-sse2.whl >> numpy-1.7.1-cp27-cp22m-win32-sse3.whl > > I'm still confused -- how would "pip install numpy" know which of > these to install? Most likely pip would just ignore anything that has a variant it doesn't understand. So it would only see the first one as an available download. However an alternative installer that understands the variants could use the additional information to select one of the SSE optimized downloads. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ronaldoussoren at mac.com Fri Aug 23 08:22:18 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 23 Aug 2013 08:22:18 +0200 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: <431A1498-B690-40BB-8446-C26DB18FED59@mac.com> On 23 Aug, 2013, at 0:52, Paul Moore wrote: > On 22 August 2013 23:08, Chris Barker - NOAA Federal wrote: > I want to give it a shot for OS-X -- no one seems to want to maintian > bdist_mpkg, and it's time to move forward... > > My impression is that the architecture and "fat binary" stuff on OSX is the bit that may bite you. I know little or nothing about OSX, but I'm sure if you try and report on how you get on, the people on the list will be able to help you get things sorted and we will be able to get any dark corners ironed out. I don't really expect problems on OSX, I've used binary eggs in the past and those work just fine. 
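Donald's filtering rule above — a variant-unaware installer simply ignores any wheel whose filename carries a variant tag — could be sketched as follows. This is a hypothetical helper, not actual pip code, and it assumes the name-version-pytag-abitag-platform[-variant].whl layout proposed in this thread (it would need more care for project names containing hyphens):

```python
def split_variant(wheel_name):
    """Split a wheel filename into (tagged stem, variant or None).

    Assumes the hypothetical layout from this thread:
    name-version-pytag-abitag-platform[-variant].whl
    """
    stem = wheel_name[: -len(".whl")]
    parts = stem.split("-")
    # Five dash-separated parts -> no variant; six -> the last is the variant.
    if len(parts) == 6:
        return "-".join(parts[:5]), parts[5]
    return stem, None


def select_default(wheel_names):
    """Keep only variant-free wheels, as a variant-unaware pip would."""
    return [w for w in wheel_names if split_variant(w)[1] is None]


wheels = [
    "numpy-1.7.1-cp27-cp22m-win32.whl",
    "numpy-1.7.1-cp27-cp22m-win32-sse.whl",
    "numpy-1.7.1-cp27-cp22m-win32-sse2.whl",
    "numpy-1.7.1-cp27-cp22m-win32-sse3.whl",
]
print(select_default(wheels))  # only the variant-free wheel remains
```

A variant-aware installer would instead pass its requested variants to the filter rather than discarding them all.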
Wheels seem to be similar enough to eggs to not expect problems there either. The one thing that might be problematic later on is that distutils (and hence setuptools and eggs) uses labels for sets of architectures in fat binaries, while wheels can describe those directly. That is, distutils uses "intel" as the architecture string for the set {i386, x86_64}, while wheel can use something like "darwin_i386.darwin_x86_64" (through PEP 425, the actual value may be different as this is based on a light rereading of the pep). That is an optional difference and can be ignored for now. Note that fat binaries are not at all problematic from the point of view of installing a wheel; a fat binary is a single file that happens to work on multiple CPU architectures. Creating a structure that would allow for wheels that support both 32-bit and 64-bit Windows is harder because you'd have two .pyd files that obviously cannot have the same path in the filesystem or wheel archive (but easily solved by another level of indirection, such as a ".pyext" directory that can contain extension files whose name is the result of distutils.util.get_platform()). Ronald From ronaldoussoren at mac.com Fri Aug 23 08:31:58 2013 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 23 Aug 2013 08:31:58 +0200 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 23 Aug, 2013, at 5:17, Nick Coghlan wrote: > > That said, I'm considering the idea of adding a "variant" field to the > compatibility tags for wheel 1.1, along the lines of what Oscar > Benjamin suggested earlier. By default, installers would only find > wheels without a variant defined, but users could opt in to looking > for particular variants. The meaning of the variants field would be > entirely distribution specific.
Numpy, for example, could publish: > > numpy-1.7.1-cp27-cp22m-win32.whl > numpy-1.7.1-cp27-cp22m-win32-sse.whl > numpy-1.7.1-cp27-cp22m-win32-sse2.whl > numpy-1.7.1-cp27-cp22m-win32-sse3.whl > > The only restrictions on the variant tags would be: > 1. must be ASCII > 2. must not contain '.' or '-' characters > > You could even go to the extent of using hashes as variant tags. Is adding variants necessary? Numpy uses runtime selection for picking the most appropriate extension code (heck, AFAIK recent versions of GCC can even compile multiple variants of functions and pick the right one automatically). A variant field like this introduces a new failure mode: users that install packages on one machine and then copy the entire tree to another one where the software crashes hard because the other machine is older and doesn't support some CPU features. That said, I have no experience with directly using SSE as most of my code either doesn't do vector calculations or isn't anywhere close to the performance limitations of using naive code (CPU's are too fast ;-) Regards, Ronald From p.f.moore at gmail.com Fri Aug 23 08:51:52 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 23 Aug 2013 07:51:52 +0100 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 23 August 2013 05:47, Chris Barker - NOAA Federal wrote: > On Thu, Aug 22, 2013 at 8:17 PM, Nick Coghlan wrote: > > > numpy-1.7.1-cp27-cp22m-win32.whl > > numpy-1.7.1-cp27-cp22m-win32-sse.whl > > numpy-1.7.1-cp27-cp22m-win32-sse2.whl > > numpy-1.7.1-cp27-cp22m-win32-sse3.whl > > I'm still confused -- how would "pip install numpy" know which of > these to install?
> The cp27/cp22m/win32 bits are selected automatically based on target platform (it's the "tags" mechanism, see the Wheel PEP). Nick is suggesting that the version with no sse will always be selected by default, but the user can supply some sort of command line argument saying effectively "if you have an sse2 variant, give me that one instead". Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius at pov.lt Fri Aug 23 08:50:45 2013 From: marius at pov.lt (Marius Gedminas) Date: Fri, 23 Aug 2013 09:50:45 +0300 Subject: [Distutils] buildout bootstrap.py doesn't work on Sabayon Linux with system python In-Reply-To: <1377230872.43864.YahooMailNeo@web121605.mail.ne1.yahoo.com> References: <1377230872.43864.YahooMailNeo@web121605.mail.ne1.yahoo.com> Message-ID: <20130823065045.GA1694@fridge.pov.lt> On Thu, Aug 22, 2013 at 09:07:52PM -0700, Benedek Zoltan wrote: > I've downloaded bootstrap.py and tried to initialize with system python: > > sabd1 at sab /home/buildout $ wget http://svn.zope.org/*checkout*/zc.buildout/trunk/bootstrap/bootstrap.py FWIW that's an old version. You should be using one of

wget http://downloads.buildout.org/1/bootstrap.py
wget http://downloads.buildout.org/2/bootstrap.py

to get a bootstrap for zc.buildout 1.7.x or 2.2, respectively. If you don't know which one you want, use 2.

> sabd1 at sab /home/buildout $ python bootstrap.py
> Downloading http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11-py2.7.egg
> Traceback (most recent call last):
> ...
> pkg_resources.VersionConflict: (setuptools 0.6c11 (/tmp/tmpzJN6Tt/setuptools-0.6c11-py2.7.egg), Requirement.parse('setuptools>=0.7'))
>
> I know, it works with virtualenv, but with system python is this expected behavior?

Basically, yes. At least it's what I've come to expect.
Here's my fool-proof method of setting up buildouts in the brave new post-setuptools-0.7 world:

virtualenv python
python/bin/pip install -U setuptools
python/bin/python bootstrap.py
bin/buildout

You'll notice that I upgrade setuptools in the virtualenv I created. That's because I'm using python-virtualenv from Ubuntu, and it installs an old copy of distribute in the virtualenv by default. Then bootstrap tries to upgrade it to the newest distribute, which is a shim that depends on new setuptools, but bootstrap's upgrader isn't smart enough to go and fetch dependencies, so it all breaks down in the same error you've seen. Upgrading setuptools with pip avoids this failure. Marius Gedminas -- Beware of bugs in the above code; I have only proved it correct, not tried it. -- Donald Knuth -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 190 bytes Desc: Digital signature URL: From ncoghlan at gmail.com Fri Aug 23 09:40:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 23 Aug 2013 17:40:09 +1000 Subject: [Distutils] What does it mean for Python to "bundle pip"? In-Reply-To: References: Message-ID: On 23 August 2013 16:31, Ronald Oussoren wrote: > On 23 Aug, 2013, at 5:17, Nick Coghlan wrote: >> That said, I'm considering the idea of adding a "variant" field to the >> compatibility tags for wheel 1.1, along the lines of what Oscar >> Benjamin suggested earlier. By default, installers would only find >> wheels without a variant defined, but users could opt in to looking >> for particular variants. The meaning of the variants field would be >> entirely distribution specific. Numpy, for example, could publish: >> >> numpy-1.7.1-cp27-cp22m-win32.whl >> numpy-1.7.1-cp27-cp22m-win32-sse.whl >> numpy-1.7.1-cp27-cp22m-win32-sse2.whl >> numpy-1.7.1-cp27-cp22m-win32-sse3.whl >> >> The only restrictions on the variant tags would be: >> 1. must be ASCII >> 2. must not contain '.'
or '-' characters >> >> You could even go to the extent of using hashes as variant tags. > > Is adding variants necessary? Numpy use runtime selection for picking > the most appropriate extension code (heck, AFAIK recent versions of GCC > can even compile multiple variants of functions and pick the right one > automaticly). No, I'm not sure the variant system is necessary. It is almost certainly acceptable to constrain people to offering at most one binary per platform per Python interpreter per Python version for the wheel ecosystem, and suggest they use something hash based like conda if they need finer granularity than that. If we *did* add that flexibility to wheels, though, then the variant system is how I would do it. It would just be an arbitrary labelling mechanism that allowed users to say "give me this variant rather than the default one", not anything actually automated or necessarily reflecting an underlying system capability. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ralf at systemexit.de Fri Aug 23 10:40:33 2013 From: ralf at systemexit.de (Ralf Schmitt) Date: Fri, 23 Aug 2013 10:40:33 +0200 Subject: [Distutils] buildout bootstrap.py doesn't work on Sabayon Linux with system python In-Reply-To: <20130823065045.GA1694@fridge.pov.lt> (Marius Gedminas's message of "Fri, 23 Aug 2013 09:50:45 +0300") References: <1377230872.43864.YahooMailNeo@web121605.mail.ne1.yahoo.com> <20130823065045.GA1694@fridge.pov.lt> Message-ID: <87bo4oiyv2.fsf@winserver.brainbot.com> Marius Gedminas writes: > > Basically, yes. At least it's what I've come to expect. > > Here's my fool-proof method of setting up buildouts on the brave new > post-setuptools-0.7 world: > > virtualenv python > python/bin/pip install -U setuptools > python/bin/python bootstrap.py > bin/buildout > Another fool-proof method is setting up a virtualenv with the --no-setuptools --no-pip flags. 
This gives you a clean python environment to work with buildout and you only need to do this once. If you create that virtualenv as root, you're also protected from accidentally installing python packages. You'll need virtualenv 1.10 or above in order to use --no-setuptools --no-pip. -- Cheers Ralf From reachbach at outlook.com Fri Aug 23 12:45:31 2013 From: reachbach at outlook.com (bharath ravi kumar) Date: Fri, 23 Aug 2013 16:15:31 +0530 Subject: [Distutils] Distributable binary with dependencies Message-ID: Hi, I'm looking to package an application with all its dependencies for deployment on multiple hosts. I'd like to ensure that there is no compilation or setup step before starting the application in production. A nice-to-have ability would be to isolate base library dependencies per application (like virtualenv does). Ideally, the development -> deployment lifecycle would involve: (a) Build an application archive with all its dependencies baked in (b) Copy archive to a host in production. (c) Unwrap archive (d) Start services. (Note that the build host & production hosts are identical in architecture, OS patch level and python version). Having looked at various tools (e.g. distutils, setuptools, pip + virtualenv, egg, wheel, pyinstaller, etc.) available to address specific aspects/stages of the development lifecycle, I'm undecided on the tool/methodology to adopt in order to comprehensively solve the above problem. Any recommendation in this regard would be very helpful. Thanks for your time. -Bharath -------------- next part -------------- An HTML attachment was scrubbed...
URL: From carl at oddbird.net Fri Aug 23 18:39:02 2013 From: carl at oddbird.net (Carl Meyer) Date: Fri, 23 Aug 2013 10:39:02 -0600 Subject: [Distutils] Distributable binary with dependencies In-Reply-To: References: Message-ID: <52179026.4010901@oddbird.net> Hi Bharath, On 08/23/2013 04:45 AM, bharath ravi kumar wrote: > I'm looking to package an application with all its dependencies for > deployment on multiple hosts. I'd like to ensure that there is no > compilation or setup step before starting the application in production. > A nice-to-have ability would be to isolate base library dependencies > per application (like virtualenv does). Ideally, the development -> > deployment lifecycle would involve: (a) Build an application archive > with all its dependencies baked in (b) Copy archive to a host in > production. (c) Unwrap archive (d) Start services. (Note that the build > host & production hosts are identical in architecture, OS patch level > and python version). Some options if you want zero installation steps on production hosts: 1) Vendor dependencies' Python code directly into your application. You can use pip to automate this based on requirements files with a script like https://github.com/mozilla/moztrap/blob/master/bin/generate-vendor-lib - for portability this is normally only possible with pure-Python dependencies, but in your case if the app will only ever run on identical servers you could do it for compiled (C-extension) dependencies as well. One downside is that it generally requires some kind of sys.path hacking in your application environment or startup to make the vendor-library importable. 2) Install things into virtualenvs using pip on the build host (possibly using pre-built wheels to speed that up), and then copy the entire virtualenv to the production host. If build and production hosts are identical in every way (including the path of the virtualenv), this will Just Work.
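A minimal end-to-end sketch of this copy-the-virtualenv flow, runnable on one machine with two directories standing in for the build and production hosts. The stdlib venv module stands in for virtualenv here, and --without-pip keeps the sketch offline; all paths are illustrative:

```shell
set -e
# Two directories stand in for two machines; a real deployment would use
# the same absolute path (e.g. /srv/app) on both hosts.
BUILD=/tmp/build-host
PROD=/tmp/prod-host
mkdir -p "$BUILD/srv/app" "$PROD/srv/app"

# Build host: create the environment. A real build would use virtualenv
# and then run env/bin/pip install -r requirements.txt (pinned versions).
python3 -m venv --without-pip "$BUILD/srv/app/env"

# Ship the entire environment as one archive.
tar czf /tmp/app-env.tar.gz -C "$BUILD/srv/app" env

# Production host: unpack, then start services from env/bin.
tar xzf /tmp/app-env.tar.gz -C "$PROD/srv/app"
ls "$PROD/srv/app/env/bin"
```

Note the caveat Carl gives: the environment records absolute interpreter paths, so the build and production layouts must be identical for the copied environment to Just Work.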
There are also tools like https://github.com/PolicyStat/terrarium that aim to smooth this workflow. 3) Generate OS packages (debs or rpms) containing your complete app and installed dependencies. I'd recommend this for the smoothest experience once you have some tooling in place, but it will probably take some time initially to develop that tooling. You might find Hynek Schlawack's blog post on the topic helpful: http://hynek.me/articles/python-app-deployment-with-native-packages/ Carl From pje at telecommunity.com Fri Aug 23 21:38:46 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 23 Aug 2013 15:38:46 -0400 Subject: [Distutils] Distributable binary with dependencies In-Reply-To: References: Message-ID: On Fri, Aug 23, 2013 at 6:45 AM, bharath ravi kumar wrote: > I'm looking to package an application with all its dependencies for > deployment on multiple hosts. I'd like to ensure that there is no > compilation or setup step before starting the application in production. An > nice to have ability would be to isolate base library dependencies per > application (like virtualenv does). Ideally, the development -> deployment > lifecycle would involve: (a) Build an application archive with all its > dependencies baked in (b) Copy archive to a host in production. (c) Unwrap > archive (d) Start services. (Note that the build host & production hosts are > identical in architecture, OS patch level and python version). You can use "easy_install -Zmad deployment_dir application", then archive deployment_dir and extract it on the target machines. (Note: "application" must be a setuptools-packaged project with its dependencies declared, for easy_install to know what to build and deploy.) The "Z" option means "unzip eggs", "m" means "don't worry about the target being on sys.path; we're not trying to install a default version", "a" means "copy all dependencies, even if locally installed already", and "d" means "install libraries and scripts to the following directory". 
So, the scripts will be put inside deployment_dir with a bunch of adjacent subdirectories containing all the compiled and ready-to-use libraries. The resulting directory is a portable installation of "application": as long as the entire subdirectory is copied to the target machines, everything should work just fine. None of the dependencies or the app itself will interfere with other Python code installed on the target system; it is in a sense a minimal virtualenv which will run whatever scripts that easy_install puts in that directory. One note: the target machines *will* need pkg_resources installed, and it will not be included in the directory by default. If they don't have local copies installed (due to e.g. setuptools, distribute, etc. being installed), you can manually copy a pkg_resources.py to the deployment directory, and it will be used by whatever scripts are in that directory. While there may be other tools available that support this kind of thing, I don't think any of them can do it quite this simply. This deployment scenario was actually a key use case for the original design of easy_install and eggs, so it actually works pretty decently for this. From ncoghlan at gmail.com Sat Aug 24 14:27:39 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 24 Aug 2013 22:27:39 +1000 Subject: [Distutils] Changing the way packaging related PEPs are drafted Message-ID: In order to provide an issue tracker and easy pull requests for the metadata PEPs, I have set up a dedicated repo for them on BitBucket: https://bitbucket.org/pypa/pypi-metadata-formats/src The current contents are the PEP 426 (core-metadata.rst) and 440 (versioning.rst) drafts, along with copies of the previously accepted PEPs for 376 (installation-db.rst), 425 (compatibility-tags.rst) and 427 (wheel-format.rst). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Sat Aug 24 17:04:12 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 24 Aug 2013 15:04:12 +0000 (UTC) Subject: [Distutils] Changing the way packaging related PEPs are drafted References: Message-ID: Nick Coghlan gmail.com> writes: > > In order to provide an issue tracker and easy pull requests for the > metadata PEPs, I have set up a dedicated repo for them on BitBucket: > > https://bitbucket.org/pypa/pypi-metadata-formats/src > > The current contents are the PEP 426 (core-metadata.rst) and 440 > (versioning.rst) drafts, along with copies of the previously accepted > PEPs for 376 (installation-db.rst), 425 (compatibility-tags.rst) and > 427 (wheel-format.rst). Hmmm... is it possible to push them automatically to the peps repo on hg.python.org, still? Having stale versions online is confusing; moreover, the formatted version is much better on e.g. http://www.python.org/dev/peps/pep-0426/, than https://bitbucket.org/pypa/pypi-metadata-formats/src/tip/core-metadata.rst Regards Antoine. From donald at stufft.io Sat Aug 24 22:47:27 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 24 Aug 2013 16:47:27 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: <57BEEEE2-27F8-4F14-A01A-929333EF176B@stufft.io> On Aug 10, 2013, at 9:07 PM, Donald Stufft wrote: > [snip] I guess I'm going to ask for some pronouncement on this? It's been two weeks with no real feedback. FWIW tangentially related to this proposal, g.pypi.python.org is now 16 days out of date. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sat Aug 24 23:20:30 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Aug 2013 07:20:30 +1000 Subject: [Distutils] Changing the way packaging related PEPs are drafted In-Reply-To: References: Message-ID: On 25 Aug 2013 01:05, "Antoine Pitrou" wrote: > > Nick Coghlan gmail.com> writes: > > > > > In order to provide an issue tracker and easy pull requests for the > > metadata PEPs, I have set up a dedicated repo for them on BitBucket: > > > > https://bitbucket.org/pypa/pypi-metadata-formats/src > > > > The current contents are the PEP 426 (core-metadata.rst) and 440 > > (versioning.rst) drafts, along with copies of the previously accepted > > PEPs for 376 (installation-db.rst), 425 (compatibility-tags.rst) and > > 427 (wheel-format.rst). > > Hmmm... is it possible to push them automatically to the peps repo on > hg.python.org, still? > Having stale versions online is confusing; moreover, the formatted version > is much better on e.g. http://www.python.org/dev/peps/pep-0426/, than > https://bitbucket.org/pypa/pypi-metadata-formats/src/tip/core-metadata.rst The draft PEPs will still be updated on python.org when something is coherent enough for additional discussion (I should clarify that in the README). Aside from adding an issue tracker and pull request support, this change is about creating a public record of *all* the changes that persists across PEP number changes when we do a new version of one of the specifications. Cheers, Nick. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun Aug 25 07:57:37 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Aug 2013 15:57:37 +1000 Subject: [Distutils] Multi-version import support for wheel files Message-ID: I'm currently working on the docs for the __main__.__requires__ feature of pkg_resources, and have been generally poking around inside pkg_resources before I started on that. It gave me an idea for a question that has come up a few times: how should we support parallel installation of multiple versions of the same distribution in the same Python installation, *without* requiring complete isolation in the form of virtual environments. The current solution (at least in Fedora) is to use the multi-version support in pkg_resources by installing unpacked egg files. To use CherryPy as an example, Fedora currently provides RPMs for both CherryPy 2 and 3. CherryPy 3 is the default and installed directly into site-packages, with an appropriate .egg-info directory. CherryPy 2 is *not* the default, and is instead installed as an unpacked "CherryPy-2.3.0-py2.7.egg" directory. You can force this directory to be added to sys.path by doing the following in your __main__ module: __requires__ = ["CherryPy < 3"] import pkg_resources (__main__.__requires__ *has* to be set before importing pkg_resources or the default CherryPy 3 module will be activated automatically, and conflict with a later call to pkg_resources.require that asks for CherryPy 2) While I'm not a fan (to put it mildly) of non-trivial side effects when importing a module, this approach to multi-version imports *does* work well (and, as noted, I'm currently working on improving the docs for it), and I think the approach to the filesystem layout in particular makes sense - the alternative versions are installed to the usual location, but pushed down a level in a subdirectory or zip archive. So, it seems to me that only two new pieces are needed to gain multi-version import support for wheel files: 1.
An option (or options) to pip, telling it to just drop a wheel file (or the unpacked contents of the wheel as a directory) into site-packages instead of installing the distribution directly as the default version. The "root_is_purelib" setting in the wheel metadata would indicate whether the destination was purelib or platlib. A wheel installed this way wouldn't have script wrappers generated, etc - it would only allow the contents to be used as an importable library. 2. Support for the ".whl" filename format and internal layout in pkg_resources, modelling after the existing support for the ".egg" filename format. For wheels that include both purelib and platlib, this would involve adding both directories to the path. That means there wouldn't be any major design work to be done - just replicating what easy_install and pkg_resources already support for the egg format using the wheel format instead. Regardless of my personal feelings about the current *front end* API for multi-version imports (and PJE has put up with my ranting about that with remarkably good grace!), the current egg-based back end design looks solid to me and worth keeping rather than trying to invent something new for the sake of being different. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Aug 25 08:41:11 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 02:41:11 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> On Aug 25, 2013, at 1:57 AM, Nick Coghlan wrote: > [snip] I'll look at this closer, my off-the-cuff response isn't good but before I commit to a side I want to dig into how it actually works currently. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Aug 25 09:06:07 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Aug 2013 17:06:07 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On 25 August 2013 16:41, Donald Stufft wrote: > > On Aug 25, 2013, at 1:57 AM, Nick Coghlan wrote: > >> [snip] > > I'll look at this closer, my off the cuff response isn't good but before I > commit to a side I want to dig into how it actually works currently. The clumsiness of the __main__.__requires__ workaround aside, the main advantage this offers is that it *should* result in a relatively straightforward addition to pkg_resources to make it work with wheel files as well as eggs. That's important, because anyone that is currently doing side-by-side multi-versioning in Python is using the pkg_resources API to do it, since that's the only option currently available. If Fedora is going to switch completely to a wheel based build process, we need to be able to do it in a way that allows side-by-side imports through pkg_resources to continue to work. I'd previously considered writing a pkg_resources replacement that worked with wheels instead of eggs, but I've now realised that would be repeating one of the mistakes made with the distutils2 effort: just as we needed to account for the fact that a lot of people are currently building their projects with setuptools, we also need to account for the fact that anyone doing side-by-side imports is using pkg_resources. Cheers, Nick. 
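The requirement strings this thread keeps referring to ("CherryPy < 3" and friends) can be exercised directly through the pkg_resources API Nick mentions; a minimal sketch (the version numbers are illustrative):

```python
import pkg_resources

# The same requirement string used with __main__.__requires__ above.
req = pkg_resources.Requirement.parse("CherryPy < 3")

# require()/resolve() will only activate a distribution whose version
# satisfies the specifier; this containment check is the core of that test.
print("2.3.0" in req)  # a CherryPy 2 release satisfies the requirement
print("3.2.2" in req)  # a CherryPy 3 release does not
```

pkg_resources ships with setuptools, so the sketch needs nothing installed beyond that; no CherryPy distribution has to be present for the requirement itself to be evaluated.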
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Aug 25 09:24:34 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 03:24:34 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: <6E6A50F6-AB03-4A84-92B4-EF073D1A9021@stufft.io> On Aug 25, 2013, at 3:06 AM, Nick Coghlan wrote: > On 25 August 2013 16:41, Donald Stufft wrote: >> >> On Aug 25, 2013, at 1:57 AM, Nick Coghlan wrote: >> >>> [snip] >> >> I'll look at this closer, my off the cuff response isn't good but before I >> commit to a side I want to dig into how it actually works currently. > > The clumsiness of the __main__.__requires__ workaround aside, the main > advantage this offers is that it *should* result in a relatively > straightforward addition to pkg_resources to make it work with wheel > files as well as eggs. That's important, because anyone that is > currently doing side-by-side multi-versioning in Python is using the > pkg_resources API to do it, since that's the only option currently > available. > > If Fedora is going to switch completely to a wheel based build > process, we need to be able to do it in a way that allows side-by-side > imports through pkg_resources to continue to work. > > I'd previously considered writing a pkg_resources replacement that > worked with wheels instead of eggs, but I've now realised that would > be repeating one of the mistakes made with the distutils2 effort: just > as we needed to account for the fact that a lot of people are > currently building their projects with setuptools, we also need to > account for the fact that anyone doing side-by-side imports is using > pkg_resources. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia Yea I understand the motivations for it. 
The main thing I'm worried about is codifying a solution that ends up being a misfeature instead of a feature. This may be perfectly fine (hence why I want to dig into it further, having never used it much before) but in general I think we should be conservative in copying things over from the old way into the new way. The new tools in many ways are removing features that are less than optimal or overall a bad idea and that's good. We don't need to copy over every single feature; if someone really depends on a feature we don't bring over, they can continue to use the existing tooling. That being said I don't know for sure that this falls into that category which is why I want to dig into it more. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sun Aug 25 10:02:49 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Aug 2013 09:02:49 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On 25 August 2013 06:57, Nick Coghlan wrote: > I'm currently working on the docs for the __main__.__requires__ > feature of pkg_resources, and have been generally poking around inside > pkg_resources before I started on that. It gave me an idea for a > question that has come up a few times: how should we support parallel > installation of multiple versions of the same distribution in the same > Python installation, *without* requiring complete isolation in the > form of virtual environments. > > The current solution (at least in Fedora), is to use the multi-version > support in pkg_resources by installing unpacked egg files. To use > CherryPy as an example, Fedora currently provides RPMs for both > CherryPy 2 and 3.
> > CherryPy 3 is the default and installed directly into site-packages, > with an appropriate .egg-info directory > > CherryPy 2 is *not* the default, and is instead installed as an > unpacked "CherryPy-2.3.0-py2.7.egg" directory. You can force this > directory to be added to sys.path by doing the following in your > __main__ module: > > __requires__ = ["CherryPy < 3"] > import pkg_resources > > (__main__.__requires__ *has* to be set before importing pkg_resources > or the default CherryPy 3 module will be activated automatically, and > conflict with a later call to pkg_resources.requires that asks for > CherryPy 2) > > While I'm not a fan (to put it mildly) of non-trivial side effects > when importing a module, this approach to multi-version imports *does* > work well (and, as noted, I'm currently working on improving the docs > for it), and I think the approach to the filesystem layout in > particular makes sense - the alternative versions are installed to the > usual location, but pushed down a level in a subdirectory or zip > archive. > > So, it seems to me that only two new pieces are needed to gain > multi-version import support for wheel files: > > 1. An option (or options) to pip, telling it to just drop a wheel file > (or the unpacked contents of the wheel as a directory) into > site-packages instead of installing the distribution directly as the > default version. The "root_is_purelib" setting in the wheel metadata > would indicate whether the destination was purelib or platlib. A wheel > installed this way wouldn't have script wrappers generated, etc - it > would only allow the contents to be used as an importable library. > > 2. Support for the ".whl" filename format and internal layout in > pkg_resources, modelling after the existing support for the ".egg" > filename format. For wheels that include both purelib and platlib, > this would involved adding both directories to the path. 
> > That means there wouldn't be an major design work to be done - just > replicating what easy_install and pkg_resources already support for > the egg format using the wheel format instead. > > Regardless of my personal feelings about the current *front end* API > for multi-version imports (and PJE has put up with my ranting about > that with remarkably good grace!), the current egg-based back end > design looks solid to me and worth keeping rather than trying to > invent something new for the sake of being different. > Like Donald, I'm going to need to look into the proposal a bit more before commenting. My main concern is that for people who *don't* need multi-versioning, or who only use it for one or two special cases, there is no detrimental impact on user interface or runtime (the "if you don't use it, you don't pay for it" principle). A lot of the FUD around setuptools' multi-versioning was around precisely this point (path munging, etc) and I want to understand how the proposed solution avoids repeating those problems (to the extent that they were real and not imagined). Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Aug 25 10:29:31 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 04:29:31 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On Aug 25, 2013, at 4:02 AM, Paul Moore wrote: > My main concern is that for people who *don't* need multi-versioning, or who only use it for one or two special cases, there is no detrimental impact on user interface or runtime (the "if you don't use it, you don't pay for it" principle). A lot of the FUD around setuptools' multi-versioning was around precisely this point (path munging, etc) and I want to understand how the proposed solution avoids repeating those problems (to the extent that they were real and not imagined). 
Yea, my primary concerns are that the cost is paid only by those who use the feature, and that the way it works isn't going to leave us regretting the decision down the road. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From vinay_sajip at yahoo.co.uk Sun Aug 25 11:58:09 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 25 Aug 2013 09:58:09 +0000 (UTC) Subject: [Distutils] Multi-version import support for wheel files References: Message-ID: Nick Coghlan gmail.com> writes: > 1. An option (or options) to pip, telling it to just drop a wheel file > (or the unpacked contents of the wheel as a directory) into > site-packages instead of installing the distribution directly as the > default version. The "root_is_purelib" setting in the wheel metadata > would indicate whether the destination was purelib or platlib. A wheel > installed this way wouldn't have script wrappers generated, etc - it > would only allow the contents to be used as an importable library. ISTM you would still need path-munging for this to work, or is there some other way? Regards, Vinay Sajip From p.f.moore at gmail.com Sun Aug 25 13:02:46 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Aug 2013 12:02:46 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On 25 August 2013 10:58, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > > > 1.
An option (or options) to pip, telling it to just drop a wheel file > > (or the unpacked contents of the wheel as a directory) into > > site-packages instead of installing the distribution directly as the > > default version. The "root_is_purelib" setting in the wheel metadata > > would indicate whether the destination was purelib or platlib. A wheel > > installed this way wouldn't have script wrappers generated, etc - it > > would only allow the contents to be used as an importable library. > > ISTM you would still need path-munging for this to work, or is there some > other way? > Essentially, that is my question - and I'd like to know what Nick's proposal is here because I am not happy with the existing pkg_resources solution of using .pth files. I know there's a new feature being discussed on import-sig which may replace the need for pth files, but I don't want to see pth files being needed if that isn't available. Specific reasons I have for this: 1. The extra sys.path entries that get added (not for performance reasons, but because having a long sys.path is difficult for the interactive user to interpret when trying to see what's going on). 2. The whole "if I'm not in a site packages directory I won't work" issue 3. Questions of where things go on sys.path (if it depends on the order you "declare" things, it gets nasty when there are name clashes) The other thing I want to understand is how things would work if, for example, I wanted to use CherryPy 3 99% of the time, but occasionally needed CherryPy 2. Could I install CherryPy 3 as a normal (not multi-versioned) package, and then override it when needed? So only applications needing CherryPy 2 had to declare a version requirement? More generally, how does a project choose whether to use run-time multi-versioning, vs metadata declaring a dependency for install time? Why would I ever write code that uses a run-time dependency on CherryPy 2, rather than saying in my metadata "requires: CherryPy = 2"? 
Or conversely why would I ever declare an install-time dependency rather than declaring my requirements at runtime "because that's more flexible"? Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun Aug 25 13:48:54 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Aug 2013 21:48:54 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On 25 August 2013 21:02, Paul Moore wrote: > Essentially, that is my question - and I'd like to know what Nick's proposal > is here because I am not happy with the existing pkg_resources solution of > using .pth files. I know there's a new feature being discussed on import-sig > which may replace the need for pth files, but I don't want to see pth files > being needed if that isn't available. Specific reasons I have for this: > > 1. The extra sys.path entries that get added (not for performance reasons, > but because having a long sys.path is difficult for the interactive user to > interpret when trying to see what's going on). > 2. The whole "if I'm not in a site packages directory I won't work" issue > 3. Questions of where things go on sys.path (if it depends on the order you > "declare" things, it gets nasty when there are name clashes) I'm not proposing copying over any of the implicit .pth file magic. As far as I am aware, that's unrelated to the explicit multi-versioning feature - I believe it's only needed if you want to install something as an egg archive or subdirectory *and* have it available for import by default. The namespace package related .pth files are also unrelated. Explicit multi-versioning is different - you just install the directory, and you can't import from it by default. However, pkg_resources can be used to activate it if it is needed to satisfy a runtime requirement (or a dependency of a runtime requirement). 
If you don't import pkg_resources, none of the parallel installed versions will ever activate - you'll just get the version that is on sys.path by default. > The other thing I want to understand is how things would work if, for > example, I wanted to use CherryPy 3 99% of the time, but occasionally needed > CherryPy 2. Could I install CherryPy 3 as a normal (not multi-versioned) > package, and then override it when needed? So only applications needing > CherryPy 2 had to declare a version requirement? Yep, this is exactly how Fedora & EPEL use it. CherryPy 3 is installed as normal in site-packages (with an adjacent egg-info directory), while CherryPy 2 is installed as an egg file containing the earlier version of the module and the relevant metadata. > More generally, how does a project choose whether to use run-time > multi-versioning, vs metadata declaring a dependency for install time? Why > would I ever write code that uses a run-time dependency on CherryPy 2, > rather than saying in my metadata "requires: CherryPy = 2"? Or conversely > why would I ever declare an install-time dependency rather than declaring my > requirements at runtime "because that's more flexible"? For Beaker, we use runtime dependencies because we build system packages that run in the system Python, but need to run on RHEL 6 (CherryPy 2 as default) and also on recent versions of Fedora (CherryPy 3 as default, CherryPy 2 available for multi-version import). We also use it in our doc build scripts to handle the different versions of Sphinx (Sphinx 1.x as default in Fedora, but only available as a multi-version import on RHEL 6). The reason to declare install time dependencies as well is that pkg_resources works recursively - the dependencies of any distribution you activate will be checked to ensure you're importing a consistent set of packages into your application. If you use virtual environments to create isolated stacks for each application, these issues don't come up. 
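The selection behaviour Nick describes can be sketched against pkg_resources' Environment machinery using synthetic Distribution objects; the paths below are made up for illustration, not an actual Fedora layout:

```python
import pkg_resources

# Index two versions of the same project, mimicking the layout described
# above: CherryPy 3 as the site-packages default, CherryPy 2 pushed down
# into an unpacked egg subdirectory. Locations are illustrative only.
site = "/usr/lib/python2.7/site-packages"
env = pkg_resources.Environment(search_path=[])
env.add(pkg_resources.Distribution(
    location=site, project_name="CherryPy", version="3.2.2"))
env.add(pkg_resources.Distribution(
    location=site + "/CherryPy-2.3.0-py2.7.egg",
    project_name="CherryPy", version="2.3.0"))

# best_match() is the resolution step require() builds on: given a
# requirement, it returns the newest indexed distribution that satisfies
# it, and that distribution's location is what gets added to sys.path.
ws = pkg_resources.WorkingSet(entries=[])
dist = env.best_match(pkg_resources.Requirement.parse("CherryPy < 3"), ws)
print(dist.version, dist.location)  # the 2.3.0 egg, not the 3.2.2 default
```

This is only a sketch of the resolution step, not of activation: in real use require() would also walk the chosen distribution's own dependencies, which is the recursive consistency checking described above.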
On the other hand, if you're trying to make thousands of packages play nice in a single Python installation (as Linux distros do), then you need to deal with the "parallel installation of mutually incompatible versions" problem. The current Linux distro solution is to use pkg_resources, and having been working with it for several months now, while I still think there are some significant rough edges in the way the pkg_resources front end works, I'm happy that the *file layout* the explicit multi-versioning feature uses is a decent one that should translate to wheel files fairly easily. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun Aug 25 15:07:59 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 09:07:59 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On Aug 25, 2013, at 1:57 AM, Nick Coghlan wrote: > I'm currently working on the docs for the __main__.__requires__ > feature of pkg_resources, and have been generally poking around inside > pkg_resources before I started on that. It gave me an idea for a > question that has come up a few times: how should we support parallel > installation of multiple versions of the same distribution in the same > Python installation, *without* requiring complete isolation in the > form of virtual environments. > > The current solution (at least in Fedora), is to use the multi-version > support in pkg_resources by installing unpacked egg files. To use > CherryPy as an example, Fedora currently provides RPMs for both > CherryPy 2 and 3. > > CherryPy 3 is the default and installed directly into site-packages, > with an appropriate .egg-info directory > > CherryPy 2 is *not* the default, and is instead installed as an > unpacked "CherryPy-2.3.0-py2.7.egg" directory. 
You can force this > directory to be added to sys.path by doing the following in your > __main__ module: > > __requires__ = ["CherryPy < 3"] > import pkg_resources > > (__main__.__requires__ *has* to be set before importing pkg_resources > or the default CherryPy 3 module will be activated automatically, and > conflict with a later call to pkg_resources.requires that asks for > CherryPy 2) > > While I'm not a fan (to put it mildly) of non-trivial side effects > when importing a module, this approach to multi-version imports *does* > work well (and, as noted, I'm currently working on improving the docs > for it), and I think the approach to the filesystem layout in > particular makes sense - the alternative versions are installed to the > usual location, but pushed down a level in a subdirectory or zip > archive. > > So, it seems to me that only two new pieces are needed to gain > multi-version import support for wheel files: > > 1. An option (or options) to pip, telling it to just drop a wheel file > (or the unpacked contents of the wheel as a directory) into > site-packages instead of installing the distribution directly as the > default version. The "root_is_purelib" setting in the wheel metadata > would indicate whether the destination was purelib or platlib. A wheel > installed this way wouldn't have script wrappers generated, etc - it > would only allow the contents to be used as an importable library. > > 2. Support for the ".whl" filename format and internal layout in > pkg_resources, modelling after the existing support for the ".egg" > filename format. For wheels that include both purelib and platlib, > this would involved adding both directories to the path. > > That means there wouldn't be an major design work to be done - just > replicating what easy_install and pkg_resources already support for > the egg format using the wheel format instead. 
> > Regardless of my personal feelings about the current *front end* API > for multi-version imports (and PJE has put up with my ranting about > that with remarkably good grace!), the current egg-based back end > design looks solid to me and worth keeping rather than trying to > invent something new for the sake of being different. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig I think I am against this. Part of the beauty of Wheel is that it is simply a package format. This means it does not need to concern itself with situations that Egg had to, which bloated the spec and made it harder to implement. I feel like tacking too much onto the Wheel format is going to end us up in the exact same place we are at with Eggs today. The more use cases we force onto Wheel the more we have to consider anytime we make a change. For instance with the proposed change now we have to worry about the importability of a Wheel file if an unpacked Wheel is added directly to sys.path, which is something we currently do not need to worry about. As far as I can tell there's nothing preventing installing a Wheel *into* an .egg directory which will give you exactly the same situation as you have today without needing to do *anything* except make a tool that will install a Wheel into an .egg directory. This solves your immediate desire without hanging more things onto Wheel. Additionally I think the way it munges sys.path is completely backwards. I believe, as you know, that the current order of the sys.path is somewhat nonsensical: by default it goes ".", then standard library, then user site packages, then regular site packages.
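The default ordering described above is easy to inspect for yourself; a rough sketch (the classification heuristics here are mine, not part of pip or pkg_resources):

```python
import site
import sys
import sysconfig

# Label each sys.path entry to make the interpreter's default ordering
# visible. The string matching is a quick-look heuristic, not a precise
# classification.
stdlib = sysconfig.get_path("stdlib")
user_site = site.getusersitepackages() if hasattr(site, "getusersitepackages") else None

for entry in sys.path:
    if not entry:
        kind = "script dir / cwd"
    elif entry == user_site:
        kind = "user site-packages"
    elif "site-packages" in entry or "dist-packages" in entry:
        kind = "site-packages"
    elif entry.startswith(stdlib):
        kind = "standard library"
    else:
        kind = "other"
    print(f"{kind:18} {entry}")
```

Running this in a stock interpreter shows the standard library entries appearing ahead of the site-packages entries, which is the ordering being debated here.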
I believe that the order should be ".", user-packages, site-packages, and then standard library to provide a consistent ordering, allowing people to shadow packages using a hierarchy of specificity (the packages installed by a particular user are more specific than the packages installed globally). However, when I brought this up to you, you were insistent that the fact that user installed code could not shadow the standard library was a feature and allowing it would be a bad move. However that is exactly what this system does. The sys.path looks like (after activating the cherrypy2 egg) ["cherrypy2.egg", ".", stdlib, user-packages, site-packages]. This is wrong by both your definition and mine. So yea, given that Wheels are not an on-disk format, that you can install a Wheel into an egg if you want, and that the sys.path munging required is severely broken, I'm -1 on this. I do think we need a solution to multi-version imports but I don't think this is it. However I think it's perfectly valid to tell people that the new tooling doesn't support this yet and they should continue to use the old. distutils2's problem was they tried to solve everything at once; let's not make our problem that we rushed to find an answer for every problem and thus didn't fully flesh out all the options. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sun Aug 25 16:00:12 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Aug 2013 15:00:12 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On 25 August 2013 14:07, Donald Stufft wrote: > As far as I can tell there's nothing preventing Installing a Wheel *into* > and .egg > directory which will give you exactly the same situation as you have today > without needing to do *anything* except make a tool that will install a > Wheel > into an .egg directory. This solves your immediate desire without hanging > more things onto Wheel. > I think that relating this to wheels is a mistake. As Donald says, wheels are just a distribution format. Looking at this from a different perspective, I don't see any immediate issue with being able to install distributions outside of site-packages. That's what pip's --target option does. It seems like the multi-versioning support that is being discussed is nothing much more than a means of managing such "not on site-packages" directory in a way that allows them to be added to sys.path at runtime. On that basis, I don't have an issue with the idea. It's supported in pip (via --target) already, and the runtime aspects can be handled in pkg_resources or wherever without affecting me. If, on the other hand, there is a proposal in here to change pip, then (a) I'd like to see what the explicit proposal is before commenting, (b) it should apply equally to all means by which you can install using pip (wheel and sdist - AFAIK --target has no meaning for --develop installs, and the same should be true here), and (c) it should not need any changes to the wheel format (again, I see no indication that it does, but I'd like to see that explicitly stated). 
The key point here is that I *don't* want wheel associated with all of the extra baggage that goes along with the egg concept. Even putting wheels on sys.path gives me a nervous feeling that we're reinventing eggs. I feel strongly that wheel is inventing bdist_simple rather than egg, and should continue to do so. As regards Nick's proposal: 1. "An option (or options) to pip, telling it to just drop a wheel file (or the unpacked contents of the wheel as a directory) into site-packages instead of installing the distribution directly as the default version." Well, if it's in site-packages, it *is* importable/installed. That's how site-packages works. So I don't understand what "instead of installing" means. But if --target does what you want then go for it. If you want something different then I think it's likely a bad idea, but I'd need details of how it would work to be sure. 2. "Support for the ".whl" filename format and internal layout in pkg_resources". No, very definitely -1. The wheel format is a *distribution* format and pkg_resources is a *runtime* mechanism. Misxing the two is the key mistake (in my mind) that setuptools made, and we do not want to do so again. If you need a multiversion runtime layout for pkg_resources, then let's define it in a PEP, and write installers to put packages into that format if we need to. But let's not just reuse wheel as that format - that was not its intent. If a distribution format better than wheel comes along in the future, we should be able to replace it without needing to break everyone's runtime code because pkg_resources doesn't know wbout it. Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Sun Aug 25 16:02:40 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 10:02:40 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: <98FD7671-5A18-4B5C-9E71-1A098F3897FC@stufft.io> On Aug 25, 2013, at 10:00 AM, Paul Moore wrote: > > As regards Nick's proposal: > > 1. "An option (or options) to pip, telling it to just drop a wheel file (or the unpacked contents of the wheel as a directory) into site-packages instead of installing the distribution directly as the default version." Well, if it's in site-packages, it *is* importable/installed. That's how site-packages works. So I don't understand what "instead of installing" means. But if --target does what you want then go for it. If you want something different then I think it's likely a bad idea, but I'd need details of how it would work to be sure. As I understand it Nick means to take the .whl, unzip it into a folder named foo-whatever.whl, and put that folder into site-packages. Basically the exact same structure as happens with .egg folders except with Wheels. > > 2. "Support for the ".whl" filename format and internal layout in pkg_resources". No, very definitely -1. The wheel format is a *distribution* format and pkg_resources is a *runtime* mechanism. Misxing the two is the key mistake (in my mind) that setuptools made, and we do not want to do so again. If you need a multiversion runtime layout for pkg_resources, then let's define it in a PEP, and write installers to put packages into that format if we need to. But let's not just reuse wheel as that format - that was not its intent. If a distribution format better than wheel comes along in the future, we should be able to replace it without needing to break everyone's runtime code because pkg_resources doesn't know wbout it.
> > Paul ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sun Aug 25 18:13:39 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Aug 2013 17:13:39 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: <98FD7671-5A18-4B5C-9E71-1A098F3897FC@stufft.io> References: <98FD7671-5A18-4B5C-9E71-1A098F3897FC@stufft.io> Message-ID: On 25 August 2013 15:02, Donald Stufft wrote: > On Aug 25, 2013, at 10:00 AM, Paul Moore wrote: > > > As regards Nick's proposal: > > 1. "An option (or options) to pip, telling it to just drop a wheel file (or > the unpacked contents of the wheel as a directory) into site-packages > instead of installing the distribution directly as the default version." > Well, if it's in site-packages, it *is* importable/installed. That's how > site-packages works. So I don't understand what "instead of installing" > means. But if --target does what you want then go for it. If you want > something different then I think it's likely a bad idea, but I'd need > details of how it would work to be sure. > > > As I understand it Nick means to take the .whl unzip it into a folder > named foo-whatever.whl and put that folder into site-packages. Basically > the exact same structure as happens with .egg folders except with Wheels. > OK, I get that. But I want to avoid referring to it as "wheel format" because that's what eggs did - having 3 distinct formats, all used for subtly different things, meant that people had a confused view of "what an egg was". I'd rather not promote the same confusion for wheels. Thanks for the various comments that have been made.
I think I'm clear where I stand now - I have no objection to the idea of the proposal, but (1) I'd rather the on-disk format was clearly distinguished from wheels as it's a runtime format rather than a distribution format, (2) I don't really want to add an option to pip to install in this format, but if we do it should work for sdists as well as wheels (notwithstanding any longer-term goal to deprecate installing from sdists) and (3) I don't want to see adding wheel files to sys.path as ever becoming a mainstream way of working (that's what "pkg_resources supports the whl format" means to me - which is why I want to see the unpacked format referred to as something other than "wheel"). Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sun Aug 25 18:19:00 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sun, 25 Aug 2013 09:19:00 -0700 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Sun, Aug 25, 2013 at 12:06 AM, Nick Coghlan wrote: > anyone that is > currently doing side-by-side multi-versioning in Python is using the > pkg_resources API to do it, since that's the only option currently > available. I'm not sure it changes anything, but there are other options - ones specific to particular packages. For instance, wxPython has wx.version. And at least a few years ago, some other major packages (pyGTK?) also had roll-your-own approaches. I have no idea how active any of those are (wxPython's still works, though not on the Mac, and I don't think it sees much use). -Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From jim at zope.com Sun Aug 25 18:52:46 2013 From: jim at zope.com (Jim Fulton) Date: Sun, 25 Aug 2013 12:52:46 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: Message-ID: On Sun, Aug 25, 2013 at 1:57 AM, Nick Coghlan wrote: > I'm currently working on the docs for the __main__.__requires__ > feature of pkg_resources, and have been generally poking around inside > pkg_resources before I started on that. It gave me an idea for a > question that has come up a few times: how should we support parallel > installation of multiple versions of the same distribution in the same > Python installation, *without* requiring complete isolation in the > form of virtual environments. > > The current solution (at least in Fedora), is to use the multi-version > support in pkg_resources by installing unpacked egg files. This is also the approach used by buildout. (It's also the approach (except for the unpacked part) used in modern Java-ecosystem-based deployments, FWIW. Collect jar files, typically in a cache, and set application-specific classpaths to point to the right ones.) ... > CherryPy 2 is *not* the default, and is instead installed as an > unpacked "CherryPy-2.3.0-py2.7.egg" directory. You can force this > directory to be added to sys.path by doing the following in your > __main__ module: > > __requires__ = ["CherryPy < 3"] > import pkg_resources I'd never see this. Interesting. > While I'm not a fan (to put it mildly) of non-trivial side effects > when importing a module, Me neither. 
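The `__requires__` hook shown above is the same one easy_install bakes into its generated console scripts. A reconstruction of roughly what such a wrapper looks like (from memory of easy_install's output, so treat the exact layout as approximate; "MyApp" and "myapp" are illustrative names, not a real project):

```python
# Reconstruction of an easy_install-style console-script wrapper.
# Setting __requires__ before importing pkg_resources is what opts
# the script into pkg_resources' multi-version resolution.
WRAPPER_TEMPLATE = """\
#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: '{req}','console_scripts','{name}'
__requires__ = '{req}'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(load_entry_point('{req}', 'console_scripts', '{name}')())
"""

wrapper = WRAPPER_TEMPLATE.format(req="MyApp==1.2", name="myapp")
print(wrapper)
```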
> this approach to multi-version imports *does* > work well (and, as noted, I'm currently working on improving the docs > for it), and I think the approach to the filesystem layout in > particular makes sense - the alternative versions are installed to the > usual location, but pushed down a level in a subdirectory or zip > archive. Note that buildout takes a different approach that I think is worth keeping in mind. In buildout, of course, there's a specific "build" step that assembles an application from its parts. In the case of Python distributions, this means creating an application specific Python path as a list of installed eggs. This works well. It's explicit and pretty non-invasive. No import magic, .pth files or funny site.py files*. Buildout really wants to have self-contained distribution installations, whether they be eggs or wheels or whatever, to function properly. Jim * There was a well-intention but unfortunate deviation in later buildout 1 versions. This was, somewhat ironically in pursuit of better integration with system Python installations. -- Jim Fulton http://www.linkedin.com/in/jimfulton From jim at zope.com Sun Aug 25 18:58:43 2013 From: jim at zope.com (Jim Fulton) Date: Sun, 25 Aug 2013 12:58:43 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Sun, Aug 25, 2013 at 3:06 AM, Nick Coghlan wrote: > On 25 August 2013 16:41, Donald Stufft wrote: >> >> On Aug 25, 2013, at 1:57 AM, Nick Coghlan wrote: >> >>> [snip] >> >> I'll look at this closer, my off the cuff response isn't good but before I >> commit to a side I want to dig into how it actually works currently. > > The clumsiness of the __main__.__requires__ workaround aside, the main > advantage this offers is that it *should* result in a relatively > straightforward addition to pkg_resources to make it work with wheel > files as well as eggs. 
That's important, because anyone that is > currently doing side-by-side multi-versioning in Python is using the > pkg_resources API to do it, since that's the only option currently > available. No. It isn't. Buildout doesn't use pkg_resources to do it. (Buildout used pkg_resources at build time to manage package meta data, but I think that's orthogonal to what you're talking about.) I'd also hazard to guess that most of the folks with multi-version installs are using buildout to do it, as buildout does have a fair number of users. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From pje at telecommunity.com Sun Aug 25 21:53:27 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 25 Aug 2013 15:53:27 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Sun, Aug 25, 2013 at 12:58 PM, Jim Fulton wrote: > On Sun, Aug 25, 2013 at 3:06 AM, Nick Coghlan wrote: >> The clumsiness of the __main__.__requires__ workaround aside, the main >> advantage this offers is that it *should* result in a relatively >> straightforward addition to pkg_resources to make it work with wheel >> files as well as eggs. That's important, because anyone that is >> currently doing side-by-side multi-versioning in Python is using the >> pkg_resources API to do it, since that's the only option currently >> available. > > No. It isn't. Buildout doesn't use pkg_resources to do it. > (Buildout used pkg_resources at build time to manage package meta > data, but I think that's orthogonal to what you're talking about.) > > I'd also hazard to guess that most of the folks with multi-version > installs are using buildout to do it, as buildout does have a > fair number of users. FWIW, I would also note that if you use easy_install to install anything, you are quite possibly using multi-version installs without realizing it.
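Buildout's approach that Jim describes — computing an application-specific path at build time and writing it statically into each generated script — can be sketched like this, with two throwaway directories standing in for installed eggs (all names hypothetical):

```python
import os
import sys
import tempfile

# Two side-by-side "installed versions" of one package, each isolated
# in its own directory (hypothetical names standing in for egg dirs).
root = tempfile.mkdtemp()
for version in ("2.3.0", "3.2.2"):
    egg = os.path.join(root, "demopkg-%s-py2.7.egg" % version)
    os.makedirs(os.path.join(egg, "demopkg"))
    with open(os.path.join(egg, "demopkg", "__init__.py"), "w") as f:
        f.write("__version__ = %r\n" % version)

# A buildout-generated script pins its application's versions by
# prepending exactly the entries it needs -- no runtime resolution.
sys.path[0:0] = [os.path.join(root, "demopkg-2.3.0-py2.7.egg")]

import demopkg  # resolves against the pinned entry

print(demopkg.__version__)  # 2.3.0
```

Because the path list is frozen into the script, two scripts on the same machine can pin different versions of the same package with no import-time magic at all.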
(The __main__.__requires__ API is used in easy_install-generated script wrappers, so there isn't any way you'd know about it without paying specific attention.) I don't know how big the "buildout users w/known multi-version" vs. "easy_install users w/implicit multi-version" groups are, but I imagine the combined group has got to be pretty darn big. ;-) From p.f.moore at gmail.com Sun Aug 25 22:32:09 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 25 Aug 2013 21:32:09 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On 25 August 2013 20:53, PJ Eby wrote: > FWIW, I would also note that if you use easy_install to install > anything, you are quite possibly using multi-version installs without > realizing it. (The __main__.__requires__ API is used in > easy_install-generated script wrappers, so there isn't any way you'd > know about it without paying specific attention.) > Unless I'm missing something, I suspect that this over-counts the number of people using multi-version, in the sense that many (the majority?) of wrapper scripts using multi-version do not actually need to,because the users never install more than one version. And quite likely don't even know that they could. Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun Aug 25 22:51:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Aug 2013 06:51:09 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On 26 Aug 2013 05:53, "PJ Eby" wrote: > > On Sun, Aug 25, 2013 at 12:58 PM, Jim Fulton wrote: > > On Sun, Aug 25, 2013 at 3:06 AM, Nick Coghlan wrote: > >> The clumsiness of the __main__.__requires__ workaround aside, the main > >> advantage this offers is that it *should* result in a relatively > >> straightforward addition to pkg_resources to make it work with wheel > >> files as well as eggs. That's important, because anyone that is > >> currently doing side-by-side multi-versioning in Python is using the > >> pkg_resources API to do it, since that's the only option currently > >> available. > > > > No. It isn't. Buildout doesn't use pks_resources to do it. > > (Buildout used pkg_resources at build time to manage package meta > > data, but I think that's orthogonal to what you're talking about.) > > > > I'd also hazard to guess that most of the folks with multi-version > > installs are using buildout to do it, as buildout does have a > > fair number of users. > > FWIW, I would also note that if you use easy_install to install > anything, you are quite possibly using multi-version installs without > realizing it. (The __main__.__requires__ API is used in > easy_install-generated script wrappers, so there isn't any way you'd > know about it without paying specific attention.) > > I don't know how big the "buildout users w/known multi-version" vs. > "easy_install users w/implicit multi-version" groups are, but I > imagine the combined group has got to be pretty darn big. 
;-) I'd be willing to bet the number of Linux installs relying on multi-version imports without the end user's knowledge trumps both :) Anyway, I like Paul's suggestion of defining a specific runtime format for this, even if it's just "wheel layout plus a RECORD file". I'm currently thinking of using the ".dist" suffix, matching the existing egg vs egg-info naming convention. The likely vehicle for defining it will be the next generation installation database format. Cheers Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun Aug 25 23:00:52 2013 From: donald at stufft.io (Donald Stufft) Date: Sun, 25 Aug 2013 17:00:52 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: <68411958-FE40-4BF1-B007-1C7DC1EA08B4@stufft.io> On Aug 25, 2013, at 4:51 PM, Nick Coghlan wrote: > Anyway, I like Paul's suggestion of defining a specific runtime format for this, even if it's just "wheel layout plus a RECORD file". I'm currently thinking of using the ".dist" suffix, matching the existing egg vs egg-info naming convention. > It seems to me the easiest thing to do is just continue using eggs for this feature for now especially if the proposal is just standardizing what eggs do and doesn't offer any benefits besides standardization. That gets you all the benefits sans standardization and doesn't spend time putting a PEP through (and all the back and forth that entails) for something that already works when we can spend the time on stuff that still needs actual design work. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Aug 25 23:21:09 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Aug 2013 07:21:09 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: <68411958-FE40-4BF1-B007-1C7DC1EA08B4@stufft.io> References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <68411958-FE40-4BF1-B007-1C7DC1EA08B4@stufft.io> Message-ID: On 26 Aug 2013 07:00, "Donald Stufft" wrote: > > > On Aug 25, 2013, at 4:51 PM, Nick Coghlan wrote: > >> Anyway, I like Paul's suggestion of defining a specific runtime format for this, even if it's just "wheel layout plus a RECORD file". I'm currently thinking of using the ".dist" suffix, matching the existing egg vs egg-info naming convention. > > > It seems to me the easiest thing to do is just continue using eggs for this feature for now especially if the proposal is just standardizing what eggs do and doesn't offer any benefits besides standardization. That gets you all the benefits sans standardization and doesn't spend time putting a PEP through (and all the back and forth that entails) for something that already works when we can spend the time on stuff that still needs actual design work. Egg based multi-version installs still suffer from the problem of lacking a RECORD file so you need an external tool to manage them properly. They also aren't integrated into pip's listing and upgrading capabilities. This is another problem in the "important but not urgent" category, though. This discussion covered enough of the big issues that I'm happy I can come up with a new standard that pip and pkg_resources will be willing to support at some point in the future, but in the meantime we can continue using the egg based approach. Cheers, Nick. 
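The RECORD file Nick mentions is the PEP 376 installation record: a CSV listing every installed path with its hash and size, which is what lets a tool uninstall or audit a distribution cleanly. A small sketch of writing and reading one (paths and hash values are illustrative):

```python
import csv
import io

# A PEP 376-style RECORD: one "path,hash,size" row per installed file.
# Hashes and sizes here are illustrative; RECORD conventionally lists
# itself with those fields left empty.
rows = [
    ("demo_pkg/__init__.py", "sha256=p0s9c", "358"),
    ("demo_pkg-1.0.dist-info/METADATA", "sha256=qAz7x", "990"),
    ("demo_pkg-1.0.dist-info/RECORD", "", ""),
]

buf = io.StringIO()
csv.writer(buf, lineterminator="\n").writerows(rows)
record_text = buf.getvalue()

# An uninstaller (or security audit) just walks RECORD row by row.
installed = [row[0] for row in csv.reader(io.StringIO(record_text))]
print(installed)
```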
> > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at zope.com Sun Aug 25 23:57:12 2013 From: jim at zope.com (Jim Fulton) Date: Sun, 25 Aug 2013 17:57:12 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <68411958-FE40-4BF1-B007-1C7DC1EA08B4@stufft.io> Message-ID: On Sun, Aug 25, 2013 at 5:21 PM, Nick Coghlan wrote: > > On 26 Aug 2013 07:00, "Donald Stufft" wrote: >> >> >> On Aug 25, 2013, at 4:51 PM, Nick Coghlan wrote: >> >>> Anyway, I like Paul's suggestion of defining a specific runtime format >>> for this, even if it's just "wheel layout plus a RECORD file". I'm currently >>> thinking of using the ".dist" suffix, matching the existing egg vs egg-info >>> naming convention. >> >> >> It seems to me the easiest thing to do is just continue using eggs for >> this feature for now especially if the proposal is just standardizing what >> eggs do and doesn't offer any benefits besides standardization. That gets >> you all the benefits sans standardization and doesn't spend time putting a >> PEP through (and all the back and forth that entails) for something that >> already works when we can spend the time on stuff that still needs actual >> design work. > > Egg based multi-version installs still suffer from the problem of lacking a > RECORD file so you need an external tool to manage them properly. Well, I'd argue that eggs are effectively also records. You can find out what's installed by simply looking at the names in whatever directory you put eggs. The harder part, of course, is deciding when an egg is no longer needed. I assume the RECORD file doesn't address that either. Note that with multi-version support, uninstalling things is an optimization, not a necessity. 
The only harm a never-uninstalled egg does is take up space and maybe make tools that scan for what's installed take more time. Jim -- Jim Fulton http://www.linkedin.com/in/jimfulton From ncoghlan at gmail.com Mon Aug 26 00:00:51 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Aug 2013 08:00:51 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <68411958-FE40-4BF1-B007-1C7DC1EA08B4@stufft.io> Message-ID: On 26 Aug 2013 07:57, "Jim Fulton" wrote: > > On Sun, Aug 25, 2013 at 5:21 PM, Nick Coghlan wrote: > > > > On 26 Aug 2013 07:00, "Donald Stufft" wrote: > >> > >> > >> On Aug 25, 2013, at 4:51 PM, Nick Coghlan wrote: > >> > >>> Anyway, I like Paul's suggestion of defining a specific runtime format > >>> for this, even if it's just "wheel layout plus a RECORD file". I'm currently > >>> thinking of using the ".dist" suffix, matching the existing egg vs egg-info > >>> naming convention. > >> > >> > >> It seems to me the easiest thing to do is just continue using eggs for > >> this feature for now especially if the proposal is just standardizing what > >> eggs do and doesn't offer any benefits besides standardization. That gets > >> you all the benefits sans standardization and doesn't spend time putting a > >> PEP through (and all the back and forth that entails) for something that > >> already works when we can spend the time on stuff that still needs actual > >> design work. > > > > Egg based multi-version installs still suffer from the problem of lacking a > > RECORD file so you need an external tool to manage them properly. > > Well, I'd argue that eggs are effectively also records. You can find out what's > installed by simply looking at the names in whatever directory you put eggs. > > The harder part, of course, is deciding when an egg is no longer needed. > I assume the RECORD file doesn't address that either. 
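Jim's point that egg directory names are themselves a crude installation record can be shown with nothing more than a filename parse (the listing is illustrative):

```python
import re

# Egg directory names encode name, version and Python tag:
#   <name>-<version>-py<X.Y>.egg
EGG_NAME = re.compile(
    r"^(?P<name>.+?)-(?P<version>[^-]+)-py(?P<py>[\d.]+)\.egg$"
)

def scan_eggs(names):
    """Recover (name, version) pairs from a listing of egg dir names."""
    found = []
    for entry in names:
        m = EGG_NAME.match(entry)
        if m:
            found.append((m.group("name"), m.group("version")))
    return found

listing = ["CherryPy-2.3.0-py2.7.egg", "CherryPy-3.2.2-py2.7.egg", "README"]
print(scan_eggs(listing))  # [('CherryPy', '2.3.0'), ('CherryPy', '3.2.2')]
```

What the directory name cannot tell you — and what a RECORD file adds — is which individual files belong to the distribution and whether anything still depends on it.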
> > Note that with multi-version support, uninstalling things is an optimization, > not a necessity. The only harm a never-uninstalled egg does is take up > space and maybe make tools that scan for what's installed take more time. And make you fail security audits due to the "use" of unpatched software :) Cheers, Nick. > > Jim > > -- > Jim Fulton > http://www.linkedin.com/in/jimfulton -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Mon Aug 26 00:14:36 2013 From: pje at telecommunity.com (PJ Eby) Date: Sun, 25 Aug 2013 18:14:36 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Sun, Aug 25, 2013 at 4:32 PM, Paul Moore wrote: > On 25 August 2013 20:53, PJ Eby wrote: >> >> FWIW, I would also note that if you use easy_install to install >> anything, you are quite possibly using multi-version installs without >> realizing it. (The __main__.__requires__ API is used in >> easy_install-generated script wrappers, so there isn't any way you'd >> know about it without paying specific attention.) > > > Unless I'm missing something, I suspect that this over-counts the number of > people using multi-version, in the sense that many (the majority?) of > wrapper scripts using multi-version do not actually need to,because the > users never install more than one version. And quite likely don't even know > that they could. That's just it: if you install two programs, one of which needs CherryPy 2 and the other CherryPy 3, then with easy_install this just works, without you having any idea that you even have more than one version installed, unless you for some reason choose to look into it. Thus, you don't have to know you have multiple versions installed; it can trivially happen by way of dependencies you aren't paying attention to. 
The more things you install, the more likely it is you have two versions hanging around. (The main limiting factor on conflicts isn't a choice to install multiple versions, it's the relative dearth of pinned versions and upper limits on version numbers. If everything just specifies minimum versions, you'll end up using the latest version for everything as the default version. It's only if a package pins or limits a dependency that any conflict is possible to begin with.) From vinay_sajip at yahoo.co.uk Mon Aug 26 10:19:14 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 26 Aug 2013 09:19:14 +0100 (BST) Subject: [Distutils] Multi-version import support for wheel files Message-ID: <1377505154.55932.YahooMailNeo@web171401.mail.ir2.yahoo.com> > From: Donald Stufft > I think I am against this. >? > Part of the beauty of Wheel is that it is simply a package format. This means > it does not need to concern itself with situations that Egg had to which bloat > the spec and make it harder to implement. I feel like tacking too much onto > the Wheel format is going to end us up in the exact same place we are at with > Eggs today. The more use cases we force onto Wheel the more we have to > consider anytime we make a change. For instance with the proposed change > now we have to worry about the importability of a Wheel file if an unpacked > Wheel is added directly to sys.path which is something we currently do not need > to worry about. >? > As far as I can tell there's nothing preventing Installing a Wheel *into*? > and .egg > directory which will give you exactly the same situation as you have today > without needing to do *anything* except make a tool that will install a Wheel > into an .egg directory. This solves your immediate desire without hanging > more things onto Wheel. >? > Additionally I think the way it munges sys.path is completely backwards. I? 
> believe, > as you know, that the current order of the sys.path is somewhat nonsensical by > default it goes ".", then standard library, then user site packages,? > then regular > site packages. I believe that the order should be ".", user-packages,? > site-packages, > and then standard library to provide a consistent ordering and allowing people > to shadow packages using a hierarchy of specificity (the packages installed by > a particular are more specific than the packages installed globally). However? > when > I brought this up to you you were insistent that the fact that user installed? > code could > not shadow the standard library was a feature and allowing it would be a bad? > move. > However that is exactly what this system does. The sys.path looks like (after? > acting > the cherrypy2 egg) ["cherryp2.egg", ".", stdlib,? > user-packages, site-packages]. This > is wrong by both your definition and mine. >? > So yea given that Wheels are not an on disk format, that you can install a Wheel? > into > and egg if you want, and that the sys.path munging required is several broken? > I'm > -1 on this. I do think we need a solution to multi version imports but I? > don't think this > is it. However I think it's perfectly valid to tell people that the new? > tooling doesn't support > this yet and they should continue to use the old. distutils2's problem was? > they tried to > solve everything at once let's not make our problem that we rushed to find? > an answer > for every problem and thus didn't fully flesh out all the options. 
+1 Regards, Vinay Sajip From p.f.moore at gmail.com Mon Aug 26 11:20:05 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 26 Aug 2013 10:20:05 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On 25 August 2013 23:14, PJ Eby wrote: > > Unless I'm missing something, I suspect that this over-counts the number > of > > people using multi-version, in the sense that many (the majority?) of > > wrapper scripts using multi-version do not actually need to, because the > > users never install more than one version. And quite likely don't even > know > > that they could. > > That's just it: if you install two programs, one of which needs > CherryPy 2 and the other CherryPy 3, then with easy_install this just > works, without you having any idea that you even have more than one > version installed, unless you for some reason choose to look into it. > > Thus, you don't have to know you have multiple versions installed; it > can trivially happen by way of dependencies you aren't paying > attention to. The more things you install, the more likely it is you > have two versions hanging around. OK, I see. But I'm not sure if we're agreeing or disagreeing over the result. To me, this is a bad thing on the principle that there is a cost to multiversion support (it's not part of core Python, so you have to do *something* to make it work) and so having people inadvertently pay that cost to use a feature that they don't actually *need* is wrong. An opt-in solution is fine, as in that case the user has to choose to use multiversion, and if they don't want to they can choose an alternative approach that they accept the cost of (for example, running their one CherryPy 2-using application in a virtualenv).
One other point, just as a matter of curiosity (because it's not relevant to the current discussion): in your explanation above, there doesn't seem to be any step that says the user normally uses CherryPy 3 (so that would be the one they would get automatically at the interactive interpreter). For me, that's really the only use case I'd have for multi-versioning - 99% of the time I use a particular version of a project, but I have one particular application that can't work with the version I prefer. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Mon Aug 26 14:34:40 2013 From: dholth at gmail.com (Daniel Holth) Date: Mon, 26 Aug 2013 08:34:40 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Mon, Aug 26, 2013 at 5:20 AM, Paul Moore wrote: > On 25 August 2013 23:14, PJ Eby wrote: >> >> > Unless I'm missing something, I suspect that this over-counts the number >> > of >> > people using multi-version, in the sense that many (the majority?) of >> > wrapper scripts using multi-version do not actually need to,because the >> > users never install more than one version. And quite likely don't even >> > know >> > that they could. >> >> That's just it: if you install two programs, one of which needs >> CherryPy 2 and the other CherryPy 3, then with easy_install this just >> works, without you having any idea that you even have more than one >> version installed, unless you for some reason choose to look into it. >> >> Thus, you don't have to know you have multiple versions installed; it >> can trivially happen by way of dependencies you aren't paying >> attention to. The more things you install, the more likely it is you >> have two versions hanging around. > > > OK, I see. But I'm not sure if we're agreeing or disagreeing over the > result. 
To me, this is a bad thing on the principle that there is a cost to > multiversion support (it's not part of core Python, so you have to do > *something* to make it work) and so having people inadvertently pay that > cost to use a feature that they don't actually *need* is wrong. An opt-in > solution is fine, as in that case the user has to choose to use > multiversion, and if they don't want to they can choose an alternative > approach that they accept the cost of (for example, running their one > CherryPy 2 using application in a virualenv). > > One other point, just as a matter of curiosity (because it's not relevant to > the current discussion): in your explanation above, there doesn't seem to be > any step that says the user normally uses CherryPy 3 (so that would be the > one they would get automatically at the interactive interpreter). For me, > that's really the only use case I'd have for multi-versioning - 99% of the > time I use a particular version of a project, but I have one particular > application that can't work with the version I prefer. > > Paul It is important to at least give the "unpacked wheel file that is added to sys.path" a different name. The format is designed to give different names to different things. I would like to see some consideration given to what Ruby and npm do, which is to place what we are calling dists into a special directory that only contains dists /somedir/distname-1.0/... rather than placing them as specially named objects on the normal search path. 
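The Ruby/npm-style layout Daniel sketches — dists kept in a dedicated directory off the normal search path, activated per application — implies a small resolution step at startup. A hedged sketch of one possible scheme (the `activate` helper and the layout are hypothetical, not an implemented tool):

```python
import os
import sys
import tempfile

def activate(dist_root, name, version):
    """Put dists/<name>-<version> on sys.path (hypothetical scheme)."""
    candidate = os.path.join(dist_root, "%s-%s" % (name, version))
    if not os.path.isdir(candidate):
        raise LookupError(
            "no dist %s-%s in %s" % (name, version, dist_root))
    sys.path.insert(0, candidate)
    return candidate

# Simulated dist store: dists live off the import path until activated,
# so nothing is importable by accident.
dist_root = tempfile.mkdtemp()
for v in ("1.0", "2.0"):
    os.makedirs(os.path.join(dist_root, "demopkg-%s" % v))

path = activate(dist_root, "demopkg", "2.0")
print(os.path.basename(path))  # demopkg-2.0
```

The key difference from the egg convention is that nothing in the store is on sys.path by default; an application sees only the versions it explicitly activates.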
From pje at telecommunity.com Mon Aug 26 16:33:06 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Aug 2013 10:33:06 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: On Mon, Aug 26, 2013 at 5:20 AM, Paul Moore wrote: > On 25 August 2013 23:14, PJ Eby wrote: >> Thus, you don't have to know you have multiple versions installed; it >> can trivially happen by way of dependencies you aren't paying >> attention to. The more things you install, the more likely it is you >> have two versions hanging around. > > > OK, I see. But I'm not sure if we're agreeing or disagreeing over the > result. To me, this is a bad thing on the principle that there is a cost to > multiversion support (it's not part of core Python, so you have to do > *something* to make it work) Seriously? The basic functionality of using sys.path to have multiple versions *is* part of core Python, and has been since 1.5.2 (16 years ago), and probably longer than that. In the days before easy_install and virtualenv, if you needed different versions of things, you used "setup.py install" to different directories (assuming distutils was involved, otherwise you just copied files) and either put your scripts in the same directories, or used PYTHONPATH or explicit sys.path manipulation. That is all easy_install does: add a naming convention for the directories, and automate the sys.path manipulation. Buildout does the same thing, it just writes the sys.path manipulation into the scripts statically, instead of using pkg_resources at runtime. So the notion of "cost" doesn't make any sense. Tools like easy_install and buildout *reduce* the management cost, they don't add anything to core Python. 
(Now, if you're talking about the .pth files from easy_install, those are something that I added because people complained about having to use require(), and wanted to have a default version available in the interpreter.) > and so having people inadvertently pay that > cost to use a feature that they don't actually *need* is wrong. What cost are you talking about here? Given that most people don't even know they *have* multiple versions installed or care, how is a cost being imposed upon them? Are you talking about disk storage? > One other point, just as a matter of curiosity (because it's not relevant to > the current discussion): in your explanation above, there doesn't seem to be > any step that says the user normally uses CherryPy 3 (so that would be the > one they would get automatically at the interactive interpreter). If they easy_install that version, sure, that's what they'll get as a default version. > For me, > that's really the only use case I'd have for multi-versioning - 99% of the > time I use a particular version of a project, but I have one particular > application that can't work with the version I prefer. Yes, and that's the sort of scenario Nick was proposing pip support, that you have an explicit "install me a different version for my other app" capability -- such that that app's script wrapper adds its alternate version to sys.path ahead of the default one. So it would have been opt-in and impose the "cost" of a slightly longer sys.path and increased disk space usage only on those who ask for it. (Honestly, 90% of this entire thread has sounded like complete FUD to me, i.e. fear based on a lack of understanding that there actually isn't anything magical about multi-version support. As Jim has pointed out, buildout does multi-version support without even using pkg_resources. And before all these tools existed, people just installed things in different directories and used either adjacent scripts, PYTHONPATH, or explicit sys.path manipulation. 
There is nothing magical whatsoever about having multiple versions of a thing installed on your system; all the tools do is add naming conventions for where stuff is installed... and having such naming conventions is a *good* thing, compared to the old days.) From antoine at python.org Mon Aug 26 16:44:29 2013 From: antoine at python.org (Antoine Pitrou) Date: Mon, 26 Aug 2013 14:44:29 +0000 (UTC) Subject: [Distutils] Multi-version import support for wheel files References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: PJ Eby telecommunity.com> writes: > > That is all easy_install does: add a naming convention for the > directories, and automate the sys.path manipulation. > > Buildout does the same thing, it just writes the sys.path manipulation > into the scripts statically, instead of using pkg_resources at > runtime. > > So the notion of "cost" doesn't make any sense. Tools like > easy_install and buildout *reduce* the management cost, they don't add > anything to core Python. > > (Now, if you're talking about the .pth files from easy_install, those > are something that I added because people complained about having to > use require(), and wanted to have a default version available in the > interpreter.) Pre-3.3, there is a non-negligible runtime cost per sys.path entry, because each import tries importing multiple filenames for each sys.path entry. Post-3.3, things should be better (thanks to the directory contents cache), but there's still a computational overhead for each sys.path entry. So, yes, easy_install adding sys.path entries *by default* using .pth files is clearly not costless. pip doesn't have this problem AFAIK (it doesn't create .pth files). Regards Antoine. 
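The .pth mechanism Antoine refers to is handled by the `site` module: each non-comment line of a .pth file that names an existing directory becomes another sys.path entry at startup, which is exactly where the per-entry cost he describes comes from. A self-contained illustration using `site.addsitedir` (directory names hypothetical):

```python
import os
import site
import sys
import tempfile

# A site directory containing one .pth file; easy_install's .pth files
# work the same way, each line adding another sys.path entry.
site_dir = tempfile.mkdtemp()
extra = os.path.join(site_dir, "extra-libs")
os.mkdir(extra)
with open(os.path.join(site_dir, "demo.pth"), "w") as f:
    f.write(extra + "\n")

site.addsitedir(site_dir)  # processes demo.pth, appending "extra"

print(extra in sys.path)  # True
```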
From donald at stufft.io Mon Aug 26 17:15:07 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 26 Aug 2013 11:15:07 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> On Aug 26, 2013, at 10:33 AM, PJ Eby wrote: > On Mon, Aug 26, 2013 at 5:20 AM, Paul Moore wrote: >> On 25 August 2013 23:14, PJ Eby wrote: >>> Thus, you don't have to know you have multiple versions installed; it >>> can trivially happen by way of dependencies you aren't paying >>> attention to. The more things you install, the more likely it is you >>> have two versions hanging around. >> >> >> OK, I see. But I'm not sure if we're agreeing or disagreeing over the >> result. To me, this is a bad thing on the principle that there is a cost to >> multiversion support (it's not part of core Python, so you have to do >> *something* to make it work) > > Seriously? The basic functionality of using sys.path to have multiple > versions *is* part of core Python, and has been since 1.5.2 (16 years > ago), and probably longer than that. > > In the days before easy_install and virtualenv, if you needed > different versions of things, you used "setup.py install" to different > directories (assuming distutils was involved, otherwise you just > copied files) and either put your scripts in the same directories, or > used PYTHONPATH or explicit sys.path manipulation. > > That is all easy_install does: add a naming convention for the > directories, and automate the sys.path manipulation. > > Buildout does the same thing, it just writes the sys.path manipulation > into the scripts statically, instead of using pkg_resources at > runtime. > > So the notion of "cost" doesn't make any sense. Tools like > easy_install and buildout *reduce* the management cost, they don't add > anything to core Python. 
> > (Now, if you're talking about the .pth files from easy_install, those > are something that I added because people complained about having to > use require(), and wanted to have a default version available in the > interpreter.) > > >> and so having people inadvertently pay that >> cost to use a feature that they don't actually *need* is wrong. > > What cost are you talking about here? Given that most people don't > even know they *have* multiple versions installed or care, how is a > cost being imposed upon them? Are you talking about disk storage? > > >> One other point, just as a matter of curiosity (because it's not relevant to >> the current discussion): in your explanation above, there doesn't seem to be >> any step that says the user normally uses CherryPy 3 (so that would be the >> one they would get automatically at the interactive interpreter). > > If they easy_install that version, sure, that's what they'll get as a > default version. > > >> For me, >> that's really the only use case I'd have for multi-versioning - 99% of the >> time I use a particular version of a project, but I have one particular >> application that can't work with the version I prefer. > > Yes, and that's the sort of scenario Nick was proposing pip support, > that you have an explicit "install me a different version for my other > app" capability -- such that that app's script wrapper adds its > alternate version to sys.path ahead of the default one. So it would > have been opt-in and impose the "cost" of a slightly longer sys.path > and increased disk space usage only on those who ask for it. > > (Honestly, 90% of this entire thread has sounded like complete FUD to > me, i.e. fear based on a lack of understanding that there actually > isn't anything magical about multi-version support. As Jim has > pointed out, buildout does multi-version support without even using > pkg_resources. 
And before all these tools existed, people just > installed things in different directories and used either adjacent > scripts, PYTHONPATH, or explicit sys.path manipulation. There is > nothing magical whatsoever about having multiple versions of a thing > installed on your system; all the tools do is add naming conventions > for where stuff is installed... and having such naming conventions is > a *good* thing, compared to the old days.) There is always a cost. In this case mostly in complexity and start up time. As you mentioned originally the cost to multi version support was the need to use a require() function and when people complained about that you added the .pth files which imposed another sort of cost to people using multi versioned installs. You claim it is part of core Python but it's really not, if it was it wouldn't require importing pkg_resources or the .pth files to make it work. I find it ridiculous that you'd call this thread 90% FUD when the vast bulk of the thread has been trying to determine if there were any reasonable concerns with the approach and upon examination determined that the biggest problem with it was attaching it to Wheel and not the multi version support at all. I realize setuptools and easy_install are your baby but the persecution complex doesn't help to win people over to your side of things. In my experience setuptools has a lot of good ideas but they are wrapped in bad ideas or implementations that obscure the fact that there *are* good ideas there. I do not believe it to be unreasonable for people to want to make sure that we're standardizing around one of the *good* ideas instead of one of the bad ideas. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Mon Aug 26 19:48:57 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 26 Aug 2013 10:48:57 -0700 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: Just to add a bit more "FUD" ;-) I do a lot of packaging things up with py2app, py2exe, etc. -- I find I often want to be able to give folks "one thing" that they can install and run, and I'd rather they don't even need to know it's built with python. A while back, when I was doing this with a web app (actually a "Browser Interface, Local Server" (BILS) app -- i.e., a web server and html interface, but the browser embedded in a wx app so it acted like a desktop app to the user...) setuptools, and pkg_resources really made it hard. I don't remember the details at this point, but it was built on Pylons, which made heavy use of setuptools at the time. And easy_install pylons worked great. But bundling it up was a pain -- we had to put an enormous amount of crap in there to satisfy pkg_resources, and who knows what -- crap that was not used, and should not have been needed at run time. In fact, we needed to put in at least the skeleton of stuff that was a different version than the one used. Note also that the only reliable way to get an egg-installed package to work with py2app was to include the whole egg. My conclusion at the time was that there was a real confusion between what should happen at install time, and what should happen at run time. For instance, making sure that a dependency is there and the right version seems like an install-time problem to me -- and yet, there it was, getting checked every time you started up the app.
I'm not clear at this point how much of that was inherent to how setuptools works, and how much was how Pylons chose to use it, but I hope we'll all keep this use-case in mind at this point. i.e. if a user only needs one version of a given package installed, let's not have much overhead there to support that, and let's not require much run-time support at all. I note that I've been a user of wx.version for wxPython, and it was always easy and painless, including py2app, py2exe use. So it can be done. -Chris On Mon, Aug 26, 2013 at 8:15 AM, Donald Stufft wrote: > > On Aug 26, 2013, at 10:33 AM, PJ Eby wrote: > >> On Mon, Aug 26, 2013 at 5:20 AM, Paul Moore wrote: >>> On 25 August 2013 23:14, PJ Eby wrote: >>>> Thus, you don't have to know you have multiple versions installed; it >>>> can trivially happen by way of dependencies you aren't paying >>>> attention to. The more things you install, the more likely it is you >>>> have two versions hanging around. >>> >>> >>> OK, I see. But I'm not sure if we're agreeing or disagreeing over the >>> result. To me, this is a bad thing on the principle that there is a cost to >>> multiversion support (it's not part of core Python, so you have to do >>> *something* to make it work) >> >> Seriously? The basic functionality of using sys.path to have multiple >> versions *is* part of core Python, and has been since 1.5.2 (16 years >> ago), and probably longer than that. >> >> In the days before easy_install and virtualenv, if you needed >> different versions of things, you used "setup.py install" to different >> directories (assuming distutils was involved, otherwise you just >> copied files) and either put your scripts in the same directories, or >> used PYTHONPATH or explicit sys.path manipulation. >> >> That is all easy_install does: add a naming convention for the >> directories, and automate the sys.path manipulation.
>> >> Buildout does the same thing, it just writes the sys.path manipulation >> into the scripts statically, instead of using pkg_resources at >> runtime. >> >> So the notion of "cost" doesn't make any sense. Tools like >> easy_install and buildout *reduce* the management cost, they don't add >> anything to core Python. >> >> (Now, if you're talking about the .pth files from easy_install, those >> are something that I added because people complained about having to >> use require(), and wanted to have a default version available in the >> interpreter.) >> >> >>> and so having people inadvertently pay that >>> cost to use a feature that they don't actually *need* is wrong. >> >> What cost are you talking about here? Given that most people don't >> even know they *have* multiple versions installed or care, how is a >> cost being imposed upon them? Are you talking about disk storage? >> >> >>> One other point, just as a matter of curiosity (because it's not relevant to >>> the current discussion): in your explanation above, there doesn't seem to be >>> any step that says the user normally uses CherryPy 3 (so that would be the >>> one they would get automatically at the interactive interpreter). >> >> If they easy_install that version, sure, that's what they'll get as a >> default version. >> >> >>> For me, >>> that's really the only use case I'd have for multi-versioning - 99% of the >>> time I use a particular version of a project, but I have one particular >>> application that can't work with the version I prefer. >> >> Yes, and that's the sort of scenario Nick was proposing pip support, >> that you have an explicit "install me a different version for my other >> app" capability -- such that that app's script wrapper adds its >> alternate version to sys.path ahead of the default one. So it would >> have been opt-in and impose the "cost" of a slightly longer sys.path >> and increased disk space usage only on those who ask for it. 
>> >> (Honestly, 90% of this entire thread has sounded like complete FUD to >> me, i.e. fear based on a lack of understanding that there actually >> isn't anything magical about multi-version support. As Jim has >> pointed out, buildout does multi-version support without even using >> pkg_resources. And before all these tools existed, people just >> installed things in different directories and used either adjacent >> scripts, PYTHONPATH, or explicit sys.path manipulation. There is >> nothing magical whatsoever about having multiple versions of a thing >> installed on your system; all the tools do is add naming conventions >> for where stuff is installed... and having such naming conventions is >> a *good* thing, compared to the old days.) > > There is always a cost. In this case mostly in complexity and start up time. > > As you mentioned originally the cost to multi version support was the need > to use a require() function and when people complained about that you > added the .pth files which imposed another sort of cost to people using > multi versioned installs. > > You claim it is part of core Python but it's really not, if it was it wouldn't require > importing pkg_resources of the .pth files to make it work. > > I find it ridiculous that you'd call this thread 90% FUD when the vast bulk of the > thread has been trying to determine if there were any reasonable concerns > with the approach and upon examination determined that the biggest problem > with it was attaching it to Wheel and not the multi version support at all. I realize > setuptools and easy_install are your baby but the persecution complex doesn't > help to win people over to your side of things. > > In my experience setuptools has a lot of good ideas but they are wrapped in bad > ideas or implementations that obscure the fact that there *are* good ideas there. 
> I do not believe it to be unreasonable for people to want to make sure that we're > standardizing around one of the *good* ideas instead of one of the bad ideas. > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From h.goebel at crazy-compilers.com Mon Aug 26 12:36:49 2013 From: h.goebel at crazy-compilers.com (Hartmut Goebel) Date: Mon, 26 Aug 2013 12:36:49 +0200 Subject: [Distutils] How to detect a namespace packages? Message-ID: <521B2FC1.2020507@crazy-compilers.com> Hi, I'm one of the developers of www.pyinstaller.org, a tool for creating stand-alone executables. We need to reliably detect if a package is a namespace package (nspkg). For each namespace, we need to add an empty fake-module into our executable to keep the import mechanism working. This has to work in all versions of Python starting with 2.4. nspkgs set up via a nspkg.pth-file are detected by being in sys.modules, but imp.find_module() fails. For nspkgs using __init__.py-files (which use pkg_resources.declare_namespace() or pkgutil.extend_path()) I have no clue how to detect them. I tried to query meta-information using pkg_resources, but I did not find a solution. Any help?
-- Regards Hartmut Goebel | Hartmut Goebel | h.goebel at crazy-compilers.com | | www.crazy-compilers.com | compilers which you thought are impossible | From reachbach at outlook.com Mon Aug 26 20:25:55 2013 From: reachbach at outlook.com (bharath ravi kumar) Date: Mon, 26 Aug 2013 23:55:55 +0530 Subject: [Distutils] Distributable binary with dependencies In-Reply-To: References: , Message-ID: Carl, Eby, Thanks for taking time to suggest various alternatives. Considering that the deployment hosts are identical in every aspect, the approach of moving virtualenvs with packages pip-installed at build time appears the simplest, low-overhead approach that can be implemented without hacking the environment or resorting to custom scripts. I'll go ahead with that option. Thanks, Bharath > From: pje at telecommunity.com > Date: Fri, 23 Aug 2013 15:38:46 -0400 > Subject: Re: [Distutils] Distributable binary with dependencies > To: reachbach at outlook.com > CC: distutils-sig at python.org > > On Fri, Aug 23, 2013 at 6:45 AM, bharath ravi kumar > wrote: > > I'm looking to package an application with all its dependencies for > > deployment on multiple hosts. I'd like to ensure that there is no > > compilation or setup step before starting the application in production. A > > nice-to-have ability would be to isolate base library dependencies per > > application (like virtualenv does). Ideally, the development -> deployment > > lifecycle would involve: (a) Build an application archive with all its > > dependencies baked in (b) Copy archive to a host in production. (c) Unwrap > > archive (d) Start services. (Note that the build host & production hosts are > > identical in architecture, OS patch level and python version). > > You can use "easy_install -Zmad deployment_dir application", then > archive deployment_dir and extract it on the target machines.
(Note: > "application" must be a setuptools-packaged project with its > dependencies declared, for easy_install to know what to build and > deploy.) > > The "Z" option means "unzip eggs", "m" means "don't worry about the > target being on sys.path; we're not trying to install a default > version", "a" means "copy all dependencies, even if locally installed > already", and "d" means "install libraries and scripts to the > following directory". > > So, the scripts will be put inside deployment_dir with a bunch of > adjacent subdirectories containing all the compiled and ready-to-use > libraries. The resulting directory is a portable installation of > "application": as long as the entire subdirectory is copied to the > target machines, everything should work just fine. None of the > dependencies or the app itself will interfere with other Python code > installed on the target system; it is in a sense a minimal virtualenv > which will run whatever scripts that easy_install puts in that > directory. > > One note: the target machines *will* need pkg_resources installed, and > it will not be included in the directory by default. If they don't > have local copies installed (due to e.g. setuptools, distribute, etc. > being installed), you can manually copy a pkg_resources.py to the > deployment directory, and it will be used by whatever scripts are in > that directory. > > While there may be other tools available that support this kind of > thing, I don't think any of them can do it quite this simply. This > deployment scenario was actually a key use case for the original > design of easy_install and eggs, so it actually works pretty decently > for this. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pje at telecommunity.com Mon Aug 26 22:40:26 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Aug 2013 16:40:26 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Mon, Aug 26, 2013 at 11:15 AM, Donald Stufft wrote: > There is always a cost. In this case mostly in complexity and start up time. > > As you mentioned originally the cost to multi version support was the need > to use a require() function and when people complained about that you > added the .pth files which imposed another sort of cost to people using > multi versioned installs. See, this is exactly what I'm talking about: you've got this 100% backwards: .pth files are for people who *aren't* using multi-version imports. They're for *default* versions, not alternate versions! And they're utterly unnecessary for Nick's proposal. > You claim it is part of core Python but it's really not, if it was it wouldn't require > importing pkg_resources of the .pth files to make it work. As I pointed out in the email you apparently didn't read, along with multiple emails from Jim: pkg_resources isn't necessary for alternate-version support. All that's required for alternate versions is to add them to sys.path, which buildout does just fine *without pkg_resources*. > I find it ridiculous that you'd call this thread 90% FUD when the vast bulk of the > thread has been trying to determine if there were any reasonable concerns > with the approach and upon examination determined that the biggest problem > with it was attaching it to Wheel and not the multi version support at all What I'm referring to as the FUD is that people have been confusing what Nick proposed with what setuptools does, and getting *both* of them wrong in the details. 
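[Editorial aside: a minimal, self-contained sketch of the "just add them to sys.path" mechanics described above. The module name "demo_pkg" and its two install directories are invented for illustration; this is the same one-line manipulation buildout writes into its generated scripts.]

```python
import os
import sys
import tempfile

# Two copies of a hypothetical module, installed side by side. A wrapper
# script selects the alternate copy purely by prepending its directory to
# sys.path -- no pkg_resources involved.
root = tempfile.mkdtemp()
for version in ("1.0", "2.0"):
    d = os.path.join(root, "demo_pkg-" + version)
    os.mkdir(d)
    with open(os.path.join(d, "demo_pkg.py"), "w") as f:
        f.write("__version__ = %r\n" % version)

sys.path.append(os.path.join(root, "demo_pkg-2.0"))   # the "default" install
sys.path[0:0] = [os.path.join(root, "demo_pkg-1.0")]  # the wrapper's one line

import demo_pkg
print(demo_pkg.__version__)  # the prepended alternate wins: prints 1.0
```

A script that does not prepend anything simply gets the default copy, which is why nothing here costs anything for code that doesn't opt in.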
Nick's proposal was not to mimic setuptools' multi-version support, but rather to provide something else: let's call it "alternate version support", to separate it from what setuptools does. In Nick's AVS proposal, there is *no* overhead for anything that doesn't need a non-default version, and it's 100% opt-in, used only for things that need *non-default* versions. Note, by the way, that since these *non-default* packages aren't on sys.path by default, *there is no overhead and no .pth files are involved*. They are effectively invisible and irrelevant for anything that doesn't use them. The only place where there's overhead is in the script that needs the alternative version(s), and its sys.path is lengthened only by those items that it can't obtain from the default sys.path. And if you use buildout's approach of simply adding: sys.path[0:0] = [path1,...] to the head of a script, then *pkg_resources isn't involved either*. This is bog-standard stock Python. So the FUD part I was referring to is all the "oh no, setuptools is complicated" in response to Nick's perfectly reasonable idea *which doesn't involve any of setuptools' complexity*, because it's doing something completely different. > I realize > setuptools and easy_install are your baby but the persecution complex doesn't > help to win people over to your side of things. I think you're confused here. I don't think setuptools is being persecuted, I think *Nick's idea* is being misunderstood, and being construed as almost the exact *opposite* of what it is. All the stuff people bitch about that relates to multi-versions in setuptools are actually issues with setuptools' implementation of *default* versions, not *alternative* versions. 
So to look at Nick's proposal and think it's going to have the same problems is completely ludicrous - it's 180 degrees opposite of what setuptools does, because for setuptools, *default versions* are the special case -- they're what cause 90% of the complexity in pkg_resources' manipulation of sys.path, and they're the main reason .pth files are ever used. So it's crazy-making to see people thinking Nick's proposal is going to bring all that crap along, when that's the exact *opposite* of the situation. > In my experience setuptools has a lot of good ideas but they are wrapped in bad > ideas or implementations that obscure the fact that there *are* good ideas there. > I do not believe it to be unreasonable for people to want to make sure that we're > standardizing around one of the *good* ideas instead of one of the bad ideas. It would help if people understood the actual facts, then. AFAICT, Nick's proposal doesn't do any of the things that people are worried about, or at the very least does not *require* them. As Jim and I have pointed out more than once, pkg_resources is not a runtime requirement to allow alternative versions to be importable by code that wants them. It would really be a shame to shoot down Nick's idea based on a vague misunderstanding of it. It's a good proposal, and has far less to do with setuptools than most people in the thread seem to think. From pje at telecommunity.com Mon Aug 26 22:45:01 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Aug 2013 16:45:01 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Mon, Aug 26, 2013 at 1:48 PM, Chris Barker - NOAA Federal wrote: > Just to add a bit more "FUD" ;-) > .... > i.e. 
if a user has only needs one version of a given package > installed, lets not have much overhead there to support that, and > let's not require much run-time support at all. Nick's proposal does that. What I mean by FUD in this context is that a lot of the thread is discussing fears that *are* relevant to setuptools, but *not* to Nick's proposal. It doesn't mean I think anyone's use cases or needs are irrelevant or dumb; I'm just saying that a lack of understanding of Nick's proposal is causing people to equate it with problems in setuptools that relate to *default* versions, not to making alternate versions available. Nick's proposal doesn't involve any weirdness for packages that aren't *already* using pkg_resources or which require the use of a non-default version. Under his proposal, default versions don't behave any differently than stock Python and pip, and nobody "pays" any cost for something they're not actually using. If you never need a non-default version, his proposal affects nothing on that system. From chris.barker at noaa.gov Mon Aug 26 22:45:45 2013 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 26 Aug 2013 13:45:45 -0700 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: <4833001943696553456@unknownmsgid> PJE: Thanks for the clarification: based on that: +1 on Nick's proposal. Chris On Aug 26, 2013, at 1:41 PM, PJ Eby wrote: > On Mon, Aug 26, 2013 at 11:15 AM, Donald Stufft wrote: >> There is always a cost. In this case mostly in complexity and start up time. >> >> As you mentioned originally the cost to multi version support was the need >> to use a require() function and when people complained about that you >> added the .pth files which imposed another sort of cost to people using >> multi versioned installs. 
> > See, this is exactly what I'm talking about: you've got this 100% backwards: > > .pth files are for people who *aren't* using multi-version imports. > They're for *default* versions, not alternate versions! > > And they're utterly unnecessary for Nick's proposal. > > >> You claim it is part of core Python but it's really not, if it was it wouldn't require >> importing pkg_resources of the .pth files to make it work. > > As I pointed out in the email you apparently didn't read, along with > multiple emails from Jim: pkg_resources isn't necessary for > alternate-version support. All that's required for alternate versions > is to add them to sys.path, which buildout does just fine *without > pkg_resources*. > > >> I find it ridiculous that you'd call this thread 90% FUD when the vast bulk of the >> thread has been trying to determine if there were any reasonable concerns >> with the approach and upon examination determined that the biggest problem >> with it was attaching it to Wheel and not the multi version support at all > > What I'm referring to as the FUD is that people have been confusing > what Nick proposed with what setuptools does, and getting *both* of > them wrong in the details. > > Nick's proposal was not to mimic setuptools' multi-version support, > but rather to provide something else: let's call it "alternate version > support", to separate it from what setuptools does. > > In Nick's AVS proposal, there is *no* overhead for anything that > doesn't need a non-default version, and it's 100% opt-in, used only > for things that need *non-default* versions. > > Note, by the way, that since these *non-default* packages aren't on > sys.path by default, *there is no overhead and no .pth files are > involved*. They are effectively invisible and irrelevant for anything > that doesn't use them. 
> > The only place where there's overhead is in the script that needs the > alternative version(s), and its sys.path is lengthened only by those > items that it can't obtain from the default sys.path. And if you use > buildout's approach of simply adding: > > sys.path[0:0] = [path1,...] > > to the head of a script, then *pkg_resources isn't involved either*. > > This is bog-standard stock Python. > > So the FUD part I was referring to is all the "oh no, setuptools is > complicated" in response to Nick's perfectly reasonable idea *which > doesn't involve any of setuptools' complexity*, because it's doing > something completely different. > > >> I realize >> setuptools and easy_install are your baby but the persecution complex doesn't >> help to win people over to your side of things. > > I think you're confused here. I don't think setuptools is being > persecuted, I think *Nick's idea* is being misunderstood, and being > construed as almost the exact *opposite* of what it is. > > All the stuff people bitch about that relates to multi-versions in > setuptools are actually issues with setuptools' implementation of > *default* versions, not *alternative* versions. So to look at Nick's > proposal and think it's going to have the same problems is completely > ludicrous - it's 180 degrees opposite of what setuptools does, because > for setuptools, *default versions* are the special case -- they're > what cause 90% of the complexity in pkg_resources' manipulation of > sys.path, and they're the main reason .pth files are ever used. > > So it's crazy-making to see people thinking Nick's proposal is going > to bring all that crap along, when that's the exact *opposite* of the > situation. > > >> In my experience setuptools has a lot of good ideas but they are wrapped in bad >> ideas or implementations that obscure the fact that there *are* good ideas there. 
>> I do not believe it to be unreasonable for people to want to make sure that we're >> standardizing around one of the *good* ideas instead of one of the bad ideas. > > It would help if people understood the actual facts, then. AFAICT, > Nick's proposal doesn't do any of the things that people are worried > about, or at the very least does not *require* them. As Jim and I > have pointed out more than once, pkg_resources is not a runtime > requirement to allow alternative versions to be importable by code > that wants them. > > It would really be a shame to shoot down Nick's idea based on a vague > misunderstanding of it. It's a good proposal, and has far less to do > with setuptools than most people in the thread seem to think. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From pje at telecommunity.com Mon Aug 26 22:48:30 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Aug 2013 16:48:30 -0400 Subject: [Distutils] How to detect a namespace packages? In-Reply-To: <521B2FC1.2020507@crazy-compilers.com> References: <521B2FC1.2020507@crazy-compilers.com> Message-ID: On Mon, Aug 26, 2013 at 6:36 AM, Hartmut Goebel wrote: > Hi, > > I'm one of the developers of www.pyinstaller.org, a tool for creating > stand-alone executables. > > We need to reliable detect if a package is a namespace package (nspkg). > For each namespace, we need to add an empty fake-module into our > executable to keep the import mechanism working. This has to work in all > versions of Python starting with 2.4. > > nspkgs set up via a nspkg.pth-file are detected by being in sys.modules, > but imp.find_module() files. > > For nspkgs using __init__.py-files (which use > pkg_resources.declare_namespace() or pkgutil.extend_path()) I have no > clue how to detect them. > > I tried to query meta-information using pkgresources, but I did not find > a solution. > > Any help? 
Setuptools package metadata includes a namespace_packages.txt file with this information: http://peak.telecommunity.com/DevCenter/EggFormats#namespace-packages-txt-namespace-package-metadata This won't help you with PEP 420 namespace packages (3.3+), unless someone declares them, and likewise it won't help if somebody uses the dynamic APIs without any declaration. But at least it'll give you the declared ones. From donald at stufft.io Mon Aug 26 23:59:48 2013 From: donald at stufft.io (Donald Stufft) Date: Mon, 26 Aug 2013 17:59:48 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Aug 26, 2013, at 4:40 PM, PJ Eby wrote: > On Mon, Aug 26, 2013 at 11:15 AM, Donald Stufft wrote: >> There is always a cost. In this case mostly in complexity and start up time. >> >> As you mentioned originally the cost to multi version support was the need >> to use a require() function and when people complained about that you >> added the .pth files which imposed another sort of cost to people using >> multi versioned installs. > > See, this is exactly what I'm talking about: you've got this 100% backwards: > > .pth files are for people who *aren't* using multi-version imports. > They're for *default* versions, not alternate versions! > > And they're utterly unnecessary for Nick's proposal. > > >> You claim it is part of core Python but it's really not, if it was it wouldn't require >> importing pkg_resources of the .pth files to make it work. > > As I pointed out in the email you apparently didn't read, along with > multiple emails from Jim: pkg_resources isn't necessary for > alternate-version support. All that's required for alternate versions > is to add them to sys.path, which buildout does just fine *without > pkg_resources*. 
> > >> I find it ridiculous that you'd call this thread 90% FUD when the vast bulk of the >> thread has been trying to determine if there were any reasonable concerns >> with the approach and upon examination determined that the biggest problem >> with it was attaching it to Wheel and not the multi version support at all > > What I'm referring to as the FUD is that people have been confusing > what Nick proposed with what setuptools does, and getting *both* of > them wrong in the details. > > Nick's proposal was not to mimic setuptools' multi-version support, > but rather to provide something else: let's call it "alternate version > support", to separate it from what setuptools does. > > In Nick's AVS proposal, there is *no* overhead for anything that > doesn't need a non-default version, and it's 100% opt-in, used only > for things that need *non-default* versions. > > Note, by the way, that since these *non-default* packages aren't on > sys.path by default, *there is no overhead and no .pth files are > involved*. They are effectively invisible and irrelevant for anything > that doesn't use them. > > The only place where there's overhead is in the script that needs the > alternative version(s), and its sys.path is lengthened only by those > items that it can't obtain from the default sys.path. And if you use > buildout's approach of simply adding: > > sys.path[0:0] = [path1,...] > > to the head of a script, then *pkg_resources isn't involved either*. > > This is bog-standard stock Python. > > So the FUD part I was referring to is all the "oh no, setuptools is > complicated" in response to Nick's perfectly reasonable idea *which > doesn't involve any of setuptools' complexity*, because it's doing > something completely different. > > >> I realize >> setuptools and easy_install are your baby but the persecution complex doesn't >> help to win people over to your side of things. > > I think you're confused here. 
I don't think setuptools is being > persecuted, I think *Nick's idea* is being misunderstood, and being > construed as almost the exact *opposite* of what it is. > > All the stuff people bitch about that relates to multi-versions in > setuptools are actually issues with setuptools' implementation of > *default* versions, not *alternative* versions. So to look at Nick's > proposal and think it's going to have the same problems is completely > ludicrous - it's 180 degrees opposite of what setuptools does, because > for setuptools, *default versions* are the special case -- they're > what cause 90% of the complexity in pkg_resources' manipulation of > sys.path, and they're the main reason .pth files are ever used. > > So it's crazy-making to see people thinking Nick's proposal is going > to bring all that crap along, when that's the exact *opposite* of the > situation. > > >> In my experience setuptools has a lot of good ideas but they are wrapped in bad >> ideas or implementations that obscure the fact that there *are* good ideas there. >> I do not believe it to be unreasonable for people to want to make sure that we're >> standardizing around one of the *good* ideas instead of one of the bad ideas. > > It would help if people understood the actual facts, then. AFAICT, > Nick's proposal doesn't do any of the things that people are worried > about, or at the very least does not *require* them. As Jim and I > have pointed out more than once, pkg_resources is not a runtime > requirement to allow alternative versions to be importable by code > that wants them. > > It would really be a shame to shoot down Nick's idea based on a vague > misunderstanding of it. It's a good proposal, and has far less to do > with setuptools than most people in the thread seem to think. I think you're confused. 
The only comments I see in this thread are people doing due diligence to ensure that Nick's proposal *didn't* include the parts of setuptools that we felt were incurring a cost against people not using the feature and expressing a desire *not* to attach it to the Wheel format and instead attach it to another format on its own. I mean the person you originally quoted even explicitly said "I have no objection to this proposal" sans some stuff about not wanting it to be attached to Wheels. So I'm not sure how you can take someone saying they have no objection to the proposal and translate it into people shooting down Nick's proposal. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Tue Aug 27 00:45:17 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 27 Aug 2013 08:45:17 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: I think we're actually all in violent agreement at this point. I mainly wanted to ensure we had a sensible path forward for this capability that could be made backwards compatible with pkg_resources without too much difficulty :) To summarise: * default versions will continue to work as they do now * the next generation installation database spec will define a wheel-inspired layout for alternative versions * by analogy with egg/egg-info the likely extension for this format is "dist" * for easy discoverability on existing versions of Python, these will be allowed anywhere on sys.path. A future version of Python may define a dedicated "alt-packages" directory, but that won't happen for Python 3.4.
* projects that need the alternative versions will need to do *some* kind of sys.path manipulation to make them available for import. I plan to ensure that the new layout "just works" for at least pkg_resources users (once they update to a sufficiently recent version) * this isn't a pip 1.5 timeline idea, but I'll aim to have the installation database spec updated and a coherent proposal together for pip 1.6 to add a "pip altinstall" command that uses the standard layout and naming scheme. Setting the target directory explicitly is sufficient for experimentation with pip, so a command to make it easy isn't urgent. * until this is ready, users that need the capability can continue to use the egg layout. Note that this proposal *does not* attempt to solve the "share distributions between virtual environments" problem. That's far more complex, as it requires a cross-platform solution for both namespace package compatible indirect imports (which Eric Snow and I hope to have ready for 3.4), a "dist-link" equivalent to "egg-link" files (mostly a rebranding of an existing tool, but with added support for partially qualified versions) *and* an unversioned way to advertise the fact that the default metadata lives somewhere else (dist-info-link, perhaps?). It is likely that only the "dist-link" part would work completely on earlier Python versions (although it's possible a metapath hook could make the necessary indirect import features available). Note that symlink farms don't really solve the problem, since you can't readily upgrade the symlinked version due to the version being embedded in the dist-info directory name. It's going to take a while to get all the pieces in place, but I'm hopeful we can get this sorted in the pip 1.6 or 1.7 time frame. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed...
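The "some kind of sys.path manipulation" over such a layout could look like the sketch below. The `Name-Version.dist` naming and the demo directories are assumptions extrapolated from Nick's summary (the spec did not exist yet), and the prefix match on the version is deliberately naive:

```python
import sys
import tempfile
from pathlib import Path

def find_alt_version(base, name, version_prefix):
    """Pick the newest Name-Version.dist directory whose version starts
    with the requested (possibly partial) version. Naive string-prefix
    matching, for illustration only."""
    candidates = []
    for entry in Path(base).glob(f"{name}-*.dist"):
        version = entry.name[len(name) + 1:-len(".dist")]
        if version.startswith(version_prefix):
            candidates.append((tuple(int(p) for p in version.split(".")), entry))
    if not candidates:
        raise LookupError(f"no {name} {version_prefix}* found under {base}")
    return str(max(candidates)[1])

# Demo with an invented layout of alternative versions:
base = Path(tempfile.mkdtemp())
for v in ("2.3.0", "2.3.5", "3.2.4"):
    (base / f"CherryPy-{v}.dist").mkdir()

# A script wanting "latest CherryPy 2" opts in by prepending just that one path:
chosen = find_alt_version(base, "CherryPy", "2")
sys.path[0:0] = [chosen]
```

As in Nick's summary, nothing happens for scripts that don't opt in: the alternative versions sit outside the default sys.path entirely.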
URL: From pje at telecommunity.com Tue Aug 27 01:15:59 2013 From: pje at telecommunity.com (PJ Eby) Date: Mon, 26 Aug 2013 19:15:59 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Mon, Aug 26, 2013 at 5:59 PM, Donald Stufft wrote: > I think you're confused. The only comments I see in this thread are people doing > due diligence to ensure that Nick's proposal *didn't* include the parts of setuptools > that we felt were incurring a cost against people not using the feature and expressing > a desire *not* to attach it to the Wheel format and instead attach it to another format > on it's own. I mean the person you originally quoted even explicitly said "I have no > objection to this proposal" sans some stuff about not wanting it to be attached to > Wheels. So I'm not sure how you can take someone saying they have no objection > to the proposal and translate it to people are shooting down Nick's proposal. FUD stands for fear, uncertainty, doubt. My comment was that a lot of the original objections to Nick's proposal seemed fearful, uncertain, and doubting, specifically because they were thinking the proposal was proposing things it wasn't. It was you who brought up the idea of persecution; my response was that I don't think anybody's persecuting setuptools, only giving unnecessary levels of doubt to Nick's proposal due to confusion about how it relates (i.e. mostly doesn't) to setuptools. You pounced on a tiny piece of my email to Paul, in which I mainly expressed confusion about his statements about "cost". I was having trouble understanding what sort of "costs" he meant, and in subsequent discussion realized that it's because he and others appeared to have conflated setuptools' default-version issues, with Nick's proposal for handling non-default versions. 
My comment was that 90% of the thread appeared to stem from this fear, uncertainty, and doubt, based on this misunderstanding, although more precisely worded, what I actually meant was that 90% of the *objections* raised to Nick's proposal were based on the aforementioned fear, uncertainty, and doubt -- i.e., that the objections had nothing to do with that which was being proposed. At one point this weekend, I had intended to write a detailed rebuttal to all of the points that had been raised, but by the time I had time to do so, the discussion was mostly settled and the issue mostly moot... but the impression that 90% of the original objections were misunderstanding-based remained, which led to my (perhaps poorly-phrased) 90% remark. All that being said, I'm not sure why you pounced on that side-comment in the first place; did you think I was personally insulting you or accusing you of something? ISTM that you are making an awfully big deal out of an incidental remark that had very little to do with the main point of the email, and framing it as though I am the one who is making a big deal of something. If you hadn't intervened, I don't see any reason why the conversation wouldn't have reached a peaceable conclusion, and am still puzzled as to why you felt the need to intervene. Your initial email began by disputing facts that you now appear to accept, in that you did not reply to any of my rebuttals to your assertions. But instead of admitting your assertions were in error, you're asserting that I'm the one who's confused. Well, I wasn't before, but I sure am *now*. 
;-) From greg.ewing at canterbury.ac.nz Tue Aug 27 01:21:54 2013 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 27 Aug 2013 11:21:54 +1200 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> Message-ID: <521BE312.7060506@canterbury.ac.nz> Daniel Holth wrote: > I would like to see some consideration given to what Ruby and npm do, > which is to place what we are calling dists into a special directory > that only contains dists /somedir/distname-1.0/... rather than placing > them as specially named objects on the normal search path. What advantage would there be in doing that? I can think of one disadvantage -- we would then need new rules to determine their priority in the search order. -- Greg From p.f.moore at gmail.com Tue Aug 27 09:01:10 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 27 Aug 2013 08:01:10 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On 27 August 2013 00:15, PJ Eby wrote: > You pounced on a tiny piece of my email to Paul, in which I mainly > expressed confusion about his statements about "cost". I was having > trouble understanding what sort of "costs" he meant, and in subsequent > discussion realized that it's because he and others appeared to have > conflated setuptools' default-version issues, with Nick's proposal for > handling non-default versions. > Note that I freely admit to *having* fear, uncertainty and doubt: I feared that Nick's proposal would impact users who just wanted to use default versions. I was wrong, no issue, but I was concerned. I was uncertain as to what Nick meant by "pkg_resources compatible". This has now been explained, thanks, but I wasn't sure. I doubted that I had the full picture and I was going to investigate. 
Others provided extra information so I didn't need to do so myself, but I had questions that needed to be answered initially. None of these things is wrong. It is *spreading* FUD (and in particular, doing so cynically to undermine a proposal) that is wrong, and I hope I didn't do that - I certainly did not intend to and I'm a bit unhappy about the implication that I might have. (Not enough to make an issue of it, this is distutils-sig after all and you need a thick skin to hang out here :-)) Just as a side-note, I'm impressed by how careful everyone is being to keep discussions on distutils-sig friendly and constructive these days. My thanks to everyone for that. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Aug 27 11:00:03 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 27 Aug 2013 19:00:03 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On 27 August 2013 17:01, Paul Moore wrote: > On 27 August 2013 00:15, PJ Eby wrote: >> >> You pounced on a tiny piece of my email to Paul, in which I mainly >> expressed confusion about his statements about "cost". I was having >> trouble understanding what sort of "costs" he meant, and in subsequent >> discussion realized that it's because he and others appeared to have >> conflated setuptools' default-version issues, with Nick's proposal for >> handling non-default versions. > > > Note that I freely admit to *having* fear, uncertainty and doubt: > > I feared that Nick's proposal would impact users who just wanted to use > default versions. I was wrong, no issue, but I was concerned. > I was uncertain as to what Nick meant by "pkg_resources compatible". This > has now been explained, thanks, but I wasn't sure. > I doubted that I had the full picture and I was going to investigate. 
Others > provided extra information so I didn't need to do so myself, but I had > questions that needed to be answered initially. I think it was partly my fault, too. While I tried to emphasise that I was only interested in copying the pkg_resources back end layout for alternative versions in the initial post, the replies made me realise that (prior to this thread) PJE and Jason were probably the only other current distutils-sig participants familiar enough with setuptools and pkg_resources to understand the distinction between that aspect, the default version handling and the activation API (and my familiarity is a recent thing - I only really started understanding pkg_resources properly in the last couple of days while trying to fix a bug I reported a while back). While I haven't figured out how to fix the bug yet, I learned enough to figure out how to design a next generation alternative version mechanism that pkg_resources should be able to support, so I'm still calling that a win :) Just to scare people though... I did come up with a potentially decent use case for .pth files: they're actually a reasonable solution for sharing distributions between virtual environments in a way that works cross platform and on all currently used Python versions. Say you want to let virtual environments choose between "latest CherryPy 2" and "latest Cherry Py 3". Install CherryPy2 into a directory called "/full/path/to/some-alt-versions-directory/CherryPy2" and 3 into "/full/path/to/some-alt-versions-directory/CherryPy3". 
Now you can say "use latest available CherryPy2" in your virtual environment by adding a CherryPy2.pth file with a single line containing: /full/path/to/some-alt-versions-directory/CherryPy2 And similarly for CherryPy3.pth (but not in the same virtual environment as CherryPy2.pth!): /full/path/to/some-alt-versions-directory/CherryPy3 Because this actually modifies sys.path inside the environment, it works for both imports *and* for finding distribution metadata. If you upgrade the version of CherryPy2 or CherryPy3, all virtual environments referencing those directories will see the upgraded version. Anything using a version of CherryPy installed directly into the environment will ignore it. For those playing along at home... this is similar to how the default version support in setuptools works. The difference is that using .pth files to implicitly append to sys.path in a single-application virtual environment is significantly less surprising than doing so in a shared Python installation :) Cheers, Nick. P.S. If anyone missed me mentioning why I keep picking on the CherryPy 2 vs 3 migration, it's the actual parallel installation case that we have to deal with for beaker-project.org. Debugging some issues with that is what forced me to start learning how the multi-version support in pkg_resources actually works :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Aug 27 11:15:29 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 27 Aug 2013 10:15:29 +0100 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On 27 August 2013 10:00, Nick Coghlan wrote: > Just to scare people though... 
I did come up with a potentially decent > use case for .pth files: they're actually a reasonable solution for > sharing distributions between virtual environments in a way that works > cross platform and on all currently used Python versions. Say you want > to let virtual environments choose between "latest CherryPy 2" and > "latest Cherry Py 3". Install CherryPy2 into a directory called > "/full/path/to/some-alt-versions-directory/CherryPy2" and 3 into > "/full/path/to/some-alt-versions-directory/CherryPy3". > > Now you can say "use latest available CherryPy2" in your virtual > environment by adding a CherryPy2.pth file with a single line > containing: > > /full/path/to/some-alt-versions-directory/CherryPy2 > Personally, I have no particular objection to .pth files. What I dislike is: 1. A proliferation (well, two of them if you mean setuptools :-)) of general pth files containing multiple entries - I'd rather see the name of the pth file match the project it refers to, as you've shown here. 2. The hacks in setuptools' pth files to put things near the *start* of sys.path. I know of no reason why this should be necessary. 3. Over-use of pth files resulting in an excessively long sys.path (less of a problem in 3.3 where scanning sys.path is a lot faster). The way pth files need to be on a "site" directory also causes some obscure and annoying failure modes in setuptools-based installs at times (nothing drastic, usually just a case of "you forgot to use --single-version-externally-managed again", so the pth issue is a symptom not a cause). So it's mostly that pth files have a bad rep because of the way they have been used in the past, rather than that they are a bad idea per se. I actually quite like this approach - it's simple and uses Python features that have been round for ages. Paul -------------- next part -------------- An HTML attachment was scrubbed...
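The .pth scheme Nick describes can be exercised directly with `site.addsitedir()`, which applies the same .pth processing that site-packages directories get at interpreter startup. A runnable sketch, with throwaway temp directories standing in for the virtualenv's site-packages and the shared install location from his example:

```python
import site
import sys
import tempfile
from pathlib import Path

# Stand-ins for a virtualenv's site-packages and the shared install
# directory (real deployments would use fixed, well-known paths).
site_packages = Path(tempfile.mkdtemp())
shared = Path(tempfile.mkdtemp()) / "CherryPy2"
shared.mkdir()

# The one-line .pth file: each line naming an existing directory is
# appended to sys.path when the containing directory is processed.
(site_packages / "CherryPy2.pth").write_text(f"{shared}\n")

# At startup this happens automatically for site-packages directories;
# addsitedir() triggers the same processing explicitly.
site.addsitedir(str(site_packages))
```

After this, anything installed under the shared directory is importable in the environment, and upgrading the shared install is immediately visible to every environment whose .pth file points at it.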
URL: From holger at merlinux.eu Tue Aug 27 13:10:47 2013 From: holger at merlinux.eu (holger krekel) Date: Tue, 27 Aug 2013 11:10:47 +0000 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <57BEEEE2-27F8-4F14-A01A-929333EF176B@stufft.io> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <57BEEEE2-27F8-4F14-A01A-929333EF176B@stufft.io> Message-ID: <20130827111047.GB20106@merlinux.eu> On Sat, Aug 24, 2013 at 16:47 -0400, Donald Stufft wrote: > On Aug 10, 2013, at 9:07 PM, Donald Stufft wrote: > > > [snip] > > > I guess I'm going to ask for some pronouncement on this? It's been two weeks with no real feedback. > > FWIW tangentially related to this proposal, g.pypi.python.org is now 16 days out of date. Being one of the people who wanted to but didn't give feedback (still on vacation, writing from a camping place with ssh/mutt and lousy connectivity FWIW): - the PEP claims that the PEP381 mirroring protocol continues to exist. But are the statements in http://www.python.org/dev/peps/pep-0381/#statistics-page still valid, i.e. does pypi.python.org still crawl mirrors for statistics when the PEP449-DNS removal happens? Also PEP381 has seen some modifications and enhancements before and after the CDN introduction. - relatedly, I'd suggest to clarify that this PEP does at least not preclude further PEPs or attempts to introduce other means than DNS to manage PyPI mirrors (one where mirror availability is stored at and queried via a python.org address). Ideally, it should already incorporate a procedure to register mirrors and to list them at a web page. - maybe a "future work" section could list these issues. I guess one underlying question is how much we want to rely on the CDN mid/long-term. Its introduction was not discussed in a PEP but it is mentioned e.g. in PEP449 as a reason to shut down mirror management infrastructure.
That all being said, i am otherwise ok with PEP449 as DNS seems indeed the wrong way to handle mirror management. best, holger > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Tue Aug 27 13:36:23 2013 From: donald at stufft.io (Donald Stufft) Date: Tue, 27 Aug 2013 07:36:23 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <20130827111047.GB20106@merlinux.eu> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <57BEEEE2-27F8-4F14-A01A-929333EF176B@stufft.io> <20130827111047.GB20106@merlinux.eu> Message-ID: <36EBBFFD-9824-4719-BE9D-5D0AE7353EF4@stufft.io> On Aug 27, 2013, at 7:10 AM, holger krekel wrote: > Being one of the people who wanted to but didn't give feedback (still on vacation, > writing from a camping place with ssh/mutt and lousy connectivity FWIW): Ah! I hope you're having fun :) > > - the PEP claims that the PEP381 mirroring protocol continues to exist. > But are the statements in http://www.python.org/dev/peps/pep-0381/#statistics-page still valid, i.e. does pypi.python.org still crawl mirrors for > statistics when the PEP449-DNS removal happens? Also PEP381 has > seen some modifications and enhancements before and after the CDN > introduction. PyPI hasn't crawled mirrors for statistics for 3 months or so, ever since download counts first got shut off, and it has never been restored. Personally I have no plans to restore it. It caused needless complication and the download counts from the mirrors aren't that important IMO.
The download counts are already inaccurate and are primarily useful as a form of relative comparison, so the additional numbers do not add much in relative terms, only in absolute counts (which, as stated, are already wildly inaccurate). Perhaps PEP381 should be updated to take into account the new abilities added for mirrors recently (the serials, the headers etc). It also probably makes sense to update it since PyPI is no longer fetching statistics from the mirrors. I don't think we need a new mirror PEP though as the mirroring protocol is mostly the same and the enhancements exist primarily as a means of getting a more accurate mirror. I *do* have plans down the road to introduce a new mirroring protocol but that is a ways out still as there are other things higher up on my todo list. > > - relatedly, I'd suggest to clarify that this PEP does at least not preclude > further PEPs or attempts to introduce other means than DNS to manage > PyPI mirrors (one where mirror availability is stored at and queried via > a python.org address). Ideally, it should already incorporate a > procedure to register mirrors and to list them at a web page. I don't see this PEP as precluding anything else. Currently it points to http://pypi-mirrors.org/ as the place to locate new mirrors from in a manual fashion. I'm not too concerned with an automatic discovery protocol since the only installer as far as I'm aware that even used the existing one was pip, which is removing that support in 1.5 anyways. That being said I'm not opposed to a new PEP introducing a different scheme but I probably won't be the architect of it, and I can make a small update to it noting that it doesn't preclude further PEPs if that would make people feel more comfortable. > > - maybe a "future work" section could list these issues. > > I guess one underlying question is how much we want to rely on the CDN > mid/long-term. Its introduction was not discussed in a PEP but it > is mentioned e.g.
in PEP449 as a reason to shut down mirror management > infrastructure. Personally I see the CDN as the best option for the bulk of people wanting to install from a *public* mirror. There are of course situations where you might want to install from a different public mirror (China being a big one). I see mirrors mostly being useful for smaller use cases now. However I have no plans or desire to make the public mirrors go away other than the existing DNS names (and only then because of security concerns). > > That all being said, i am otherwise ok with PEP449 as DNS seems indeed > the wrong way to handle mirror management. Awesome, good to hear. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tseaver at palladion.com Tue Aug 27 17:22:25 2013 From: tseaver at palladion.com (Tres Seaver) Date: Tue, 27 Aug 2013 11:22:25 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 08/27/2013 05:00 AM, Nick Coghlan wrote: > PJE and Jason were probably the only other current distutils-sig > participants familiar enough with setuptools and pkg_resources to > understand the distinction between that aspect, the default version > handling and the activation API There are lots of folks here who have been building tooling on top of eggs for almost a decade now. Perhaps those who *do* grok how that stuff works (I'd be willing to guess a lot more than the three of you and myself) weren't alarmed by your proposal. Oophobia is not ubiquitous. :) Tres.
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlIcxDEACgkQ+gerLs4ltQ4qCgCfQKtjEAQkx7XnkQS8A8Q767E6 lnwAoNJAcSDTbN6I1DW2DZAzC3lMvolQ =AeYU -----END PGP SIGNATURE----- From vinay_sajip at yahoo.co.uk Tue Aug 27 17:22:22 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 27 Aug 2013 15:22:22 +0000 (UTC) Subject: [Distutils] Multi-version import support for wheel files References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: Nick Coghlan gmail.com> writes: > Just to scare people though... I did come up with a potentially decent > use case for .pth files: they're actually a reasonable solution for > sharing distributions between virtual environments in a way that works > cross platform and on all currently used Python versions. Say you want Right, and ISTM it also enables a useful subset of "setup.py develop" functionality where the .pth acts analogously to an .egg-link - the referenced project becomes importable while still being editable, though its headers, scripts and data are not installed, nor does it appear on a list of installed distributions. Regards, Vinay Sajip From pje at telecommunity.com Tue Aug 27 18:30:09 2013 From: pje at telecommunity.com (PJ Eby) Date: Tue, 27 Aug 2013 12:30:09 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Tue, Aug 27, 2013 at 3:01 AM, Paul Moore wrote: > On 27 August 2013 00:15, PJ Eby wrote: > None of these things is wrong. 
It is *spreading* FUD (and in particular, > doing so cynically to undermine a proposal) that is wrong, and I hope I > didn't do that - I certainly did not intend to and I'm a bit unhappy about > the implication that I might have. Sorry for the implication; it was not intended. I did not think you had any intent to make other people share your doubts or had any desire to shoot down the proposal. As I said, the real intent of my (clearly, in retrospect, very poorly-worded) side-remark was that I thought 90% of the objections to Nick's proposals were based on fear, uncertainty, and doubt rather than any actual issues with the proposals themselves. From ncoghlan at gmail.com Wed Aug 28 00:34:46 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 28 Aug 2013 08:34:46 +1000 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On 28 Aug 2013 01:25, "Tres Seaver" wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 08/27/2013 05:00 AM, Nick Coghlan wrote: > > > PJE and Jason were probably the only other current distutils-sig > > participants familiar enough with setuptools and pkg_resources to > > understand the distinction between that aspect, the default version > > handling and the activation API > > There are lots of folks here who have been building tooling on top of > eggs for almost a decade now. Perhaps those who *do* grok how that stuff > works (I'd be willing to guess a lot more than the three of you and > myself) weren't alarmed by you proposal. Oophobia is not ubiquitous. :) Ah, ye olde "Usenet nod" syndrome, the internet's ever-present friend skewing our perception of mailing list feedback :) Cheers, Nick. > > > > Tres. 
> - -- > =================================================================== > Tres Seaver +1 540-429-0999 tseaver at palladion.com > Palladion Software "Excellence by Design" http://palladion.com > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > Comment: Using GnuPG with undefined - http://www.enigmail.net/ > > iEYEARECAAYFAlIcxDEACgkQ+gerLs4ltQ4qCgCfQKtjEAQkx7XnkQS8A8Q767E6 > lnwAoNJAcSDTbN6I1DW2DZAzC3lMvolQ > =AeYU > -----END PGP SIGNATURE----- > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From benzolius at yahoo.com Wed Aug 28 00:01:43 2013 From: benzolius at yahoo.com (Benedek Zoltan) Date: Tue, 27 Aug 2013 15:01:43 -0700 (PDT) Subject: [Distutils] buildout bootstrap.py doesn't work on Sabayon Linux with system python Message-ID: <1377640903.89269.YahooMailNeo@web121602.mail.ne1.yahoo.com> @Marius Gedminas @Ralf Schmitt Thanks for the explanation and the tip. Cheers Zoltan Benedek -------------- next part -------------- An HTML attachment was scrubbed... URL: From ct at gocept.com Tue Aug 27 17:22:52 2013 From: ct at gocept.com (Christian Theune) Date: Tue, 27 Aug 2013 17:22:52 +0200 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: Howdi, On 17. Aug 2013, at 7:55 AM, Christian Theune wrote: > > Sorry for the stealth mode. The long thread has been sitting in my Inbox while the last week was very busy and we're hosting a sprint at our offices right now. > > I hope I will get around to reading and responding early next week. Well, almost. Picking the pieces back up, I'll stop bickering now [1]. I'm grateful for the offer to include gocept in a more formally trusted relationship.
However, I don't think that adds more value and just defers paying back technical debt. I'll be a bit more aggressive with the migration and plan to do the following things: In the next days - Establish a new primary name for our mirror (probably pypi.rzob.gocept.net or something similar) - Inform our internal team that they should update f.pypi.python.org to the new name - Make f.pypi.python.org serve 301 redirects to the new name In two weeks: - Make f.pypi.python.org serve 410 Gone This should get your plan back on track. I'm happy with the proposal otherwise, thanks for taking the time to keep me informed! Christian PS: Not sure whether I missed anything important in the meantime … ;) [1] I still feel some need for personal interaction confirming or disproving some of my thoughts, but I don't think it's worthwhile to do that on a mailinglist. Personally I feel a bit stupid for the amount of pain we experience with the packaging right now - especially as we try to stay in touch and contribute. I don't mind the work in general but it would be nice to have less pain … -- Christian Theune · gocept gmbh & co. kg flyingcircus.io · operations as a service Forsterstraße 29 · 06112 Halle (Saale) · Tel +49 345 1229889-7 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ct at gocept.com Wed Aug 28 14:37:01 2013 From: ct at gocept.com (Christian Theune) Date: Wed, 28 Aug 2013 14:37:01 +0200 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> Message-ID: <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> On 27. Aug 2013, at 5:22 PM, Christian Theune wrote: > Howdi, > > On 17. Aug 2013, at 7:55 AM, Christian Theune wrote: >> >> Sorry for the stealth mode. 
The long thread has been sitting in my Inbox while the last week was very busy and we're hosting a sprint at our offices right now. >> >> I hope I will get around to reading and responding early next week. > > Well, almost. > > Picking the pieces back up, I'll stop bickering now [1]. > > I'm grateful for the offer to include gocept in a more formally trusted relationship. However, I don't think that adds more value and just defers paying back technical debt. > > I'll be a bit more aggressive with the migration and plan to do the following things: > > In the next days > > - Establish a new primary name for our mirror (probably pypi.rzob.gocept.net or something similar) The new primary name is "pypi.gocept.com". This already existed before. > - Inform our internal team that they should update f.pypi.python.org to the new name > - Make f.pypi.python.org serve 301 redirects to the new name It does this now. I will also add a valid SSL certificate in the next minutes. What's your take on enforcing SSL e.g. via redirects? Christian -- Christian Theune · gocept gmbh & co. kg flyingcircus.io · operations as a service Forsterstraße 29 · 06112 Halle (Saale) · Tel +49 345 1229889-7 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed Aug 28 15:59:39 2013 From: dholth at gmail.com (Daniel Holth) Date: Wed, 28 Aug 2013 09:59:39 -0400 Subject: [Distutils] Multi-version import support for wheel files In-Reply-To: References: <398AAA48-CDB2-4CF3-9052-8D194A9B10FA@stufft.io> <57C67AE5-8A9F-4444-A385-F9F14D6200E0@stufft.io> Message-ID: On Tue, Aug 27, 2013 at 6:34 PM, Nick Coghlan wrote: > > On 28 Aug 2013 01:25, "Tres Seaver" wrote: >> >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On 08/27/2013 05:00 AM, Nick Coghlan wrote: >> >> > PJE and Jason were probably the only other current distutils-sig >> > participants familiar enough with setuptools and pkg_resources to >> > understand the distinction between that aspect, the default version >> > handling and the activation API >> >> There are lots of folks here who have been building tooling on top of >> eggs for almost a decade now. Perhaps those who *do* grok how that stuff >> works (I'd be willing to guess a lot more than the three of you and >> myself) weren't alarmed by you proposal. Oophobia is not ubiquitous. :) > > Ah, ye olde "Usenet nod" syndrome, the internet's ever-present friend > skewing our perception of mailing list feedback :) > > Cheers, > Nick. +1 on the proposal. From tk47 at students.poly.edu Wed Aug 28 16:03:25 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 28 Aug 2013 10:03:25 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> Message-ID: <521E032D.2040300@students.poly.edu> On 8/28/13 8:37 AM, Christian Theune wrote: > > I will also add a valid SSL certificate in the next minutes. What's your take on enforcing SSL e.g. 
via redirects? > I am not an expert, but I guess this depends on who is enforcing the SSL redirection. If someone untrusted can be a man-in-the-middle between your clients and http://pypi.gocept.com, then this man-in-the-middle should be able to redirect your HTTP-only clients anywhere else. I would venture that the best thing to do, if feasible, is to get your clients to point strictly to https://pypi.gocept.com and test that pip >= 1.3 verifies the SSL connection. From ct at gocept.com Wed Aug 28 18:09:38 2013 From: ct at gocept.com (Christian Theune) Date: Wed, 28 Aug 2013 18:09:38 +0200 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <521E032D.2040300@students.poly.edu> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> <521E032D.2040300@students.poly.edu> Message-ID: <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> On 28. Aug 2013, at 4:03 PM, Trishank Karthik Kuppusamy wrote: > On 8/28/13 8:37 AM, Christian Theune wrote: >> >> I will also add a valid SSL certificate in the next minutes. What's your take on enforcing SSL e.g. via redirects? >> > > I am not an expert, but I guess this depends on who is enforcing the SSL redirection. If someone untrusted can be a man-in-the-middle between your clients and http://pypi.gocept.com, then this man-in-the-middle should be able to redirect your HTTP-only clients anywhere else. Right. It doesn't add any security on its own, but it's a way that people can discover you're using SSL. :) I'll have to read up on how to do HSTS actually … > I would venture that the best thing to do, if feasible, is to get your clients to point strictly to https://pypi.gocept.com and test that pip >= 1.3 verifies the SSL connection. Right. Christian -- Christian Theune · gocept gmbh & co. kg flyingcircus.io · operations as a service Forsterstraße 29 · 06112 Halle (Saale) · 
Tel +49 345 1229889-7 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tk47 at students.poly.edu Wed Aug 28 18:13:46 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 28 Aug 2013 12:13:46 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> <521E032D.2040300@students.poly.edu> <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> Message-ID: <521E21BA.20506@students.poly.edu> On 08/28/2013 12:09 PM, Christian Theune wrote: > Right. It doesn't add any security on its own, but it's a way that > people can discover you're using SSL. :) I'll have to read up on how > to do HSTS actually ? That was my next question. Does pip honour HSTS? I could be wrong, but I do not think so... -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 899 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Thu Aug 29 01:05:45 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Aug 2013 09:05:45 +1000 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: <521E21BA.20506@students.poly.edu> References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> <521E032D.2040300@students.poly.edu> <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> <521E21BA.20506@students.poly.edu> Message-ID: On 29 Aug 2013 03:17, "Trishank Karthik Kuppusamy" wrote: > > On 08/28/2013 12:09 PM, Christian Theune wrote: > > Right. It doesn't add any security on its own, but it's a way that > > people can discover you're using SSL. 
:) I'll have to read up on how > > to do HSTS actually ? > > That was my next question. Does pip honour HSTS? I could be wrong, but I > do not think so... It's likely worth checking with Donald and Noah how the SSL enforcement on PyPI itself is set up. I believe the aim was just to ensure browsers are always using HTTPS, while switching other tools to SSL still requires client side updates. Cheers, Nick. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tk47 at students.poly.edu Thu Aug 29 01:53:00 2013 From: tk47 at students.poly.edu (Trishank Karthik Kuppusamy) Date: Wed, 28 Aug 2013 19:53:00 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> <521E032D.2040300@students.poly.edu> <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> <521E21BA.20506@students.poly.edu> Message-ID: <521E8D5C.5000502@students.poly.edu> On 08/28/2013 07:05 PM, Nick Coghlan wrote: > It's likely worth checking with Donald and Noah how the SSL > enforcement on PyPI itself is set up. I believe the aim was just to > ensure browsers are always using HTTPS, while switching other tools to > SSL still requires client side updates. Makes perfect sense to me! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 899 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Thu Aug 29 12:31:12 2013 From: donald at stufft.io (Donald Stufft) Date: Thu, 29 Aug 2013 06:31:12 -0400 Subject: [Distutils] PEP449 - Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: <5BF937B2-9175-412A-A1EB-962A5DEA2E08@stufft.io> <7FEEAB28-C650-4A95-9F8F-FAB3DBDAB937@gocept.com> <521E032D.2040300@students.poly.edu> <245DACDE-1440-446F-8633-399634A1AADB@gocept.com> <521E21BA.20506@students.poly.edu> Message-ID: On Aug 28, 2013, at 7:05 PM, Nick Coghlan wrote: > > On 29 Aug 2013 03:17, "Trishank Karthik Kuppusamy" wrote: > > > > On 08/28/2013 12:09 PM, Christian Theune wrote: > > > Right. It doesn't add any security on its own, but it's a way that > > > people can discover you're using SSL. :) I'll have to read up on how > > > to do HSTS actually ? > > > > That was my next question. Does pip honour HSTS? I could be wrong, but I > > do not think so... > > It's likely worth checking with Donald and Noah how the SSL enforcement on PyPI itself is set up. I believe the aim was just to ensure browsers are always using HTTPS, while switching other tools to SSL still requires client side updates. > > Cheers, > Nick. > > > > > > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > http://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig pip does not respect HSTS. It would be somewhat nice if it did but the primary purpose of HSTS is to prevent against SSL downgrade attacks and users own error by entering http:// instead of https://. It's less important in a tool like pip where https should be hardcoded. 
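[Editor's note: for readers unfamiliar with the mechanism under discussion, HSTS is just an HTTP response header. A client that wanted to honour it would parse something like the sketch below. This is purely illustrative — as Donald says, pip had no such code — and the header value shown is a generic example, not PyPI's actual policy.]

```python
def parse_hsts(header):
    """Parse a Strict-Transport-Security header value into a dict.

    A minimal sketch: a real client would also persist the policy
    per-host and expire it after max-age seconds.
    """
    policy = {"max_age": None, "include_subdomains": False}
    for directive in header.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive.lower() == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# Example header value as served by many HTTPS sites:
print(parse_hsts("max-age=31536000; includeSubDomains"))
# {'max_age': 31536000, 'include_subdomains': True}
```

Once a client has stored such a policy for a host, it rewrites any later http:// URL for that host to https:// before connecting — which is exactly the user-error case discussed below.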
It's use would essentially work to remove user error if they accidentally enter a http:// url instead of a https:// url (which isn't a bad thing). HTTP on PyPI always redirects idempotent methods to HTTPS and it includes HSTS but it does generally require client side updates to switch to HTTPS (in part because it requires client side updates to even validate SSL). What somebody else said that redirecting HTTP to HTTPS is a nice signal to users they should be using HTTPS but it doesn't actually protect users as someone in a MITM position can intercept the redirect and just return content instead. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Aug 29 15:47:44 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Aug 2013 23:47:44 +1000 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 20 August 2013 23:25, Antoine Pitrou wrote: > > Hello, > > Some comments about PEP 426: > >> The information defined in this PEP is serialised to pydist.json files for > some > use cases. These are files containing UTF-8 encoded JSON metadata. > > Perhaps add that on-disk pydist.json files may/should be generated in printed > form with sorted keys, to ease direct inspection by users and developers? Good point: https://bitbucket.org/pypa/pypi-metadata-formats/issue/7/require-sorted-keys-when-serialising >> Source labels MUST be unique within each project and MUST NOT match any >> defined version for the project. > > Is there a motivation for the "not matching any defined version"? > AFAICT it makes it necessary to have two different representation > schemes, e.g. 
"X.Y.Z" for source labels and "vX.Y.Z" for versions. >> For source archive references, an expected hash value may be specified by >> including a ``=`` entry as part of the URL >> fragment. > > Why only source archive references (and not e.g. binary)? The core metadata is for the source distribution (something that can be used to rebuild the software on a trusted build server). wheel (PEP 427) is the format for built binaries. PEP 440 does cover direct references to binary archives as dependencies, though. >> "project_urls": { >> "Documentation": "https://distlib.readthedocs.org" >> "Home": "https://bitbucket.org/pypa/distlib" >> "Repository": "https://bitbucket.org/pypa/distlib/src" >> "Tracker": "https://bitbucket.org/pypa/distlib/issues" >> } > > This example lacks commas. Fixed. >> An abbreviation of "metadistribution requires". This is a list of >> subdistributions that can easily be installed and used together by >> depending on this metadistribution. > > I don't understand what it means :-) Care to explain and/or clarify > the purpose? > > (for me, "meta-requires" sounds like something that setup.py depends > on for its own operation, but that the installed software doesn't need) > > (edit: I now see this is clarified in Appendix C. The section ordering > in the PEP makes it look like "meta_requires" are the primary type of > requires, though, while according to that appendix they're a rather > exotic use case. Would be nice to spell that out *before* the appendices :-)). https://bitbucket.org/pypa/pypi-metadata-formats/issue/9/move-meta_requires-after-requires >> * MAY allow direct references > > What is a direct reference? Explained in PEP 440. Will likely add some more cross references to that when I next do a full editing pass. >> Automated tools MUST NOT allow strict version matching clauses or direct >> references in this field - if permitted at all, such clauses should appear >> in ``meta_requires`` instead. > > Why so? 
Because version pinning is almost always the wrong thing to do in published projects, while *not* pinning is wrong if you're redistributing an exact version of something as part of an umbrella project. Separating the two use cases (ordinary dependencies and redistribution) into separate fields gives the tools the info they need to provide suitable warnings. > > [test requires] >> Public index servers SHOULD NOT allow strict version matching clauses or >> direct references in this field. > > Again, why? Is it important for public index servers that test > dependencies be not pinned? Yes. Overzealous pinning sets up dependency hell. >> Note that while these are build dependencies for the distribution being >> built, the installation is a *deployment* scenario for the dependencies. > > But there are no deployment requires, right? :) > (or is what "meta requires" are for?) deployment = runtime = run_requires and meta_requires I'm currently fairly inconsistent in whether I refer to deployment or runtime, though. >> For example, multiple projects might supply >> PostgreSQL bindings for use with SQL Alchemy: each project might declare >> that it provides ``sqlalchemy-postgresql-bindings``, allowing other >> projects to depend only on having at least one of them installed. > > But the automated installer wouldn't be able to suggest the various > packages providing ``sqlalchemy-postgresql-bindings`` if none is > installed, which should IMO discourage such a scheme. > >> To handle this case in a way that doesn't allow for name hijacking, the >> authors of the distribution that first defines the virtual dependency >> should >> create a project on the public index server with the corresponding name, >> and >> depend on the specific distribution that should be used if no other >> provider >> is already installed. This also has the benefit of publishing the default >> provider in a way that automated tools will understand. 
> > But then the alternatives needn't provide the "virtual dependency". > They can just provide the "default provider", which saves the time and > hassle of defining a well-known virtual dependency for all similar > projects. No, because the default provider probably won't be implemented by the project defining the virtual dependency and may change over time. Think of it as "symlinks for dependencies". Using a direct dependency would only make sense if the same group of people were developing both projects. >> A string that indicates that this project is no longer being developed. >> The >> named project provides a substitute or replacement. > > How about a project that is no longer being developed but has no > direct substitution? :) > Can it use an empty string (or null / None perhaps?) Hmm, I'll think about that one. https://bitbucket.org/pypa/pypi-metadata-formats/issue/10/allow-obsoleted_by-to-be-set-to-null-none >> Examples indicating supported operating systems:: >> >> # Windows only >> "supports_environments": ["sys_platform == 'win32'"] > > Hmm, which syntax is it exactly? In a previous section, you used > the following example: > >> "environment": "sys.platform == 'win32'" > > (note dot vs. underscore) Must have missed updating those from the previous PEP. Fixed. >> "modules": ["chair", "chair.cushions", (...)] > > The example is a bit intriguing. Is it expected that both "chair" and > "chair.cushions" be specified there, or is "chair" sufficient? If only "chair" was declared, you wouldn't be able to search the index server for the provider of "chair.cushions". It gets more interesting once namespace packages get involved. >> When installing from an sdist, source archive or VCS checkout, >> installation >> tools SHOULD create a binary archive using ``setup.py bdist_wheel`` and >> then install binary archive normally (including invocation of any install >> hooks). Installation tools SHOULD NOT invoke ``setup.py install`` >> directly. > > Interesting. 
Is "setup.py install" meant to die, or will it be redefined > as "bdist_wheel + install_wheel"? > (also, why is this mentioned in the postinstall hooks section, or > even in a metadata-related PEP?) Because it's all been co-evolving in an attempt to make sure existing use cases are covered adequately in the new metadata. It needs to be split out, though, and will be eventually. >> Installation tools SHOULD treat an exception thrown by a preuninstall >> hook as an indication the removal of the distribution should be aborted. > > I hope a "--force" option will be provided by such tools. Failure to > uninstall because of buggy uninstall tools is a frustrating experience. > >> Extras are additional dependencies that enable an optional aspect >> of the distribution > > I am confused. To me, extras look like additional provides, not > additional dependencies. I.e. in: > > "requires": ["ComfyChair[warmup]"] > -> requires ``ComfyChair`` and ``SoftCushions`` > > "warmup" is an additional provide of ComfyChair, and it depends on > SoftCushions. All of the ComfyChair code needed for "ComfyChair[warmup]" is installed when you install "ComfyChair". It just won't work properly without the extra dependencies. So the extra names really only refer to the optional dependencies, not to optional components of the ComfyChair code base. >> "requires": ["ComfyChair[*]"] >> -> requires ``ComfyChair`` and ``SoftCushions``, but will also >> pick up any new extras defined in later versions > > This one confuses me (again :-)). What does "pick up" mean? Accept? > Require? If a latter versions defines another extra, this one will depend on those. >> pip install ComfyChair[-,:*:,*] >> -> installs the full set of development dependencies, but avoids >> installing ComfyChair itself > > Are all these possibilities ("-", ":*:", "*") useful in real life? Yes. The five examples given are meant to illustrate that. 
>> Environment markers >> =================== > > In this section, there still are inconsistencies in the format examples > ("sys.platform" vs. "sys_platform"). > >> * ``platform_python_implementation``: ``platform.python_implementation()`` >> * ``implementation_name````: ``sys.implementation.name`` > > Why two different ways to spell nearly the same thing: > >>>> platform.python_implementation() > 'CPython' >>>> sys.implementation.name > 'cpython' > > (also, look at how platform.python_implementation() is implemented :-)) Older versions of Python don't have sys.implementation. > Also, do the ordering operators ("<=", ">=", etc.) operate logically or > lexicographically on version values? Purely string based. Environment markers are intended to be really dumb. >> Build labels >> ------------ >> >> See PEP 440 for the rationale behind the addition of this field. > > I can't see anything named "Build label" in PEP 426. Did you mean "source > label"? The rationale section is lagging a bit. It'll get some TLC next time I do a full editing pass. >> This version of the metadata specification continues to use ``setup.py`` >> and the distutils command syntax to invoke build and test related >> operations on a source archive or VCS checkout. > > I don't really understand how Metadata 2.0 is dependent on the distutils > command scheme. Can you elaborate? Given an sdist, how do you build a wheel? Given a source checkout or raw tarball, how do you build an sdist or generate the distribution's metadata? The whole problem of building from source is currently woefully underspecified, and there's a lot to be said in favour of standardising a subset of the existing setuptools command line API (since we'll have to do that as the legacy fallback option anyway). Cheers, Nick. 
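[Editor's note: Nick's point that environment markers are "really dumb" purely string-based comparisons can be sketched as a toy evaluator. This is a hypothetical illustration, not distlib's implementation; it handles a single "name OP literal" clause and nothing else.]

```python
import operator

# Plain string operators: markers compare lexicographically, never as
# parsed version numbers.
_OPS = {"==": operator.eq, "!=": operator.ne,
        "<": operator.lt, "<=": operator.le,
        ">": operator.gt, ">=": operator.ge}

def eval_marker(marker, env):
    """Evaluate a single marker like "sys_platform == 'win32'" against
    an env dict.  Toy sketch: no 'and'/'or', no parentheses."""
    # Two-character operators must be tried before their one-character prefixes.
    for op in ("==", "!=", "<=", ">=", "<", ">"):
        if op in marker:
            left, right = marker.split(op, 1)
            return _OPS[op](env[left.strip()], right.strip().strip("'\""))
    raise ValueError("unsupported marker: %r" % marker)

env = {"sys_platform": "linux2", "python_version": "2.7"}
print(eval_marker("sys_platform == 'win32'", env))   # False
print(eval_marker("python_version >= '2.6'", env))   # True
# Lexicographic ordering has sharp edges: '2.7' sorts *after* '2.10',
# which is why the ordering operators are only safe for crude checks.
print(eval_marker("python_version >= '2.10'", env))  # True
```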
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From antoine at python.org Thu Aug 29 16:30:10 2013 From: antoine at python.org (Antoine Pitrou) Date: Thu, 29 Aug 2013 14:30:10 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: Message-ID: Hi, Nick Coghlan gmail.com> writes: > > deployment = runtime = run_requires and meta_requires > > I'm currently fairly inconsistent in whether I refer to deployment or > runtime, though. Ah. For me, intuitively, deployment == "what happens when I install and configure the software", while runtime == "what happens when I run the provided binaries, or import the provided library and call some of its APIs". > > Why two different ways to spell nearly the same thing: > > > >>>> platform.python_implementation() > > 'CPython' > >>>> sys.implementation.name > > 'cpython' > > > > (also, look at how platform.python_implementation() is implemented ) > > Older versions of Python don't have sys.implementation. But then "implementation_name" can have an implementation fallback for older Pythons. Exposing another name sounds a bit confusing. > >> This version of the metadata specification continues to use ``setup.py`` > >> and the distutils command syntax to invoke build and test related > >> operations on a source archive or VCS checkout. > > > > I don't really understand how Metadata 2.0 is dependent on the distutils > > command scheme. Can you elaborate? > > Given an sdist, how do you build a wheel? > > Given a source checkout or raw tarball, how do you build an sdist or > generate the distribution's metadata? > > The whole problem of building from source is currently woefully > underspecified, and there's a lot to be said in favour of > standardising a subset of the existing setuptools command line API Hmmm... I'm not sure I follow the reasoning. 
The internal mechanics of building a binary archive may deserve standardizing, and perhaps a dedicated distlib API for it, but why would that impact the command-line API? (after all, it's just "python setup.py build_bdist", or something :-)) cheers Antoine. From oscar.j.benjamin at gmail.com Thu Aug 29 17:02:07 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 29 Aug 2013 16:02:07 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 29 August 2013 15:30, Antoine Pitrou wrote: > Nick Coghlan gmail.com> writes: > >> >> This version of the metadata specification continues to use ``setup.py`` >> >> and the distutils command syntax to invoke build and test related >> >> operations on a source archive or VCS checkout. >> > >> > I don't really understand how Metadata 2.0 is dependent on the distutils >> > command scheme. Can you elaborate? >> >> Given an sdist, how do you build a wheel? >> >> Given a source checkout or raw tarball, how do you build an sdist or >> generate the distribution's metadata? >> >> The whole problem of building from source is currently woefully >> underspecified, and there's a lot to be said in favour of >> standardising a subset of the existing setuptools command line API > > Hmmm... I'm not sure I follow the reasoning. The internal mechanics > of building a binary archive may deserve standardizing, and perhaps > a dedicated distlib API for it, but why would that impact the > command-line API? > > (after all, it's just "python setup.py build_bdist", or something :-)) The point is that pip and other packaging tools will use 'python setup.py ...' to do all the building and wheel making and so on. However the required interface that setup.py should expose is not documented anywhere and is essentially implementation defined where the implementation is the setup() function from a recent version of setuptools. 
In the interest of standardising the required parts of existing practice the required subset of this interface should be documented. Projects like numpy/scipy that deliberately don't use the setup() function from either setuptools or distutils need to know what interface is expected of their setup.py. The same goes for any attempt to build a new third-party package that would be used as a replacement for building with distutils/setuptools (which are woefully inadequate for projects with significant C/Fortran etc. code): any new build system needs to have an API that it should conform to. On the other side the packaging tools like pip etc. need to know what interface they can *require* of setup.py without breaking compatibility with non-distutils/setuptools build systems. Oscar From p.f.moore at gmail.com Thu Aug 29 17:49:18 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 29 Aug 2013 16:49:18 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 29 August 2013 16:02, Oscar Benjamin wrote: >On 29 August 2013 15:30, Antoine Pitrou wrote: [...] >> (after all, it's just "python setup.py build_bdist", or something :-)) > > The point is that pip and other packaging tools will use 'python > setup.py ...' to do all the building and wheel making and so on. > However the required interface that setup.py should expose is not > documented anywhere and is essentially implementation defined where > the implementation is the setup() function from a recent version of > setuptools. In the interest of standardising the required parts of > existing practice the required subset of this interface should be > documented. Specifically, the command is python setup.py bdist_wheel But that requires the wheel project and setuptools to be installed, and we're not going to require all users to have those available. 
Also, other projects can build wheels with different commands/interfaces: * distlib says put all your built files in a set of directories then do wheel.build(paths=path_mapping) - no setup.py needed at all * pip says pip wheel requirement (but that uses setuptools/wheel under the hood) * bento might do something completely different The whole question of standardising the command line API for building (sdists and) wheels is being avoided at the moment, as it's going to be another long debate (setup.py is too closely associated with distutils and/or setuptools for some people). AIUI, we're sort of moving towards the "official" command line API being pip's (so "pip wheel XXX") but that's not a complete answer as currently pip internally just uses the setup.py command line, and the intention is to decouple the two so that alternative build tools (like bento, I guess) get a look in. It's all a bit vague at the moment, though, because nobody has even looked at what alternative build tools might even be needed. I could have this completely wrong, though - we're trying very hard to keep the work in small chunks, and building is not one of those chunks yet. Paul. From dholth at gmail.com Thu Aug 29 19:11:48 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 29 Aug 2013 13:11:48 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: It probably makes sense for some version of bdist_wheel to be merged into setuptools eventually. In that system pip would document which setup.py commands and arguments it uses and a non-distutils-derived setup.py would have to implement a minimal set of commands to interoperate. This is basically where we are today minus the "minimal" and "documented" details. The alternative, not mutually exclusive solution would be to define a Python-level detect/build plugin system for pip which would call a few methods to generate an installable from a source distribution. 
It doesn't exist yet mostly because the pip developers haven't written enough alternative build systems. There is no strategic reason for the delay. On Thu, Aug 29, 2013 at 11:49 AM, Paul Moore wrote: > On 29 August 2013 16:02, Oscar Benjamin wrote: >>On 29 August 2013 15:30, Antoine Pitrou wrote: > [...] >>> (after all, it's just "python setup.py build_bdist", or something :-)) >> >> The point is that pip and other packaging tools will use 'python >> setup.py ...' to do all the building and wheel making and so on. >> However the required interface that setup.py should expose is not >> documented anywhere and is essentially implementation defined where >> the implementation is the setup() function from a recent version of >> setuptools. In the interest of standardising the required parts of >> existing practice the required subset of this interface should be >> documented. > > Specifically, the command is > > python setup.py bdist_wheel > > But that requires the wheel project and setuptools to be installed, > and we're not going to require all users to have those available. > > Also, other projects can build wheels with different commands/interfaces: > * distlib says put all your built files in a set of directories then > do wheel.build(paths=path_mapping) - no setup.py needed at all > * pip says pip wheel requirement (but that uses setuptools/wheel under the hood) > * bento might do something completely different > > The whole question of standardising the command line API for building > (sdists and) wheels is being avoided at the moment, as it's going to > be another long debate (setup.py is too closely associated with > distutils and/or setuptools for some people). 
> > AIUI, we're sort of moving towards the "official" command line API > being pip's (so "pip wheel XXX") but that's not a complete answer as > currently pip internally just uses the setup.py command line, and the > intention is to decouple the two so that alternative build tools (like > bento, I guess) get a look in. It's all a bit vague at the moment, > though, because nobody has even looked at what alternative build tools > might even be needed. > > I could have this completely wrong, though - we're trying very hard to > keep the work in small chunks, and building is not one of those chunks > yet. > > Paul. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig From oscar.j.benjamin at gmail.com Thu Aug 29 19:30:07 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Thu, 29 Aug 2013 18:30:07 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 29 August 2013 18:11, Daniel Holth wrote: > It probably makes sense for some version of bdist_wheel to be merged > into setuptools eventually. In that system pip would document which > setup.py commands and arguments it uses and a non-distutils-derived > setup.py would have to implement a minimal set of commands to > interoperate. This is basically where we are today minus the "minimal" > and "documented" details. > > The alternative, not mutually exclusive solution would be to define a > Python-level detect/build plugin system for pip which would call a > few methods to generate an installable from a source distribution. > > It doesn't exist yet mostly because the pip developers haven't written > enough alternative build systems. There is no strategic reason for the > delay. 
I thought that the list in the PEP seemed reasonable:

python setup.py dist_info
python setup.py sdist
python setup.py build_ext --inplace
python setup.py test
python setup.py bdist_wheel

Most projects already have a setup.py that can do these things with the exception of bdist_wheel. The only ambiguity is that it's not clear whether the expectation is *exactly* those invocations or whether any other command line options etc. would be needed.

Can it not simply be documented that these are the commands needed by current packaging tools (and return codes, expected behaviour, ...) to fit with the current bleeding edge infrastructure?

I would have thought that that would be good enough as a stop-gap while a better non-setup.py solution awaits.

Oscar

From dholth at gmail.com Thu Aug 29 19:56:12 2013 From: dholth at gmail.com (Daniel Holth) Date: Thu, 29 Aug 2013 13:56:12 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On Thu, Aug 29, 2013 at 1:30 PM, Oscar Benjamin wrote: > On 29 August 2013 18:11, Daniel Holth wrote: >> It probably makes sense for some version of bdist_wheel to be merged >> into setuptools eventually. In that system pip would document which >> setup.py commands and arguments it uses and a non-distutils-derived >> setup.py would have to implement a minimal set of commands to >> interoperate. This is basically where we are today minus the "minimal" >> and "documented" details. >> >> The alternative, not mutually exclusive solution would be to define a >> Python-level detect/build plugin system for pip which would call a >> few methods to generate an installable from a source distribution. >> >> It doesn't exist yet mostly because the pip developers haven't written >> enough alternative build systems. There is no strategic reason for the >> delay.
> > I thought that the list in the PEP seemed reasonable: > > python setup.py dist_info > python setup.py sdist > python setup.py build_ext --inplace > python setup.py test > python setup.py bdist_wheel > > Most projects already have a setup.py that can do these things with > the exception of bdist_wheel. The only ambiguity is that it's not > clear whether the expectation is *exactly* those invocations or > whether any other command line options etc. would be needed. > > Can it not simply be documented that these are the commands needed by > current packaging tools (and return codes, expected behaviour, ...) to > fit with the current bleeding edge infrastructure? > > I would have thought that that would be good enough as a stop-gap > while a better non-setup.py solution awaits. I don't think defining the plugin interface will cause build systems to be written. I think it has to go in the other direction. Otherwise we would be uninformed about the real needs of the plugin interface. There are not currently large numbers of better non-distutils-derived systems clamoring for easy pip interop. From ncoghlan at gmail.com Fri Aug 30 01:08:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Aug 2013 09:08:23 +1000 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 Aug 2013 03:31, "Oscar Benjamin" wrote: > > On 29 August 2013 18:11, Daniel Holth wrote: > > It probably makes sense for some version of bdist_wheel to be merged > > into setuptools eventually. In that system pip would document which > > setup.py commands and arguments it uses and a non-distutils-derived > > setup.py would have to implement a minimal set of commands to > > interoperate. This is basically where we are today minus the "minimal" > > and "documented" details. 
> > > > The alternative, not mutually exclusive solution would be to define a > > Python-level detect/build plugin system for pip which would call a > > few methods to generate an installable from a source distribution. > > > > It doesn't exist yet mostly because the pip developers haven't written > > enough alternative build systems. There is no strategic reason for the > > delay. > > I thought that the list in the PEP seemed reasonable: > > python setup.py dist_info > python setup.py sdist > python setup.py build_ext --inplace > python setup.py test > python setup.py bdist_wheel > > Most projects already have a setup.py that can do these things with > the exception of bdist_wheel. The only ambiguity is that it's not > clear whether the expectation is *exactly* those invocations or > whether any other command line options etc. would be needed. > > Can it not simply be documented that these are the commands needed by > current packaging tools (and return codes, expected behaviour, ...) to > fit with the current bleeding edge infrastructure? > > I would have thought that that would be good enough as a stop-gap > while a better non-setup.py solution awaits. Right, that's the status quo, and even if we define a plugin system to make setup.py optional, this will still be the backwards compatible fallback. Originally I had planned to postpone specifying this properly to something post metadata 2.0, but talking to Fedora and Debian folks changed my mind - for reliable rebuilding from source on trusted build servers, this needs to be pulled out of PEP 426 into a separate PEP and at least the required command line options need to be documented. Volunteers welcome, otherwise I'll eventually get to it myself :) We also need to officially bless pip's trick of forcing the use of setuptools for distutils based setup.py files. Cheers, Nick. 
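As a concrete illustration of the "backwards compatible fallback" idea, a non-distutils project could ship a shim setup.py that maps the handful of documented commands onto its real build tool. A hedged sketch -- the tool name `mybuild` and the command mapping are invented for illustration; no such interface has been agreed:

```python
"""A shim setup.py for a non-distutils project: translate the setup.py
commands that installers invoke into calls to the project's real build
tool.  'mybuild' and the mapping below are hypothetical."""

import subprocess
import sys

# One entry per command in the draft interface; anything else is refused
# rather than silently misinterpreted.
COMMAND_MAP = {
    'dist_info':   ['mybuild', 'metadata'],
    'sdist':       ['mybuild', 'sdist'],
    'build_ext':   ['mybuild', 'build'],
    'test':        ['mybuild', 'test'],
    'bdist_wheel': ['mybuild', 'wheel'],
}

def translate(argv):
    """Map a setup.py argument vector to the build tool's command line."""
    if not argv or argv[0] not in COMMAND_MAP:
        raise SystemExit('unsupported setup.py command: %r' % (argv,))
    # Remaining options (e.g. --inplace, -d dist) pass straight through.
    return COMMAND_MAP[argv[0]] + argv[1:]

# A real shim would end with:
#     sys.exit(subprocess.call(translate(sys.argv[1:])))
```

Whether options may pass straight through, and what the return code means, is exactly the part of the interface the thread says remains undocumented.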
> > > Oscar > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Fri Aug 30 03:33:52 2013 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Fri, 30 Aug 2013 11:33:52 +1000 Subject: [Distutils] PEP 449 -- Removal of the PyPI Mirror Auto Discovery and Naming Scheme Message-ID: Hi all, Given that I believe all outstanding issues with PEP 449 < http://python.org/dev/peps/pep-0449/> have been resolved I will accept it in its current form (last modified August 16) so the immediate changes may be made and publicity of the change can be started. My thanks to everyone involved in trying to make mirroring work, and also for everyone who has contributed to getting this PEP to an acceptable state. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Aug 30 09:23:10 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Aug 2013 08:23:10 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 August 2013 00:08, Nick Coghlan wrote: > We also need to officially bless pip's trick of forcing the use of > setuptools for distutils based setup.py files. Do we? What does official blessing imply? We've managed for years without the trick being "official"... The main reason it is currently used is to allow setup.py install to specify --record, so that we can get the list of installed files. If distutils added a --record flag, for example, I don't believe we'd need the hack at all. (Obviously, we'd still need setuptools so we could use wheel to build wheels, but that's somewhat different as it's a new feature). Maybe a small distutils patch is better than blessing setuptools here? Forcing setuptools does break some builds. 
From when I last tried, for instance, I believe that cx_Oracle won't build with setuptools forced. Whose issue should that be? Pip's, setuptools', or cx_Oracle's? (Note that I'm *not* saying that this is a showstopper, just trying to clarify the intent here). Note - I'm not against blessing pip's hack. But I'd like it to be clear what such a blessing *means* and why just leaving it as an internal implementation detail isn't sufficient. Paul -------------- next part -------------- An HTML attachment was scrubbed... URL: From vinay_sajip at yahoo.co.uk Fri Aug 30 11:13:49 2013 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 30 Aug 2013 09:13:49 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: Message-ID: Nick Coghlan gmail.com> writes: > We also need to officially bless pip's trick of forcing the use of > setuptools for distutils based setup.py files. What does official blessing mean in practice? Is pip to be wedded to setuptools forever, or might it one day have the possibility of being implemented differently? If so, why should any comment be made about what is an implementation detail? Regards, Vinay Sajip From ncoghlan at gmail.com Fri Aug 30 11:18:14 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Aug 2013 19:18:14 +1000 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 Aug 2013 17:23, "Paul Moore" wrote: > > On 30 August 2013 00:08, Nick Coghlan wrote: >> >> We also need to officially bless pip's trick of forcing the use of setuptools for distutils based setup.py files. > > > Do we? What does official blessing imply? We've managed for years without the trick being "official"... > > The main reason it is currently used is to allow setup.py install to specify --record, so that we can get the list of installed files. If distutils added a --record flag, for example, I don't believe we'd need the hack at all. 
(Obviously, we'd still need setuptools so we could use wheel to build wheels, but that's somewhat different as it's a new feature). Maybe a small distutils patch is better than blessing setuptools here? A distutils patch won't help with Python 2.7 or 3.3. The purpose of blessing the substitution is to decouple the update cycle of the build system from the update cycle of the standard library. (This is a lesson I note even RH acknowledged for RHEL when we first released the Red Hat Developer Toolset) It's not just the record flag - it's also about supporting wheel generation and eventually the metadata 2.0 formats and features. > Forcing setuptools does break some builds. From when I last tried, for instance, I believe that cx_Oracle won't build with setuptools forced. Whose issue should that be? Pip's, setuptools', or cx_Oracle's? (Note that I'm *not* saying that this is a showstopper, just trying to clarify the intent here). Officially blessing replacing distutils with setuptools would strongly push this case in the direction of being a setuptools bug (since setuptools generally aims to be a drop in replacement for distutils), without ruling out a cx_Oracle bug (e.g. if the problem is incompatible monkey patching of distutils). pip would be cleared of any blame. > > Note - I'm not against blessing pip's hack. But I'd like it to be clear what such a blessing *means* and why just leaving it as an internal implementation detail isn't sufficient. My aim would be to ensure all build systems are free to substitute setuptools for distutils to get support for new distribution standards on old versions of Python. Cheers, Nick. > > Paul -------------- next part -------------- An HTML attachment was scrubbed... 
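The substitution being discussed amounts to importing setuptools before executing a distutils-based setup.py, so that the script's distutils imports pick up setuptools' monkey patches. A minimal sketch of the idea -- illustrative only; pip's actual implementation differs in detail:

```python
def run_setup_with_setuptools(setup_py_path):
    """Execute a setup.py with setuptools' distutils patches active.

    Importing setuptools first is the whole trick: it monkey-patches
    distutils on import, so a plain 'from distutils.core import setup'
    inside the script picks up setuptools' enhanced behaviour
    (--record, egg-info generation, and so on).
    """
    import setuptools  # noqa: F401 -- imported for its side effects

    with open(setup_py_path) as f:
        code = compile(f.read(), setup_py_path, 'exec')
    # Run the script as if it had been invoked as 'python setup.py ...'
    exec(code, {'__file__': setup_py_path, '__name__': '__main__'})
```

Decoupling is the payoff: the setuptools import happens in the installer's process, so the project's setup.py needs no changes to benefit.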
URL: From oscar.j.benjamin at gmail.com Fri Aug 30 13:54:39 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Fri, 30 Aug 2013 12:54:39 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 29 August 2013 16:49, Paul Moore wrote: > On 29 August 2013 16:02, Oscar Benjamin wrote: >>On 29 August 2013 15:30, Antoine Pitrou wrote: > [...] >>> (after all, it's just "python setup.py build_bdist", or something :-)) >> >> The point is that pip and other packaging tools will use 'python >> setup.py ...' to do all the building and wheel making and so on. >> However the required interface that setup.py should expose is not >> documented anywhere and is essentially implementation defined where >> the implementation is the setup() function from a recent version of >> setuptools. In the interest of standardising the required parts of >> existing practice the required subset of this interface should be >> documented. > > Specifically, the command is > > python setup.py bdist_wheel > > But that requires the wheel project and setuptools to be installed, > and we're not going to require all users to have those available. > > Also, other projects can build wheels with different commands/interfaces: > * distlib says put all your built files in a set of directories then > do wheel.build(paths=path_mapping) - no setup.py needed at all > * pip says pip wheel requirement (but that uses setuptools/wheel under the hood) > * bento might do something completely different

Yes, but whatever tool is used, if the required interface from setup.py is documented it's easy enough for a distribution author to create a setup.py that would satisfy those commands. It could be as easy as:

    if sys.argv[1] == 'bdist_wheel':
        sys.exit(subprocess.call(['bentomaker', 'build_wheel']))

or whatever. Then, if bento is build-required (or distributed in the sdist), 'pip wheel' would work right?
Bento could even ship/generate setup.py files for bento-using distributions to use (I assume 'bentomaker sdist' does actually do this but I got an error installing bento; see below...).

However, right now it's not clear exactly what the command line interface would need to be e.g.: Should setup.py process any optional arguments? How should it know what filename to give the wheel and what directory to put it in? Should the setup.py assume that its current working directory is the VCS checkout or unpacked sdist directory? Will pip et al. infer success/failure from the return code? Who is supposed to be responsible for any cleanup if necessary?

> The whole question of standardising the command line API for building > (sdists and) wheels is being avoided at the moment, as it's going to > be another long debate (setup.py is too closely associated with > distutils and/or setuptools for some people).

Yes but rather than try to think of something better I'm just suggesting to document what is *already* required, with some guarantee of backward compatibility that will be respected in the future. Even if wheels become commonplace and are used by the most significant projects there will still be a need to build some distributions from source e.g. because the authors didn't build a wheel for your architecture, or the user/author prefer to make build-time optimisations etc.

> AIUI, we're sort of moving towards the "official" command line API > being pip's (so "pip wheel XXX") but that's not a complete answer as > currently pip internally just uses the setup.py command line, and the > intention is to decouple the two so that alternative build tools (like > bento, I guess) get a look in. It's all a bit vague at the moment, > though, because nobody has even looked at what alternative build tools > might even be needed. > > I could have this completely wrong, though - we're trying very hard to > keep the work in small chunks, and building is not one of those chunks > yet.
Leaving the build infrastructure alone for now seems reasonable to me. However if a static target is created for third-party build tools then there could be more progress on that front.

I just tried to install bento to test it out and:

$ pip install bento
Downloading/unpacking bento
  Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
  Running setup.py egg_info for package bento
Installing collected packages: bento
  Running setup.py install for bento
    Could not find .egg-info directory in install record for bento
Cleaning up...
Exception:
Traceback (most recent call last):
  File "Q:\tools\Python27\lib\site-packages\pip\basecommand.py", line 134, in main
    status = self.run(options, args)
  File "Q:\tools\Python27\lib\site-packages\pip\commands\install.py", line 241, in run
    requirement_set.install(install_options, global_options, root=options.root_path)
  File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 1298, in install
    requirement.install(install_options, global_options, *args, **kwargs)
  File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 668, in install
    os.remove(record_filename)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'c:\\docume~1\\enojb\\locals~1\\temp\\pip-aae65s-record\\install-record.txt'

Storing complete log in c:/Documents and Settings/enojb\pip\pip.log

I tried deleting the mentioned file but just got the same error message again. Is that a bento/pip/setuptools bug? I notice that the bento docs don't mention pip on the installation page: http://cournape.github.io/Bento/html/install.html

Here's the appropriate version information:

$ pip --version
pip 1.4.1 from q:\tools\python27\lib\site-packages (python 2.7)
$ python --version
Python 2.7.5
$ python -c 'import setuptools; print(setuptools.__version__)'
1.1

(I just very carefully updated pip/setuptools based on Paul's previous instructions).
The bento setup.py uses bento's own setup() command: https://github.com/cournape/Bento/blob/master/setup.py Oscar From p.f.moore at gmail.com Fri Aug 30 14:26:59 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Aug 2013 13:26:59 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 August 2013 12:54, Oscar Benjamin wrote: > setup.py that would satisfy those commands. It could be as easy as: > > if sys.argv[1] == 'bdist_wheel': > sys.exit(subprocess.call(['bentomaker', 'build_wheel']) > > or whatever. OK, inspiration moment for me. It had never occurred to me that a project could do something like that, and now I see it explained, I understand the various discussions about standardising command line APIs much better. Thanks! > I tried deleting the mentioned file but just got the same error > message again. Is that a bento/pip/setuptools bug? I notice that the > bento docs don't mention pip on the installation page: > http://cournape.github.io/Bento/html/install.html > > I've never yet got bento working for me. I have assumed that some aspect of Windows, Python 3.x, pip or oddities of my setup are not supported yet, and have simply left it as "not for prime time yet" (based on the 0.1.1 version number) in my mind. So it's always remained as the "obvious example of an alternative build system" while still being theoretical. I'd love to see something concrete though, as these debates tend to get remarkably abstract at times :-) Paul. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Fri Aug 30 14:37:13 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 30 Aug 2013 08:37:13 -0400 Subject: [Distutils] PEP 449 -- Removal of the PyPI Mirror Auto Discovery and Naming Scheme In-Reply-To: References: Message-ID: <99C68E67-2B9C-4887-AF2E-138F3EF9C660@stufft.io> On Aug 29, 2013, at 9:33 PM, Richard Jones wrote: > Hi all, > > Given that I believe all outstanding issues with PEP 449 < http://python.org/dev/peps/pep-0449/> have been resolved I will accept it in its current form (last modified August 16) so the immediate changes may be made and publicity of the change can be started. > > My thanks to everyone involved in trying to make mirroring work, and also for everyone who has contributed to getting this PEP to an acceptable state. > Awesome, thanks! ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Fri Aug 30 14:49:10 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 30 Aug 2013 08:49:10 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On Fri, Aug 30, 2013 at 7:54 AM, Oscar Benjamin wrote: > On 29 August 2013 16:49, Paul Moore wrote: >> On 29 August 2013 16:02, Oscar Benjamin wrote: >>>On 29 August 2013 15:30, Antoine Pitrou wrote: >> [...] >>>> (after all, it's just "python setup.py build_bdist", or something :-)) >>> >>> The point is that pip and other packaging tools will use 'python >>> setup.py ...' to do all the building and wheel making and so on. 
>>> However the required interface that setup.py should expose is not >>> documented anywhere and is essentially implementation defined where >>> the implementation is the setup() function from a recent version of >>> setuptools. In the interest of standardising the required parts of >>> existing practice the required subset of this interface should be >>> documented. >> >> Specifically, the command is >> >> python setup.py bdist_wheel >> >> But that requires the wheel project and setuptools to be installed, >> and we're not going to require all users to have those available. >> >> Also, other projects can build wheels with different commands/interfaces: >> * distlib says put all your built files in a set of directories then >> do wheel.build(paths=path_mapping) - no setup.py needed at all >> * pip says pip wheel requirement (but that uses setuptools/wheel under the hood) >> * bento might do something completely different > > Yes, but whatever is used if the required interface from setup.py is > documented it's easy enough for a distribution author to create a > setup.py that would satisfy those commands. It could be as easy as: > > if sys.argv[1] == 'bdist_wheel': > sys.exit(subprocess.call(['bentomaker', 'build_wheel']) > > or whatever. Then, if bento is build-required (or distributed in the > sdist), 'pip wheel' would work right? Bento could even ship/generate > setup.py files for bento-using distributions to use (I assume > 'bentomaker sdist' does actually do this but I got an error installing > bento; see below...). > > However, right now it's not clear exactly what the command line > interface would need to be e.g.: Should setup.py process any optional > arguments? How should it know what filename to give the wheel and what > directory to put it in? Should the setup.py assume that its current > working directory is the VCS checkout or unpacked sdist directory? > Will pip et al. infer sucess/failure from the return code? 
Who is > supposed to be responsible for any cleanup if necessary? > >> The whole question of standardising the command line API for building >> (sdists and) wheels is being avoided at the moment, as it's going to >> be another long debate (setup.py is too closely associated with >> distutils and/or setuptools for some people). > > Yes but rather than try to think of something better I'm just > suggesting to document what is *already* required, with some guarantee > of backward compatibility that will be respected in the future. Even > if wheels become commonplace and are used by the most significant > projects there will still be a need to build some distributions from > source e.g. because the authors didn't build a wheel for your > architecture, or the user/author prefer to make build-time > optimisations etc. > >> AIUI, we're sort of moving towards the "official" command line API >> being pip's (so "pip wheel XXX") but that's not a complete answer as >> currently pip internally just uses the setup.py command line, and the >> intention is to decouple the two so that alternative build tools (like >> bento, I guess) get a look in. It's all a bit vague at the moment, >> though, because nobody has even looked at what alternative build tools >> might even be needed. >> >> I could have this completely wrong, though - we're trying very hard to >> keep the work in small chunks, and building is not one of those chunks >> yet. > > Leaving the build infrastructure alone for now seems reasonable to me. > However if a static target is created for third-party build tools then > there could be more progress on that front. 
> > I just tried to install bento to test it out and: > > $ pip install bento > Downloading/unpacking bento > Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded > Running setup.py egg_info for package bento > Installing collected packages: bento > Running setup.py install for bento > Could not find .egg-info directory in install record for bento > Cleaning up... > Exception: > Traceback (most recent call last): > File "Q:\tools\Python27\lib\site-packages\pip\basecommand.py", line > 134, in main > status = self.run(options, args) > File "Q:\tools\Python27\lib\site-packages\pip\commands\install.py", > line 241, in run > requirement_set.install(install_options, global_options, > root=options.root_path) > File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 1298, in install > requirement.install(install_options, global_options, *args, **kwargs) > File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 668, in install > os.remove(record_filename) > WindowsError: [Error 32] The process cannot access the file because it > is being used by another process: > 'c:\\docume~1\\enojb\\locals~1\\temp\\pip-aae65s-record\\install-record.txt' > > Storing complete log in c:/Documents and Settings/enojb\pip\pip.log > > I tried deleting the mentioned file but just got the same error > message again. Is that a bento/pip/setuptools bug? I notice that the > bento docs don't mention pip on the installation page: > http://cournape.github.io/Bento/html/install.html > > Here's the appropriate version information: > > $ pip --version > pip 1.4.1 from q:\tools\python27\lib\site-packages (python 2.7) > $ python --version > Python 2.7.5 > $ python -c 'import setuptools; print(setuptools.__version__)' > 1.1 > > (I just very carefully updated pip/setuptools based on Paul's previous > instructions). > > The bento setup.py uses bento's own setup() command: > https://github.com/cournape/Bento/blob/master/setup.py It looks like you cannot install bento itself using pip on Windows. 
It might be a Windows bug "WindowsError: [Error 32] The process cannot access the file because it is being used by another process:". It's a little better on Linux (it gets installed) but I don't think Bento is really meant to be installed in this way. Will pip always use setuptools to build packages that were *designed* to use distutils or setuptools? Yes. This does not mean pip is tied to setuptools. It only means a lot of packages are. In the future pip will also be able to detect that a package was designed to work with a non-distutils-derived build system and invoke said system via the plugin interface. Even now a non-distutils setup.py won't be affected by pip's forced setuptools monkey patching since it won't be importing the monkey patched code. From ncoghlan at gmail.com Fri Aug 30 14:50:23 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Aug 2013 22:50:23 +1000 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 August 2013 22:26, Paul Moore wrote: > On 30 August 2013 12:54, Oscar Benjamin wrote: >> >> setup.py that would satisfy those commands. It could be as easy as: >> >> if sys.argv[1] == 'bdist_wheel': >> sys.exit(subprocess.call(['bentomaker', 'build_wheel']) >> >> or whatever. > > > OK, inspiration moment for me. It had never occurred to me that a project > could do something like that, and now I see it explained, I understand the > various discussions about standardising command line APIs much better. > Thanks!

If you look at bento's setup.py it's even simpler:

    from bento.distutils.monkey_patch import setup
    setup()

>> I tried deleting the mentioned file but just got the same error >> message again. Is that a bento/pip/setuptools bug? I notice that the >> bento docs don't mention pip on the installation page: >> http://cournape.github.io/Bento/html/install.html >> > > I've never yet got bento working for me.
I have assumed that some aspect of > Windows, Python 3.x, pip or oddities of my setup are not supported yet, and > have simply left it as "not for prime time yet" (based on the 0.1.1 version > number) in my mind. So it's always remained as the "obvious example of an > alternative build system" while still being theoretical. Could just be a Windows incompatibility - *nix will let another process open the same file quite happily. > I'd love to see something concrete though, as these debates tend to get > remarkably abstract at times :-) d2to1 is actually my "go to" example of an alternate build system that people are already using. A setup.py style shim for a setup.cfg based project. Although d2to1 may just embed the metadata directly in setup.py - I haven't actually looked into it in detail. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Aug 30 15:18:06 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Aug 2013 23:18:06 +1000 Subject: [Distutils] Rejecting PEP 439: implicit pip bootstrapping Message-ID: Donald's PEP 453 for explicit pip bootstrapping (by inclusion of a "getpip" module in the standard library, automatically executed during installation by the CPython binary installers) is now available on python.org. Accordingly, I am now formally rejecting the implicit bootstrapping proposal that Richard put together for PEP 439. The explicit bootstrapping proposal in PEP 453 should provide a more reliable experience, both on initial installation and when upgrading both pip and CPython. Discussions on PEP 453 will likely be split - here on distutils-sig to ensure the proposal is acceptable to the pip developers, and on python-dev to ensure it is acceptable to the CPython team. As a proposal with a direct impact on CPython, final pronouncement on PEP 453 will take place on python-dev rather than here. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From oscar.j.benjamin at gmail.com Fri Aug 30 15:33:44 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Fri, 30 Aug 2013 14:33:44 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On 30 August 2013 13:49, Daniel Holth wrote: > On Fri, Aug 30, 2013 at 7:54 AM, Oscar Benjamin wrote: >> >> I just tried to install bento to test it out and: >> >> $ pip install bento >> Downloading/unpacking bento >> Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded >> Running setup.py egg_info for package bento >> Installing collected packages: bento >> Running setup.py install for bento >> Could not find .egg-info directory in install record for bento >> Cleaning up... >> Exception: >> Traceback (most recent call last): >> File "Q:\tools\Python27\lib\site-packages\pip\basecommand.py", line >> 134, in main >> status = self.run(options, args) >> File "Q:\tools\Python27\lib\site-packages\pip\commands\install.py", >> line 241, in run >> requirement_set.install(install_options, global_options, >> root=options.root_path) >> File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 1298, in install >> requirement.install(install_options, global_options, *args, **kwargs) >> File "Q:\tools\Python27\lib\site-packages\pip\req.py", line 668, in install >> os.remove(record_filename) >> WindowsError: [Error 32] The process cannot access the file because it >> is being used by another process: >> 'c:\\docume~1\\enojb\\locals~1\\temp\\pip-aae65s-record\\install-record.txt' >> >> Storing complete log in c:/Documents and Settings/enojb\pip\pip.log >> >> I tried deleting the mentioned file but just got the same error >> message again. Is that a bento/pip/setuptools bug? 
I notice that the >> bento docs don't mention pip on the installation page: >> http://cournape.github.io/Bento/html/install.html >> >> Here's the appropriate version information: >> >> $ pip --version >> pip 1.4.1 from q:\tools\python27\lib\site-packages (python 2.7) >> $ python --version >> Python 2.7.5 >> $ python -c 'import setuptools; print(setuptools.__version__)' >> 1.1 >> >> (I just very carefully updated pip/setuptools based on Paul's previous >> instructions). >> >> The bento setup.py uses bento's own setup() command: >> https://github.com/cournape/Bento/blob/master/setup.py > > It looks like you cannot install bento itself using pip on Windows. It > might be a Windows bug "WindowsError: [Error 32] The process cannot > access the file because it is being used by another process:". It's a > little better on Linux (it gets installed) but I don't think Bento is > really meant to be installed in this way.

It's a bug in pip. The file in question is opened by pip a few lines above. The particular code path is called because the else logger.warn() clause gets triggered (i.e. where it says "## FIX ME" :) )

    f = open(record_filename)
    for line in f:
        line = line.strip()
        if line.endswith('.egg-info'):
            egg_info_dir = prepend_root(line)
            break
    else:
        logger.warn('Could not find .egg-info directory in install record for %s' % self)
        ## FIXME: put the record somewhere
        ## FIXME: should this be an error?
        return
    f.close()
    new_lines = []
    f = open(record_filename)
    for line in f:
        filename = line.strip()
        if os.path.isdir(filename):
            filename += os.path.sep
        new_lines.append(make_path_relative(prepend_root(filename), egg_info_dir))
    f.close()
    f = open(os.path.join(egg_info_dir, 'installed-files.txt'), 'w')
    f.write('\n'.join(new_lines)+'\n')
    f.close()
finally:
    if os.path.exists(record_filename):
        os.remove(record_filename)
    os.rmdir(temp_location)

The error comes from the os.remove line 2nd from bottom. The file was opened in the top line.
The logger.warn code path returns without closing the file. If I add f.close() just before return then I get:

$ pip install bento
Downloading/unpacking bento
Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded
Running setup.py egg_info for package bento
Installing collected packages: bento
Running setup.py install for bento
Could not find .egg-info directory in install record for bento
Successfully installed bento
Cleaning up...

It's probably better to use the with statement though.

Oscar

From donald at stufft.io Fri Aug 30 15:44:58 2013
From: donald at stufft.io (Donald Stufft)
Date: Fri, 30 Aug 2013 09:44:58 -0400
Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations
Message-ID:

Abstract
========

This PEP proposes the inclusion of a method for explicitly bootstrapping `pip`_ as the default package manager for Python. It also proposes that the distributions of Python available via Python.org will automatically run this explicit bootstrapping method, and recommends that third party redistributors of Python also provide pip by default (in a way reasonable for their distributions). This PEP does *not* propose the inclusion of pip itself in the standard library.

Proposal
========

This PEP proposes the inclusion of a ``getpip`` bootstrapping module in Python 3.4, as well as in the upcoming maintenance releases of Python 2.7 and Python 3.3.

Rationale
=========

Installing a third party package into a freshly installed Python requires first installing the package manager. This requires users to know ahead of time what the package manager is, where to get it from, and how to install it. The effect of this is that external projects are forced to either blindly assume the user already has the package manager installed, duplicate the instructions and tell their users how to install the package manager, or completely forgo the use of dependencies to ease installation concerns for their users.
All of the available options have their own drawbacks. If a project simply assumes a user already has the tooling then they get a confusing error message when the installation command doesn't work. Some operating systems may ease this pain by providing a global hook that looks for commands that don't exist and suggests an OS package the user can install to make the command work.

If a project chooses to duplicate the installation instructions and tell their users how to install the package manager before telling them how to install their own project, then whenever these instructions need updates they need updating by every project that has duplicated them. This will inevitably not happen in every case, leaving many different sets of instructions on how to install it, many of them broken or less than optimal. These additional instructions might also confuse users who try to install the package manager a second time, thinking that it's part of the instructions of installing the project.

The problem of stale instructions can be alleviated by referencing `pip's own bootstrapping instructions `__, but the user experience involved still isn't good (especially on Windows, where downloading and running a Python script with the default OS configuration is significantly more painful than downloading and running a binary executable or installer).

The projects that have decided to forgo dependencies altogether are forced to either duplicate the efforts of other projects by inventing their own solutions to problems or to simply include the other projects in their own source trees. Both of these options present their own problems, either in duplicating maintenance work across the ecosystem or in potentially leaving users vulnerable to security issues because the included code or duplicated efforts are not automatically updated when upstream releases a new version.
By providing the package manager by default it will be easier for users trying to install these third party packages, as well as easier for the people distributing them, as they no longer need to pick the lesser evil. This will become more important in the future as the Wheel_ package format does not have a built-in "installer" in the form of ``setup.py``, so users wishing to install a Wheel package will need an installer even in the simple case.

Reducing the burden of actually installing a third party package should also decrease the pressure to add every useful module to the standard library. This will allow additions to the standard library to focus more on why Python should have a particular tool out of the box, instead of needing to use the difficulty of installing a package as justification for inclusion.

Explicit Bootstrapping
======================

An additional module called ``getpip`` will be added to the standard library whose purpose is to install pip and any of its dependencies into the appropriate location (most commonly site-packages). It will expose a single callable named ``bootstrap()`` as well as offer direct execution via ``python -m getpip``. Options for installing it such as index server, installation location (``--user``, ``--root``, etc) will also be available to enable different installation schemes.

It is believed that users will want the most recent versions available to be installed so that they can take advantage of the new advances in packaging. Since any particular version of Python has much longer staying power than a version of pip, the bootstrap will contact PyPI, find the latest version, download it, and then install it in order to satisfy the user's desire for the most recent version. This process is security sensitive, difficult to get right, and evolves along with the rest of packaging.
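The behaviour described here, preferring the newest release from PyPI when the network is available and (as the PEP goes on to describe) falling back to the bundled private copy when it is not, reduces to a small decision. A sketch with hypothetical names, since ``getpip`` does not exist yet:

```python
def select_pip_source(online, latest_on_pypi, bundled_version):
    """Illustrative only: which pip the proposed bootstrap would install."""
    if online and latest_on_pypi is not None:
        # Network access to PyPI: discover and install the latest release,
        # so users need not upgrade immediately after bootstrapping.
        return ("pypi", latest_on_pypi)
    # Offline (or PyPI unreachable): install the bundled private copy.
    return ("bundled", bundled_version)
```

The function names and return values are invented for illustration; the PEP deliberately leaves the real implementation as a detail of the ``getpip`` module.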
Instead of attempting to maintain a "mini pip" for the sole purpose of installing pip, the ``getpip`` module will, as an implementation detail, include a private copy of pip which will be used to discover and install pip from PyPI. It is important to stress that this private copy of pip is *only* an implementation detail and it should *not* be relied on or assumed to exist.

Not all users will have network access to PyPI whenever they run the bootstrap. In order to ensure that these users will still be able to bootstrap pip, the bootstrap will fall back to simply installing the included copy of pip. This presents a balance between giving users the latest version of pip, saving them from needing to immediately upgrade pip after bootstrapping it, and allowing the bootstrap to work offline in situations where users might already have packages downloaded that they wish to install.

Updating the Bundled pip
------------------------

In order to keep up with evolutions in packaging, as well as providing users who are using the offline installation method with as recent a version as possible, the ``getpip`` module should be updated to the latest versions of everything it bootstraps. During the preparation for any release of Python, a script, provided as part of this PEP, should be run to update the bundled packages to the latest versions. This means that maintenance releases of the CPython installers will include an updated version of the ``getpip`` bootstrap module.

Pre-installation
================

During the installation of Python from Python.org ``python -m getpip`` should be executed. Leaving people using the Windows or OSX installers with a working copy of pip once the installation has completed. The exact method of this is left up to the maintainers of the installers however if the bootstrapping is optional it should be opt out rather than opt in.
The Windows and OSX installers distributed by Python.org will automatically attempt to run ``python -m getpip`` by default however the ``make install`` and ``make altinstall`` commands of the source distribution will not.

Keeping the pip bootstrapping as a separate step for make based installations should minimize the changes CPython redistributors need to make to their build processes. Avoiding the layer of indirection through make for the getpip invocation also ensures those installing from a custom source build can easily force an offline installation of pip, install it from a private index server, or skip installing pip entirely.

Python Virtual Environments
===========================

Python 3.3 included a standard library approach to virtual Python environments through the ``venv`` module. Since its release it has become clear that very few users have been willing to use this feature in part due to the lack of an installer present by default inside of the virtual environment. They have instead opted to continue using the ``virtualenv`` package which *does* include pip installed by default.

To make the ``venv`` more useful to users it will be modified to issue the pip bootstrap by default inside of the new environment while creating it. This will allow people the same convenience inside of the virtual environment as this PEP provides outside of it as well as bringing the ``venv`` module closer to feature parity with the external ``virtualenv`` package making it a more suitable replacement.

Recommendations for Downstream Distributors
===========================================

A common source of Python installations are through downstream distributors such as the various Linux Distributions [#ubuntu]_ [#debian]_ [#fedora]_, OSX package managers [#homebrew]_, or python specific tools [#conda]_.
In order to provide a consistent, user friendly experience to all users of Python regardless of how they attained Python this PEP recommends and asks that downstream distributors:

* Ensure that whenever Python is installed pip is also installed.

  * This may take the form of separate packages with dependencies on each other so that installing the python package installs the pip package and installing the pip package installs the Python package.

* Do not remove the bundled copy of pip.

  * This is required for offline installation of pip into a virtual environment.
  * This is similar to the existing ``virtualenv`` package for which many downstream distributors have already made exception to the common "debundling" policy.
  * This does mean that if ``pip`` needs to be updated due to a security issue, so does the bundled version in the ``getpip`` bootstrap module.

* Migrate build systems to utilize `pip`_ and `Wheel`_ instead of directly using ``setup.py``.

  * This will make it easier for downstream packages to utilize the new formats, which will not have a ``setup.py``.

* Ensure that all features of this PEP continue to work with any modifications made.

  * Online installation of the latest version of pip into a global or virtual python environment using ``python -m getpip``.
  * Offline installation of the bundled version of pip into a global or virtual python environment using ``python -m getpip``.
  * ``pip install --upgrade pip`` in a global installation should not affect any already created virtual environments.
  * ``pip install --upgrade pip`` in a virtual environment should not affect the global installation.

Policies & Governance
=====================

The maintainers of the bundled software and the CPython core team will work together in order to address the needs of both. The bundled software will still remain external to CPython and this PEP does not include CPython subsuming the responsibilities or decisions of the bundled software.
This PEP aims to decrease the burden on end users wanting to use third party packages and the decisions inside it are pragmatic ones that represent the trust that the Python community has placed in the authors and maintainers of the bundled software.

Backwards Compatibility
-----------------------

The public API of the ``getpip`` module itself will fall under the typical backwards compatibility policy of Python for its standard library. The externally developed software that this PEP bundles does not.

Security Releases
-----------------

Any security update that affects the ``getpip`` module will be shared prior to release with the PSRT. The PSRT will then decide if the issue warrants a security release of Python.

Appendix: Rejected Proposals
============================

Implicit Bootstrap
------------------

`PEP439`_, the predecessor for this PEP, proposes its own solution. Its solution involves shipping a fake ``pip`` command that, when executed, would implicitly bootstrap and install pip if it does not already exist. This has been rejected because it is too "magical". It hides from the end user when exactly the pip command will be installed or that it is being installed at all. It also does not provide any recommendations or considerations towards downstream packagers who wish to manage the globally installed pip through the mechanisms typical for their system.

Including pip In the Standard Library
-------------------------------------

Similar to this PEP is the proposal of just including pip in the standard library. This would ensure that Python always includes pip and fixes all of the end user facing problems with not having pip present by default. This has been rejected because we've learned, through the inclusion and history of ``distutils`` in the standard library, that losing the ability to update the packaging tools independently can leave the tooling in a state of constant limbo.
This leaves the tooling unable to evolve in a timeframe that actually affects users, as any new features will not be available to the general population for *years*. Allowing the packaging tools to progress separately from the Python release and adoption schedules allows the improvements to be used by *all* members of the Python community and not just those able to live on the bleeding edge of Python releases.

.. _Wheel: http://www.python.org/dev/peps/pep-0427/
.. _pip: http://www.pip-installer.org
.. _setuptools: https://pypi.python.org/pypi/setuptools
.. _PEP439: http://www.python.org/dev/peps/pep-0439/

References
==========

.. [#ubuntu] `Ubuntu `
.. [#debian] `Debian `
.. [#fedora] `Fedora `
.. [#homebrew] `Homebrew `
.. [#conda] `Conda `

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From donald at stufft.io Fri Aug 30 15:45:55 2013
From: donald at stufft.io (Donald Stufft)
Date: Fri, 30 Aug 2013 09:45:55 -0400
Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations
In-Reply-To: References: Message-ID: <8F4564F1-938B-43E2-87F9-9D7BB8923832@stufft.io>

On Aug 30, 2013, at 9:44 AM, Donald Stufft wrote:
> [snip]

Available online at http://www.python.org/dev/peps/pep-0453/

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Fri Aug 30 17:39:09 2013 From: brett at python.org (Brett Cannon) Date: Fri, 30 Aug 2013 11:39:09 -0400 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: On Fri, Aug 30, 2013 at 9:44 AM, Donald Stufft wrote: > [SNIP] > Pre-installation > ================ > > During the installation of Python from Python.org ``python -m getpip`` > should > be executed. Leaving people using the Windows or OSX installers with a > working > copy of pip once the installation has completed. "should be executed, leaving". > The exact method of this is > left up to the maintainers of the installers however if the bootstrapping > is > optional it should be opt out rather than opt in. > "installers, however" "opt-in", "opt-out" > > The Windows and OSX installers distributed by Python.org will automatically > attempt to run ``python -m getpip`` by default however the ``make install`` > and ``make altinstall`` commands of the source distribution will not. > Is the plan to leave getpip entirely out of the source distribution or to have it checked into hg.python.org/cpython? > > Keeping the pip bootstrapping as a separate step for make based > installations should minimize the changes CPython redistributors need to > make to their build processes. Avoiding the layer of indirection through > make for the getpip invocation also ensures those installing from a custom > source build can easily force an offline installation of pip, install it > from a private index server, or skip installing pip entirely. > > > Python Virtual Environments > =========================== > > Python 3.3 included a standard library approach to virtual Python > environments > through the ``venv`` module. 
Since it's release it has become clear that > very > few users have been willing to use this feature in part due to the lack of > an installer present by default inside of the virtual environment. They > have > instead opted to continue using the ``virtualenv`` package which *does* > include > pip installed by default. > > To make the ``venv`` more useful to users it will be modified to issue the > pip bootstrap by default inside of the new environment while creating it. > This > will allow people the same convenience inside of the virtual environment as > this PEP provides outside of it as well as bringing the ``venv`` module > closer > to feature parity with the external ``virtualenv`` package making it a more > suitable replacement. > What about a --without-pip option? > > > Recommendations for Downstream Distributors > =========================================== > > A common source of Python installations are through downstream distributors > such as the various Linux Distributions [#ubuntu]_ [#debian]_ [#fedora]_, > OSX > package managers [#homebrew]_, or python specific tools [#conda]_. In > order to > provide a consistent, user friendly experience to all users of Python > regardless of how they attained Python this PEP recommends and asks that > downstream distributors: > > * Ensure that whenever Python is installed pip is also installed. > > * This may take the form of separate with dependencies on each either so > that > installing the python package installs the pip package and installing > the > pip package installs the Python package. > > * Do not remove the bundled copy of pip. > > * This is required for offline installation of pip into a virtual > environment. > * This is similar to the existing ``virtualenv`` package for which many > downstream distributors have already made exception to the common > "debundling" policy. 
> * This does mean that if ``pip`` needs to be updated due to a security > issue, so does the bundled version in the ``getpip`` bootstrap module > > * Migrate build systems to utilize `pip`_ and `Wheel`_ instead of directly > using ``setup.py``. > > * This will ensure that downstream packages can utilize the new formats > which > will not have a ``setup.py`` easier. > > * Ensure that all features of this PEP continue to work with any > modifications > made. > > * Online installation of the latest version of pip into a global or > virtual > python environment using ``python -m getpip``. > * Offline installation of the bundled version of pip into a global or > virtual > python environment using ``python -m getpip``. > * ``pip install --upgrade pip`` in a global installation should not > affect > any already created virtual environments. > * ``pip install --upgrade pip`` in a virtual environment should not > affect > the global installation. > > > Policies & Governance > ===================== > > The maintainers of the bundled software and the CPython core team will work > together in order to address the needs of both. The bundled software will > still > remain external to CPython and this PEP does not include CPython subsuming > the > responsibilities or decisions of the bundled software. This PEP aims to > decrease the burden on end users wanting to use third party packages and > the > decisions inside it are pragmatic ones that represent the trust that the > Python community has placed in the authors and maintainers of the bundled > software. > This should specify if it is ever expected to be kept in the CPython repo (and thus distributed in the source tarball) or if it will simply be bundled in the installers (and thus never included with the source). -Brett -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Fri Aug 30 17:49:45 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 30 Aug 2013 11:49:45 -0400 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: On Aug 30, 2013, at 11:39 AM, Brett Cannon wrote: > What about a --without-pip option? Added this and the grammar mistakes you noted (I suck at this english thing~). > This should specify if it is ever expected to be kept in the CPython repo (and thus distributed in the source tarball) or if it will simply be bundled in the installers (and thus never included with the source). Nick may say differently (he's much more "in tune" with what files are owned by which processes) but I'd expect that the ``getpip`` module will be included as a normal part of the standard library and with it a tarball (or a Wheel) containing the sources of the bundled software. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Fri Aug 30 18:02:07 2013 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Aug 2013 17:02:07 +0100 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: On 30 August 2013 14:44, Donald Stufft wrote: > The Windows and OSX installers distributed by Python.org will automatically > attempt to run ``python -m getpip`` by default however the ``make install`` > and ``make altinstall`` commands of the source distribution will not. > Presumably the uninstaller components of these installers should similarly uninstall pip before uninstalling Python. Would something like a "python -m getpip uninstall" command be worthwhile to support this? 
(That's probably a question for the authors of the installers to answer, as I don't know if such a command would be needed - I suspect that the Windows MSI installer just records what files it installs and uninstalls them, so it may not be needed there). But otherwise +1 on this PEP.

Paul

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From qwcode at gmail.com Fri Aug 30 18:04:09 2013
From: qwcode at gmail.com (Marcus Smith)
Date: Fri, 30 Aug 2013 09:04:09 -0700
Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations
In-Reply-To: References: Message-ID:

There's no mention of setuptools. I guess the handling of that dependency (or not handling it up front, if pip is refactored such that setuptools just becomes a dependency when building) is considered an implementation detail of get-pip?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Fri Aug 30 18:07:47 2013
From: donald at stufft.io (Donald Stufft)
Date: Fri, 30 Aug 2013 12:07:47 -0400
Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations
In-Reply-To: References: Message-ID:

On Aug 30, 2013, at 12:04 PM, Marcus Smith wrote:
> There's no mention of setuptools.
> I guess the handling of that dependency (or not handling it up front, if pip is refactored such that setuptools just becomes a dependency when building) is considered an implementation detail of get-pip?

It's not mentioned by name but instead it's said:

    An additional module called ``getpip`` will be added to the standard library whose purpose is to install pip and any of its dependencies into the appropriate location (most commonly site-packages).

Setuptools would be the "and any of its dependencies". I want to make pip not depend on setuptools but I added that as an escape clause in case that proves to be hard to actually do.
The PEP itself specifies only pip because I'm considering setuptools an implementation detail of pip (one that can be replaced eventually or removed from it's "special" status). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Fri Aug 30 18:12:56 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 30 Aug 2013 09:12:56 -0700 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: ok, got it. so, theoretically if the pip/setuptools relationship stays what it is at the moment, get-pip would end up including a bundled setuptools to fulfill the "Not all users will have network access to PyPI" On Fri, Aug 30, 2013 at 9:07 AM, Donald Stufft wrote: > > On Aug 30, 2013, at 12:04 PM, Marcus Smith wrote: > > > There's no mention of setuptools. > > I guess the handling of that dependency (or not handling it up front, if > pip is refactored such that setuptools just becomes a dependency when > building) is considered an implementation detail of get-pip? > > > It's not mentioned by name but instead it's said: > > An additional module called ``getpip`` will be added to the standard > library > whose purpose is to install pip and any of its dependencies into the > appropriate location (most commonly site-packages). > > Setuptools would be the "and any of its dependencies". I want to make > setuptools not depend on setuptools but I added that as an escape clause > incase that proves to be hard to actually do. The PEP itself specifies only > pip because I'm considering setuptools an implementation detail of pip (one > that can be replaced eventually or removed from it's "special" status). 
> > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Aug 30 18:13:30 2013 From: donald at stufft.io (Donald Stufft) Date: Fri, 30 Aug 2013 12:13:30 -0400 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: <4CED6097-06F8-4698-9EAD-1EEF51EFD50B@stufft.io> On Aug 30, 2013, at 12:12 PM, Marcus Smith wrote: > ok, got it. so, theoretically if the pip/setuptools relationship stays what it is at the moment, get-pip would end up including a bundled setuptools to fulfill the "Not all users will have network access to PyPI" Yes. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Fri Aug 30 18:32:43 2013 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 30 Aug 2013 09:32:43 -0700 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: > This should specify if it is ever expected to be kept in the CPython repo > (and thus distributed in the source tarball) or if it will simply be > bundled in the installers (and thus never included with the source). > > Nick may say differently (he's much more "in tune" with what files are > owned by which processes) but I'd expect that the ``getpip`` module will be > included as a normal part of the standard library and with it a tarball (or > a Wheel) containing the sources of the bundled software. 
> will getpip also replace the current role of "get-pip.py" (available in pip's master branch), i.e. will getpip be available for download and be mentioned in the pip install instructions for python's that don't have it?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From donald at stufft.io Fri Aug 30 18:36:00 2013
From: donald at stufft.io (Donald Stufft)
Date: Fri, 30 Aug 2013 12:36:00 -0400
Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations
In-Reply-To: References: Message-ID: <0096A4C4-19DE-4442-AC30-045F767A5DED@stufft.io>

On Aug 30, 2013, at 12:32 PM, Marcus Smith wrote:
> This should specify if it is ever expected to be kept in the CPython repo (and thus distributed in the source tarball) or if it will simply be bundled in the installers (and thus never included with the source).
>
> Nick may say differently (he's much more "in tune" with what files are owned by which processes) but I'd expect that the ``getpip`` module will be included as a normal part of the standard library and with it a tarball (or a Wheel) containing the sources of the bundled software.
>
> will getpip also replace the current role of "get-pip.py" (available in pip's master branch), i.e. will getpip be available for download and be mentioned in the pip install instructions for python's that don't have it?

Pip will probably need to maintain get-pip.py for any Python version it supports that doesn't have the getpip module.
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From pje at telecommunity.com Fri Aug 30 19:15:57 2013 From: pje at telecommunity.com (PJ Eby) Date: Fri, 30 Aug 2013 13:15:57 -0400 Subject: [Distutils] Distributable binary with dependencies In-Reply-To: References: Message-ID: On Mon, Aug 26, 2013 at 2:25 PM, bharath ravi kumar wrote: > Carl, Eby, > > Thanks for taking time to suggest various alternatives. Considering that the > deployment hosts are identical in every aspect, the approach of moving > virtualenv's with packages pip-installed at build time appears the simplest, > low-overhead approach that can be implemented without hacking the > environment or resorting to custom scripts. I'll go ahead with that option. What hacking the environment or custom scripts? I'm confused, because AFAIK there are actually more steps to pip-install a virtualenv and copy it to different machines, than there are involved in using easy_install to create a portable installation. In both cases, you end up with a directory to archive and copy, so the only difference is in the commands used to build that directory, and the layout of the directory afterwards. Perhaps you misunderstood my post as meaning that you had to run easy_install on the target system? (I don't have any particular stake in what you do for your own system, but I'm curious, both for the future reference of folks reading this thread by way of Googling this question, and in case there is something for me to learn or that I'm mistaken about, in relation either to pip/virtualenv or your use case. And certainly if you are more familiar with pip+virtualenv, that would actually be sufficient reason in this case to use it. But I'd prefer future readers of this thread not to be under an erroneous impression that easy_install involves more steps, scripts, or environment changes in order to implement this use case. Thanks.)
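[Editorial sketch] The "build a virtualenv once at build time, then archive and copy it to identical hosts" workflow compared above can be outlined as follows. This is a hedged illustration only: the `deploy_commands` helper, the `/opt/app/env` path, and the POSIX `bin/pip` layout are assumptions for the example, not code from the thread.

```python
import os

def deploy_commands(env_dir, requirements):
    """Return the build-host steps as argv lists: create a virtualenv,
    then pip-install the pinned requirements into it. Archiving the
    resulting directory (e.g. with tar) and copying it to the identical
    deployment hosts is left to the caller."""
    pip = os.path.join(env_dir, "bin", "pip")
    return [
        ["virtualenv", env_dir],
        [pip, "install"] + list(requirements),
    ]

# Example: the two commands a build script would run before shipping
# /opt/app/env to the deployment hosts.
steps = deploy_commands("/opt/app/env", ["requests==2.0.0"])
```

Note this sketch only works when the deployment hosts really are identical (same Python prefix and paths), which is exactly the precondition stated in the quoted message.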
From solipsis at pitrou.net Sat Aug 31 01:18:32 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Aug 2013 23:18:32 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: Message-ID: Nick Coghlan gmail.com> writes: > > On 30 Aug 2013 17:23, "Paul Moore" gmail.com> wrote: > > > > On 30 August 2013 00:08, Nick Coghlan gmail.com> wrote: > >> > >> We also need to officially bless pip's trick of forcing the use of setuptools for distutils based setup.py files. > > > > > > Do we? What does official blessing imply? We've managed for years without the trick being "official"... > > > > The main reason it is currently used is to allow setup.py install to specify --record, so that we can get the list of installed files. If distutils added a --record flag, for example, I don't believe we'd need the hack at all. (Obviously, we'd still need setuptools so we could use wheel to build wheels, but that's somewhat different as it's a new feature). Maybe a small distutils patch is better than blessing setuptools here? > > A distutils patch won't help with Python 2.7 or 3.3. The purpose of blessing the substitution is to decouple the update cycle of the build system from the update cycle of the standard library. It sounds like a nasty hack. What you call "substitution" is actually monkey patching, right? (edit: apparently it is pre-loading setuptools, which probably does the monkey patching by itself) This is crazy. We removed packaging from the stdlib because it wasn't "good enough", and now we would "bless the substitution" (aka silent runtime monkeypatching) of distutils with setuptools, a third-party library whose stdlib inclusion has always been widely refused by the community (for many reasons)? pip can do what it likes, but blessing such behaviour officially sounds completely backwards. Regards Antoine. 
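[Editorial sketch] For readers unfamiliar with the trick being debated, here is a rough reconstruction of the mechanism (an assumption about how the "substitution" works, not pip's actual code): importing setuptools before executing a distutils-based setup.py lets setuptools' patched distutils machinery take effect, which is what makes options like `install --record` available without editing the project's setup.py.

```python
import runpy
import sys

def run_setup_with_setuptools(setup_py, argv):
    """Execute a setup.py as a script, but import setuptools first so
    its patched distutils machinery is in place when the script does
    `from distutils.core import setup`. A sketch of the idea only --
    pip's real implementation differs."""
    import setuptools  # noqa: F401 -- the import itself does the patching
    sys.argv[:] = [setup_py] + list(argv)
    runpy.run_path(setup_py, run_name="__main__")
```

The point of the sketch is that the "substitution" is triggered purely by an import on the installer's side; the project's own setup.py is left untouched.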
From ncoghlan at gmail.com Sat Aug 31 01:20:33 2013 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 31 Aug 2013 09:20:33 +1000 Subject: [Distutils] PEP453 - Explicit bootstrapping of pip in Python installations In-Reply-To: References: Message-ID: On 31 Aug 2013 02:02, "Paul Moore" wrote: > > > On 30 August 2013 14:44, Donald Stufft wrote: >> >> The Windows and OSX installers distributed by Python.org will automatically >> attempt to run ``python -m getpip`` by default however the ``make install`` >> and ``make altinstall`` commands of the source distribution will not. > > > Presumably the uninstaller components of these installers should similarly uninstall pip before uninstalling Python. Would something like a "python -m getpip uninstall" command be worthwhile to support this? (That's probably a question for the authors of the installers to answer, as I don't know if such a command would be needed - I suspect that the Windows MSI installer just records what files it installs and uninstalls them, so it may not be needed there). > > But otherwise +1 on this PEP. Yeah, a command to uninstall pip and everything it installed would be desirable. I wouldn't consider it a *blocker* for inclusion (since anyone bootstrapping pip manually already has this problem), but it would be a nice one to solve (and having it on getpip neatly avoids the issues with pip uninstalling itself on Windows). For the other questions people asked: * yes, pyvenv should gain a "--without-pip" option * the only change to the existing get-pip.py bootstrap script should be to check for "getpip" and use it if available, and otherwise continue on with the legacy bootstrap mechanism. * I agree with Donald that getpip should be an ordinary standard library Python module * To allow releases to be recreated exactly from the source tarball, I agree with Donald that we need to actually include the relevant wheel files inside the CPython source tree. 
Retrieving them automatically at build time would be nice, but that unfortunately creates ugly version reproducibility issues, and also requires that the build process be patched before it can run on a trusted build server with no network access. * If we have to bundle setuptools as well, so be it. However, we should document that it is *not* guaranteed to be available, so projects that need it should declare the appropriate dependencies and mention "pip install setuptools" in the appropriate places to ensure it is available * a "--wheel-only" option for both getpip and pyvenv might be interesting, if pip gets to a point where it only needs setuptools for source builds and not installing from wheels. However, for now, I think the PEP should just assume setuptools will be bundled and installed as a pip dependency. I'm OK with that - Guido was amenable to adding setuptools to the standard library back in 2.5 as a case of practicality beating purity. (In retrospect, that would have just meant setuptools suffered from the same "can't readily be updated in existing Python versions" problem as distutils, so it's probably a net positive that PJE eventually decided not to continue with the idea) Cheers, Nick. > > Paul > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph.bean at gmail.com Fri Aug 30 20:01:02 2013 From: ralph.bean at gmail.com (Ralph Bean) Date: Fri, 30 Aug 2013 14:01:02 -0400 Subject: [Distutils] Retiring mirror g.pypi.python.org Message-ID: <20130830180102.GA21691@radek> The admins at g.pypi.python.org/mirror.rit.edu have decided they no longer have the resources to maintain their mirror. They've already taken down the content and would like to be removed from any indexes out there (such as www.pypi-mirrors.org). 
-Ralph From dholth at gmail.com Sat Aug 31 03:57:30 2013 From: dholth at gmail.com (Daniel Holth) Date: Fri, 30 Aug 2013 21:57:30 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On Fri, Aug 30, 2013 at 7:18 PM, Antoine Pitrou wrote: > Nick Coghlan gmail.com> writes: >> >> On 30 Aug 2013 17:23, "Paul Moore" gmail.com> wrote: >> > >> > On 30 August 2013 00:08, Nick Coghlan gmail.com> wrote: >> >> >> >> We also need to officially bless pip's trick of forcing the use of > setuptools for distutils based setup.py files. >> > >> > >> > Do we? What does official blessing imply? We've managed for years > without the trick being "official"... >> > >> > The main reason it is currently used is to allow setup.py install to > specify --record, so that we can get the list of installed files. If > distutils added a --record flag, for example, I don't believe we'd need the > hack at all. (Obviously, we'd still need setuptools so we could use wheel to > build wheels, but that's somewhat different as it's a new feature). Maybe a > small distutils patch is better than blessing setuptools here? >> >> A distutils patch won't help with Python 2.7 or 3.3. The purpose of > blessing the substitution is to decouple the update cycle of the build > system from the update cycle of the standard library. > > It sounds like a nasty hack. What you call "substitution" is actually monkey > patching, > right? (edit: apparently it is pre-loading setuptools, which probably does > the monkey patching by itself) > > This is crazy. We removed packaging from the stdlib because it wasn't > "good enough", and now we would "bless the substitution" (aka silent runtime > monkeypatching) of distutils with setuptools, a third-party library whose stdlib > inclusion has always been widely refused by the community (for many reasons)? > > pip can do what it likes, but blessing such behaviour officially sounds > completely > backwards. > > Regards > > Antoine. 
One of the most important packaging insights to understand is that distutils is in fact worse than setuptools. It is badly outdated, poorly extensible, and too complex to ever re-implement in a compatible way. It is not better before the monkey patching. In a perfect world it should be removed from the standard library too except that removing distutils would be too impractical. The new strategy is to standardize just the install (how to install a wheel binary package after it has already been built) and runtime for eventual standard library inclusion. These are simple enough that they can be documented and re-implemented adequately. Distutils and setuptools will both be equally discouraged as legacy build tools. *Hopefully* a dominant 80% simpler-use-cases distutils replacement will emerge along with a more complicated 20% "scipy" distutils replacement but neither will necessarily need to be in the standard library; more complicated build+install tools that can deal with both sdists and wheels will be able to grab the necessary build tool as part of the build. If you can believe that distutils, not setuptools, is the problem we are trying to recover from, then the monkeypatching strategy may make more sense. We are also making setuptools optional *at run time* even for packages that need to be built under setuptools. From noah at coderanger.net Sat Aug 31 04:58:01 2013 From: noah at coderanger.net (Noah Kantrowitz) Date: Fri, 30 Aug 2013 19:58:01 -0700 Subject: [Distutils] Retiring mirror g.pypi.python.org In-Reply-To: <20130830180102.GA21691@radek> References: <20130830180102.GA21691@radek> Message-ID: DNS and LB config have been updated and should take effect over the next day or so. --Noah On Aug 30, 2013, at 11:01 AM, Ralph Bean wrote: > The admins at g.pypi.python.org/mirror.rit.edu have decided they no > longer have the resources to maintain their mirror. 
They've already > taken down the content and would like to be removed from any indexes > out there (such as www.pypi-mirrors.org). > > -Ralph > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From tseaver at palladion.com Sat Aug 31 05:25:30 2013 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 30 Aug 2013 23:25:30 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 08/30/2013 09:57 PM, Daniel Holth wrote: > One of the most important packaging insights to understand is that > distutils is in fact worse than setuptools. It is badly outdated, > poorly extensible, and too complex to ever re-implement in a > compatible way. It is not better before the monkey patching. In a > perfect world it should be removed from the standard library too > except that removing distutils would be too impractical. > If you can believe that distutils, not setuptools, is the problem we > are trying to recover from, then the monkeypatching strategy may make > more sense. Amen! Tres. 
- -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with undefined - http://www.enigmail.net/ iEYEARECAAYFAlIhYiIACgkQ+gerLs4ltQ52SwCcDd/Nt8ETGsLIpxbiksYiayfO 59gAn1VxLMvZq4akWuzJsdCNUlBUUxGh =DFtl -----END PGP SIGNATURE----- From solipsis at pitrou.net Sat Aug 31 12:30:23 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Aug 2013 10:30:23 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: Message-ID: Daniel Holth gmail.com> writes: > > One of the most important packaging insights to understand is that > distutils is in fact worse than setuptools. It is badly outdated, > poorly extensible, and too complex to ever re-implement in a > compatible way. It is not better before the monkey patching. In a > perfect world it should be removed from the standard library too > except that removing distutils would be too impractical. Well, in a perfect world it would have a replacement. It would not just "be removed" and replaced with a black hole. > The new strategy is to standardize just the install (how to install a > wheel binary package after it has already been built) and runtime for > eventual standard library inclusion. These are simple enough that they > can be documented and re-implemented adequately. Distutils and > setuptools will both be equally discouraged as legacy build tools. If setuptools is "discouraged", why "bless it officially"? That doesn't make sense. > *Hopefully* a dominant 80% simpler-use-cases distutils replacement > will emerge along "Hopefully"? "Emerge" magically? Software doesn't grow on trees... 
> with a more complicated 20% "scipy" distutils > replacement but neither will necessarily need to be in the standard > library; If users start to have to install third-party software to build and package their own libraries, then it's a huge regression (regardless of what *you* may think about "batteries included"). > If you can believe that distutils, not setuptools, is the problem we > are trying to recover from, then the monkeypatching strategy may make > more sense. I do *not* believe there is a single "problem" we are trying to "recover from". This is IMHO a crude way of pointing fingers, while the truth is that no popular replacement has yet "emerged" despite at least 10 years of frustration. Regards Antoine. From donald at stufft.io Sat Aug 31 12:34:07 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 31 Aug 2013 06:34:07 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On Aug 31, 2013, at 6:30 AM, Antoine Pitrou wrote: > Daniel Holth gmail.com> writes: >> >> One of the most important packaging insights to understand is that >> distutils is in fact worse than setuptools. It is badly outdated, >> poorly extensible, and too complex to ever re-implement in a >> compatible way. It is not better before the monkey patching. In a >> perfect world it should be removed from the standard library too >> except that removing distutils would be too impractical. > > Well, in a perfect world it would have a replacement. It would not > just "be removed" and replaced with a black hole. > >> The new strategy is to standardize just the install (how to install a >> wheel binary package after it has already been built) and runtime for >> eventual standard library inclusion. These are simple enough that they >> can be documented and re-implemented adequately. Distutils and >> setuptools will both be equally discouraged as legacy build tools. > > If setuptools is "discouraged", why "bless it officially"? > That doesn't make sense. 
> >> *Hopefully* a dominant 80% simpler-use-cases distutils replacement >> will emerge along > > "Hopefully"? "Emerge" magically? Software doesn't grow on trees... > >> with a more complicated 20% "scipy" distutils >> replacement but neither will necessarily need to be in the standard >> library; > > If users start to have to install third-party software to build and > package their own libraries, then it's a huge regression (regardless > of what *you* may think about "batteries included"). I haven't followed this thread yet, but just to comment on this users are already installing third party software to build and package their own libraries. From what i've seen working with packaging distutils is either a fallback or not used at all in the bulk of cases. > >> If you can believe that distutils, not setuptools, is the problem we >> are trying to recover from, then the monkeypatching strategy may make >> more sense. > > I do *not* believe there is a single "problem" we are trying to "recover > from". This is IMHO a crude way of pointing fingers, while the truth > is that no popular replacement has yet "emerged" despite at least 10 years > of frustration. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Aug 31 12:46:17 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 31 Aug 2013 06:46:17 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: On Aug 31, 2013, at 6:30 AM, Antoine Pitrou wrote: > Daniel Holth gmail.com> writes: >> >> One of the most important packaging insights to understand is that >> distutils is in fact worse than setuptools. It is badly outdated, >> poorly extensible, and too complex to ever re-implement in a >> compatible way. It is not better before the monkey patching. In a >> perfect world it should be removed from the standard library too >> except that removing distutils would be too impractical. > > Well, in a perfect world it would have a replacement. It would not > just "be removed" and replaced with a black hole. Replacing distutils whole hog with something else is the wrong approach. It's the exact same problem we've had all along just with somewhat better symptoms. > >> The new strategy is to standardize just the install (how to install a >> wheel binary package after it has already been built) and runtime for >> eventual standard library inclusion. These are simple enough that they >> can be documented and re-implemented adequately. Distutils and >> setuptools will both be equally discouraged as legacy build tools. > > If setuptools is "discouraged", why "bless it officially"? > That doesn't make sense. Lesser of two evils. > >> *Hopefully* a dominant 80% simpler-use-cases distutils replacement >> will emerge along > > "Hopefully"? "Emerge" magically? Software doesn't grow on trees? Of course not, but that's not exactly what the statement says. Different people will create different tooling and ideally there will begin to be community consensus behind one of them. 
> >> with a more complicated 20% "scipy" distutils >> replacement but neither will necessarily need to be in the standard >> library; > > If users start to have to install third-party software to build and > package their own libraries, then it's a huge regression (regardless > of what *you* may think about "batteries included"). > >> If you can believe that distutils, not setuptools, is the problem we >> are trying to recover from, then the monkeypatching strategy may make >> more sense. > > I do *not* believe there is a single "problem" we are trying to "recover > from". This is IMHO a crude way of pointing fingers, while the truth > is that no popular replacement has yet "emerged" despite at least 10 years > of frustration. A lot of the problems of the packaging eco system descend from ``import distutils`` and then ``import setuptools`` making it near impossible to actually replace one or the other without gross hacks. Setuptools at least provides some mechanism for extension and actually solves a lot of problems with distutils. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Sat Aug 31 12:47:18 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Aug 2013 10:47:18 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: Message-ID: Donald Stufft stufft.io> writes: > >> with a more complicated 20% "scipy" distutils > >> replacement but neither will necessarily need to be in the standard > >> library; > > > > If users start to have to install third-party software to build and > > package their own libraries, then it's a huge regression (regardless > > of what *you* may think about "batteries included"). 
> > I haven't followed this thread yet, but just to comment on this users are > already installing third party software to build and package their own > libraries. From what i've seen working with packaging distutils is either > a fallback or not used at all in the bulk of cases. Which "bulk of cases"? If I take a look at some popular libraries (Django, Tornado, Twisted, SQLAlchemy), all are able to build without setuptools. Do you have any statistics? The sticking point is that you don't *have* to install something third-party to get yourself working on some packaging. Being able to benefit from additional features *if* you install something else is of course fine. Regards Antoine. From donald at stufft.io Sat Aug 31 12:55:30 2013 From: donald at stufft.io (Donald Stufft) Date: Sat, 31 Aug 2013 06:55:30 -0400 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: Message-ID: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io> On Aug 31, 2013, at 6:47 AM, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >>>> with a more complicated 20% "scipy" distutils >>>> replacement but neither will necessarily need to be in the standard >>>> library; >>> >>> If users start to have to install third-party software to build and >>> package their own libraries, then it's a huge regression (regardless >>> of what *you* may think about "batteries included"). >> >> I haven't followed this thread yet, but just to comment on this users are >> already installing third party software to build and package their own >> libraries. From what i've seen working with packaging distutils is either >> a fallback or not used at all in the bulk of cases. > > Which "bulk of cases"? > > If I take a look at some popular libraries (Django, Tornado, Twisted, > SQLAlchemy), all are able to build without setuptools. Do you have > any statistics? I don't have statistics offhand but it's pretty easy to tell if a package was built with setuptools (it contains slightly different files). 
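[Editorial sketch] The "slightly different files" Donald mentions can be checked mechanically. As a rough, hypothetical heuristic (the function and marker list are assumptions for illustration, not Donald's actual method): a setuptools-built sdist ships a `*.egg-info` directory containing files such as `SOURCES.txt` that plain distutils does not generate.

```python
import os

# Files written by setuptools' egg_info command but not by plain distutils,
# which only emits a top-level PKG-INFO in its sdists.
SETUPTOOLS_MARKERS = {"SOURCES.txt", "dependency_links.txt",
                      "entry_points.txt", "requires.txt", "top_level.txt"}

def looks_like_setuptools_sdist(unpacked_dir):
    """Guess whether an unpacked sdist was produced with setuptools by
    looking for a *.egg-info directory holding setuptools-specific files."""
    for root, dirs, _files in os.walk(unpacked_dir):
        for d in dirs:
            if d.endswith(".egg-info"):
                present = set(os.listdir(os.path.join(root, d)))
                if SETUPTOOLS_MARKERS & present:
                    return True
    return False
```

Running something like this over every sdist on PyPI is presumably the sort of "numbers worked up" pass Donald has in mind.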
I've looked at a lot of packages while working on packaging and the vast majority of projects were built with setuptools. I can probably get some sort of numbers worked up at some point this weekend (although not sure how long it'll take to process every package on PyPI). > > The sticking point is that you don't *have* to install something third-party > to get yourself working on some packaging. Being able to benefit from > additional features *if* you install something else is of course fine. Out of the four you listed I'm most familiar with Django's packaging which has gone to significant effort *not* to require setuptools. Most packages aren't willing to go through that effort and either simply require setuptools or they include a distutils fallback which often times doesn't work correctly except in simple cases*. * Not that it couldn't work correctly just they don't use it so they never personally experience any brokenness and most people do not directly execute setup.py so the installers tend to handle the differences/brokenness for them. > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > http://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Sat Aug 31 13:03:29 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Aug 2013 11:03:29 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io> Message-ID: Donald Stufft stufft.io> writes: > > > > The sticking point is that you don't *have* to install something third-party > > to get yourself working on some packaging. 
Being able to benefit from > > additional features *if* you install something else is of course fine. > > Out of the four you listed I'm most familiar with Django's packaging which > has gone to significant effort *not* to require setuptools. Most packages > aren't willing to go through that effort and either simply require setuptools > or they include a distutils fallback which often times doesn't work correctly > except in simple cases*. > > * Not that it couldn't work correctly just they don't use it so they never personally > experience any brokenness and most people do not directly execute setup.py > so the installers tend to handle the differences/brokenness for them. Executing setup.py directly is very convenient when working with a development or custom build of Python, rather than install the additional "build tools" (which may have their own compatibility issues). For example I can easily install Tornado with Python 3.4 that way. I'm not saying most people will use setup.py directly, but there are situations where it's good to do so, especially for software authors. Regards Antoine. From oscar.j.benjamin at gmail.com Sat Aug 31 15:01:43 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Sat, 31 Aug 2013 14:01:43 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io> Message-ID: On 31 August 2013 12:03, Antoine Pitrou wrote: > Donald Stufft stufft.io> writes: >> > >> > The sticking point is that you don't *have* to install something third-party >> > to get yourself working on some packaging. Being able to benefit from >> > additional features *if* you install something else is of course fine. >> >> Out of the four you listed I'm most familiar with Django's packaging which >> has gone to significant effort *not* to require setuptools. 
Most packages >> aren't willing to go through that effort and either simply require setuptools >> or they include a distutils fallback which often times doesn't work correctly >> except in simple cases*. >> >> * Not that it couldn't work correctly just they don't use it so they never > personally >> experience any brokenness and most people do not directly execute setup.py >> so the installers tend to handle the differences/brokenness for them. > > Executing setup.py directly is very convenient when working with a > development or > custom build of Python, rather than install the additional "build tools" (which > may have their own compatibility issues). > For example I can easily install Tornado with Python 3.4 that way. > > I'm not saying most people will use setup.py directly, but there are situations > where it's good to do so, especially for software authors. It will always be possible to ship a setup.py script that can build/install from an sdist or VCS checkout. The issue is about how to produce an sdist with a setup.py that is guaranteed to work with past, current, and future versions of distutils/pip/setuptools/some other installer so that you can upload it to PyPI and people can run 'pip install myproj'. It shouldn't be necessary for the package author to use distutils/setuptools in their setup.py just because the user wants to install with pip/setuptools or vice-versa. Distutils is tied down with backward compatibility because of the number of projects that would break if it changed. Even obvious breakage like http://bugs.python.org/issue12641 goes unfixed for years because of worries that fixing it for 10000 users would break some obscure setup for 100 users (no matter how broken that other setup might otherwise be). That kind of breakage is totally unacceptable to projects like numpy which is why they fixed the same bug in their own distutils extension 3 years ago. 
I claim that the only reason projects like numpy still use (extensions and monkey-patches of) distutils for building is that there is no documented way for them to distribute sdists that build using anything other than distutils. Bento tries to implement its own setup.py and when I try to install it with pip I find a bug in pip from code paths that wouldn't get hit if bento were using setuptools in their own setup.py. If it weren't for that bug and the output had instead been: $ pip install bento Downloading/unpacking bento Downloading bento-0.1.1.tar.gz (582kB): 582kB downloaded Running setup.py egg_info for package bento Installing collected packages: bento Running setup.py install for bento Error: Could not find .egg-info directory in install record for bento then we could argue about which of pip or bento was in contravention of the (non-existent) specification that defines the setup.py interface. Oscar From solipsis at pitrou.net Sat Aug 31 15:24:13 2013 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Aug 2013 13:24:13 +0000 (UTC) Subject: [Distutils] Comments on PEP 426 References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io> Message-ID: Oscar Benjamin gmail.com> writes: > > It will always be possible to ship a setup.py script that can > build/install from an sdist or VCS checkout. The issue is about how to > produce an sdist with a setup.py that is guaranteed to work with past, > current, and future versions of distutils/pip/setuptools/some other > installer so that you can upload it to PyPI and people can run 'pip > install myproj'. It shouldn't be necessary for the package author to > use distutils/setuptools in their setup.py just because the user wants > to install with pip/setuptools or vice-versa. Agreed... But then, deprecating setup.py in favour of setup.cfg is a more promising path for cross-tool compatibility, than trying to promote one tool over another. 
> Distutils is tied down with backward compatibility because of the > number of projects that would break if it changed. Even obvious > breakage like http://bugs.python.org/issue12641 goes unfixed for years > because of worries that fixing it for 10000 users would break some > obscure setup for 100 users (no matter how broken that other setup > might otherwise be). I tend to disagree. Such bugs are not fixed, not because they shouldn't / can't be fixed, but because distutils isn't really competently maintained (or not maintained at all, actually; ?ric sometimes replies on bug entries but he doesn't commit anything these days). The idea that "distutils shouldn't change" was more of a widely-promoted propaganda item than a rational decision, IMO. Most setup scripts wouldn't suffer from distutils changes or improvements; the few that *may* suffer belong to large projects which probably have other items to solve when a new Python comes out, anyway. Regards Antoine. From oscar.j.benjamin at gmail.com Sat Aug 31 16:48:44 2013 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Sat, 31 Aug 2013 15:48:44 +0100 Subject: [Distutils] Comments on PEP 426 In-Reply-To: References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io> Message-ID: On 31 August 2013 14:24, Antoine Pitrou wrote: > Oscar Benjamin gmail.com> writes: >> >> It will always be possible to ship a setup.py script that can >> build/install from an sdist or VCS checkout. The issue is about how to >> produce an sdist with a setup.py that is guaranteed to work with past, >> current, and future versions of distutils/pip/setuptools/some other >> installer so that you can upload it to PyPI and people can run 'pip >> install myproj'. It shouldn't be necessary for the package author to >> use distutils/setuptools in their setup.py just because the user wants >> to install with pip/setuptools or vice-versa. > > Agreed... 
> But then, deprecating setup.py in favour of setup.cfg is a
> more promising path for cross-tool compatibility, than trying to promote
> one tool over another.

The difference between this

# setup.py
if sys.argv[1] == 'install':
    from myproj.build import build
    build()

and something like this

# setup.cfg
[install]
command = "from myproj.build import build; build()"

is that one works now for all relevant Python versions and the other
does not. With the setup.cfg end users cannot simply do 'python
setup.py install' unless they have some additional library that can
understand the setup.cfg. Even if future Python versions gain a new
stdlib module/script for this, current versions won't have it.

That is why I agree with Nick that the best thing to do is to
explicitly document what is *currently* required to make things work
and guarantee that it will continue to work for the foreseeable
future. Then alternative better ways to specify the build commands in
future can be considered as hinted in the PEP:
http://www.python.org/dev/peps/pep-0426/#metabuild-system

>> Distutils is tied down with backward compatibility because of the
>> number of projects that would break if it changed. Even obvious
>> breakage like http://bugs.python.org/issue12641 goes unfixed for years
>> because of worries that fixing it for 10000 users would break some
>> obscure setup for 100 users (no matter how broken that other setup
>> might otherwise be).
>
> I tend to disagree. Such bugs are not fixed, not because they shouldn't /
> can't be fixed, but because distutils isn't really competently maintained
> (or not maintained at all, actually; Éric sometimes replies on bug entries
> but he doesn't commit anything these days).

So is that particular issue a lost cause?

> The idea that "distutils shouldn't change" was more of a widely-promoted
> propaganda item than a rational decision, IMO.
> Most setup scripts wouldn't
> suffer from distutils changes or improvements; the few that *may* suffer
> belong to large projects which probably have other items to solve when a
> new Python comes out, anyway.

It's not just the setup script for a particular project. It's the
particular combination of compilers and setup.py invocations used by
any given user for any given setup.py from each of the thousands of
projects that do anything non-trivial in their setup.py. For example,
in the issue I mentioned above the spanner in the works came from PJE,
who wanted to use --compiler=mingw32 while surreptitiously placing
Cygwin's gcc on PATH:
http://bugs.python.org/issue12641#msg161514

It's hard for distutils to react to outside changes in e.g. external
compilers because of the need to try and prevent breaking countless
unknown and obscure setups for each end user. Although in that
particular issue I think it's really just a responsibility thing: the
current breakage can be viewed as externally caused. Fixing it trades
a large amount of breakage that is gcc's fault for a small amount of
breakage that would be Python's fault.

Oscar

From solipsis at pitrou.net  Sat Aug 31 17:03:25 2013
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 31 Aug 2013 15:03:25 +0000 (UTC)
Subject: [Distutils] Comments on PEP 426
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

Oscar Benjamin gmail.com> writes:
>
> The difference between this
>
> # setup.py
> if sys.argv[1] == 'install':
>     from myproj.build import build
>     build()
>
> and something like this
>
> # setup.cfg
> [install]
> command = "from myproj.build import build; build()"

Well, sure, this is a rather silly way to use an ini-style declarative
format. I agree we wouldn't gain anything if setup.cfg files ended up
written like this.

> > I tend to disagree.
> > Such bugs are not fixed, not because they shouldn't /
> > can't be fixed, but because distutils isn't really competently maintained
> > (or not maintained at all, actually; Éric sometimes replies on bug entries
> > but he doesn't commit anything these days).
>
> So is that particular issue a lost cause?

Why would it be?

> > The idea that "distutils shouldn't change" was more of a widely-promoted
> > propaganda item than a rational decision, IMO. Most setup scripts wouldn't
> > suffer from distutils changes or improvements; the few that *may* suffer
> > belong to large projects which probably have other items to solve when a
> > new Python comes out, anyway.
>
> It's not just the setup script for a particular project. It's the
> particular combination of compilers and setup.py invocations used by
> any given user for any given setup.py from each of the thousands of
> projects that do anything non-trivial in their setup.py.

I don't know what those "thousands of projects" are. Most Python projects
don't even need a compiler, except Python itself.

> For example
> in the issue I mentioned above the spanner in the works came from PJE
> who wanted to use --compiler=mingw32 while surreptitiously placing
> Cygwin's gcc on PATH:
> http://bugs.python.org/issue12641#msg161514
> It's hard for distutils to react to outside changes in e.g. external
> compilers because of the need to try and prevent breaking countless
> unknown and obscure setups for each end user.

This sounds like a deformation of reality. Most users don't have
"unknown and obscure setups", they actually have quite standardized
and well-known ones (think Windows, OS X, mainstream Linux distros).

Sure, in some communities (scientific programming, I suppose) there
may be obscure setups, but those communities have already grown their
own bag of tips and tricks, AFAIK.

Regards

Antoine.
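Antoine's point about the ini-style format can be made concrete: used declaratively, a setup.cfg carries data rather than executable commands, so any tool — not just one blessed installer — can read it with a stock parser. The sketch below uses the stdlib `configparser`; the section and key names (`metadata`, `build`, `backend`, the `;`-separated `requires` list) are purely illustrative, not any settled standard of the time.

```python
# Hedged sketch of a *declarative* setup.cfg: metadata as data, parsed with
# configparser. Section/key names are illustrative, not a real standard.

import configparser

SETUP_CFG = """\
[metadata]
name = myproj
version = 1.2
requires = requests; lxml

[build]
backend = myproj.build
"""


def read_setup_cfg(text):
    """Parse a declarative setup.cfg into structured metadata."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    meta = dict(cfg["metadata"])
    # Split the requirements list so tools get structured data, not a string.
    meta["requires"] = [r.strip() for r in meta.get("requires", "").split(";")
                        if r.strip()]
    return meta, dict(cfg["build"])


meta, build = read_setup_cfg(SETUP_CFG)
```

The contrast with the "silly" version quoted above is that no field here is a Python expression to be executed: a dumb tool can answer "what does this project require?" without running project code.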
From ncoghlan at gmail.com  Sat Aug 31 17:18:43 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Sep 2013 01:18:43 +1000
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On 31 Aug 2013 23:24, "Antoine Pitrou" wrote:
>
> Oscar Benjamin gmail.com> writes:
> >
> > It will always be possible to ship a setup.py script that can
> > build/install from an sdist or VCS checkout. The issue is about how to
> > produce an sdist with a setup.py that is guaranteed to work with past,
> > current, and future versions of distutils/pip/setuptools/some other
> > installer so that you can upload it to PyPI and people can run 'pip
> > install myproj'. It shouldn't be necessary for the package author to
> > use distutils/setuptools in their setup.py just because the user wants
> > to install with pip/setuptools or vice-versa.
>
> Agreed... But then, deprecating setup.py in favour of setup.cfg is a
> more promising path for cross-tool compatibility, than trying to promote
> one tool over another.
>
> > Distutils is tied down with backward compatibility because of the
> > number of projects that would break if it changed. Even obvious
> > breakage like http://bugs.python.org/issue12641 goes unfixed for years
> > because of worries that fixing it for 10000 users would break some
> > obscure setup for 100 users (no matter how broken that other setup
> > might otherwise be).
>
> I tend to disagree. Such bugs are not fixed, not because they shouldn't /
> can't be fixed, but because distutils isn't really competently maintained
> (or not maintained at all, actually; Éric sometimes replies on bug entries
> but he doesn't commit anything these days).
>
> The idea that "distutils shouldn't change" was more of a widely-promoted
> propaganda item than a rational decision, IMO.
> Most setup scripts wouldn't
> suffer from distutils changes or improvements; the few that *may* suffer
> belong to large projects which probably have other items to solve when a
> new Python comes out, anyway.

Blessing build tools implicitly replacing distutils with setuptools is
indeed an *awful* hack, but it is also the only way I see to break the
chicken-and-egg adoption problem for new metadata standards.

It *must* be possible for build tools to get next generation (or even
current generation) metadata out of existing vanilla distutils projects,
even on older versions of Python. They can't do that if they're required
to use distutils exactly as it is shipped in the standard library rather
than swapping in something like setuptools (or a currently still
hypothetical simpler alternative that just emits the new formats without
adding other setuptools features).

By blessing such substitutions, most projects will implicitly support
the new standards without needing to migrate to a new build system. Even
the current bento issue mentioned in this thread appears to be Windows
specific.

Replacing or monkey-patching distutils is definitely a terrible hack,
but it's also a hack that pip already uses, and one that should allow
metadata 2.0 to avoid the near total lack of adoption that afflicted
metadata 1.2.

Cheers,
Nick.

> Regards
>
> Antoine.
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig

From oscar.j.benjamin at gmail.com  Sat Aug 31 17:31:10 2013
From: oscar.j.benjamin at gmail.com (Oscar Benjamin)
Date: Sat, 31 Aug 2013 16:31:10 +0100
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On 31 August 2013 16:03, Antoine Pitrou wrote:
> Oscar Benjamin gmail.com> writes:
>>
>> > I tend to disagree.
>> > Such bugs are not fixed, not because they shouldn't /
>> > can't be fixed, but because distutils isn't really competently maintained
>> > (or not maintained at all, actually; Éric sometimes replies on bug entries
>> > but he doesn't commit anything these days).
>>
>> So is that particular issue a lost cause?
>
> Why would it be?

Because there's no maintainer to commit or reject a patch (unless I've
misunderstood your comment above).

>> > The idea that "distutils shouldn't change" was more of a widely-promoted
>> > propaganda item than a rational decision, IMO. Most setup scripts wouldn't
>> > suffer from distutils changes or improvements; the few that *may* suffer
>> > belong to large projects which probably have other items to solve when a
>> > new Python comes out, anyway.
>>
>> It's not just the setup script for a particular project. It's the
>> particular combination of compilers and setup.py invocations used by
>> any given user for any given setup.py from each of the thousands of
>> projects that do anything non-trivial in their setup.py.
>
> I don't know what those "thousands of projects" are. Most Python projects
> don't even need a compiler, except Python itself.

Well thousands may be an exaggeration :)

>> For example
>> in the issue I mentioned above the spanner in the works came from PJE
>> who wanted to use --compiler=mingw32 while surreptitiously placing
>> Cygwin's gcc on PATH:
>> http://bugs.python.org/issue12641#msg161514
>> It's hard for distutils to react to outside changes in e.g. external
>> compilers because of the need to try and prevent breaking countless
>> unknown and obscure setups for each end user.
>
> This sounds like a deformation of reality. Most users don't have
> "unknown and obscure setups", they actually have quite standardized
> and well-known ones (think Windows, OS X, mainstream Linux distros).

True.
> Sure, in some communities (scientific programming, I suppose) there
> may be obscure setups, but those communities have already grown their
> own bag of tips and tricks, AFAIK.

Yes they do. Our trick at my work is to have professionals build
everything for the obscure setups. My experience with building is just
about installing on my own Windows/Ubuntu desktop machines.

The point I was making is really that breakage occurs on a per-user
basis rather than a per-project basis. Reasoning about what a change in
distutils will do is hard because you're trying to reason about end
user setups rather than just large well-maintained projects. A new
build tool outside the stdlib wouldn't be anywhere near as constrained.

Oscar

From oscar.j.benjamin at gmail.com  Sat Aug 31 17:41:02 2013
From: oscar.j.benjamin at gmail.com (Oscar Benjamin)
Date: Sat, 31 Aug 2013 16:41:02 +0100
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On 31 August 2013 16:18, Nick Coghlan wrote:
>
> Even the current bento issue mentioned in this thread appears to be
> Windows specific.

I don't think you read what I wrote properly. There are two aspects to
the bento issue:

1) Somehow pip isn't picking up bento's egg info directory.
2) There's a bug in pip where it tries to os.remove() a file before
   closing it.

The bug in 2) only shows up as an error on Windows and only when the
code path from 1) is triggered. However it is definitely a bug in pip.
For issue 1) I don't know enough about setuptools to understand what's
different about bento's setup.py.
The egg_info command works AFAICT:

$ curl https://pypi.python.org/packages/source/b/bento/bento-0.1.1.tar.gz > b.tgz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  568k  100  568k    0     0  3039k      0 --:--:-- --:--:-- --:--:-- 3324k
$ tar -xzf b.tgz
$ cd bento-0.1.1/
$ ls
LICENSE.txt    PACKAGERS.txt  README.rst  THANKS    bento   bento.info
bentomakerlib  bootstrap.py   bscript     setup.py
$ py -2.7 setup.py egg_info
running egg_info
running build
running config
$ ls
LICENSE.txt    README.rst  bento           bento.info     bootstrap.py  build
PACKAGERS.txt  THANKS      bento.egg-info  bentomakerlib  bscript       setup.py
$ ls bento.egg-info/
PKG-INFO   SOURCES.txt   dependency_links.txt  entry_points.txt
ipkg.info  not-zip-safe  requires.txt          top_level.txt

Oscar

From solipsis at pitrou.net  Sat Aug 31 17:46:33 2013
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 31 Aug 2013 15:46:33 +0000 (UTC)
Subject: [Distutils] Comments on PEP 426
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

Oscar Benjamin gmail.com> writes:
>
> On 31 August 2013 16:03, Antoine Pitrou pitrou.net> wrote:
> > Oscar Benjamin gmail.com> writes:
> >>
> >> > I tend to disagree. Such bugs are not fixed, not because they shouldn't /
> >> > can't be fixed, but because distutils isn't really competently maintained
> >> > (or not maintained at all, actually; Éric sometimes replies on bug entries
> >> > but he doesn't commit anything these days).
> >>
> >> So is that particular issue a lost cause?
> >
> > Why would it be?
>
> Because there's no maintainer to commit or reject a patch (unless I've
> misunderstood your comment above).

Ah, true. Indeed it's probably a lost cause *right now* (although a core
developer could make a decision without being the official maintainer,
IMO). Not eternally so, though :-)

Regards

Antoine.
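The pip bug Oscar describes — calling os.remove() on a file that is still open — only raises an error on Windows, where an open file is locked against deletion; on POSIX the reversed ordering appears to work, which is exactly why such bugs go unnoticed until a Windows user hits them. A small sketch of the portable pattern (close the handle, then unlink) — the `remove_safely` helper is illustrative, not pip's actual code:

```python
# Hedged sketch of the portable close-then-remove ordering. On Windows,
# os.remove() on a still-open file raises; closing first works everywhere.

import os
import tempfile


def remove_safely(path, handle):
    # Closing first makes the removal succeed on Windows too; on POSIX the
    # reversed order would also appear to work, which is how the bug hides.
    handle.close()
    os.remove(path)


fd, path = tempfile.mkstemp()
handle = os.fdopen(fd, "w")
handle.write("scratch data")
remove_safely(path, handle)
removed = not os.path.exists(path)
```

Running this on any platform leaves no file behind; swapping the two lines inside `remove_safely` reproduces the Windows-only failure mode.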
From ncoghlan at gmail.com  Sat Aug 31 17:56:30 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Sep 2013 01:56:30 +1000
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On 1 Sep 2013 01:04, "Antoine Pitrou" wrote:
>
> Oscar Benjamin gmail.com> writes:
> >
> > The difference between this
> >
> > # setup.py
> > if sys.argv[1] == 'install':
> >     from myproj.build import build
> >     build()
> >
> > and something like this
> >
> > # setup.cfg
> > [install]
> > command = "from myproj.build import build; build()"
>
> Well, sure, this is a rather silly way to use an ini-style declarative
> format. I agree we wouldn't gain anything if setup.cfg files ended up
> written like this.
>
> > > I tend to disagree. Such bugs are not fixed, not because they shouldn't /
> > > can't be fixed, but because distutils isn't really competently maintained
> > > (or not maintained at all, actually; Éric sometimes replies on bug entries
> > > but he doesn't commit anything these days).
> >
> > So is that particular issue a lost cause?
>
> Why would it be?
>
> > > The idea that "distutils shouldn't change" was more of a widely-promoted
> > > propaganda item than a rational decision, IMO. Most setup scripts wouldn't
> > > suffer from distutils changes or improvements; the few that *may* suffer
> > > belong to large projects which probably have other items to solve when a
> > > new Python comes out, anyway.
> >
> > It's not just the setup script for a particular project. It's the
> > particular combination of compilers and setup.py invocations used by
> > any given user for any given setup.py from each of the thousands of
> > projects that do anything non-trivial in their setup.py.
>
> I don't know what those "thousands of projects" are. Most Python projects
> don't even need a compiler, except Python itself.
>
> > For example
> > in the issue I mentioned above the spanner in the works came from PJE
> > who wanted to use --compiler=mingw32 while surreptitiously placing
> > Cygwin's gcc on PATH:
> > http://bugs.python.org/issue12641#msg161514
> > It's hard for distutils to react to outside changes in e.g. external
> > compilers because of the need to try and prevent breaking countless
> > unknown and obscure setups for each end user.
>
> This sounds like a deformation of reality. Most users don't have
> "unknown and obscure setups", they actually have quite standardized
> and well-known ones (think Windows, OS X, mainstream Linux distros).
>
> Sure, in some communities (scientific programming, I suppose) there
> may be obscure setups, but those communities have already grown their
> own bag of tips and tricks, AFAIK.

This perception is just wrong. Mac OS X is a mess due to clang vs gcc
vs homebrew vs macports vs Xcode, Windows is a mess due to mingw and
cygwin and platform SDKs and visual studio (Express or otherwise).
Linux distros are also ridiculous, with different gcc variants, library
locations and various other things, depending on flavour and version.

distutils itself is nigh impossible to work on, since it is
underspecified, and the fact it was added to the standard library two
years before unittest still shows in its test suite. (This lack of test
coverage is also a large part of the reason setuptools *isn't* quite a
drop-in replacement.)

The key reason working on Python packaging has a tendency to chew
developers up and spit them back out isn't because the people trying to
work on it are incompetent, but because it's a genuinely hard problem
where the already formidable technical issues are dwarfed by even more
complex social ones.
One of the most demotivating aspects comes from core developers without
extensive experience of the existing packaging ecosystem failing to
perceive the complexity of the problems to be addressed, and insulting
the competence of the people working on packaging tools as a result.
This is an aspect of the problem I was guilty of contributing to myself
until a year or so ago; after a change in role at Red Hat led me to
learn the error of my ways, reducing it became one of my main
motivations for cutting python-dev out of the approval process for the
packaging related PEPs that don't immediately affect CPython or the
standard library.

While pure Python distributions are common, there's an awful lot of
"Python" code which includes C accelerator modules, or wrappers around
C or C++ libraries, including Cython- or SWIG-generated versions of
both. This affects crypto, databases, networking, operating system
interfaces, graphics and other domains that rely heavily on non-Python
code, not just the scientific community.

setuptools definitely has its issues, but it's still substantially
superior to distutils, and has the critical virtue of behaving the
*same* in all currently supported versions of Python. Consistency
across platform versions is something you really want in a build tool,
and is something a standard library module like distutils can never
provide. As one of the most conservative Linux vendors, even Red Hat
has acknowledged this key point by creating the Red Hat Developer
Toolset to provide a more consistent build experience across different
RHEL versions. Microsoft (with Visual Studio) and Apple (with XCode)
have long worked the same way.

Cheers,
Nick.

> Regards
>
> Antoine.
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig
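The interposition hack Nick refers to — pip importing setuptools before running a vanilla setup script, so that the script's plain distutils imports silently pick up enhanced behaviour — can be sketched in miniature. Everything here is a stand-in: `legacy_core` plays the role of `distutils.core`, the "enhanced" lambda plays the role of setuptools' patched `setup()`, and `run_setup` is an illustrative installer, not pip's real code.

```python
# Hedged sketch of installer-side interposition: swap in an enhanced setup()
# before exec'ing a vanilla setup script. All names are stand-ins.

import sys
import types

# A fake "distutils.core": the module the setup script will import from.
legacy = types.ModuleType("legacy_core")
legacy.setup = lambda **kw: ("legacy", kw)
sys.modules["legacy_core"] = legacy

# A vanilla setup script that knows nothing about the enhanced tool.
SETUP_PY = "from legacy_core import setup\nresult = setup(name='myproj')\n"


def run_setup(source, interpose=False):
    """Run a setup script, optionally patching the legacy entry point first
    (which is roughly what pip's "import setuptools" trick achieves)."""
    if interpose:
        sys.modules["legacy_core"].setup = lambda **kw: ("enhanced", kw)
    ns = {}
    exec(source, ns)
    return ns["result"]


plain = run_setup(SETUP_PY)
patched = run_setup(SETUP_PY, interpose=True)
```

The same unmodified script yields `("legacy", ...)` when run directly and `("enhanced", ...)` under interposition — which is why the hack lets existing distutils projects emit new-style metadata without changing a line of their setup.py.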
From solipsis at pitrou.net  Sat Aug 31 19:21:55 2013
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 31 Aug 2013 17:21:55 +0000 (UTC)
Subject: [Distutils] Comments on PEP 426
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

Nick Coghlan gmail.com> writes:
> distutils itself is nigh impossible to work on, since it is underspecified, and
> the fact it was added to the standard library two years before unittest still
> shows in its test suite.

I don't have any code coverage numbers, but distutils has a sizable test
suite (and has had since Tarek started working on it):

$ ./python -m test -v test_distutils
[...]
----------------------------------------------------------------------
Ran 191 tests in 1.481s

OK (skipped=14)

Regards

Antoine.

From ncoghlan at gmail.com  Sat Aug 31 19:34:52 2013
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 1 Sep 2013 03:34:52 +1000
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On 1 Sep 2013 03:22, "Antoine Pitrou" wrote:
>
> Nick Coghlan gmail.com> writes:
> > distutils itself is nigh impossible to work on, since it is underspecified,
> > and the fact it was added to the standard library two years before unittest
> > still shows in its test suite.
>
> I don't have any code coverage numbers, but distutils has a sizable test
> suite (and has had since Tarek started working on it):
>
> $ ./python -m test -v test_distutils
> [...]
> ----------------------------------------------------------------------
> Ran 191 tests in 1.481s
>
> OK (skipped=14)

Agreed, Tarek did great work in creating a reasonable test suite, but
the combinatorial explosion in possible build configurations is such
that a rapid response cycle for issues is far more feasible than hoping
to completely prevent regressions (at least, not until the PSF can
afford the infrastructure to automatically do test installs of the
whole of PyPI for new versions of the build tools across multiple
platforms, which I don't see happening any time soon).

We should check how well that suite runs with setuptools imported,
though. Given the originally shared maintenance by Tarek, I believe it
should be similar to the distribute (and hence now setuptools) test
suite, but it's worth double checking that.

Cheers,
Nick.

> Regards
>
> Antoine.
>
> _______________________________________________
> Distutils-SIG maillist  -  Distutils-SIG at python.org
> http://mail.python.org/mailman/listinfo/distutils-sig

From brett at python.org  Sat Aug 31 22:53:58 2013
From: brett at python.org (Brett Cannon)
Date: Sat, 31 Aug 2013 16:53:58 -0400
Subject: [Distutils] Comments on PEP 426
In-Reply-To:
References: <2E4CB4F5-C3C1-4A15-A07E-8A0DED0B807E@stufft.io>
Message-ID:

On Sat, Aug 31, 2013 at 1:21 PM, Antoine Pitrou wrote:
>
> Nick Coghlan gmail.com> writes:
> > distutils itself is nigh impossible to work on, since it is underspecified,
> > and the fact it was added to the standard library two years before unittest
> > still shows in its test suite.
>
> I don't have any code coverage numbers, but distutils has a sizable test
> suite (and has had since Tarek started working on it):
>
> $ ./python -m test -v test_distutils
> [...]
> ----------------------------------------------------------------------
> Ran 191 tests in 1.481s
>
> OK (skipped=14)
>

From a coverage report generated for PyCon Canada's sprints (yay
devinabox); paths are relative to Lib/, columns are statements, missed,
branches, coverage:

Name                                     Stmts   Miss  Branch  Cover
--------------------------------------------------------------------
distutils/__init__                           1      0       0   100%
distutils/archive_util                      90     11       0    88%
distutils/bcppcompiler                     182    158       0    13%
distutils/ccompiler                        372    148       0    60%
distutils/cmd                              154     23       0    85%
distutils/command/__init__                   1      0       0   100%
distutils/command/bdist                     57     22       0    61%
distutils/command/bdist_dumb                53      6       0    89%
distutils/command/bdist_rpm                264    244       0     8%
distutils/command/bdist_wininst            169    116       0    31%
distutils/command/build                     58      4       0    93%
distutils/command/build_clib                87     15       0    83%
distutils/command/build_ext                336    105       0    69%
distutils/command/build_py                 220     30       0    86%
distutils/command/build_scripts             98     19       0    81%
distutils/command/check                     80     37       0    54%
distutils/command/clean                     33      0       0   100%
distutils/command/config                   174     81       0    53%
distutils/command/install                  267     45       0    83%
distutils/command/install_data              41      0       0   100%
distutils/command/install_egg_info          34      3       0    91%
distutils/command/install_headers           23      1       0    96%
distutils/command/install_lib               96      4       0    96%
distutils/command/install_scripts           31      2       0    94%
distutils/command/register                 169     24       0    86%
distutils/command/sdist                    218     20       0    91%
distutils/command/upload                   117     21       0    82%
distutils/config                            61      4       0    93%
distutils/core                              88     27       0    69%
distutils/cygwinccompiler                  145     71       0    51%
distutils/debug                              2      0       0   100%
distutils/dep_util                          38      1       0    97%
distutils/dir_util                         100     22       0    78%
distutils/dist                             544    137       0    75%
distutils/errors                            30      0       0   100%
distutils/extension                         94     25       0    73%
distutils/fancy_getopt                     216     36       0    83%
distutils/file_util                        114     38       0    67%
distutils/filelist                         158      4       0    97%
distutils/log                               53      6       0    89%
distutils/msvccompiler                     364    307       0    16%
distutils/spawn                             92     34       0    63%
distutils/sysconfig                        292     71       0    76%
distutils/tests/__init__                    16      1       0    94%
distutils/tests/support                    101      9       0    91%
distutils/tests/test_archive_util          186     10       0    95%
distutils/tests/test_bdist                  31      2       0    94%
distutils/tests/test_bdist_dumb             56      3       0    95%
distutils/tests/test_bdist_msi              15      5       0    67%
distutils/tests/test_bdist_rpm              82     48       0    41%
distutils/tests/test_bdist_wininst          15      1       0    93%
distutils/tests/test_build                  31      1       0    97%
distutils/tests/test_build_clib             84      3       0    96%
distutils/tests/test_build_ext             303      6       0    98%
distutils/tests/test_build_py               98      4       0    96%
distutils/tests/test_build_scripts          64      1       0    98%
distutils/tests/test_check                  60     19       0    68%
distutils/tests/test_clean                  31      1       0    97%
distutils/tests/test_cmd                    91     10       0    89%
distutils/tests/test_config                 58      1       0    98%
distutils/tests/test_config_cmd             63      2       0    97%
distutils/tests/test_core                   65      1       0    98%
distutils/tests/test_cygwinccompiler        90      2       0    98%
distutils/tests/test_dep_util               52      1       0    98%
distutils/tests/test_dir_util               88      4       0    95%
distutils/tests/test_extension              34      1       0    97%
distutils/tests/test_file_util              46      2       0    96%
distutils/tests/test_filelist              159      2       0    99%
distutils/tests/test_install               160      2       0    99%
distutils/tests/test_install_data           49      1       0    98%
distutils/tests/test_install_headers        26      1       0    96%
distutils/tests/test_install_lib            78      1       0    99%
distutils/tests/test_install_scripts        47      1       0    98%
distutils/tests/test_log                    26      1       0    96%
distutils/tests/test_msvc9compiler          69     49       0    29%
distutils/tests/test_register              163     45       0    72%
distutils/tests/test_sdist                 241      5       0    98%
distutils/tests/test_spawn                  33      5       0    85%
/Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_sysconfig<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_sysconfig.html> 101 8 0 92% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_text_file<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_text_file.html> 51 1 0 98% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_unixccompiler<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_unixccompiler.html> 107 12 0 89% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_upload<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_upload.html> 76 2 0 97% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_util<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_util.html> 174 7 0 96% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_version<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_version.html> 31 2 0 94% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/tests/test_versionpredicate<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_tests_test_versionpredicate.html> 7 1 0 86% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/text_file<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_text_file.html>10226075% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/unixccompiler<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_unixccompiler.html>14150065% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/util<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_util.html>25781068% /Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/version<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_version.html>10520081% 
/Users/bcannon/Repositories/devinabox/cpython/Lib/distutils/versionpredicate<_Users_bcannon_Repositories_devinabox_cpython_Lib_distutils_versionpredicate.html> 54 5 0 91% -------------- next part -------------- An HTML attachment was scrubbed... URL:
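For reference, the Cover column in a coverage.py report is derived from the other two numeric columns as (statements − missing) / statements, rounded to a whole percent. A minimal sketch (the helper name `cover_percent` is ours, not coverage.py's API) checking one row of the table above:

```python
def cover_percent(stmts, miss):
    """Rounded coverage percentage for one report row."""
    return round(100 * (stmts - miss) / stmts)

# distutils/dist: 544 statements, 137 missing
print(cover_percent(544, 137))  # -> 75

# distutils/msvccompiler: 364 statements, 307 missing
print(cover_percent(364, 307))  # -> 16
```

The third numeric column (excluded lines, via `# pragma: no cover`) is zero throughout this run, so it does not affect the percentages here.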