From ncoghlan at gmail.com Sun Nov 1 17:58:37 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 1 Nov 2015 23:58:37 +0100 Subject: [Distutils] Please don't impose additional barriers to participation (was: build system abstraction PEP) In-Reply-To: <20151031210832.1719d821@anarchist.wooz.org> References: <85h9lbmvpg.fsf_-_@benfinney.id.au> <20151031210832.1719d821@anarchist.wooz.org> Message-ID: On 1 November 2015 at 02:08, Barry Warsaw wrote: > On Oct 29, 2015, at 10:52 PM, Marcus Smith wrote: > >>> If python-dev ends up adopting GitLab for the main PEPs repo, then we >>> should be able to move the whole process there, rather than needing to >>> maintain a separate copy. >>> >>will that be as open as pypa/interoperability-peps? if it's closed off such >>that only python devs can log PRs against PEPs once they're in the system, >>then that's no fun. If so, I'd still want to keep pypa/interoperability-peps >>as the front end tool for the actual change management. > > The way I believe it would work via GitLab is that anybody can fork the repo, > push branches to their fork, and file PRs against the PEPs repo. Pushing to > the PEPs repo would be gated via privileged members of the repo's owner, > either via git push or button push (i.e. "hey website, I'm chillin' on the > beach with my tablet so do the merge for me!") Exactly. The case can be made it would be more open than GitHub, since folks should be able to log in with any of the identity providers GitLab supports: http://doc.gitlab.com/ce/integration/omniauth.html One of the relevant benefits of migrating to a full repository management server (whether that's GitHub or GitLab) is that we'll be able to get away from the current manual approach to managing SSH keys in favour of people uploading their own keys after authenticating with the main web interface. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Nov 1 18:12:54 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 2 Nov 2015 00:12:54 +0100 Subject: [Distutils] Brian Goetz - Stewardship: the Sobering Parts In-Reply-To: References: Message-ID: On 31 October 2015 at 14:15, Wayne Werner wrote: > First, do no harm, eh? I haven't had time to watch it yet so I don't have the full context of the observation, but that's only true if current users are considered categorically more important than future users. That's a dangerous line of thinking, as it means the cognitive burden of learning a language and ecosystem can only ever grow, and never shrink (since superseded concepts are never pruned from the set of things you need to learn, and you're also never really able to fix design mistakes resulting from limited perspectives in early iterations). Large scale migration projects like the shift away from implementation defined behaviour in the Python packaging ecosystem are cases where reducing barriers to entry for *new* users has edged out compatibility for existing users as a priority - the latter is still important, it's just acceptable for the level of compatibility to be less than 100%. Regards, Nick. P.S. From a medical perspective, there are certainly cases were doctors *do* inflict a lesser harm (e.g. amputations) to avoid a greater harm (e.g. death). "We saved the limb, but lost the patient" isn't one of the available options. 
From ncoghlan at gmail.com Sun Nov 1 18:45:12 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 2 Nov 2015 00:45:12 +0100 Subject: [Distutils] A smaller step towards de-specializing setuptools/distutils In-Reply-To: References: Message-ID: (Note: I'm still traveling, so I'm not fully caught up on all the threads, this one just caught my eye) On 30 October 2015 at 00:05, Donald Stufft wrote: > On October 29, 2015 at 6:59:00 PM, Robert Collins (robertc at robertcollins.net) wrote: >> On 30 October 2015 at 11:50, Robert Collins wrote: >> ... >> > However, the big step you're proposing that I think is fundamentally >> > unsound is that of requiring a wheel be built before dependencies can >> > be queried. Wheels are *expensive* to build in enough cases that I >> > really believe we have a choice between being able to fix issue 988 or >> > utilising wheels as the interface to get at install metadata. Remember >> > too that the numpy ABI discussion is going to cause install >> > dependencies to depend on the version of numpy present at build-time, >> > so the resolver is going to have to re-evaluate the dependencies for >> > those distributions when it selects a new numpy version: so its not >> > even a one-time-per-package cost, its a >> > one-time-per-arc-of-the-N!-graph-paths cost. >> >> On further thought there is one mitigating factor here - we'd have to >> be building numpy fully anyway for the cases where numpy is a >> build-dependency for things further up the stack, so even in my most >> paranoid world we will *anyway* have to pay that cost - but we can >> avoid the cost of multiple full rebuilds of C extensions built on top >> of numpy. I think avoiding that cost is still worth it, its just a >> little more into the grey area rather than being absolutely obviously >> worth it. >> > > We?d only need to pay that cost on the assumption that we don?t have a Wheel cached already right? Either in the machine local cache (~/.caches/pip/wheels/) or a process local cache (temp directory?). Since opting into the new build system mandates the ability to build wheels, for these items we can always assume we can build a wheel. > > I mentioned it on IRC, but just for the folks who aren?t on IRC, I?m also not dead set on requiring a wheel build during resolution. I did that because we currently have a bit of a race condition since we use ``setup.py egg_info`` to query the dependencies and then we run ``setup.py bdist_wheel`` to build the thing and the dependencies are not guaranteed to be the same between the invocation of these two commands. If we moved to always building Wheels then we eliminate the race condition and we make the required interface smaller. I wouldn?t be opposed to including something like ``setup.py dist-info`` in the interface if we included an assertion stating that, given no state on the machine changes (no additional packages installed, not invoking from a different Python, etc), that the metadata produced by that command must be equal to the metadata produced in the wheel, and we can put an assertion in pip to ensure it. I think this is the key. If the *core* build-and-dependency-resolution is defined in terms of: * building and caching wheels (so they get built at most once per venv, and potentially per machine) * inspecting their metadata for dependencies then "dist-info" can be introduced later as an optimisation that *just* generates the wheel metadata, without actually doing the full binary build. 
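As a rough illustration of the "inspecting their metadata for dependencies" step described above: a wheel is just a zip archive whose <name>-<version>.dist-info/METADATA file uses email-style headers, with one "Requires-Dist" entry per dependency. The sketch below is only that -- a sketch; the helper name and the example wheel filename are made up for illustration, and real pip is considerably more careful:

    import zipfile
    from email.parser import Parser

    def wheel_requires(wheel_path):
        # Locate the METADATA file inside the single *.dist-info/ directory
        # of the wheel and return its Requires-Dist entries.
        with zipfile.ZipFile(wheel_path) as wf:
            meta_name = next(name for name in wf.namelist()
                             if name.endswith(".dist-info/METADATA"))
            metadata = Parser().parsestr(wf.read(meta_name).decode("utf-8"))
        return metadata.get_all("Requires-Dist") or []

    # e.g. wheel_requires("example-1.0-py2.py3-none-any.whl")
    # might return ["requests (>=2.7)", "six"]

A later "dist-info" subcommand would only need to produce that same METADATA content, without running the full binary build first.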
For wheels we're downloading, we won't need to build them, and for wheels we're installing, we'll need to build them anyway. That means we can move defining the interface for a dist-info subcommand off the critical path for decoupling the build system, and instead use the existing directory based metadata format defined for wheel files. However, I also think there's one refinement we can make that lets us drop the need for a copy-and-paste "setup.py", *without* needing to define a programmatic build system API: let setup.cfg define a module name to invoke with "python -m " instead of running "setup.py". That way, instead of the static setup.py needing to be copied and pasted into each project, there's just another line in the config file like: [bootstrap] setup_requires = my_build_system setup_cli_module = my_build_system.setup_cli If setup_cli_module isn't specified, then pip will expect to find a setup.py in the project directory. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rmcgibbo at gmail.com Sun Nov 1 19:26:13 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Sun, 1 Nov 2015 16:26:13 -0800 Subject: [Distutils] Brian Goetz - Stewardship: the Sobering Parts In-Reply-To: References: Message-ID: Hi, Thanks for sharing that video, Donald. In context, I don't think it's fair to characterize the speaker's perspective as *dangerous*, or of categorically favoring current users over potential new users. Obviously there are a lot of tradeoffs around backward compatibility, and no one-sized-fits-all solutions. One of the most powerful points Brian made is that the large user base of Java (or Python, etc) is an immensely powerful source of leverage. Each incremental improvement to the language, standard library, or associated tools will almost immediately impact a lot of users and improve their lives. It's kind of an obvious point, but I think he expressed it very well. On that note, I lurk on this list regularly, but generally don't really contribute. I do see the awesome work that ya'll are putting in on a day to day basis, and I see the results in the wild. Thanks to all of your hard work, the packaging situation has vastly improved, and continues to do so, especially with pip and wheels. All your hard work has definitely made my life better, for one. Best, Robert On Sun, Nov 1, 2015 at 3:12 PM, Nick Coghlan wrote: > On 31 October 2015 at 14:15, Wayne Werner wrote: > > First, do no harm, eh? > > I haven't had time to watch it yet so I don't have the full context of > the observation, but that's only true if current users are considered > categorically more important than future users. That's a dangerous > line of thinking, as it means the cognitive burden of learning a > language and ecosystem can only ever grow, and never shrink (since > superseded concepts are never pruned from the set of things you need > to learn, and you're also never really able to fix design mistakes > resulting from limited perspectives in early iterations). > > Large scale migration projects like the shift away from implementation > defined behaviour in the Python packaging ecosystem are cases where > reducing barriers to entry for *new* users has edged out compatibility > for existing users as a priority - the latter is still important, it's > just acceptable for the level of compatibility to be less than 100%. > > Regards, > Nick. > > P.S. From a medical perspective, there are certainly cases were > doctors *do* inflict a lesser harm (e.g. 
amputations) to avoid a > greater harm (e.g. death). "We saved the limb, but lost the patient" > isn't one of the available options. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sun Nov 1 20:19:40 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 2 Nov 2015 14:19:40 +1300 Subject: [Distutils] A smaller step towards de-specializing setuptools/distutils In-Reply-To: References: Message-ID: On 2 November 2015 at 12:45, Nick Coghlan wrote: > (Note: I'm still traveling, so I'm not fully caught up on all the > threads, this one just caught my eye) ... >> We?d only need to pay that cost on the assumption that we don?t have a Wheel cached already right? Either in the machine local cache (~/.caches/pip/wheels/) or a process local cache (temp directory?). Since opting into the new build system mandates the ability to build wheels, for these items we can always assume we can build a wheel. >> >> I mentioned it on IRC, but just for the folks who aren?t on IRC, I?m also not dead set on requiring a wheel build during resolution. I did that because we currently have a bit of a race condition since we use ``setup.py egg_info`` to query the dependencies and then we run ``setup.py bdist_wheel`` to build the thing and the dependencies are not guaranteed to be the same between the invocation of these two commands. If we moved to always building Wheels then we eliminate the race condition and we make the required interface smaller. I wouldn?t be opposed to including something like ``setup.py dist-info`` in the interface if we included an assertion stating that, given no state on the machine changes (no additional packages installed, not invoking from a different Python, etc), that the metadata produced by that command must be equal to the metadata produced in the wheel, and we can put an assertion in pip to ensure it. > > I think this is the key. If the *core* build-and-dependency-resolution > is defined in terms of: > > * building and caching wheels (so they get built at most once per > venv, and potentially per machine) > * inspecting their metadata for dependencies > > then "dist-info" can be introduced later as an optimisation that > *just* generates the wheel metadata, without actually doing the full > binary build. For wheels we're downloading, we won't need to build > them, and for wheels we're installing, we'll need to build them > anyway. ... and for wheels we *might* install we'll have to build them unnecessarily. I'm very worried that folk are underestimating the cost of complex operations in a resolver context. > That means we can move defining the interface for a dist-info > subcommand off the critical path for decoupling the build system, and > instead use the existing directory based metadata format defined for > wheel files. We can do that anyway, if we choose to encode the existing interface rather than aim at a clean one. See https://mail.python.org/pipermail/distutils-sig/2015-October/027464.html for more discussion. > However, I also think there's one refinement we can make that lets us > drop the need for a copy-and-paste "setup.py", *without* needing to > define a programmatic build system API: let setup.cfg define a module > name to invoke with "python -m " instead of running > "setup.py". 
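To make the quoted idea concrete, here is one way an installer *might* interpret that [bootstrap] section, falling back to a conventional setup.py when setup_cli_module is absent. The option names come from the proposal above; none of this is existing pip behaviour, and installing setup_requires into a build environment first is deliberately left out of the sketch:

    import configparser
    import os
    import subprocess
    import sys

    def run_build_command(project_dir, args):
        cfg = configparser.ConfigParser()
        cfg.read(os.path.join(project_dir, "setup.cfg"))
        module = cfg.get("bootstrap", "setup_cli_module", fallback=None)
        if module:
            # Proposed behaviour: hand the command line to "python -m <module>".
            cmd = [sys.executable, "-m", module] + list(args)
        else:
            # Status quo: expect a setup.py in the project directory.
            cmd = [sys.executable, "setup.py"] + list(args)
        subprocess.check_call(cmd, cwd=project_dir)

    # run_build_command(".", ["bdist_wheel", "-d", "dist/"])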
This is now back to a bootstrap API which is what my PEP describes, just with different syntax wrapped around it. You can equally take my PEP and rather than piping metadata on stdout, have it write a directory and consume that. But - my PEP is stalled on the decision between what flit folk have asked for, and what Donald is concerned about (see the mail in the link above). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Sun Nov 1 20:21:16 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 1 Nov 2015 20:21:16 -0500 Subject: [Distutils] A smaller step towards de-specializing setuptools/distutils In-Reply-To: References: Message-ID: On November 1, 2015 at 6:45:16 PM, Nick Coghlan (ncoghlan at gmail.com) wrote: > > However, I also think there's one refinement we can make that > lets us > drop the need for a copy-and-paste "setup.py", *without* needing > to > define a programmatic build system API: let setup.cfg define > a module > name to invoke with "python -m " instead of running > "setup.py?. I think we should wait on this. We can always add it later, we can?t (easily) remove it. Defining the ``setup.py`` interface like I did for the /simple/ interface has benefits even completely removed from the goal of supporting alternative build systems. Once we get the details sorted out for how it affects the world of packaging to sanely allow alternative build systems, then we can figure out what it would look like to allow invocation without a setup.py script. Defining a brand new interface is a lot harder than defining the existing interface. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Mon Nov 2 10:29:15 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 02 Nov 2015 15:29:15 +0000 Subject: [Distutils] A smaller step towards de-specializing setuptools/distutils In-Reply-To: References: Message-ID: On Sun, Nov 1, 2015 at 8:21 PM Donald Stufft wrote: > On November 1, 2015 at 6:45:16 PM, Nick Coghlan (ncoghlan at gmail.com) > wrote: > > > However, I also think there's one refinement we can make that > > lets us > > drop the need for a copy-and-paste "setup.py", *without* needing > > to > > define a programmatic build system API: let setup.cfg define > > a module > > name to invoke with "python -m " instead of running > > "setup.py?. > > I think we should wait on this. We can always add it later, we can?t > (easily) remove it. Defining the ``setup.py`` interface like I did for the > /simple/ interface has benefits even completely removed from the goal of > supporting alternative build systems. Once we get the details sorted out > for how it affects the world of packaging to sanely allow alternative build > systems, then we can figure out what it would look like to allow invocation > without a setup.py script. > > Defining a brand new interface is a lot harder than defining the existing > interface. One problem with setup.py is that pip doesn't like it when egg_info produces a dist-info directory. Is it impossible to define setup.py dist-info as "write current-format wheel metadata to a target directory"? Or we could just standardize the egg-info requires.txt. A plain text list of requirements, one per line, needing no extras or markers is all that pip needs at this phase. It doesn't even mind if you put that into a .dist-info directory in the target folder. -------------- next part -------------- An HTML attachment was scrubbed... 
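Since the flat requires.txt Daniel describes is easiest to see with an example: it is just one requirement string per line, and any "[extra]" section header (which is where today's egg-info format keeps extras) marks material that is out of scope at this phase. A sketch of the consuming side, with illustrative paths only -- this is not pip's actual implementation:

    import os

    def read_flat_requires(metadata_dir):
        # Return the requirement strings from <metadata_dir>/requires.txt,
        # stopping at the first "[...]" header so that only the unconditional,
        # marker-free dependencies are considered.
        path = os.path.join(metadata_dir, "requires.txt")
        if not os.path.exists(path):
            return []
        requirements = []
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith("["):
                    break
                if line:
                    requirements.append(line)
        return requirements

    # A requires.txt containing:
    #     requests>=2.7
    #     six
    # yields ["requests>=2.7", "six"]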
URL: From chris.barker at noaa.gov Mon Nov 2 19:16:21 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 2 Nov 2015 16:16:21 -0800 Subject: [Distutils] Time for a setuptools_lite?? In-Reply-To: References: <5627C546.40004@ronnypfannschmidt.de> Message-ID: Sorry all -- on vacation and only semi-online for a while... I'd managed to miss Flit's creation, so I simply wasn't aware of it. > > Now that I've had a chance to remedy that oversight, yes, flit sounds > exactly like what I meant, so it could be worth recommending it ahead of a > full distutils/setuptools based setup.py for simple projects. > neither did I - and it does look intriguing, though not (yet) what I had in mind. "my" idea of setuptool-lite would be a drop-in replacement for the parts of setuptools that we want to continue to support. i.e. not nearly as crippled as flit. That being said, *I* don't have any use for pkg_resources, so maybe flit is pretty close :-) but sdists are key. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Nov 2 20:57:35 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 2 Nov 2015 17:57:35 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: [Adding distutils-sig to the CC as a heads-up. The context is that numpy is looking at deprecating the use of 'python setup.py install' and enforcing the use of 'pip install .' instead, and running into some issues that will probably need to be addressed if 'pip install .' is going to become the standard interface to work with source trees.] On Sun, Nov 1, 2015 at 3:16 PM, Ralf Gommers wrote: [...] > Hmm, after some more testing I'm going to have to bring up a few concerns > myself: > > 1. ``pip install .`` still has a clear bug; it starts by copying everything > (including .git/ !) to a tempdir with shutil, which is very slow. And the > fix for that will go via ``setup.py sdist``, which is still slow. Ugh. If 'pip (install/wheel) .' is supposed to become the standard way to build things, then it should probably build in-place by default. Working in a temp dir makes perfect sense for 'pip install ' or 'pip install ', but if the user supplies an actual named on-disk directory then presumably the user is expecting this directory to be used, and to be able to take advantage of incremental rebuilds etc., no? > 2. ``pip install .`` silences build output, which may make sense for some > usecases, but for numpy it just sits there for minutes with no output after > printing "Running setup.py install for numpy". Users will think it hangs and > Ctrl-C it. https://github.com/pypa/pip/issues/2732 I tend to agree with the commentary there that for end users this is different but no worse than the current situation where we spit out pages of "errors" that don't mean anything :-). I posted a suggestion on that bug that might help with the apparent hanging problem. > 3. ``pip install .`` refuses to upgrade an already installed development > version. For released versions that makes sense, but if I'm in a git tree > then I don't want it to refuse because 1.11.0.dev0+githash1 compares equal > to 1.11.0.dev0+githash2. Especially after waiting a few minutes, see (1). 
Ugh, this is clearly just a bug -- `pip install .` should always unconditionally install, IMO. (Did you file a bug yet?) At least the workaround is just 'pip uninstall numpy; pip install .', which is still better the running 'setup.py install' and having it blithely overwrite some files and not others. The first and last issue seem like ones that will mostly only affect developers, who should mostly have the ability to deal with these weird issues (or just use setup.py install --force if that's what they prefer)? This still seems like a reasonable trade-off to me if it also has the effect of reducing the number of weird broken installs among our thousands-of-times-larger userbase. -n -- Nathaniel J. Smith -- http://vorpus.org From qwcode at gmail.com Mon Nov 2 21:13:27 2015 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 2 Nov 2015 18:13:27 -0800 Subject: [Distutils] PyPA Roadmap Message-ID: Based on discussions in another thread [1], I've posted a PR to pypa.io for a "PyPA Roadmap" PR: https://github.com/pypa/pypa.io/pull/7 built version: http://pypaio.readthedocs.org/en/roadmap/roadmap/ To be clear, I'm not trying to dictate anything here, but rather just trying to mirror what I think is going on for the sake of new (or old) people, who don't have a full picture of the major todo items. I'm asking for help to make this as accurate as possible and to keep it accurate as our plans change. thanks, Marcus [1] https://mail.python.org/pipermail/distutils-sig/2015-October/027346.html , although it seems a number of emails in this thread never made it to the archive due to the python mail server failure. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Nov 2 21:51:19 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 3 Nov 2015 15:51:19 +1300 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: On 3 November 2015 at 14:57, Nathaniel Smith wrote: > [Adding distutils-sig to the CC as a heads-up. The context is that > numpy is looking at deprecating the use of 'python setup.py install' > and enforcing the use of 'pip install .' instead, and running into > some issues that will probably need to be addressed if 'pip install .' > is going to become the standard interface to work with source trees.] > > On Sun, Nov 1, 2015 at 3:16 PM, Ralf Gommers wrote: > [...] >> Hmm, after some more testing I'm going to have to bring up a few concerns >> myself: >> >> 1. ``pip install .`` still has a clear bug; it starts by copying everything >> (including .git/ !) to a tempdir with shutil, which is very slow. And the >> fix for that will go via ``setup.py sdist``, which is still slow. > > Ugh. If 'pip (install/wheel) .' is supposed to become the standard way > to build things, then it should probably build in-place by default. > Working in a temp dir makes perfect sense for 'pip install > ' or 'pip install ', but if the user supplies an > actual named on-disk directory then presumably the user is expecting > this directory to be used, and to be able to take advantage of > incremental rebuilds etc., no? Thats what 'pip install -e .' does. 'setup.py develop' -> 'pip install -e .' >> 3. ``pip install .`` refuses to upgrade an already installed development >> version. 
For released versions that makes sense, but if I'm in a git tree >> then I don't want it to refuse because 1.11.0.dev0+githash1 compares equal >> to 1.11.0.dev0+githash2. Especially after waiting a few minutes, see (1). > > Ugh, this is clearly just a bug -- `pip install .` should always > unconditionally install, IMO. (Did you file a bug yet?) At least the > workaround is just 'pip uninstall numpy; pip install .', which is > still better the running 'setup.py install' and having it blithely > overwrite some files and not others. There is a bug open. https://github.com/pypa/pip/issues/536 -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Mon Nov 2 22:02:30 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 2 Nov 2015 19:02:30 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: On Nov 2, 2015 6:51 PM, "Robert Collins" wrote: > > On 3 November 2015 at 14:57, Nathaniel Smith wrote: > > [Adding distutils-sig to the CC as a heads-up. The context is that > > numpy is looking at deprecating the use of 'python setup.py install' > > and enforcing the use of 'pip install .' instead, and running into > > some issues that will probably need to be addressed if 'pip install .' > > is going to become the standard interface to work with source trees.] > > > > On Sun, Nov 1, 2015 at 3:16 PM, Ralf Gommers wrote: > > [...] > >> Hmm, after some more testing I'm going to have to bring up a few concerns > >> myself: > >> > >> 1. ``pip install .`` still has a clear bug; it starts by copying everything > >> (including .git/ !) to a tempdir with shutil, which is very slow. And the > >> fix for that will go via ``setup.py sdist``, which is still slow. > > > > Ugh. If 'pip (install/wheel) .' is supposed to become the standard way > > to build things, then it should probably build in-place by default. > > Working in a temp dir makes perfect sense for 'pip install > > ' or 'pip install ', but if the user supplies an > > actual named on-disk directory then presumably the user is expecting > > this directory to be used, and to be able to take advantage of > > incremental rebuilds etc., no? > > Thats what 'pip install -e .' does. 'setup.py develop' -> 'pip install -e .' I'm not talking about in place installs, I'm talking about e.g. building a wheel and then tweaking one file and rebuilding -- traditionally build systems go to some effort to keep track of intermediate artifacts and reuse them across builds when possible, but if you always copy the source tree into a temporary directory before building then there's not much the build system can do. > >> 3. ``pip install .`` refuses to upgrade an already installed development > >> version. For released versions that makes sense, but if I'm in a git tree > >> then I don't want it to refuse because 1.11.0.dev0+githash1 compares equal > >> to 1.11.0.dev0+githash2. Especially after waiting a few minutes, see (1). > > > > Ugh, this is clearly just a bug -- `pip install .` should always > > unconditionally install, IMO. (Did you file a bug yet?) At least the > > workaround is just 'pip uninstall numpy; pip install .', which is > > still better the running 'setup.py install' and having it blithely > > overwrite some files and not others. > > There is a bug open. https://github.com/pypa/pip/issues/536 Thanks! -n -------------- next part -------------- An HTML attachment was scrubbed... 
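For readers unfamiliar with the incremental-rebuild bookkeeping being referred to here, the core of it is simply "skip work whose inputs haven't changed since the last build", e.g. a make-style timestamp comparison like the toy check below (illustrative only; real build systems track far more state than this). Building in a throwaway copy of the source tree means none of that recorded state survives from one build to the next, so every build effectively starts cold:

    import os

    def needs_rebuild(target, sources):
        # Rebuild if the artifact is missing or any of its inputs is newer.
        if not os.path.exists(target):
            return True
        target_mtime = os.path.getmtime(target)
        return any(os.path.getmtime(src) > target_mtime for src in sources)

    # e.g. needs_rebuild("build/_example.so", ["src/example.c", "src/example.h"])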
URL: From robertc at robertcollins.net Mon Nov 2 22:05:02 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 3 Nov 2015 16:05:02 +1300 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: On 3 November 2015 at 16:02, Nathaniel Smith wrote: > On Nov 2, 2015 6:51 PM, "Robert Collins" wrote: ... >> > Ugh. If 'pip (install/wheel) .' is supposed to become the standard way >> > to build things, then it should probably build in-place by default. >> > Working in a temp dir makes perfect sense for 'pip install >> > ' or 'pip install ', but if the user supplies an >> > actual named on-disk directory then presumably the user is expecting >> > this directory to be used, and to be able to take advantage of >> > incremental rebuilds etc., no? >> >> Thats what 'pip install -e .' does. 'setup.py develop' -> 'pip install -e >> .' > > I'm not talking about in place installs, I'm talking about e.g. building a > wheel and then tweaking one file and rebuilding -- traditionally build > systems go to some effort to keep track of intermediate artifacts and reuse > them across builds when possible, but if you always copy the source tree > into a temporary directory before building then there's not much the build > system can do. Ah yes. So I don't think pip should do what it does. It a violation of the abstractions we all want to see within it. However its not me you need to convince ;). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Mon Nov 2 22:06:04 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 3 Nov 2015 16:06:04 +1300 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: BTW scipy list is rejecting all my emails (vs eg moderating), so I'm going to drop the cc in all future replies. -Rob On 3 November 2015 at 16:05, Robert Collins wrote: > On 3 November 2015 at 16:02, Nathaniel Smith wrote: >> On Nov 2, 2015 6:51 PM, "Robert Collins" wrote: > ... >>> > Ugh. If 'pip (install/wheel) .' is supposed to become the standard way >>> > to build things, then it should probably build in-place by default. >>> > Working in a temp dir makes perfect sense for 'pip install >>> > ' or 'pip install ', but if the user supplies an >>> > actual named on-disk directory then presumably the user is expecting >>> > this directory to be used, and to be able to take advantage of >>> > incremental rebuilds etc., no? >>> >>> Thats what 'pip install -e .' does. 'setup.py develop' -> 'pip install -e >>> .' >> >> I'm not talking about in place installs, I'm talking about e.g. building a >> wheel and then tweaking one file and rebuilding -- traditionally build >> systems go to some effort to keep track of intermediate artifacts and reuse >> them across builds when possible, but if you always copy the source tree >> into a temporary directory before building then there's not much the build >> system can do. > > Ah yes. So I don't think pip should do what it does. It a violation of > the abstractions we all want to see within it. However its not me you > need to convince ;). 
> > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud -- Robert Collins Distinguished Technologist HP Converged Cloud From marius at gedmin.as Tue Nov 3 01:45:53 2015 From: marius at gedmin.as (Marius Gedminas) Date: Tue, 3 Nov 2015 08:45:53 +0200 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: <20151103064553.GA18332@platonas> On Mon, Nov 02, 2015 at 06:13:27PM -0800, Marcus Smith wrote: > Based on discussions in another thread [1], I've posted a PR to pypa.io for > a "PyPA Roadmap" > > PR: https://github.com/pypa/pypa.io/pull/7 > built version: http://pypaio.readthedocs.org/en/roadmap/roadmap/ > > To be clear, I'm not trying to dictate anything here, but rather just > trying to mirror what I think is going on for the sake of new (or old) > people, who don't have a full picture of the major todo items. > > I'm asking for help to make this as accurate as possible and to keep it > accurate as our plans change. Shouldn't Warehouse be mentioned there? Marius Gedminas -- Initially, there were few or no postal regulations governing packages mailed parcel post. To construct a bank in Vernal, Utah in 1916, a Salt Lake City Company figured out that the cheapest way to send 40 tons of bricks to the building was by Parcel Post. Each brick was individually wrapped & mailed. Postal rules were promptly rewritten. -- http://en.wikipedia.org/wiki/United_States_Postal_Service -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From qwcode at gmail.com Tue Nov 3 10:29:45 2015 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 3 Nov 2015 07:29:45 -0800 Subject: [Distutils] PyPA Roadmap In-Reply-To: <20151103064553.GA18332@platonas> References: <20151103064553.GA18332@platonas> Message-ID: > > > > Shouldn't Warehouse be mentioned there? > Indeed. I'll add it. thanks Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Nov 3 11:25:48 2015 From: brett at python.org (Brett Cannon) Date: Tue, 03 Nov 2015 16:25:48 +0000 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: Thanks for writing the roadmap, Marcus! Heck of a lot easier to understand what is or is not sitting at a PEP at the moment. On Mon, 2 Nov 2015 at 18:13 Marcus Smith wrote: > Based on discussions in another thread [1], I've posted a PR to pypa.io > for a "PyPA Roadmap" > > PR: https://github.com/pypa/pypa.io/pull/7 > built version: http://pypaio.readthedocs.org/en/roadmap/roadmap/ > > To be clear, I'm not trying to dictate anything here, but rather just > trying to mirror what I think is going on for the sake of new (or old) > people, who don't have a full picture of the major todo items. 
> > I'm asking for help to make this as accurate as possible and to keep it > accurate as our plans change. > > thanks, > Marcus > > [1] > https://mail.python.org/pipermail/distutils-sig/2015-October/027346.html > , although it seems a number of emails in this thread never made it to the > archive due to the python mail server failure. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Nov 3 12:10:16 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 3 Nov 2015 09:10:16 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: <561702487663958680@unknownmsgid> >> I'm not talking about in place installs, I'm talking about e.g. building a >> wheel and then tweaking one file and rebuilding -- traditionally build >> systems go to some effort to keep track of intermediate artifacts and reuse >> them across builds when possible, but if you always copy the source tree >> into a temporary directory before building then there's not much the build >> system can do. This strikes me as an optimization -- is it an important one? If I'm doing a lot of tweaking and re-running, I'm usually in develop mode. I can see that when you build a wheel, you may build it, test it, discover an wheel-specific error, and then need to repeat the cycle -- but is that a major use-case? That being said, I have been pretty frustrated debugging conda-build scripts -- there is a lot of overhead setting up the build environment each time you do a build... But with wheel building there is much less overhead, and far fewer complications requiring the edit-build cycle. And couldn't make-style this-has-already-been-done checking happen with a copy anyway? CHB > Ah yes. So I don't think pip should do what it does. It a violation of > the abstractions we all want to see within it. However its not me you > need to convince ;). > > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Tue Nov 3 13:57:04 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 Nov 2015 18:57:04 +0000 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: On 3 November 2015 at 16:25, Brett Cannon wrote: > Thanks for writing the roadmap, Marcus! Heck of a lot easier to understand > what is or is not sitting at a PEP at the moment. Agreed. I'll have a proper read through it at some point and see if there's anything I can add, but on a brief read it looks good. One question - would it be worth having a "design principles" section for notes about things like how we want to allow Warehouse/PyPI to publish metadata from the distribution files, so distribution formats should include static metadata for those values? Ultimately that sort of thing would end up in a PEP or spec, but at the moment, it's typically buried in ML threads and/or people's minds. The problem is that it would be much more tentative than the other data you've recorded, and might not fit well alongside it. 
Paul From wes.turner at gmail.com Tue Nov 3 15:55:09 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 3 Nov 2015 14:55:09 -0600 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: thanks! http://pypaio.readthedocs.org/en/roadmap/roadmap/ PEP 426 JSONLD [1] affects/could help solve for / should be aware of: - http://pypaio.readthedocs.org/en/roadmap/roadmap/#metadata-2-0 - http://pypaio.readthedocs.org/en/roadmap/roadmap/#metadata-extensions - http://pypaio.readthedocs.org/en/roadmap/roadmap/#build-neutrality - http://pypaio.readthedocs.org/en/roadmap/roadmap/#source-distribution-2-0 - http://pypaio.readthedocs.org/en/roadmap/roadmap/#installation-database-updates - http://pypaio.readthedocs.org/en/roadmap/roadmap/#common-filename-scheme ... PyPi.Python.org / warehouse.python.org 'route' URIs and resolvable resource URLs [1] https://github.com/pypa/interoperability-peps/issues/31 #PEP426JSONLD On Nov 2, 2015 8:13 PM, "Marcus Smith" wrote: > Based on discussions in another thread [1], I've posted a PR to pypa.io > for a "PyPA Roadmap" > > PR: https://github.com/pypa/pypa.io/pull/7 > built version: http://pypaio.readthedocs.org/en/roadmap/roadmap/ > > To be clear, I'm not trying to dictate anything here, but rather just > trying to mirror what I think is going on for the sake of new (or old) > people, who don't have a full picture of the major todo items. > > I'm asking for help to make this as accurate as possible and to keep it > accurate as our plans change. > > thanks, > Marcus > > [1] > https://mail.python.org/pipermail/distutils-sig/2015-October/027346.html > , although it seems a number of emails in this thread never made it to the > archive due to the python mail server failure. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Wed Nov 4 00:06:39 2015 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 3 Nov 2015 21:06:39 -0800 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: > > > One question - would it be worth having a "design principles" section > for notes about things like how we want to allow Warehouse/PyPI to > publish metadata from the distribution files, so distribution formats > should include static metadata for those values? maybe. we can certainly beef up the summaries and status comments. I'm wary that trying to maintain "design principles" is too much for this document (at least more than I'd want to maintain). btw, I just added a sentence to the sdist section about the static metadata discussion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Nov 4 10:17:08 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 4 Nov 2015 15:17:08 +0000 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: On 4 November 2015 at 05:06, Marcus Smith wrote: > I'm wary that trying to maintain "design principles" is too much for this > document (at least more than I'd want to maintain). That was my concern too. Let's wait & see how things play out. > btw, I just added a sentence to the sdist section about the static metadata > discussion. Thanks! That sounds like a good place to put this point. 
Paul From guettliml at thomas-guettler.de Wed Nov 4 15:00:58 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 4 Nov 2015 21:00:58 +0100 Subject: [Distutils] The Update Framework, integrate into PyPI? Message-ID: <563A63FA.7020402@thomas-guettler.de> I read the RoadMap (Thank you Marcus Smith) and came across this: > An effort to integrate PyPI with the ?The Update Framework? (TUF). This is specified in PEP458 I see a trend to immutable systems everywhere. Updates are a pain. Building new systems is easier. With current hardware and good software it is easier to build new systems instead of updating existing systems. It is like from pets to cattle: - pets: you give them names and care for them (do updates) - cattle: you give them numbers and if they get ill you get rid of them. Maybe I am missing something. But why is there an effort to create "The Update Framework?, and why integrate it with pypi? Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From guettliml at thomas-guettler.de Wed Nov 4 15:07:47 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 4 Nov 2015 21:07:47 +0100 Subject: [Distutils] Why github and bitbucket? Message-ID: <563A6593.1010601@thomas-guettler.de> From: http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ > Python Packaging Authority (PyPA) > PyPA is a working group that maintains many of the relevant projects in Python packaging. They maintain a site at https://www.pypa.io, host projects on github and bitbucket, and discuss issues on the pypa-dev mailing list. Why are there pypa on github and bitbucket? Is one the master and the other the mirror? Or does one host part A and the other hosts part B? Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From guettliml at thomas-guettler.de Wed Nov 4 15:13:23 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 4 Nov 2015 21:13:23 +0100 Subject: [Distutils] Which Build Distribution Formats do exist? Message-ID: <563A66E3.9000503@thomas-guettler.de> >From http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ > Egg > A Built Distribution format introduced by setuptools, which is being replaced by Wheel. Which other Built Distribution formats do exist beside egg and wheel? Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From graffatcolmingov at gmail.com Wed Nov 4 15:14:53 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Wed, 4 Nov 2015 14:14:53 -0600 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <563A6593.1010601@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> Message-ID: As I understand it, some people prefer Mercurial. Those projects tend to live on bitbucket. Git projects can live in either place although I suspect they tend to live on GitHub instead. On Wed, Nov 4, 2015 at 2:07 PM, Thomas G?ttler wrote: > From: http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ > >> Python Packaging Authority (PyPA) >> PyPA is a working group that maintains many of the relevant projects in Python packaging. They maintain a site at https://www.pypa.io, host projects on github and bitbucket, and discuss issues on the pypa-dev mailing list. > > Why are there pypa on github and bitbucket? > > Is one the master and the other the mirror? > > Or does one host part A and the other hosts part B? 
> > Regards, > Thomas G?ttler > > > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From njs at pobox.com Wed Nov 4 15:25:30 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 4 Nov 2015 12:25:30 -0800 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: <563A63FA.7020402@thomas-guettler.de> References: <563A63FA.7020402@thomas-guettler.de> Message-ID: Hi Thomas, It's great you're so enthusiastic about python packaging and distribution, but it might be good to keep in mind that there are a lot of people reading these lists, and answering basic questions can take time away from making important improvements? In this case, a quick google of "the update framework" or skimming of the referenced PEP 458 would have revealed that TUF is totally orthogonal to the kinds of updates that you're worried about -- it's about building a cryptographic framework to let you reliably identify what the latest version of some software is, even if e.g. someone has broken into pypi and tried to add backdoors to the software there, which is important no matter what strategy you then use to deploy those updates. In fact possibly the largest deployment of TUF is the version built into docker's latest release, to help you securely pick a good base image. -n On Nov 4, 2015 12:06 PM, "Thomas G?ttler" wrote: > I read the RoadMap (Thank you Marcus Smith) and came across this: > > > An effort to integrate PyPI with the ?The Update Framework? (TUF). This > is specified in PEP458 > > I see a trend to immutable systems everywhere. Updates are a pain. Building > new systems is easier. With current hardware and good software it is easier > to build new systems instead of updating existing systems. > > It is like from pets to cattle: > > - pets: you give them names and care for them (do updates) > - cattle: you give them numbers and if they get ill you get rid of them. > > Maybe I am missing something. But why is there an > effort to create "The Update Framework?, and why integrate > it with pypi? > > Regards, > Thomas G?ttler > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cournape at gmail.com Wed Nov 4 15:35:31 2015 From: cournape at gmail.com (David Cournapeau) Date: Wed, 4 Nov 2015 20:35:31 +0000 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: <563A63FA.7020402@thomas-guettler.de> References: <563A63FA.7020402@thomas-guettler.de> Message-ID: On Wed, Nov 4, 2015 at 8:00 PM, Thomas G?ttler wrote: > I read the RoadMap (Thank you Marcus Smith) and came across this: > > > An effort to integrate PyPI with the ?The Update Framework? (TUF). This > is specified in PEP458 > > I see a trend to immutable systems everywhere. Not everywhere. Keep in mind that there are a *lot* of different usecases for packaging/deployment. Not just web app, not just CLI tools, etc... For example, it is common for modern end user applications to use an auto-update feature (e.g. chrome). David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Wed Nov 4 18:00:14 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 4 Nov 2015 15:00:14 -0800 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: References: <563A63FA.7020402@thomas-guettler.de> Message-ID: > > answering basic questions can take time away from making important > improvements? > to be fair, distutils-sig is mentioned as a user support list on the "Python Packaging User Guide" a few years back, there was a debate on splitting it between a user and planning list, but no traction there. one concern was that the user list wouldn't have enough experts participating to answer the questions. --Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Wed Nov 4 18:09:12 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Wed, 04 Nov 2015 18:09:12 -0500 Subject: [Distutils] Which Build Distribution Formats do exist? In-Reply-To: <563A66E3.9000503@thomas-guettler.de> References: <563A66E3.9000503@thomas-guettler.de> Message-ID: <563A9018.3000806@sdamon.com> On 11/4/2015 15:13, Thomas G?ttler wrote: > From http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ > >> Egg >> A Built Distribution format introduced by setuptools, which is being replaced by Wheel. > Which other Built Distribution formats do exist beside egg and wheel? > > Regards, > Thomas G?ttler > > EXE installers for windows. From qwcode at gmail.com Wed Nov 4 19:42:47 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 4 Nov 2015 16:42:47 -0800 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: FYI, I went ahead and merged it. https://www.pypa.io/en/latest/roadmap/ Again, help appreciated from anyone to keep it accurate as things change (and they surely will) --Marcus -------------- next part -------------- An HTML attachment was scrubbed... URL: From leorochael at gmail.com Thu Nov 5 08:12:21 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Thu, 5 Nov 2015 11:12:21 -0200 Subject: [Distutils] Which Build Distribution Formats do exist? In-Reply-To: <563A9018.3000806@sdamon.com> References: <563A66E3.9000503@thomas-guettler.de> <563A9018.3000806@sdamon.com> Message-ID: There are other formats also. This distutils doc explain the "native" ones: https://docs.python.org/2/distutils/builtdist.html On 4 November 2015 at 21:09, Alexander Walters wrote: > > > On 11/4/2015 15:13, Thomas G?ttler wrote: > >> From >> http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ >> >> Egg >>> A Built Distribution format introduced by setuptools, which is being >>> replaced by Wheel. >>> >> Which other Built Distribution formats do exist beside egg and wheel? >> >> Regards, >> Thomas G?ttler >> >> >> EXE installers for windows. > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guettliml at thomas-guettler.de Thu Nov 5 14:27:15 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 20:27:15 +0100 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> Message-ID: <563BAD93.6000500@thomas-guettler.de> Am 04.11.2015 um 21:14 schrieb Ian Cordasco: > As I understand it, some people prefer Mercurial. Those projects tend > to live on bitbucket. 
Git projects can live in either place although I > suspect they tend to live on GitHub instead. Is there really a need for this? Which parts are on bitbucket and which are on github? Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From graffatcolmingov at gmail.com Thu Nov 5 14:35:02 2015 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Thu, 5 Nov 2015 13:35:02 -0600 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <563BAD93.6000500@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> Message-ID: On Thu, Nov 5, 2015 at 1:27 PM, Thomas G?ttler wrote: > Am 04.11.2015 um 21:14 schrieb Ian Cordasco: >> As I understand it, some people prefer Mercurial. Those projects tend >> to live on bitbucket. Git projects can live in either place although I >> suspect they tend to live on GitHub instead. > > Is there really a need for this? Is there a need for letting project creators work as they please with the VCS they prefer? Why not if it makes them more efficient maintainers. From donald at stufft.io Thu Nov 5 14:36:29 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 14:36:29 -0500 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <563BAD93.6000500@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> Message-ID: On November 5, 2015 at 2:27:36 PM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: > Am 04.11.2015 um 21:14 schrieb Ian Cordasco: > > As I understand it, some people prefer Mercurial. Those projects tend > > to live on bitbucket. Git projects can live in either place although I > > suspect they tend to live on GitHub instead. > > Is there really a need for this? > > Which parts are on bitbucket and which are on github? > PyPA is not really a ?top down? organization in a way that we can just dictate what VCS (or VCS hosting) that the sub projects are allowed to use. The only rules PyPA can have are ones that all of the maintainers of the sub projects agree with. Thus, if we want to standardize around one VCS or VCS hosting location we?d need everyone who isn?t conforming to that to change, and if they are OK with changing there is nothing stopping them from moving right now and having a defacto standard immediately. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From opensource at ronnypfannschmidt.de Thu Nov 5 14:31:48 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Thu, 05 Nov 2015 20:31:48 +0100 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <563BAD93.6000500@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> Message-ID: <61E1A016-860C-44CE-B9DE-80F48DF9B8C3@ronnypfannschmidt.de> I'm slowly working on something to transfer the issues, then it might be feasible to move things into one place as people agree. However currently I'm without a personal PC, so no open source work for me -- Ronny Am 5. November 2015 20:27:15 MEZ, schrieb "Thomas G?ttler" : >Am 04.11.2015 um 21:14 schrieb Ian Cordasco: >> As I understand it, some people prefer Mercurial. Those projects tend >> to live on bitbucket. Git projects can live in either place although >I >> suspect they tend to live on GitHub instead. > >Is there really a need for this? > >Which parts are on bitbucket and which are on github? 
> >Regards, > Thomas G?ttler > >-- >http://www.thomas-guettler.de/ >_______________________________________________ >Distutils-SIG maillist - Distutils-SIG at python.org >https://mail.python.org/mailman/listinfo/distutils-sig MFG Ronny From guettliml at thomas-guettler.de Thu Nov 5 14:42:59 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 20:42:59 +0100 Subject: [Distutils] Which Build Distribution Formats do exist? In-Reply-To: References: <563A66E3.9000503@thomas-guettler.de> <563A9018.3000806@sdamon.com> Message-ID: <563BB143.9050102@thomas-guettler.de> Am 05.11.2015 um 14:12 schrieb Leonardo Rochael Almeida: > There are other formats also. This distutils doc explain the "native" ones: > > https://docs.python.org/2/distutils/builtdist.html The PyPUG tells me to use setuptools. Now I feel on unsafe ground if I read docs from a tool I don't use (distutils). Let's see if setuptools has docs: https://pythonhosted.org/setuptools/search.html?q=Build+Distribution+Format ... no matches found. What's wrong here? It seems that there are no docs for all supported bdist formats of setuptools. Is it too much to want something like this? Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From robertc at robertcollins.net Thu Nov 5 14:49:55 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 6 Nov 2015 08:49:55 +1300 Subject: [Distutils] Which Build Distribution Formats do exist? In-Reply-To: <563A66E3.9000503@thomas-guettler.de> References: <563A66E3.9000503@thomas-guettler.de> Message-ID: There are also third party things like freeze and py2app. -Rob On 5 November 2015 at 09:13, Thomas G?ttler wrote: > From http://python-packaging-user-guide.readthedocs.org/en/latest/glossary/ > >> Egg >> A Built Distribution format introduced by setuptools, which is being replaced by Wheel. > > Which other Built Distribution formats do exist beside egg and wheel? > > Regards, > Thomas G?ttler > > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -- Robert Collins Distinguished Technologist HP Converged Cloud From guettliml at thomas-guettler.de Thu Nov 5 14:51:24 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 20:51:24 +0100 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: References: <563A63FA.7020402@thomas-guettler.de> Message-ID: <563BB33C.3050204@thomas-guettler.de> Am 04.11.2015 um 21:25 schrieb Nathaniel Smith: > Hi Thomas, > > It's great you're so enthusiastic about python packaging and distribution, but it might be good to keep in mind that there are a lot of people reading these lists, and answering basic questions can take time away from making important improvements? Do you know how many developers try to understand the magic of python packaging every day? My guess is that 99% of all new comers get confused by the current docs. Somehow I have the need to speak it out. I have no clue how to improve it. That's why I ask here. If you don't like my question ... sorry they are just a mirror of the current state of the docs. For me the basic docs are more important than the "important improvements". 
Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From donald at stufft.io Thu Nov 5 15:08:25 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 15:08:25 -0500 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: <563BB33C.3050204@thomas-guettler.de> References: <563A63FA.7020402@thomas-guettler.de> <563BB33C.3050204@thomas-guettler.de> Message-ID: On November 5, 2015 at 2:51:46 PM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: > Am 04.11.2015 um 21:25 schrieb Nathaniel Smith: > > Hi Thomas, > > > > It's great you're so enthusiastic about python packaging and distribution, but it > might be good to keep in mind that there are a lot of people reading these lists, and answering > basic questions can take time away from making important improvements? > > Do you know how many developers try to understand the magic of python packaging every > day? > > My guess is that 99% of all new comers get confused by the current docs. > > Somehow I have the need to speak it out. I have no clue how to improve it. That's why I ask > here. > > If you don't like my question ... sorry they are just a mirror of the current state of the > docs. > > For me the basic docs are more important than the "important improvements". > For what it?s worth the road map is not going to be targeted at beginners. It?s going to be targeted more towards people who are already knowledgable and give them a way to see what?s on the pipeline for improvement. For end users they are likely to never see the words ?TUF? or ?The Update Framework?, it?ll be an implementation detail. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From guettliml at thomas-guettler.de Thu Nov 5 15:17:54 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 21:17:54 +0100 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> Message-ID: <563BB972.2030501@thomas-guettler.de> Am 05.11.2015 um 20:36 schrieb Donald Stufft: > On November 5, 2015 at 2:27:36 PM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: >> Am 04.11.2015 um 21:14 schrieb Ian Cordasco: >>> As I understand it, some people prefer Mercurial. Those projects tend >>> to live on bitbucket. Git projects can live in either place although I >>> suspect they tend to live on GitHub instead. >> >> Is there really a need for this? >> >> Which parts are on bitbucket and which are on github? >> > > PyPA is not really a ?top down? organization in a way that we can just dictate what VCS (or VCS hosting) that the sub projects are allowed to use. > The only rules PyPA can have are ones that all of the maintainers of the sub projects agree with. > Thus, if we want to standardize around one VCS or VCS hosting location we?d need everyone who isn?t conforming to that to change, and if they are OK with changing there is nothing stopping them from moving right now and having a defacto standard immediately. I just ask myself why this was not done from the start: find a consensus. Of course I see that it is very hard to change the current state. It's a mess. Regards, Thomas -- http://www.thomas-guettler.de/ From donald at stufft.io Thu Nov 5 15:31:47 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 15:31:47 -0500 Subject: [Distutils] Why github and bitbucket? 
In-Reply-To: <563BB972.2030501@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BB972.2030501@thomas-guettler.de> Message-ID: On November 5, 2015 at 3:18:19 PM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: > > I just ask myself why this was not done from the start: find a consensus. > Of course I see that it is very hard to > change the current state. Basically: Historical reasons. The name ?PyPA? was a joke by the pip/virtualenv developers and it was only pip and virtualenv so it was on Github. At some point setuptools started being maintained again but it was in Hg on Bitbucket. Those two projects already existed and packaging started to improve and we sort of adopted the ?PyPA? moniker for everything. At that point projects were already in a particular VCS and it was easier to just allow people to pick what VCS they wanted their project to use than attempt to standardize on one. It?s not really that big of a deal though, on the list of things to worry about it?s pretty low on it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From qwcode at gmail.com Thu Nov 5 15:40:46 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Nov 2015 12:40:46 -0800 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BB972.2030501@thomas-guettler.de> Message-ID: > > > > Basically: Historical reasons. The name ?PyPA? was a joke by the > pip/virtualenv developers and it was only pip and virtualenv so it was on > Github. here's an anecdote.... per the pypa.io history page, 'Other proposed names were ?ianb-ng?, ?cabal?, ?pack? and ?Ministry of Installation?' https://www.pypa.io/en/latest/history/ maybe even funnier that we have a history page, but it's easy to forget all that's happened, so I made one awhile back... -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Thu Nov 5 15:43:29 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Nov 2015 20:43:29 +0000 Subject: [Distutils] The Update Framework, integrate into PyPI? In-Reply-To: <563BB33C.3050204@thomas-guettler.de> References: <563A63FA.7020402@thomas-guettler.de> <563BB33C.3050204@thomas-guettler.de> Message-ID: On 5 November 2015 at 19:51, Thomas G?ttler wrote: > My guess is that 99% of all new comers get confused by the current docs. My guess (no more or less accurate than yours!) is that very few new users read the docs. Maybe they get confused by the UI of the tools, maybe not. But improving docs they don't read wouldn't help as much as improving the UI. And anyway, the document you asked about is aimed at people wanting to help with developing the packaging tools, *not* at end users. > Somehow I have the need to speak it out. I have no clue how to improve it. That's why I ask here. > > If you don't like my question ... sorry they are just a mirror of the current state of the docs. But you don't ask questions with any goal in mind - you're not saying "I want to do X and I can't find the information I need". You just ask questions about random things, with no explanation of what actual work you are trying to do that the information would help you achieve. (And no, "understand Python's packaging" isn't actual work - "install package X" is, as is "write a PR for pip to do X".) 
> For me the basic docs are more important than the "important improvements". Understood. Your opinion is noted. Many people here disagree with your priorities (at least in terms of what they wish to contribute) - although there *are* people working on the tutorial documentation, so your implication that nobody's doing what you want them to is wrong. The tone of your emails seems consistently critical. I'm willing to assume that you're frustrated and wish you could find a way to help, but it's getting hard to remain patient. Please could you try to phrase your questions in future with a bit more thought to the fact that the software you're commenting on was provided free of charge by people working in their spare time. Developer enthusiasm is hard to obtain, and very easy to lose, so needs to be treated with respect. Paul From p.f.moore at gmail.com Thu Nov 5 15:46:11 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Nov 2015 20:46:11 +0000 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BB972.2030501@thomas-guettler.de> Message-ID: On 5 November 2015 at 20:40, Marcus Smith wrote: > > https://www.pypa.io/en/latest/history/ > > maybe even funnier that we have a history page, but it's easy to forget all > that's happened, so I made one awhile back... Wow distutils was released in 2000, 15 years ago. I remember when it happened. I feel really old now :-) Paul From guettliml at thomas-guettler.de Thu Nov 5 15:50:47 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 21:50:47 +0100 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <61E1A016-860C-44CE-B9DE-80F48DF9B8C3@ronnypfannschmidt.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <61E1A016-860C-44CE-B9DE-80F48DF9B8C3@ronnypfannschmidt.de> Message-ID: <563BC127.3070308@thomas-guettler.de> Am 05.11.2015 um 20:31 schrieb Ronny Pfannschmidt: > I'm slowly working on something to transfer the issues, then it might be feasible to move things into one place as people agree. I don't get you the packing-people. You are working on something. What are you working on? I guess you need to know if the final target is github or bitbucket. What can you work on, if the consensus was not found yet? Or do you want a parallel hosting: issues created in bitbucket get created in github automatically (and vice-versa). Yes, this problem is touring complete and can be solved. It is a waste of time in my eyes. Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From guettliml at thomas-guettler.de Thu Nov 5 15:53:58 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 5 Nov 2015 21:53:58 +0100 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> Message-ID: <563BC1E6.7070600@thomas-guettler.de> Am 05.11.2015 um 20:35 schrieb Ian Cordasco: > On Thu, Nov 5, 2015 at 1:27 PM, Thomas G?ttler > wrote: >> Am 04.11.2015 um 21:14 schrieb Ian Cordasco: >>> As I understand it, some people prefer Mercurial. Those projects tend >>> to live on bitbucket. Git projects can live in either place although I >>> suspect they tend to live on GitHub instead. >> >> Is there really a need for this? > > Is there a need for letting project creators work as they please with > the VCS they prefer? Why not if it makes them more efficient > maintainers. 
If the projects are unrelated then every maintainer should use what he prefers. If the projects are related and maintained by one group (PyPa), then there should be **one** hosting platform. - my opinion - Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From leorochael at gmail.com Thu Nov 5 15:54:11 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Thu, 5 Nov 2015 18:54:11 -0200 Subject: [Distutils] Which Build Distribution Formats do exist? In-Reply-To: <563BB143.9050102@thomas-guettler.de> References: <563A66E3.9000503@thomas-guettler.de> <563A9018.3000806@sdamon.com> <563BB143.9050102@thomas-guettler.de> Message-ID: Hi Thomas On 5 November 2015 at 17:42, Thomas G?ttler wrote: > Am 05.11.2015 um 14:12 schrieb Leonardo Rochael Almeida: > > There are other formats also. This distutils doc explain the "native" > ones: > > > > https://docs.python.org/2/distutils/builtdist.html > > > The PyPUG tells me to use setuptools. Now I feel on unsafe ground if > I read docs from a tool I don't use (distutils). > I don't understand why reading docs from a tool you're not using would make you "feel on unsafe ground". At worst the docs don't apply to you, at best they do, but only if you follow them. In any case, the first line of the document "Building and Distributing Packages with Setuptools" [1] reads: - "Setuptools is a collection of *enhancements* to the Python distutils...". So, if you're using setuptools, you are using (an enhanced) distutils. So, really, no reason to be scared of its documentation. [1] https://pythonhosted.org/setuptools/setuptools.html Let's see if setuptools has docs: > https://pythonhosted.org/setuptools/search.html?q=Build+Distribution+Format > > ... no matches found. > > What's wrong here? > > It seems that there are no docs for all supported bdist formats of > setuptools. > Is it too much to want something like this? No, but wanting won't make it happen. Contributing (or paying someone else to do it) will. Cheers, Leo -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Nov 5 15:56:17 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 15:56:17 -0500 Subject: [Distutils] Why github and bitbucket? In-Reply-To: <563BC1E6.7070600@thomas-guettler.de> References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BC1E6.7070600@thomas-guettler.de> Message-ID: On November 5, 2015 at 3:54:18 PM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: > Am 05.11.2015 um 20:35 schrieb Ian Cordasco: > > On Thu, Nov 5, 2015 at 1:27 PM, Thomas G?ttler > > wrote: > >> Am 04.11.2015 um 21:14 schrieb Ian Cordasco: > >>> As I understand it, some people prefer Mercurial. Those projects tend > >>> to live on bitbucket. Git projects can live in either place although I > >>> suspect they tend to live on GitHub instead. > >> > >> Is there really a need for this? > > > > Is there a need for letting project creators work as they please with > > the VCS they prefer? Why not if it makes them more efficient > > maintainers. > > If the projects are unrelated then every maintainer should use what > he prefers. If the projects are related and maintained by one group (PyPa), > then there should be **one** hosting platform. - my opinion - > They are loosely affiliated. 
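To make the built-distribution formats question above concrete: on a Python that still ships distutils, the supported ``bdist`` formats can be listed programmatically. This is only a minimal sketch (``python setup.py bdist --help-formats`` prints the same information from the command line):

    from distutils.command.bdist import bdist

    # 'format_command' maps each --formats value to the implementing
    # command plus a short description, e.g. 'gztar' -> bdist_dumb.
    for fmt, (command, description) in sorted(bdist.format_command.items()):
        print("%-8s %-12s %s" % (fmt, command, description))

Wheel itself is provided by the separate ``wheel`` project (as the ``bdist_wheel`` command) rather than by distutils, which is part of why it does not show up in the distutils documentation.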
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Thu Nov 5 16:08:55 2015
From: donald at stufft.io (Donald Stufft)
Date: Thu, 5 Nov 2015 16:08:55 -0500
Subject: [Distutils] The future of invoking pip
Message-ID: 

There is currently a semi-related set of problems that I'd really like to
figure out an answer to so we can begin to work on a migration path and close
these out. This is dealing with a fairly fundamental aspect of pip so I'm
bringing it up here to try and get wider discussion than the issue tracker or
pypa-dev list.

Currently pip installs a number of commands like ``pip``, ``pipX`` and
``pipX.Y`` where the X and X.Y correspond to the version of Python that pip is
installed into. Pip installs into whatever Python is currently executing it so
this gives some ability to control which version of Python you're installing
into (``pip2.7`` for Python 2.7 etc). However, this has a few problems:

* It really only works if you only have one version of Python installed for
  each X.Y series; as soon as you have multiple versions in one series
  (including alternative implementations besides CPython) you run into the
  problem of which Python 2.7 is pip2.7 for and how do you invoke it for the
  *other* Python 2.7's. Something like pip2.7.8 is really ugly, still
  doesn't solve the problem when you have multiple 2.7.8 installs, and
  pip-pypy2-4.0 is even uglier.
* The above gets *really* confusing when ``pipX`` or ``pip`` do not agree with
  what ``pythonX`` and ``python`` point to.
* Having pip need to be installed into each Python means you end up with a
  bunch of independent pip installations which all need to be independently
  updated. We've made this better by having recent pips warn you if you're not
  on the latest version, but it's not unusual for me personally to have 30+
  different installations of pip on my personal desktop. That isn't great.
* Having ``pip`` on Windows requires us to create a "script wrapper" which is
  just a shim .exe that executes a python script. This is due to the fact
  that Windows special cases .exe in places. This causes problems because you
  can't actually do ``pip install --upgrade pip`` on Windows: you'll get
  an error trying to update the script wrapper, so you need to do
  ``py -m pip install --upgrade pip`` so the .exe file isn't currently open.

One possible solution to the above problems is to try and move away from using
``pip``, ``pipX`` and ``pipX.Y`` and instead push people (and possibly
deprecate ``pip`` &etc) towards using ``python -m pip`` instead. This makes it
unambiguous which Python you're modifying and completely removes all of the
confusion around that. However this too has problems:

* There is a lot of documentation out there in many projects that tells people
  to use ``pip install ...``; the long tail on getting people moved to this
  will be very long.
* It's more to type, 10 more characters on *nix and 6 more characters on
  Windows, which makes it more awkward and annoying to use. This is
  particularly annoying inside of a virtual environment where there isn't
  really any ambiguity when one is activated.
* It still has the annoyance around having multiple pip installs all over the
  place and needing to manage those.
* We still support Python 2.6, which doesn't support executing a package (only
  modules) via ``-m``. So we'll break Python 2.6 unless people do
  ``python -m pip.__main__`` or we move pip/* to _pip/* and make a pip.py,
  which will break things for people using pip as a library (which isn't
  currently supported).

Another possible option is to modify pip so that instead of installing into
site-packages we instead create an "executable zip file", which is a simple zip
file that has all of pip inside of it and a top level __main__.py. Python can
directly execute this as if it were installed. We would no longer have any
command except for ``pip`` (which is really this executable zip file). This
script would default to installing into ``python``, and you can direct it to
install into a different python using a (new) flag of ``-p`` or ``--python``
where you can specify either a full path or a command that we'll search $PATH
for. However, this option has its own problems:

* We'll break the usage of ``python -m pip``.
* We'll break the usage of ``pipX`` and ``pipX.Y``.
* We'll break the usage of anyone using pip as a library (but this is not
  actually supported).
* We still can't do ``pip install --upgrade pip`` by default, and since there's
  no ability to do ``py -m pip install --upgrade pip`` we'll need to figure
  something out (perhaps skip updating the .exe? I don't know).

The first option is the status quo and thus doesn't represent any *new*
breakage, but it has all of the problems we have currently. The second option
uses the ``-m`` operator; it has problems with 2.6 and it's not as nice to
type, but it removes all the ambiguity. The third option introduces the most
breakage, but it removes all the ambiguity *and* it removes the need to manage
a number of pip installations.

A side conversation here is that pip currently bundles a number of dependencies
inside of itself because it can't really have any dependencies. This works
fine but it's a bit of an annoyance to maintain. The larger problem is that a
number of downstreams don't like this and have attempted to route around it to
varying degrees of success. One benefit of the third option is that we can
remove the need to directly copy the bundled libraries into the pip source code
and we can instead just bundle them inside the built zip file.

Thoughts?

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From graffatcolmingov at gmail.com Thu Nov 5 16:21:56 2015
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Thu, 5 Nov 2015 15:21:56 -0600
Subject: [Distutils] Why github and bitbucket?
In-Reply-To: <563BC1E6.7070600@thomas-guettler.de>
References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BC1E6.7070600@thomas-guettler.de>
Message-ID: 

On Thu, Nov 5, 2015 at 2:53 PM, Thomas Güttler wrote:
> Am 05.11.2015 um 20:35 schrieb Ian Cordasco:
>> On Thu, Nov 5, 2015 at 1:27 PM, Thomas Güttler wrote:
>>> Am 04.11.2015 um 21:14 schrieb Ian Cordasco:
>>>> As I understand it, some people prefer Mercurial. Those projects tend
>>>> to live on bitbucket. Git projects can live in either place although I
>>>> suspect they tend to live on GitHub instead.
>>>
>>> Is there really a need for this?
>>
>> Is there a need for letting project creators work as they please with
>> the VCS they prefer? Why not if it makes them more efficient
>> maintainers.
>
> If the projects are unrelated then every maintainer should use what
> he prefers. If the projects are related and maintained by one group (PyPa),
> then there should be **one** hosting platform. - my opinion -

That's super helpful.
Thanks From barry at python.org Thu Nov 5 17:05:22 2015 From: barry at python.org (Barry Warsaw) Date: Thu, 5 Nov 2015 16:05:22 -0600 Subject: [Distutils] The future of invoking pip References: Message-ID: <20151105160522.12cc2bdc@anarchist> On Nov 05, 2015, at 04:08 PM, Donald Stufft wrote: >One benefit of the third option is that we can remove the need to directly >copy the bundled libraries into the pip source code and we can install just >bundle it inside the built zip file. This shouldn't be a problem from Debian's p.o.v. if we can adjust how packages end up in the zip file. If for example the zip contains wheels, and we finish the dirtbike (or similiar) project to rewheel distro packages, we'd use those pip-build-time generated wheels in the zip and be all good. Of course, we'd probably have to rebuild pip when we change any of those dependencies, but I think that's tractable. I think the proposed solution is a better user experience that either of pipX.Y or `pythonX.Y -m pip`, so I'm glad you're experimenting with this approach. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Thu Nov 5 17:16:27 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 17:16:27 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151105160522.12cc2bdc@anarchist> References: <20151105160522.12cc2bdc@anarchist> Message-ID: On November 5, 2015 at 5:06:07 PM, Barry Warsaw (barry at python.org) wrote: > On Nov 05, 2015, at 04:08 PM, Donald Stufft wrote: > > >One benefit of the third option is that we can remove the need to directly > >copy the bundled libraries into the pip source code and we can install just > >bundle it inside the built zip file. > > This shouldn't be a problem from Debian's p.o.v. if we can adjust how packages > end up in the zip file. If for example the zip contains wheels, and we finish > the dirtbike (or similiar) project to rewheel distro packages, we'd use those > pip-build-time generated wheels in the zip and be all good. Of course, we'd > probably have to rebuild pip when we change any of those dependencies, but I > think that's tractable. My current proof of concept ?forces? it to use something pip installed (though it could be pip installing a dirtbike generated wheel) but it really only does that because the most expedient way to make it work was to ``pip install foo ?target TMPDIR`` and then recursively copy that into the zip file. There?s a number of possible ways it could work though, and if we did it, we?d want to make sure that it worked in a way that was friendly to downstream too. > > I think the proposed solution is a better user experience that either of > pipX.Y or `pythonX.Y -m pip`, so I'm glad you're experimenting with this > approach. In a total vacuum I think so too, the biggest worry I have is that we aren?t in a vacuum and we?re going to be breaking (at some point at least, whenever the deprecation period ends) some major interfaces (pipX, pipX.Y) and (python -m pip) and some minor ones (importing pip). 
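For readers following the "executable zip" idea being discussed here: the mechanism relied on is simply Python's ability to run a zip archive that contains a top-level ``__main__.py``. A minimal sketch follows; the ``demo.pyz`` name is illustrative only, and this is not how pip itself would actually be assembled:

    import zipfile

    # Create an archive whose top level holds __main__.py; anything else
    # placed at the top level (e.g. bundled dependencies) becomes importable
    # because Python puts the archive itself at the front of sys.path when
    # the archive is executed.
    with zipfile.ZipFile("demo.pyz", "w") as zf:
        zf.writestr("__main__.py", "import sys; print('args:', sys.argv[1:])\n")

Running ``python demo.pyz install requests`` then hands ``['install', 'requests']`` to the code inside the archive, which is essentially what the proposed single ``pip`` zip would do before dispatching to the selected interpreter.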
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ralf.gommers at gmail.com Thu Nov 5 17:24:55 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 5 Nov 2015 23:24:55 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: <561702487663958680@unknownmsgid> References: <561702487663958680@unknownmsgid> Message-ID: On Tue, Nov 3, 2015 at 6:10 PM, Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > >> I'm not talking about in place installs, I'm talking about e.g. > building a > >> wheel and then tweaking one file and rebuilding -- traditionally build > >> systems go to some effort to keep track of intermediate artifacts and > reuse > >> them across builds when possible, but if you always copy the source tree > >> into a temporary directory before building then there's not much the > build > >> system can do. > > This strikes me as an optimization -- is it an important one? > Yes, I think it is. At least if we want to move people towards `pip install .` instead of `python setup.py`. > If I'm doing a lot of tweaking and re-running, I'm usually in develop mode. > Everyone has a slightly different workflow. What if you install into a bunch of different venvs between tweaks? The non-caching for a package like scipy pushes rebuild time from <30 sec to ~10 min. > I can see that when you build a wheel, you may build it, test it, > discover an wheel-specific error, and then need to repeat the cycle -- > but is that a major use-case? > > That being said, I have been pretty frustrated debugging conda-build > scripts -- there is a lot of overhead setting up the build environment > each time you do a build... > > But with wheel building there is much less overhead, and far fewer > complications requiring the edit-build cycle. > > And couldn't make-style this-has-already-been-done checking happen > with a copy anyway? > The whole point of the copy is that it's a clean environment. Pip currently creates tempdirs and removes them when it's done building. So no. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Nov 5 17:29:10 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 17:29:10 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: I?m not at my computer, but does ``pip install ?no-clean ?build `` make this work?? On November 5, 2015 at 5:25:16 PM, Ralf Gommers (ralf.gommers at gmail.com) wrote: > On Tue, Nov 3, 2015 at 6:10 PM, Chris Barker - NOAA Federal < > chris.barker at noaa.gov> wrote: > > > >> I'm not talking about in place installs, I'm talking about e.g. > > building a > > >> wheel and then tweaking one file and rebuilding -- traditionally build > > >> systems go to some effort to keep track of intermediate artifacts and > > reuse > > >> them across builds when possible, but if you always copy the source tree > > >> into a temporary directory before building then there's not much the > > build > > >> system can do. > > > > This strikes me as an optimization -- is it an important one? > > > > Yes, I think it is. At least if we want to move people towards `pip install > .` instead of `python setup.py`. > > > > If I'm doing a lot of tweaking and re-running, I'm usually in develop mode. 
> > > > Everyone has a slightly different workflow. What if you install into a > bunch of different venvs between tweaks? The non-caching for a package like > scipy pushes rebuild time from <30 sec to ~10 min. > > > > I can see that when you build a wheel, you may build it, test it, > > discover an wheel-specific error, and then need to repeat the cycle -- > > but is that a major use-case? > > > > That being said, I have been pretty frustrated debugging conda-build > > scripts -- there is a lot of overhead setting up the build environment > > each time you do a build... > > > > But with wheel building there is much less overhead, and far fewer > > complications requiring the edit-build cycle. > > > > And couldn't make-style this-has-already-been-done checking happen > > with a copy anyway? > > > > The whole point of the copy is that it's a clean environment. Pip currently > creates tempdirs and removes them when it's done building. So no. > > Ralf > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Thu Nov 5 17:41:47 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 Nov 2015 22:41:47 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On 5 November 2015 at 21:08, Donald Stufft wrote: > Thoughts? The executable zip solution is in principle the best long-term solution. But the breakage is major, and it pretty much permanently cuts off any option to support use of pip as a library. That's probably OK, but we need to understand that fact. Further downsides to the executable zip approach: 1. We still need an "exe wrapper" on Windows. This can be as simple as prepending a stub exe to the zip, but we'd need either to maintain such a stub ourselves or find a supplier of one. 2. The Python 3.5 zipapp module includes support for a .pyz extension that is registered as a "zipped Python application". But as you noted, .exe is treated specially in certain places in Windows, and so there really is no alternative to a "pip.exe" command if we want the invocation to remain "pip". 3. On Windows, there is no guaranteed location that is on PATH. If we supply a "pip.exe" how will we ensure it's on the user's PATH? At the moment we take advantage of the Python installer (and virtualenv activate scripts) including the Python Scripts directory on PATH. If pip were an independent executable, we'd need people to manage PATH themselves. 4. Wouldn't the executable zip still be run with a specific Python, coded in the wrapper or shebang line? You say it'd install into "python" by default. But what about on Windows where the py launcher gets "the default Python" from its ini file, not from what's on PATH? And to use a different Python, you're potentially talking about "pip -p C:\Users\xxx\AppData\Local\Programs\Python\Python35\python.exe". Over my dead body :-) If I misunderstood, and your proposal is "python pip.zip", then there's still a problem as the *actual* usage would be "python C:\Whatever\Path\To\pip.zip" - which is far from user friendly. Overall I think that "python -m pip" is the best compromise. Users can write their own wrapper scripts, shell functions or aliases for common usage. But they probably won't in practice. 
So there is a lot of pain for users (not just command line use, every script that does subprocess.call(['pip', ...]) would need changing). Sadly, there's no really good solution :-( It seems to me that the main questions are: 1. Do we want the canonical invocation to remain "pip" or are we willing to break that? [I'm ambivalent on this, personally, but it's a significant compatibility break] 2. Do we mind if there's a different command needed to upgrade pip? [I don't, as long as pip supports *some* command to upgrade itself] 3. Do we want to move away from a pip per Python installation? [For me, it'd be somewhat convenient, but nonessential, and I suspect some people would have Python installations or virtualenvs where they want to have a *different* pip than the default one - so this change might actually be a regression for them] Paul From ralf.gommers at gmail.com Thu Nov 5 17:44:26 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Thu, 5 Nov 2015 23:44:26 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft wrote: > I?m not at my computer, but does ``pip install ?no-clean ?build build dir>`` make this work? > No, that option seems to not work at all. I tried with both a relative and an absolute path to --build. In the specified dir there are subdirs created (src.linux-i686-2.7/), but they're empty. The actual build still happens in a tempdir. Ralf P.S. adding flags for the various issues (/ things under discussion) this is what I actually had to try: pip install . --no-clean --build build/ -v --upgrade --no-deps :( > > On November 5, 2015 at 5:25:16 PM, Ralf Gommers (ralf.gommers at gmail.com) > wrote: > > On Tue, Nov 3, 2015 at 6:10 PM, Chris Barker - NOAA Federal < > > chris.barker at noaa.gov> wrote: > > > > > >> I'm not talking about in place installs, I'm talking about e.g. > > > building a > > > >> wheel and then tweaking one file and rebuilding -- traditionally > build > > > >> systems go to some effort to keep track of intermediate artifacts > and > > > reuse > > > >> them across builds when possible, but if you always copy the source > tree > > > >> into a temporary directory before building then there's not much the > > > build > > > >> system can do. > > > > > > This strikes me as an optimization -- is it an important one? > > > > > > > Yes, I think it is. At least if we want to move people towards `pip > install > > .` instead of `python setup.py`. > > > > > > > If I'm doing a lot of tweaking and re-running, I'm usually in develop > mode. > > > > > > > Everyone has a slightly different workflow. What if you install into a > > bunch of different venvs between tweaks? The non-caching for a package > like > > scipy pushes rebuild time from <30 sec to ~10 min. > > > > > > > I can see that when you build a wheel, you may build it, test it, > > > discover an wheel-specific error, and then need to repeat the cycle -- > > > but is that a major use-case? > > > > > > That being said, I have been pretty frustrated debugging conda-build > > > scripts -- there is a lot of overhead setting up the build environment > > > each time you do a build... > > > > > > But with wheel building there is much less overhead, and far fewer > > > complications requiring the edit-build cycle. > > > > > > And couldn't make-style this-has-already-been-done checking happen > > > with a copy anyway? 
> > > > > > > The whole point of the copy is that it's a clean environment. Pip > currently > > creates tempdirs and removes them when it's done building. So no. > > > > Ralf > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Nov 5 17:59:58 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 17:59:58 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> > On Nov 5, 2015, at 5:41 PM, Paul Moore wrote: > >> On 5 November 2015 at 21:08, Donald Stufft wrote: >> Thoughts? > > The executable zip solution is in principle the best long-term > solution. But the breakage is major, and it pretty much permanently > cuts off any option to support use of pip as a library. That's > probably OK, but we need to understand that fact. > > Further downsides to the executable zip approach: > > 1. We still need an "exe wrapper" on Windows. This can be as simple as > prepending a stub exe to the zip, but we'd need either to maintain > such a stub ourselves or find a supplier of one. I think we can still use the ones we use now, although they might not support a zip file. Afaik they just execute a python script and I guess (hope) that they don't validate that and a zip file can be a "script" too. > 2. The Python 3.5 zipapp module includes support for a .pyz extension > that is registered as a "zipped Python application". But as you noted, > .exe is treated specially in certain places in Windows, and so there > really is no alternative to a "pip.exe" command if we want the > invocation to remain "pip". > 3. On Windows, there is no guaranteed location that is on PATH. If we > supply a "pip.exe" how will we ensure it's on the user's PATH? At the > moment we take advantage of the Python installer (and virtualenv > activate scripts) including the Python Scripts directory on PATH. If > pip were an independent executable, we'd need people to manage PATH > themselves. I'm my mind we wouldn't be throwing out wheels, we'd just build this zip instead. So our wheels would only contain this executable pip zip as a script (not console entry point) and pip would automatically wrap them with exe on Windows. > 4. Wouldn't the executable zip still be run with a specific Python, > coded in the wrapper or shebang line? You say it'd install into > "python" by default. But what about on Windows where the py launcher > gets "the default Python" from its ini file, not from what's on PATH? > And to use a different Python, you're potentially talking about "pip > -p C:\Users\xxx\AppData\Local\Programs\Python\Python35\python.exe". > Over my dead body :-) I think we could integrate with py on Windows somehow so that we use the same lookup semantics as py does. I don't know enough about Windows and py.exe to know what exactly those are. > > If I misunderstood, and your proposal is "python pip.zip", then > there's still a problem as the *actual* usage would be "python > C:\Whatever\Path\To\pip.zip" - which is far from user friendly. > > Overall I think that "python -m pip" is the best compromise. Users can > write their own wrapper scripts, shell functions or aliases for common > usage. 
But they probably won't in practice. So there is a lot of pain > for users (not just command line use, every script that does > subprocess.call(['pip', ...]) would need changing). > > Sadly, there's no really good solution :-( Yup :( > > It seems to me that the main questions are: > > 1. Do we want the canonical invocation to remain "pip" or are we > willing to break that? [I'm ambivalent on this, personally, but it's a > significant compatibility break] I'm also somewhat ambivalent by only because I'm just going to add an alias to restore "pip". > 2. Do we mind if there's a different command needed to upgrade pip? [I > don't, as long as pip supports *some* command to upgrade itself] I'm not sure if we'd need a different command to upgrade pip. In my head pip is still installed as a Python package. It'll just have non standard build steps. > 3. Do we want to move away from a pip per Python installation? [For > me, it'd be somewhat convenient, but nonessential, and I suspect some > people would have Python installations or virtualenvs where they want > to have a *different* pip than the default one - so this change might > actually be a regression for them] There wouldn't be anything preventing you from installing multiple versions of pip into an environment as we do today. It just wouldn't be mandatory. > > Paul From ralf.gommers at gmail.com Thu Nov 5 18:09:25 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 6 Nov 2015 00:09:25 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: On Thu, Nov 5, 2015 at 11:44 PM, Ralf Gommers wrote: > > > On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft wrote: > >> I?m not at my computer, but does ``pip install ?no-clean ?build > build dir>`` make this work? >> > > No, that option seems to not work at all. I tried with both a relative and > an absolute path to --build. In the specified dir there are subdirs created > (src.linux-i686-2.7/), but they're empty. The actual build still > happens in a tempdir. > Commented on the source of the problem with both `--build` and `--no-clean` in https://github.com/pypa/pip/issues/804 Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu Nov 5 18:37:21 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 18:37:21 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: If ``pip install ?build ? ?no-clean ?`` worked to do incremental builds, would that satisfy this use case? (without the ?upgrade and ?no-deps, ?no-deps is only needed because ?upgrade and ?upgrade is needed because of another ticket that I think will get fixed at some point). On November 5, 2015 at 6:09:46 PM, Ralf Gommers (ralf.gommers at gmail.com) wrote: > On Thu, Nov 5, 2015 at 11:44 PM, Ralf Gommers > wrote: > > > > > > > On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft wrote: > > > >> I?m not at my computer, but does ``pip install ?no-clean ?build > >> build dir>`` make this work? > >> > > > > No, that option seems to not work at all. I tried with both a relative and > > an absolute path to --build. In the specified dir there are subdirs created > > (src.linux-i686-2.7/), but they're empty. The actual build still > > happens in a tempdir. 
> > > > Commented on the source of the problem with both `--build` and `--no-clean` > in https://github.com/pypa/pip/issues/804 > > Ralf > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Thu Nov 5 19:18:15 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 Nov 2015 00:18:15 +0000 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BC1E6.7070600@thomas-guettler.de> Message-ID: PyPA is very loosely organized and largely volunteer. I do not mind if Mercurial prevents you from submitting a pull request to bdist_wheel. Also before pypa you would have had to visit multiple personal accounts on each service to find the projects. On Thu, Nov 5, 2015 at 4:22 PM Ian Cordasco wrote: > On Thu, Nov 5, 2015 at 2:53 PM, Thomas G?ttler > wrote: > > Am 05.11.2015 um 20:35 schrieb Ian Cordasco: > >> On Thu, Nov 5, 2015 at 1:27 PM, Thomas G?ttler > >> wrote: > >>> Am 04.11.2015 um 21:14 schrieb Ian Cordasco: > >>>> As I understand it, some people prefer Mercurial. Those projects tend > >>>> to live on bitbucket. Git projects can live in either place although I > >>>> suspect they tend to live on GitHub instead. > >>> > >>> Is there really a need for this? > >> > >> Is there a need for letting project creators work as they please with > >> the VCS they prefer? Why not if it makes them more efficient > >> maintainers. > > > > If the projects are unrelated then every maintainer should use what > > he prefers. If the projects are related and maintained by one group > (PyPa), > > then there should be **one** hosting platform. - my opinion - > > That's super helpful. Thanks > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Nov 5 20:04:38 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 6 Nov 2015 14:04:38 +1300 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On 6 November 2015 at 10:08, Donald Stufft wrote: ... > One possible solution to the above problems is to try and move away from using > ``pip``, ``pipX`` and ``pipX.Y`` and instead push people (and possibly > deprecate ``pip`` &etc) towards using ``python -m pip`` instead. This makes it > unambigious which python you're modifying and completely removes all of the > confusion around that. However this too has problems: > > * There is a lot of documentation out there in many projects that tell people > to use ``pip install ...``, the long tail on getting people moved to this > will be very long. I don't see that as a specific problem. It drags out the deprecation period is all. > * It's more to type, 10 more characters on *nix and 6 more characters on > Windows which makes it more akward and annoying to use. This is particularly > annoying inside of a virtual environment where there isn't really any > ambiguity when one is activated. cat > /usr/bin/pip << EOF python -m pip $@ EOF Seriously - isn't the above entirely sufficient? 
> * It still has the annoyance around having multiple pip installs all over the > place and needing to manage those. This is a mixed thing. You *need* those installs when pip drops support for 2.6. > * We still support Python 2.6 which doesn't support executing a package only > modules via ``-m``. So we'll break Python 2.6 unless people do > ``python -m pip.__main__`` or we move pip/* to _pip/* and make a pip.py which > will break things for people using pip as a library (which isn't currently > supported). Or the same wrapper approach can deal with this - as long as there is a pip.__main__ on all Pythons. > Another possible option is to modify pip so that instead of installing into > site-packages we instead create an "executable zip file" which is a simple zip > file that has all of pip inside of it and a top level __main__.py. Python can > directly execute this as if it were installed. We would no longer have any > command except for ``pip`` (which is really this executable zip file). This > script would default to installing into ``python``, and you can direct it to > install into a different python using a (new) flag of ``-p`` or ``--python`` > where you can specify either a full path or a command that we'll search $PATH > for. I don't like this because: - pip won't be able to be interrogated in the same way as all other python packages can be - the breakage is huge :(. - it doesnt' actually solve the windows problem - it makes it hard to use pip except via a subprocess - and while its not supported, it is at least in principle something we could be working on (and there was interest in that on python-dev the other month IIRC). ... > A side conversation here is that pip currently bundles a number of dependencies > inside of itself because it can't really have any dependencies. This works > fine but it's a bit of an annoyance to maintain. The larger problem is that a > number of downstreams don't like this and have attempted to route around it to > varying degrees of success. One benefit of the third option is that we can > remove the need to directly copy the bundled libraries into the pip source code > and we can install just bundle it inside the built zip file. Well, we could do that today - only bundle in get-pip.py, have installed copies have dependencies. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Thu Nov 5 21:06:30 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 21:06:30 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On November 5, 2015 at 8:04:41 PM, Robert Collins (robertc at robertcollins.net) wrote: > On 6 November 2015 at 10:08, Donald Stufft wrote: > ... > > One possible solution to the above problems is to try and move away from using > > ``pip``, ``pipX`` and ``pipX.Y`` and instead push people (and possibly > > deprecate ``pip`` &etc) towards using ``python -m pip`` instead. This makes it > > unambigious which python you're modifying and completely removes all of the > > confusion around that. However this too has problems: > > > > * There is a lot of documentation out there in many projects that tell people > > to use ``pip install ...``, the long tail on getting people moved to this > > will be very long. > > I don't see that as a specific problem. It drags out the deprecation > period is all. > > > * It's more to type, 10 more characters on *nix and 6 more characters on > > Windows which makes it more akward and annoying to use. 
This is particularly > > annoying inside of a virtual environment where there isn't really any > > ambiguity when one is activated. > > cat > /usr/bin/pip << EOF > python -m pip $@ > EOF > > Seriously - isn't the above entirely sufficient? It?s not particularly hard to do this (we could even make a pip-cli package on PyPI that did it). It?s just one more step and one more ?weird? thing for newcomers.? > > > * It still has the annoyance around having multiple pip installs all over the > > place and needing to manage those. > > This is a mixed thing. You *need* those installs when pip drops support for 2.6. Well, as I said to Paul, nothing in any of these proposals mandates that you must only have one installation. The only difference is the third thing makes it *possible* to only have one install (though since ``pip`` is the only name, it means it?s harder to install pip into 2.6 when it is altinstall?d into /usr/bin along with other Pythons since both the 2.6 supporting pip and the non-2.6 supporting pip will want to be called ?pip?. It?s less of a big deal for virtual environments because they already have a separate bin directory. This would almost be akin to RHEL only supporting VM installations of their really old OSs. > > > * We still support Python 2.6 which doesn't support executing a package only > > modules via ``-m``. So we'll break Python 2.6 unless people do > > ``python -m pip.__main__`` or we move pip/* to _pip/* and make a pip.py which > > will break things for people using pip as a library (which isn't currently > > supported). > > Or the same wrapper approach can deal with this - as long as there is > a pip.__main__ on all Pythons. python -m pip is implemented via pip.__main__ so python -m pip.__main__ will function for as long as python -m pip does (unless Python drastically changes). > > > Another possible option is to modify pip so that instead of installing into > > site-packages we instead create an "executable zip file" which is a simple zip > > file that has all of pip inside of it and a top level __main__.py. Python can > > directly execute this as if it were installed. We would no longer have any > > command except for ``pip`` (which is really this executable zip file). This > > script would default to installing into ``python``, and you can direct it to > > install into a different python using a (new) flag of ``-p`` or ``--python`` > > where you can specify either a full path or a command that we'll search $PATH > > for. > > I don't like this because: > - pip won't be able to be interrogated in the same way as all other > python packages can be I?m not sure what you mean be ?interrogated? but it doesn?t actually *prevent* it, it just makes it a bit harder. You?d be able to restore the current ``import`` ability (assuming that?s what you meant by interrogated) by doing something like: ? ? $ PYTHONPATH=`which pip` python -c ?import pip? > - the breakage is huge :(. This is my biggest problem with it :( > - it doesnt' actually solve the windows problem Yea, there?s no clean (and generic) solution to that besides python -m pip. > - it makes it hard to use pip except via a subprocess - and while its > not supported, it is at least in principle something we could be > working on (and there was interest in that on python-dev the other > month IIRC). I?m almost entirely certain we?ll never really support using pip as anything but a subprocess. 
Having two APIs (a subprocess one, and a Python one) isn't great, and we're
always going to need a subprocess one for things like Chef/Puppet that aren't
written in Python. Even in that thread, I believe they decided to go with
subprocess because of the problems of trying to run pip in process (things
won't show up as installed and such). This is a bit of a mixed thing too
though, because right now it's trivially easy to use pip via import and so
people do it when it's not appropriate to do so (like inside of ``setup.py``
files); making it more difficult would make people think twice about it before
they do it.

>
> ...
> > A side conversation here is that pip currently bundles a number of dependencies
> > inside of itself because it can't really have any dependencies. This works
> > fine but it's a bit of an annoyance to maintain. The larger problem is that a
> > number of downstreams don't like this and have attempted to route around it to
> > varying degrees of success. One benefit of the third option is that we can
> > remove the need to directly copy the bundled libraries into the pip source code
> > and we can instead just bundle them inside the built zip file.
>
> Well, we could do that today - only bundle in get-pip.py, have
> installed copies have dependencies.

Well we can't really do it in get-pip.py in a way that would satisfy our
constraints. There are two main ways to do bundling/vendoring in Python. Either
you add it to your application like we do now with pip._vendor.*, which
typically requires adjusting imports of the items you're bundling/vendoring (at
the very least, for any of their own dependencies). We currently deal with this
by just manually adjusting those imports whenever we pull in a new version. In
this version you don't do ``import six`` but you instead do
``from pip._vendor import six``.

The other way of doing it is by including the unmodified sources somewhere and
then munging sys.path to add them to it before doing any imports. This is
easier to deal with in an automated system since it doesn't involve needing to
modify anything, but it pollutes the sys.path of the Python process, which
makes it unfriendly to people trying to run things in the same process.

Currently pip does the first thing. We picked that because we decided that even
though we don't support ``import pip``, enough people do it that we wanted to
not be as invasive to the current running process. If ``import pip`` became
harder then that argument doesn't really make sense any more and we can easily
switch to the second way. Even if ``import pip`` doesn't become harder, it
might make sense to switch though.

The "stick pip + dependencies in a zip file" approach actually handles all of
the munging of sys.path for us because everything would just be at the top
level of the zip file (which Python will add to the front of sys.path for us).
So it will all just work automatically. It would remove the immediately easy
``import pip`` so there's no longer any real concern about polluting the
current running process. We could replicate that without the zip thing just by
manually munging sys.path and deciding we don't care about polluting the
current running process when people do ``import pip``.

Being able to do the bundling automatically and without manual patching makes
it easier for systems like Debuntu to automatically do it in their build
process, but in no scenario would we start supporting people to install it
without doing that.
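For concreteness, the second (sys.path munging) approach described above can be as small as a few lines at the top of a ``__main__.py``. The ``_bundled`` directory name is made up for this sketch and it assumes unmodified copies of the dependencies have been placed there:

    import os
    import sys

    # Put the directory of unmodified, bundled dependencies ahead of
    # everything else so that e.g. ``import six`` resolves to the bundled
    # copy without any import rewriting.
    _HERE = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, os.path.join(_HERE, "_bundled"))

The trade-off noted above applies: the bundled directory is now visible to any other code running in the same process, which is why pip currently rewrites imports under ``pip._vendor`` instead.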
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ben+python at benfinney.id.au Thu Nov 5 21:21:43 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Fri, 06 Nov 2015 13:21:43 +1100 Subject: [Distutils] The future of invoking pip References: Message-ID: <85bnb7kgug.fsf@benfinney.id.au> Donald Stufft writes: > On November 5, 2015 at 8:04:41 PM, Robert Collins (robertc at robertcollins.net) wrote: > > > cat > /usr/bin/pip << EOF > > python -m pip $@ > > EOF > > > > Seriously - isn't the above entirely sufficient? > > It?s not particularly hard to do this (we could even make a pip-cli > package on PyPI that did it). It?s just one more step and one more > ?weird? thing for newcomers. You seem to be saying that, for a newcomer wanting to invoke Pip, that the command ?pip? is weird, but that the command ?python -m pip? is less weird. If so, I think you have lost touch with Python newcomers's perspective. If not, I don't know what you mean by a simple ?pip? command being ?one more ?weird? thing for newcomers?. On the contrary, it is ?python -m pip? that is weird to a newcomer, and *that* is why I am aghast at the insistent push toward it. In order to be friendly to newcomers, and friendly to people who use more than just Python, the Python tools should not have UIs that assume they are the centre of everything. The ?python -m foo? syntax is a peculiarity of Python, is not at all intuitive for anyone already familiar with their operating system nor with other languages. The push away from simple command executables and toward the bizarre ?python -m foo? is a wart that needs to be minimised, not encouraged. Instead, the tools should go to considerable effort to *work with* the operating system in which they find themselves, and that includes having a UI that works like other tools. In this case: if the user wants to use ?pip?, then the command to invoke should not be ?python -m pip? but instead should be ?pip?. -- \ ?I have always wished for my computer to be as easy to use as | `\ my telephone; my wish has come true because I can no longer | _o__) figure out how to use my telephone.? ?Bjarne Stroustrup | Ben Finney From donald at stufft.io Thu Nov 5 21:36:18 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 Nov 2015 21:36:18 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <85bnb7kgug.fsf@benfinney.id.au> References: <85bnb7kgug.fsf@benfinney.id.au> Message-ID: On November 5, 2015 at 9:22:14 PM, Ben Finney (ben+python at benfinney.id.au) wrote: > Donald Stufft writes: > > > On November 5, 2015 at 8:04:41 PM, Robert Collins (robertc at robertcollins.net) wrote: > > > > > cat > /usr/bin/pip << EOF > > > python -m pip $@ > > > EOF > > > > > > Seriously - isn't the above entirely sufficient? > > > > It?s not particularly hard to do this (we could even make a pip-cli > > package on PyPI that did it). It?s just one more step and one more > > ?weird? thing for newcomers. > > You seem to be saying that, for a newcomer wanting to invoke Pip, that > the command ?pip? is weird, but that the command ?python -m pip? is less > weird. > > If so, I think you have lost touch with Python newcomers's perspective. > > If not, I don't know what you mean by a simple ?pip? command being ?one > more ?weird? thing for newcomers?. No. I mean having the ?pip? package on PyPI not provide the ``pip`` command, but instead have them expected to either install another package (pip-cli?) 
or needing to make their own shim (or shell alias) in order to have the ``pip`` command. I think that on paper, python -m pip is probably superior to trying to do something in pip to ?select? the right version of Python (whether it be via naming scheme of the pip command, or a ``-p`` flag or whatever). However I also think that people are not going to be very fond of needing to type ``python -m pip`` instead of ``pip``. > > > On the contrary, it is ?python -m pip? that is weird to a newcomer, and > *that* is why I am aghast at the insistent push toward it. > > In order to be friendly to newcomers, and friendly to people who use > more than just Python, the Python tools should not have UIs that assume > they are the centre of everything. Well, Python *is* the center of everything for pip? there isn?t much use to pip except to modify a particular Python?s install. Generally I think that (again on paper) providing ?simple? (e.g. unversioned) binaries is the correct approach for anything where the particular Python that is being used to invoke it isn?t important (or that it?s even Python that is invoking it). Something like Mercurial would fall into this category. I?m not really sure what the right answer is for something where the particular version of Python you?re invoking it with (and that you?re actually using Python) is important. python -m makes a lot of sense in that area because it eliminates the need to have each tool create their own logic for determining what python they are operating on but I think most people are not going to be very familiar with the idea and I don?t know how well they?d warm to it. The other option (that I can come up with) is baking that logic into each tool (as pip and virtualenv do now) either via naming scheme or a flag. If I had a particularly great answer I?d be strongly advocating for one over the other. As it stands, I can?t really decide if ``python2.7 -m pip`` or ``pip -p python2.7`` is better. > > The ?python -m foo? syntax is a peculiarity of Python, is not at all > intuitive for anyone already familiar with their operating system nor > with other languages. > > The push away from simple command executables and toward the bizarre > ?python -m foo? is a wart that needs to be minimised, not encouraged. > > Instead, the tools should go to considerable effort to *work with* the > operating system in which they find themselves, and that includes having > a UI that works like other tools. > > In this case: if the user wants to use ?pip?, then the command to invoke > should not be ?python -m pip? but instead should be ?pip?. > > -- > \ ?I have always wished for my computer to be as easy to use as | > `\ my telephone; my wish has come true because I can no longer | > _o__) figure out how to use my telephone.? ?Bjarne Stroustrup | > Ben Finney > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From glyph at twistedmatrix.com Thu Nov 5 21:49:58 2015 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Thu, 5 Nov 2015 18:49:58 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> > On Nov 5, 2015, at 5:04 PM, Robert Collins wrote: > > cat > /usr/bin/pip << EOF > python -m pip $@ > EOF > > Seriously - isn't the above entirely sufficient? 
Since I don't think anyone has pointed this out yet: No, it's not sufficient. It doesn't work on Windows. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Thu Nov 5 22:08:04 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Thu, 5 Nov 2015 19:08:04 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: Hi, My perspective as a user of pip, but not a developer, is that having the command line executable `pip` is much preferable to `python -m pip`. Most of the use cases that militate against the command line executable seem to be issues that face developers and ultra-power-users (keeping track of many versions of pip installed, etc). But many casual users, I think, just have one version of python/pip installed, and benefit from having the easy-to-call executable. They're also the least capable of adding new script wrappers and bash aliases. Just my $0.02 Best, -Robert On Thu, Nov 5, 2015 at 6:49 PM, Glyph Lefkowitz wrote: > > On Nov 5, 2015, at 5:04 PM, Robert Collins > wrote: > > cat > /usr/bin/pip << EOF > python -m pip $@ > EOF > > Seriously - isn't the above entirely sufficient? > > > Since I don't think anyone has pointed this out yet: > > No, it's not sufficient. It doesn't work on Windows. > > -glyph > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Nov 5 22:32:41 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 6 Nov 2015 16:32:41 +1300 Subject: [Distutils] New PEP : dependency specification Message-ID: Since we ended up with a hard dependency on this for the bootstrap thing (regardless of 'smaller step' or not) - I've broken this out of PEP 426, made it an encoding of the current status quo rather than an aspirational change. Since it has a dependency on markers, I had to choose whether to block on James' marker PEP, contribute to that, or include it. I think on balance it makes sense to have it in one document since the markers bit is actually quite shallow, so I've done that (after discussing with James). This could perhaps replace PEP 496 then, or be given a new number. Donald has graciously agreed to be a BDFL-delegate for it. The PR for it is https://github.com/pypa/interoperability-peps/pull/56 Full text follows. :PEP: XX :Title: Dependency specification for Python Software Packages :Version: $Revision$ :Last-Modified: $Date$ :Author: Robert Collins BDFL-Delegate: Donald Stufft :Discussions-To: distutils-sig :Status: Draft :Type: Standards Track :Content-Type: text/x-rst :Created: 11-Nov-2015 :Post-History: XX Abstract ======== This PEP specifies the language used to describe dependencies for packages. It draws a border at the edge of describing a single dependency - the different sorts of dependencies and when they should be installed is a higher level problem. The intent is provide a building block for higher layer specifications. The job of a dependency is to enable tools like pip [#pip]_ to find the right package to install. Sometimes this is very loose - just specifying a name, and sometimes very specific - referring to a specific file to install. 
Sometimes dependencies are only relevant in one platform, or only some versions are acceptable, so the language permits describing all these cases. The language defined is a compact line based format which is already in widespread use in pip requirements files, though we do not specify the command line option handling that those files permit. There is one caveat - the URL reference form, specified in PEP-440 [#pep440]_ is not actually implemented in pip, but since PEP-440 is accepted, we use that format rather than pip's current native format. Motivation ========== Any specification in the Python packaging ecosystem that needs to consume lists of dependencies needs to build on an approved PEP for such, but PEP-426 [#pep426]_ is mostly aspirational - and there are already existing implementations of the dependency specification which we can instead adopt. The existing implementations are battle proven and user friendly, so adopting them is arguably much better than approving an aspirational, unconsumed, format. Specification ============= Examples -------- All features of the language shown with a name based lookup:: requests \ [security] >= 2.8.1, == 2.8.* \ ; python_version < "2.7.10" \ # Fix HTTPS on older Python versions. A minimal URL based lookup:: pip @ https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 Concepts -------- A dependency specification always specifies a distribution name. It may include extras, which expand the dependencies of the named distribution to enable optional features. The version installed can be controlled using version limits, or giving the URL to a specific artifact to install. Finally the dependency can be made conditional using environment markers. Grammar ------- We first cover the grammar briefly and the drill into the semantics of each section later. A distribution specification is written in ASCII text. We use ABNF [#abnf]_ to provide a precise grammar. Specifications may have comments starting with a '#' and running to the end of the line:: comment = "#" *(WSP / VCHAR) Specifications may be spread across multiple lines if desired using continuations - a single backslash followed by a new line ('\\\\n'):: CSP = 1*(WSP / ("\\" LF)) Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: URI is defined in std-66 [#std66]_:: version-cmp = "<" / "<=" / "!=" / "==" / ">=" / ">" / "~=" / "===" version = 1*( DIGIT / ALPHA / "-" / "_" / "." / "*" ) versionspec = version-cmp version *(',' version-cmp version) urlspec = "@" URI Environment markers allow making a specification only take effect in some environments:: marker-op = version-cmp / "in" / "not in" python-str-c = (WSP / ALPHA / DIGIT / "(" / ")" / "." 
/ "{" / "}" / "-" / "_" / "*" python-str = "'" *(python-str-c / DQUOTE) "'" python-str =/ DQUOTE *(python-str-c / "'") DQUOTE marker-var = python-str / "python_version" / "python_full_version" / "os_name"" / "sys_platform" / "platform_release" / "platform_version" / "platform_machine" / "platform_python_implementation" / "implementation_name" / "implementation_version" / "platform_dist_name" "platform_dist_version" / "platform_dist_id" marker-expr = "(" marker ")" / (marker-var [marker-op marker-var]) marker = marker-expr *( ("and" / "or") marker-expr) name-marker = ";" *CSP marker url-marker = ";" 1*CSP marker Optional components of a distribution may be specified using the extras field:: identifier = 1*( DIGIT / ALPHA / "-" / "_" ) name = identifier extras = "[" identifier *("," identifier) "]" Giving us a rule for name based requirements:: name_req = name [CSP extras] [CSP versionspec] [CSP name-marker] And a rule for direct reference specifications:: url_req = name [CSP extras] urlspec [CSP url-marker] Leading to the unified rule that can specify a dependency:: specification = (name_req / location_req) [CSP comment] Whitespace ---------- Non line-breaking whitespace is optional and has no semantic meaning. A line break indicates the end of a specification. Specifications can be continued across multiple lines using a continuation. Comments -------- A specification can have a comment added to it by starting the comment with a "#". After a "#" the rest of the line can contain any text whatsoever. Continuations within a comment are ignored. Names ----- Python distribution names are currently defined in PEP-345 [#pep345]_. Names act as the primary identifier for distributions. They are present in all dependency specifications, and are sufficient to be a specification on their own. Extras ------ An extra is an optional part of a distribution. Distributions can specify as many extras as they wish, and each extra results in the declaration of additional dependencies of the distribution **when** the extra is used in a dependency specification. For instance:: requests[security] Extras union in the dependencies they define with the dependencies of the distribution they are attached to. The example above would result in requests being installed, and requests own dependencies, and also any dependencies that are listed in the "security" extra of requests. If multiple extras are listed, all the dependencies are unioned together. Versions -------- See PEP-440 [#pep440]_ for more detail on both version numbers and version comparisons. Version specifications limit the versions of a distribution that can be used. They only apply to distributions looked up by name, rather than via a URL. Version comparison are also used in the markers feature. Environment Markers ------------------- Environment markers allow a dependency specification to provide a rule that describes when the dependency should be used. For instance, consider a package that needs argparse. In Python 2.7 argparse is always present. On older Python versions it has to be installed as a dependency. This can be expressed as so:: argparse;python_version<"2.7" A marker expression evalutes to either True or False. When it evaluates to False, the dependency specification should be ignored. The marker language is a subset of Python itself, chosen for the ability to safely evaluate it without running arbitrary code that could become a security vulnerability. Markers were first standardised in PEP-345 [#pep345]_. 
This PEP fixes some issues that were observed in the described in PEP-426 [#pep426]_. Comparisons in marker expressions are typed by the comparison operator. The operators that are not in perform the same as they do for strings in Python. The operators use the PEP-440 [#pep440]_ version comparison rules if both sides are valid versions. If either side is not a valid version, then the comparsion falls back to the same behaviour as in for string in Python if the operator exists in Python. For those operators which are not defined in Python, the result should be False. The variables in the marker grammar such as "os_name" resolve to values looked up in the Python runtime. If a particular value is not available (such as ``sys.implementation.name`` in versions of Python prior to 3.3, or ``platform.dist()`` on non-Linux systems), the default value will be used. .. list-table:: :header-rows: 1 * - Marker - Python equivalent - Sample values - Default if unavailable * - ``os_name`` - ``os.name`` - ``posix``, ``java`` - "" * - ``sys_platform`` - ``sys.platform`` - ``linux``, ``darwin``, ``java1.8.0_51`` - "" * - ``platform_release`` - ``platform.release()`` - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` - "" * - ``platform_machine`` - ``platform.machine()`` - ``x86_64`` - "" * - ``platform_python_implementation`` - ``platform.python_implementation()`` - ``CPython``, ``Jython`` - "" * - ``implementation_name`` - ``sys.implementation.name`` - ``cpython`` - "" * - ``platform_version`` - ``platform.version()`` - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; root:xnu-2782.40.9~2/RELEASE_X86_64`` - "" * - ``platform_dist_name`` - ``platform.dist()[0]`` - ``Ubuntu`` - "" * - ``platform_dist_version`` - ``platform.dist()[1]`` - ``14.04`` - "" * - ``platform_dist_id`` - ``platform.dist()[2]`` - ``trusty`` - "" * - ``python_version`` - ``platform.python_version()[:3]`` - ``3.4``, ``2.7`` - "0" * - ``python_full_version`` - see definition below - ``3.4.0``, ``3.5.0b1`` - "0" * - ``implementation_version`` - see definition below - ``3.4.0``, ``3.5.0b1`` - "0" The ``python_full_version`` and ``implementation_version`` marker variables are derived from ``sys.version_info`` and ``sys.implementation.version`` respectively, in accordance with the following algorithm:: def format_full_version(info): version = '{0.major}.{0.minor}.{0.micro}'.format(info) kind = info.releaselevel if kind != 'final': version += kind[0] + str(info.serial) return version python_full_version = format_full_version(sys.version_info) implementation_version = format_full_version(sys.implementation.version) ``python_full_version`` will typically correspond to ``sys.version.split()[0]``. If a particular version number value is not available (such as ``sys.implementation.version`` in versions of Python prior to 3.3) the corresponding marker variable returned by setuptools will be set to ``0`` Backwards Compatibility ======================= Most of this PEP is already widely deployed and thus offers no compatibiltiy concerns. There are however two key points where the PEP differs from the deployed base. Firstly, PEP-440 direct references haven't actually been deployed in the wild, but they were designed to be compatibly added, and there are no known obstacles to adding them to pip or other tools that consume the existing dependency metadata in distributions. 
Secondly, PEP-426 markers which have had some reasonable deployment, particularly in wheels and pip, will handle version comparisons with ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is less than "2.7.9". This backward incompatibility is deliberate. We are also defining new operators - "~=" and "===", and new variables - the ``platform_dist_*`` variables which are not present in older marker implementations. The variables will fall back to "" on those implementations, permitting reasonably graceful upgrade. The new version comparisons will cause errors, so adoption may require waiting some time for deployment to be widespread. Rationale ========= In order to move forward with any new PEPs that depend on environment markers, we needed a specification that included them. The requirement specifier EBNF is lifted from setuptools pkg_resources documentation, since we can't sensible depend on a defacto standard. References ========== .. [#pip] pip, the recommended installer for Python packages (http://pip.readthedocs.org/en/stable/) .. [#pep345] PEP-345, Python distribution metadata version 1.2. (https://www.python.org/dev/peps/pep-0345/) .. [#pep426] PEP-426, Python distribution metadata. (https://www.python.org/dev/peps/pep-0426/) .. [#pep440] PEP-440, Python distribution metadata. (https://www.python.org/dev/peps/pep-0440/) .. [#abnf] ABNF specification. (https://tools.ietf.org/html/rfc5234) .. [#std66] The URL specification. (https://tools.ietf.org/html/rfc3986) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Thu Nov 5 22:34:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 6 Nov 2015 16:34:57 +1300 Subject: [Distutils] The future of invoking pip In-Reply-To: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: On 6 November 2015 at 15:49, Glyph Lefkowitz wrote: > > On Nov 5, 2015, at 5:04 PM, Robert Collins > wrote: > > cat > /usr/bin/pip << EOF > python -m pip $@ > EOF > > Seriously - isn't the above entirely sufficient? > > > Since I don't think anyone has pointed this out yet: > > No, it's not sufficient. It doesn't work on Windows. Why not? (Ignore the language I wrote my pseudocode in, an actual thing would be a Python script that install would turn into a .exe) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From glyph at twistedmatrix.com Thu Nov 5 22:49:48 2015 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Thu, 5 Nov 2015 19:49:48 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: > On Nov 5, 2015, at 7:34 PM, Robert Collins wrote: > > Why not? (Ignore the language I wrote my pseudocode in, an actual > thing would be a Python script that install would turn into a .exe) It was not clear, in the example that you gave, that I was supposed to ignore the example that you gave ;). -g -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From glyph at twistedmatrix.com Thu Nov 5 22:52:52 2015 From: glyph at twistedmatrix.com (Glyph Lefkowitz) Date: Thu, 5 Nov 2015 19:52:52 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <85bnb7kgug.fsf@benfinney.id.au> Message-ID: > On Nov 5, 2015, at 6:36 PM, Donald Stufft wrote: > > I?m not really sure what the right answer is for something where the particular version of Python you?re invoking it with (and that you?re actually using Python) is important. python -m makes a lot of sense in that area because it eliminates the need to have each tool create their own logic for determining what python they are operating on but I think most people are not going to be very familiar with the idea and I don?t know how well they?d warm to it. The other option (that I can come up with) is baking that logic into each tool (as pip and virtualenv do now) either via naming scheme or a flag. Rather than trying to figure out what the "right" way for users to invoke `pip? to begin with is, why not just have Pip start providing more information about potential problems when you invoke it? If you invoke 'pip[X.Y]' and it matches 'python -m pip' in your current virtualenv, don't say anything; similarly if you invoke 'python -m pip' and 'which pip' matches. But if there's a mismatch, pip can print information in both cases. This would go a long way to alleviating the confusion that occurs when users back themselves into one of these corners, and would alert users to potential issues before they become a problem; right now you have to be a dogged investigative journalist to figure out why pip is doing the wrong thing in some cases. -glyph -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Fri Nov 6 00:47:10 2015 From: qwcode at gmail.com (Marcus Smith) Date: Thu, 5 Nov 2015 21:47:10 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: sorry, I feel like I have confirm my translation of your intro paragraph : ) maybe it will help some others... ended up with a hard dependency on this my understanding is that you were depending on having PEP426 metadata, e.g. for build_requires. since this PEP, as you say doesn't handle the "higher level problem" of specifying the types of dependencies (like PEP426 does), I guess you'll have another PEP in the works as well on top of this? and then your build PEP would be depending on that. bootstrap thing you mean your other PEP idea for supporting any build system using the indirect mapping thing to processes... > 'smaller step' or not you mean Donald's idea of using "setup.py" as the build interface for now Donald has graciously agreed to be a BDFL-delegate for it. > doesn't Nick actually don this delegation?... not that Donald wouldn't be great. > and there are already existing > implementations of the dependency specification which we can instead adopt. > The existing implementations are battle proven and user friendly for reference, which ones are those? > The language defined is a compact line based format which is already in > widespread use in pip requirements files to be clear though, this PEP doesn't commit to how lines are put together in the metadata. theoretically, this spec could be consistent with a higher-level spec that used json, right? probably better to refer to "pip syntax" than "pip requirements files" -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at gmail.com Fri Nov 6 02:39:14 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Fri, 6 Nov 2015 08:39:14 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: On Fri, Nov 6, 2015 at 12:37 AM, Donald Stufft wrote: > If ``pip install ?build ? ?no-clean ?`` worked to do incremental builds, > would that satisfy this use case? (without the ?upgrade and ?no-deps, > ?no-deps is only needed because ?upgrade and ?upgrade is needed because of > another ticket that I think will get fixed at some point). > Then there's at least a way to do it, but it's all very unsatisfying. Users are again going to have a hard time finding this. And I'd hate to have to type that every time. Robert and Nathaniel have argued the main points already so I'm not going to try to go in more detail, but I think the main point is: - we want to replace `python setup.py install` with `pip install .` in order to get proper uninstalls and dependency handling. - except for those two things, `python setup.py install` does the expected thing while pip is trying to be way too clever which is unhelpful. Ralf > On November 5, 2015 at 6:09:46 PM, Ralf Gommers (ralf.gommers at gmail.com) > wrote: > > On Thu, Nov 5, 2015 at 11:44 PM, Ralf Gommers > > wrote: > > > > > > > > > > > On Thu, Nov 5, 2015 at 11:29 PM, Donald Stufft wrote: > > > > > >> I?m not at my computer, but does ``pip install ?no-clean ?build > >> > build dir>`` make this work? > > >> > > > > > > No, that option seems to not work at all. I tried with both a relative > and > > > an absolute path to --build. In the specified dir there are subdirs > created > > > (src.linux-i686-2.7/), but they're empty. The actual build still > > > happens in a tempdir. > > > > > > > Commented on the source of the problem with both `--build` and > `--no-clean` > > in https://github.com/pypa/pip/issues/804 > > > > Ralf > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Nov 6 04:10:36 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 09:10:36 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> Message-ID: On 5 November 2015 at 22:59, Donald Stufft wrote: > I think we could integrate with py on Windows somehow so that we use the same lookup semantics as py does. I don't know enough about Windows and py.exe to know what exactly those are. Hmm, I'm reluctant to get into complex interactions with C code like the launcher on Windows. The number of people we'd have able to maintain such code is approaching zero, I suspect :-( We may be able to run py.exe as a subprocess, but that could easily get out of hand (running pip could then easily start up a chain of 6 processes on Windows, which while not disastrous, certainly isn't cheap). 
Paul From p.f.moore at gmail.com Fri Nov 6 05:20:36 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 10:20:36 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <85bnb7kgug.fsf@benfinney.id.au> Message-ID: On 6 November 2015 at 03:52, Glyph Lefkowitz wrote: > If you invoke 'pip[X.Y]' and it matches 'python -m pip' in your current > virtualenv, don't say anything; similarly if you invoke 'python -m pip' and > 'which pip' matches. But if there's a mismatch, pip can print information > in both cases. This would go a long way to alleviating the confusion that > occurs when users back themselves into one of these corners, and would alert > users to potential issues before they become a problem; right now you have > to be a dogged investigative journalist to figure out why pip is doing the > wrong thing in some cases. I don't see how such checks would work on Windows. The "simple" approach would involve invoking a subprocess to check what "python" resolved to, which is a non-trivial overhead on Windows. Could you explain how you'd do a check like this on Windows? Paul From donald at stufft.io Fri Nov 6 07:10:10 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Nov 2015 07:10:10 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> Message-ID: On November 6, 2015 at 4:10:39 AM, Paul Moore (p.f.moore at gmail.com) wrote: > On 5 November 2015 at 22:59, Donald Stufft wrote: > > I think we could integrate with py on Windows somehow so that we use the same lookup semantics > as py does. I don't know enough about Windows and py.exe to know what exactly those are. > > Hmm, I'm reluctant to get into complex interactions with C code like > the launcher on Windows. The number of people we'd have able to > maintain such code is approaching zero, I suspect :-( We may be able > to run py.exe as a subprocess, but that could easily get out of hand > (running pip could then easily start up a chain of 6 processes on > Windows, which while not disastrous, certainly isn't cheap). > > Paul >? Doesn?t py.exe just look at some ini files and environment variables to decide what to invoke? I?m not sure why we couldn?t just replicate that behavior. Is there something that py.exe does that we can?t also do in Python? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Fri Nov 6 08:36:18 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 13:36:18 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: On 6 November 2015 at 07:39, Ralf Gommers wrote: > On Fri, Nov 6, 2015 at 12:37 AM, Donald Stufft wrote: >> >> If ``pip install ?build ? ?no-clean ?`` worked to do incremental builds, >> would that satisfy this use case? (without the ?upgrade and ?no-deps, >> ?no-deps is only needed because ?upgrade and ?upgrade is needed because of >> another ticket that I think will get fixed at some point). > > > Then there's at least a way to do it, but it's all very unsatisfying. Users > are again going to have a hard time finding this. And I'd hate to have to > type that every time. 
> > Robert and Nathaniel have argued the main points already so I'm not going to > try to go in more detail, but I think the main point is: > > - we want to replace `python setup.py install` with `pip install .` in > order to get proper uninstalls and dependency handling. > - except for those two things, `python setup.py install` does the expected > thing while pip is trying to be way too clever which is unhelpful. While I understand what you're trying to achieve (and I'm in favour, in general) it should be remembered that pip's core goal is installing packages - not being a component of a development workflow. We absolutely need to make pip useful in the development workflow type of situation (that's why pip install -e exists, after all). But I don't think it's so much pip "trying to be too clever" as incremental rebuilds wasn't the original use case that "pip install ." was designed for. What we'll probably have to do is be *more* clever to special case out the situations where a development-style support for incremental rebuilds is more appropriate than the current behaviour. Paul From p.f.moore at gmail.com Fri Nov 6 08:39:17 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 13:39:17 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> Message-ID: On 6 November 2015 at 12:10, Donald Stufft wrote: > Doesn?t py.exe just look at some ini files and environment variables to decide what to invoke? I?m not sure why we couldn?t just replicate that behavior. Is there something that py.exe does that we can?t also do in Python? What's there might be possible in Python, but there are some nasty special cases. The launcher code looks in the registry, and in order to support running both 32 and 64 bit Pythons from the same launcher exe, it needs to play tricks to see both the 32 and 64 bit registry values, and I'm not sure the APIs to do that are exposed to Python so it may need ctypes. Doable certainly, easy not so sure. We could of course do a simplified version and not try to be precisely equivalent to the launcher behaviour - but that may also cause user confusion. The simplest cases are easy, it's the corner cases that get nasty. Paul From njs at pobox.com Fri Nov 6 09:46:23 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 Nov 2015 06:46:23 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> Message-ID: On Nov 6, 2015 5:39 AM, "Paul Moore" wrote: > > On 6 November 2015 at 12:10, Donald Stufft wrote: > > Doesn?t py.exe just look at some ini files and environment variables to decide what to invoke? I?m not sure why we couldn?t just replicate that behavior. Is there something that py.exe does that we can?t also do in Python? > > What's there might be possible in Python, but there are some nasty > special cases. The launcher code looks in the registry, and in order > to support running both 32 and 64 bit Pythons from the same launcher > exe, it needs to play tricks to see both the 32 and 64 bit registry > values, and I'm not sure the APIs to do that are exposed to Python so > it may need ctypes. > > Doable certainly, easy not so sure. We could of course do a simplified > version and not try to be precisely equivalent to the launcher > behaviour - but that may also cause user confusion. The simplest cases > are easy, it's the corner cases that get nasty. 
One option would be to add a "py -which" mode that just does the configuration lookup and then prints the path to the real python executable. This would add the cost of one subprocess startup per pip, but not 6? -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at timgolden.me.uk Fri Nov 6 09:56:34 2015 From: mail at timgolden.me.uk (Tim Golden) Date: Fri, 6 Nov 2015 14:56:34 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> Message-ID: <563CBFA2.2070607@timgolden.me.uk> On 06/11/2015 14:46, Nathaniel Smith wrote: > On Nov 6, 2015 5:39 AM, "Paul Moore" > wrote: >> >> On 6 November 2015 at 12:10, Donald Stufft > wrote: >> > Doesn?t py.exe just look at some ini files and environment variables > to decide what to invoke? I?m not sure why we couldn?t just replicate > that behavior. Is there something that py.exe does that we can?t also do > in Python? >> >> What's there might be possible in Python, but there are some nasty >> special cases. The launcher code looks in the registry, and in order >> to support running both 32 and 64 bit Pythons from the same launcher >> exe, it needs to play tricks to see both the 32 and 64 bit registry >> values, and I'm not sure the APIs to do that are exposed to Python so >> it may need ctypes. >> >> Doable certainly, easy not so sure. We could of course do a simplified >> version and not try to be precisely equivalent to the launcher >> behaviour - but that may also cause user confusion. The simplest cases >> are easy, it's the corner cases that get nasty. > > One option would be to add a "py -which" mode that just does the > configuration lookup and then prints the path to the real python > executable. This would add the cost of one subprocess startup per pip, > but not 6? FWIW this will do that: py -c "import sys; print(sys.executable)" TJG From marius at gedmin.as Fri Nov 6 10:23:42 2015 From: marius at gedmin.as (Marius Gedminas) Date: Fri, 6 Nov 2015 17:23:42 +0200 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: <20151106152342.GA17079@platonas> On Fri, Nov 06, 2015 at 02:04:38PM +1300, Robert Collins wrote: > On 6 November 2015 at 10:08, Donald Stufft wrote: > > * It's more to type, 10 more characters on *nix and 6 more characters on > > Windows which makes it more akward and annoying to use. This is particularly > > annoying inside of a virtual environment where there isn't really any > > ambiguity when one is activated. > > cat > /usr/bin/pip << EOF > python -m pip $@ > EOF > > Seriously - isn't the above entirely sufficient? It doesn't help with my usual pattern, which used to be $ virtualenv . $ bin/pip install foo but then changed[*] into $ virtualenv .venv && ln -sfn .venv/bin bin $ bin/pip install foo and is likely to change[+] into $ virtualenv .venv && mkdir -p bin && ln -sfn .venv/bin/pip bin/pip $ bin/pip install foo I am not running these by hand -- I have Makefiles to set up my app environment by creating a local virtualenv and pip installing all the tools, plus '-e .', into it. But once the basic environment is done, I'm often installing ad-hoc one-time-use extra tools with commands like $ bin/pip install runsnakerun --- [*] because virtualenv's root is becoming too cluttered with files like pip-selftest.json, and because some evil packages on PyPI install files named README.txt into the virtualenv root. 
[+] because if you symlink just the bin/ directory, bin/python fails to set up sys.path correctly Marius Gedminas -- I'm sure it would be possible to speed apport up a lot, after we're done making boot and login instantaneous. -- Lars Wirzenius -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From inesbarataborges at gmail.com Fri Nov 6 09:33:31 2015 From: inesbarataborges at gmail.com (Ines Barata) Date: Fri, 6 Nov 2015 14:33:31 +0000 Subject: [Distutils] Installing packages using pip Message-ID: Hello, I want to install the following version of pygame in my windows10.pro:install pygame-1.9.2a0-cp33-none-win_amd64.whl, but the file is in *.whl* format which I dont have any program to open it I checked the information in https://docs.python.org/3/installing/ to understand how to use pip. But didn't make it... My result is the following, using *python3.5.0 IDLE*: >>> pip *install* pygame-1.9.2a0-cp34-none-win32.whl SyntaxError: invalid syntax >>> pip *pygame*-1.9.2a0-cp34-none-win32.whl SyntaxError: invalid syntax >>> python -m *pip* install pygame-1.9.2a0-cp33-none-win_amd64.whl SyntaxError: invalid syntax >>> python3.*5*.0 -m pip install pygame-1.9.2a0-cp33-none-win_amd64.whl SyntaxError: invalid syntax >>> Can you help me solve my problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Nov 6 11:04:14 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 16:04:14 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: <563CBFA2.2070607@timgolden.me.uk> References: <05235595-6C56-44A1-BB42-3E1E2F2D24AC@stufft.io> <563CBFA2.2070607@timgolden.me.uk> Message-ID: On 6 November 2015 at 14:56, Tim Golden wrote: >> One option would be to add a "py -which" mode that just does the >> configuration lookup and then prints the path to the real python >> executable. This would add the cost of one subprocess startup per pip, >> but not 6? > > FWIW this will do that: > > py -c "import sys; print(sys.executable)" Indeed. We don't want to require modifications to py.exe, as that would imply some sort of "Python 3.6 only" requirement (there are backport options, as the launcher is largely independent from Python itself, but that's yet more obstacles for end users). Don't take the 6 processes too literally, but just to explain: pip launcher - runs Python. Either directly (= 2 processes so far) or via py (= 3 processes so far) - checks what Python it's running (just code to read sys.executable in pip) - processes the -p flag - runs py -c "import sys; print(sys.executable)" to work out what version the -p flag is requesting (unless -p requires the full pathname of an executable, which is lousy UI) That's 2 more processes run, albeit only for a short time - Finds out it needs a different exe, launches py for the correct Python (add 2 more processes to the stack, we're at 5 now). - The subordinate pip runs sys.executable to execute setup.py (1 more process) which in turn launches the C compiler (that's a stack of 7 processes) It's quite likely that this can be trimmed, and normal cases won't be nearly as bad. And while creating new processes on Windows is distinctly worse than on Linux, it's not *that* bad. 
But when wheels were introduced, installing via wheels rather than via setup.py caused *substantial* time savings when installing pure Python wheels, simply from removing process creation overheads (creating a virtualenv, which installed setuptools and pip at the time, i.e., 2 wheels instead of 2 sdists, went from something like 30 seconds down to a couple of seconds, if I recall). I'm not wanting to make a big issue out of this, but it needs to be considered... Paul From p.f.moore at gmail.com Fri Nov 6 11:06:53 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 Nov 2015 16:06:53 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: Message-ID: On 6 November 2015 at 14:33, Ines Barata wrote: > Hello, > > I want to install the following version of pygame in my > windows10.pro:install pygame-1.9.2a0-cp33-none-win_amd64.whl, but the file > is in .whl format which I dont have any program to open it > I checked the information in https://docs.python.org/3/installing/ to > understand how to use pip. But didn't make it... > > My result is the following, using python3.5.0 IDLE: >>>> pip install pygame-1.9.2a0-cp34-none-win32.whl > SyntaxError: invalid syntax That's the correct command, but you need to run it from the Windows command prompt, not from within IDLE. Paul From wolfgang.maier at biologie.uni-freiburg.de Fri Nov 6 12:22:09 2015 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Fri, 6 Nov 2015 18:22:09 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On 11/05/2015 10:08 PM, Donald Stufft wrote: > > * There is a lot of documentation out there in many projects that tell people > to use ``pip install ...``, the long tail on getting people moved to this > will be very long. > The deprecation period will probably have to be long, but the current situation is not so bad that you could not live with it for a bit longer. > * It's more to type, 10 more characters on *nix and 6 more characters on > Windows which makes it more akward and annoying to use. This is particularly > annoying inside of a virtual environment where there isn't really any > ambiguity when one is activated. > I have no problem with the extra characters, just as I don't have a problem with typing: java -jar xy.jar The extra typing might be annoying for the pypa devs, but remember that many regular users have to type this only once in a while when they install some new package. For them, its far more important that things work reliably than with the shortest possible command. > * It still has the annoyance around having multiple pip installs all over the > place and needing to manage those. > Another experts problem. People who are just using "Python" and a few third-party packages are not suffering from this. Once they get to a level where they use multiple versions of python, they will be able to cope with the multiple pip installations. > * We still support Python 2.6 which doesn't support executing a package only > modules via ``-m``. So we'll break Python 2.6 unless people do > ``python -m pip.__main__`` or we move pip/* to _pip/* and make a pip.py which > will break things for people using pip as a library (which isn't currently > supported). > How much longer are you planning to support Python 2.6? Why not just deprecate pip and pipX.Y (emit a warning to users) for newer versions of Python. Then once you drop Python 2.6 support remove pip and pipX.Y. python -m pip ... 
may not read that beautifully, but if lack of beauty is the last remaining problem to Python packaging, then that's a reason to celebrate, isn't it. Best, Wolfgang From qwcode at gmail.com Fri Nov 6 12:45:21 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 6 Nov 2015 09:45:21 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > The language defined is a compact line based format which is already in > widespread use this is the most critical thing for me, and the reason this approach seems more attractive than the path of PEP426, although I'd certainly like to see Nick's reaction. PEP426 tries to cover how names/specifiers/extras/markers would be put together in abstract "in-memory representation" (that can be serialized to json), but it's left open to pip (and other tools) to lay down a standard (via implementation) for how these pieces are put together and used by users. this PEP would dictate both, right? the user way, and the internal metadata way.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Nov 6 13:22:38 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 Nov 2015 18:22:38 +0000 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: LGTM To clarify in this spec to specify a couple of requirements for the [foo] extra would you have to say [foo] requests [foo] sqlalchemy Compare to requires.txt from setuptools which IIRC is a plain text file like so, with normal requirements not in a section, and extra or conditional requirements in sections named [extra_name;marker]: unconditional requirements==4.7 [;marker] non-extra requirement if marker evaluates to true [extra] unconditional requirements for extra [extra;marker] requirement for extra with if marker evaluates to true On Fri, Nov 6, 2015 at 12:45 PM Marcus Smith wrote: > >> The language defined is a compact line based format which is already in >> widespread use > > > this is the most critical thing for me, and the reason this approach seems > more attractive than the path of PEP426, although I'd certainly like to see > Nick's reaction. > > PEP426 tries to cover how names/specifiers/extras/markers would be put > together in abstract "in-memory representation" (that can be serialized to > json), but it's left open to pip (and other tools) to lay down a standard > (via implementation) for how these pieces are put together and used by > users. > > this PEP would dictate both, right? the user way, and the internal > metadata way.... > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Nov 6 13:31:31 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 7 Nov 2015 07:31:31 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 7 November 2015 at 06:45, Marcus Smith wrote: >> >> The language defined is a compact line based format which is already in >> widespread use > > > this is the most critical thing for me, and the reason this approach seems > more attractive than the path of PEP426, although I'd certainly like to see > Nick's reaction. 
> > PEP426 tries to cover how names/specifiers/extras/markers would be put > together in abstract "in-memory representation" (that can be serialized to > json), but it's left open to pip (and other tools) to lay down a standard > (via implementation) for how these pieces are put together and used by > users. > > this PEP would dictate both, right? the user way, and the internal metadata > way.... No - it specifies the serialisation format for names/specifiers/extras/markers that is in common use, but doesn't specify a programming API. It is intended as an interop building block, so we don't have to say 'that thing that pkg_resources is the defacto std for'. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From contact at ionelmc.ro Fri Nov 6 13:33:48 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 6 Nov 2015 20:33:48 +0200 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Thu, Nov 5, 2015 at 11:08 PM, Donald Stufft wrote: > Currently pip installs a number of commands like ``pip``, ``pipX`` and > ``pipX.Y`` where the X and X.Y corresponds to the version of Python that > pip > is installed into. Pip installs into whatever Python is currently > executing it > so this gives some ability to control which version of Python you're > installing > into (``pip2.7`` for Python 2.7 etc). > ?Why not consider having a "pip" launcher?? Seems the obvious thing to me - python has the "py" launcher on windows and it works great! Eg: "pip -3" to launch pip using python3, "pip -3.5" to launch pip using python3.5 - just like the "py" launcher. Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Fri Nov 6 13:42:44 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 6 Nov 2015 10:42:44 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > > No - it specifies the serialisation format for > names/specifiers/extras/markers that is in common use, but doesn't > specify a programming API. It is intended as an interop building > block, I'm not not talking about programming API. this PEP would set the format used in interop formats, as you say, but also doesn't it effectively standardize the format used in the pip UI? or maybe it's just that pip could easily say.. "our syntax for dependencies mirrors PEPXX exactly" -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Nov 6 13:46:17 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 7 Nov 2015 07:46:17 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 7 November 2015 at 07:42, Marcus Smith wrote: >> >> >> No - it specifies the serialisation format for >> names/specifiers/extras/markers that is in common use, but doesn't >> specify a programming API. It is intended as an interop building >> block, > > > I'm not not talking about programming API. > this PEP would set the format used in interop formats, as you say, but also > doesn't it effectively standardize the format used in the pip UI? > or maybe it's just that pip could easily say.. "our syntax for dependencies > mirrors PEPXX exactly" Yes, I think that would be a sane thing for pip to say. E.g. the full description would be something like "Requirements files are a super-set of PEPXX. 
You can also include command line options from the pip CLI. Options on a line on their own are globally applies. Options on a specification line apply to that specification alone. Empty and comment-only lines are supported." -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From rmcgibbo at gmail.com Fri Nov 6 13:48:15 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 10:48:15 -0800 Subject: [Distutils] Platform tags for OS X binary wheels Message-ID: Hi, I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it proceeds to download and compile the tarball from PyPI from source (very slow). I see, however, that pre-compiled OS X wheel files are available on PyPI for OS X 10.6 and later. Checking the code, it looks like pip is picking up the platform tag through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' on this machine. At root, I think this comes from the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at `python3.5/config-3.5m/Makefile`. I know that this value is used by distutils compiling python extension modules -- presumably so that they can be distributed to any target machine with OS X >=10.5 -- so that's good. But is this the right thing for pip to be using when checking whether a binary wheel is compatible? I see it mentioned in PEP 425, so perhaps this was already hashed out on the list. Best, Robert -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Fri Nov 6 13:57:57 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 6 Nov 2015 10:57:57 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > So both the abstract build system PEP and donalds setup.py interface > depend on having a bootstrap dependency list written into a file in > the source tree. your build PEP said stuff like this "Additional data *may* be included, but the ``build_requires`` and ``metadata_version`` keys must be present" that leads me to think you need more than just this specification of a single dependency. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Nov 6 14:00:26 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 Nov 2015 19:00:26 +0000 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: It should already be sorted. Try python -c "import pprint, pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" Do none of the tags for the available numpy wheels appear in that list? On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon wrote: > Hi, > > I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it > proceeds to download and compile the tarball from PyPI from source (very > slow). I see, however, that pre-compiled OS X wheel files are available on > PyPI for OS X 10.6 and later. > > Checking the code, it looks like pip is picking up the platform tag > through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' > on this machine. At root, I think this comes from > the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at > `python3.5/config-3.5m/Makefile`. I know that this value is used by > distutils compiling python extension modules -- presumably so that they can > be distributed to any target machine with OS X >=10.5 -- so that's good. > But is this the right thing for pip to be using when checking whether a > binary wheel is compatible? 
I see it mentioned > in PEP 425, so perhaps > this was already hashed out on the list. > > Best, > Robert > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Fri Nov 6 14:07:33 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 11:07:33 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: I don't think it's the sorting, per se. All of the get_supported() tags are 10.5 or earlier. Here's the output: https://gist.github.com/rmcgibbo/1d0f5d166ca48253b5a9 On Fri, Nov 6, 2015 at 11:00 AM, Daniel Holth wrote: > It should already be sorted. Try python -c "import pprint, > pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" > > Do none of the tags for the available numpy wheels appear in that list? > > On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon wrote: > >> Hi, >> >> I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it >> proceeds to download and compile the tarball from PyPI from source (very >> slow). I see, however, that pre-compiled OS X wheel files are available on >> PyPI for OS X 10.6 and later. >> >> Checking the code, it looks like pip is picking up the platform tag >> through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' >> on this machine. At root, I think this comes from >> the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at >> `python3.5/config-3.5m/Makefile`. I know that this value is used by >> distutils compiling python extension modules -- presumably so that they can >> be distributed to any target machine with OS X >=10.5 -- so that's good. >> But is this the right thing for pip to be using when checking whether a >> binary wheel is compatible? I see it mentioned >> in PEP 425, so perhaps >> this was already hashed out on the list. >> >> Best, >> Robert >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Nov 6 14:10:30 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 Nov 2015 19:10:30 +0000 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: I see what you mean. Sounds like a bug to me. On Fri, Nov 6, 2015 at 2:07 PM Robert McGibbon wrote: > I don't think it's the sorting, per se. All of the get_supported() tags > are 10.5 or earlier. Here's the output: > https://gist.github.com/rmcgibbo/1d0f5d166ca48253b5a9 > > > On Fri, Nov 6, 2015 at 11:00 AM, Daniel Holth wrote: > >> It should already be sorted. Try python -c "import pprint, >> pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" >> >> Do none of the tags for the available numpy wheels appear in that list? >> >> On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon >> wrote: >> >>> Hi, >>> >>> I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it >>> proceeds to download and compile the tarball from PyPI from source (very >>> slow). I see, however, that pre-compiled OS X wheel files are available on >>> PyPI for OS X 10.6 and later. 
>>> >>> Checking the code, it looks like pip is picking up the platform tag >>> through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' >>> on this machine. At root, I think this comes from >>> the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at >>> `python3.5/config-3.5m/Makefile`. I know that this value is used by >>> distutils compiling python extension modules -- presumably so that they can >>> be distributed to any target machine with OS X >=10.5 -- so that's good. >>> But is this the right thing for pip to be using when checking whether a >>> binary wheel is compatible? I see it mentioned >>> in PEP 425, so perhaps >>> this was already hashed out on the list. >>> >>> Best, >>> Robert >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmericke at gmail.com Fri Nov 6 14:10:58 2015 From: mmericke at gmail.com (Michael Merickel) Date: Fri, 6 Nov 2015 13:10:58 -0600 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Fri, Nov 6, 2015 at 12:33 PM, Ionel Cristian M?rie? wrote: > ?Why not consider having a "pip" launcher?? Seems the obvious thing to me > - python has the "py" launcher on windows and it works great! > > Eg: "pip -3" to launch pip using python3, "pip -3.5" to launch pip using > python3.5 - just like the "py" launcher. > This sounds similar to node's approach to bundling npm where you have a version of npm that runs for all projects using that node runtime. In effect you use the same npm binary independent of which virtualenv you are using, it's tied to the interpreter installation. So you run the pip that came with that python, such as pip2.7 installed with python2.7 and then just tell it which virtualenv to install packages into. You could go further and install a pip alias into the virtualenv that links to the appropriate pip, but there's still only one copy running for all virtualenvs off of that interpreter. This model is really nice for updating because you don't need to update pip inside every single virtualenv. Another cool feature of npm is that it's also effectively auto-using the virtualenv since it defaults to the local node_modules folder so you don't have to specify it as something like "pip --virtualenv=env install". Anyway I'm not suggesting such a drastic change but it's nice to look at what else is out there and the benefits. It doesn't mesh particularly well verbatim with how PYTHONPATH+virtualenv works of course. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Nov 6 14:13:34 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 7 Nov 2015 08:13:34 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 7 November 2015 at 07:57, Marcus Smith wrote: >> >> So both the abstract build system PEP and donalds setup.py interface >> depend on having a bootstrap dependency list written into a file in >> the source tree. > > > your build PEP said stuff like this "Additional data *may* be included, but > the ``build_requires`` and ``metadata_version`` keys must be present" > > that leads me to think you need more than just this specification of a > single dependency. Thats the next layer up - and if we adopt the egg-info-as-carrier format we don't need to issue a new PEP. 
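For concreteness, the bootstrap description Marcus is quoting could be as small as the following -- the JSON encoding and the values are assumptions for illustration, only the two required keys come from the draft text:

    {
        "metadata_version": "1.0",
        "build_requires": ["setuptools >= 18.0", "wheel"]
    }

Everything beyond those two keys is the "additional data" the draft leaves open.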
I've yet to look closely. But any which way, it doesn't affect this PEP, and this PEP acts as a robust building block. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From rmcgibbo at gmail.com Fri Nov 6 14:20:03 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 11:20:03 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: For OS X, the pip get_platform function eventually calls into here: https://github.com/python/cpython/blob/master/Lib/_osx_support.py#L429-L439, and I think the comment kind of explains the bug. -Robert On Fri, Nov 6, 2015 at 11:10 AM, Daniel Holth wrote: > I see what you mean. Sounds like a bug to me. > > On Fri, Nov 6, 2015 at 2:07 PM Robert McGibbon wrote: > >> I don't think it's the sorting, per se. All of the get_supported() tags >> are 10.5 or earlier. Here's the output: >> https://gist.github.com/rmcgibbo/1d0f5d166ca48253b5a9 >> >> >> On Fri, Nov 6, 2015 at 11:00 AM, Daniel Holth wrote: >> >>> It should already be sorted. Try python -c "import pprint, >>> pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" >>> >>> Do none of the tags for the available numpy wheels appear in that list? >>> >>> On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon >>> wrote: >>> >>>> Hi, >>>> >>>> I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it >>>> proceeds to download and compile the tarball from PyPI from source (very >>>> slow). I see, however, that pre-compiled OS X wheel files are available on >>>> PyPI for OS X 10.6 and later. >>>> >>>> Checking the code, it looks like pip is picking up the platform tag >>>> through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' >>>> on this machine. At root, I think this comes from >>>> the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at >>>> `python3.5/config-3.5m/Makefile`. I know that this value is used by >>>> distutils compiling python extension modules -- presumably so that they can >>>> be distributed to any target machine with OS X >=10.5 -- so that's good. >>>> But is this the right thing for pip to be using when checking whether a >>>> binary wheel is compatible? I see it mentioned >>>> in PEP 425, so >>>> perhaps this was already hashed out on the list. >>>> >>>> Best, >>>> Robert >>>> _______________________________________________ >>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Nov 6 14:23:05 2015 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 Nov 2015 19:23:05 +0000 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: If you would like to fix the problem, figure out how to get the real OSX version into pip.pep425tags. On Fri, Nov 6, 2015 at 2:20 PM Robert McGibbon wrote: > For OS X, the pip get_platform function eventually calls into here: > https://github.com/python/cpython/blob/master/Lib/_osx_support.py#L429-L439, > and I think the comment kind of explains the bug. > > -Robert > > > > On Fri, Nov 6, 2015 at 11:10 AM, Daniel Holth wrote: > >> I see what you mean. Sounds like a bug to me. >> >> On Fri, Nov 6, 2015 at 2:07 PM Robert McGibbon >> wrote: >> >>> I don't think it's the sorting, per se. All of the get_supported() tags >>> are 10.5 or earlier. 
Here's the output: >>> https://gist.github.com/rmcgibbo/1d0f5d166ca48253b5a9 >>> >>> >>> On Fri, Nov 6, 2015 at 11:00 AM, Daniel Holth wrote: >>> >>>> It should already be sorted. Try python -c "import pprint, >>>> pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" >>>> >>>> Do none of the tags for the available numpy wheels appear in that list? >>>> >>>> On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it >>>>> proceeds to download and compile the tarball from PyPI from source (very >>>>> slow). I see, however, that pre-compiled OS X wheel files are available on >>>>> PyPI for OS X 10.6 and later. >>>>> >>>>> Checking the code, it looks like pip is picking up the platform tag >>>>> through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' >>>>> on this machine. At root, I think this comes from >>>>> the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at >>>>> `python3.5/config-3.5m/Makefile`. I know that this value is used by >>>>> distutils compiling python extension modules -- presumably so that they can >>>>> be distributed to any target machine with OS X >=10.5 -- so that's good. >>>>> But is this the right thing for pip to be using when checking whether a >>>>> binary wheel is compatible? I see it mentioned >>>>> in PEP 425, so >>>>> perhaps this was already hashed out on the list. >>>>> >>>>> Best, >>>>> Robert >>>>> _______________________________________________ >>>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Fri Nov 6 14:36:03 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 11:36:03 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: Sounds good. I'll take a look. -Robert On Fri, Nov 6, 2015 at 11:23 AM, Daniel Holth wrote: > If you would like to fix the problem, figure out how to get the real OSX > version into pip.pep425tags. > > On Fri, Nov 6, 2015 at 2:20 PM Robert McGibbon wrote: > >> For OS X, the pip get_platform function eventually calls into here: >> https://github.com/python/cpython/blob/master/Lib/_osx_support.py#L429-L439, >> and I think the comment kind of explains the bug. >> >> -Robert >> >> >> >> On Fri, Nov 6, 2015 at 11:10 AM, Daniel Holth wrote: >> >>> I see what you mean. Sounds like a bug to me. >>> >>> On Fri, Nov 6, 2015 at 2:07 PM Robert McGibbon >>> wrote: >>> >>>> I don't think it's the sorting, per se. All of the get_supported() >>>> tags are 10.5 or earlier. Here's the output: >>>> https://gist.github.com/rmcgibbo/1d0f5d166ca48253b5a9 >>>> >>>> >>>> On Fri, Nov 6, 2015 at 11:00 AM, Daniel Holth wrote: >>>> >>>>> It should already be sorted. Try python -c "import pprint, >>>>> pip.pep425tags; pprint.pprint(pip.pep425tags.get_supported())" >>>>> >>>>> Do none of the tags for the available numpy wheels appear in that list? >>>>> >>>>> On Fri, Nov 6, 2015 at 1:48 PM Robert McGibbon >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I just tried to run `pip install numpy` on my OS X 10.10.3 box, and >>>>>> it proceeds to download and compile the tarball from PyPI from source (very >>>>>> slow). I see, however, that pre-compiled OS X wheel files are available on >>>>>> PyPI for OS X 10.6 and later. 
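As a starting point for that: the standard library can already report the OS X release that is actually running, independently of the deployment target baked into the interpreter's Makefile -- nothing pip-specific is assumed here:

    import platform

    release, _, machine = platform.mac_ver()
    print("%s %s" % (release, machine))   # e.g. "10.10.3 x86_64" on the box described above

Mapping that release onto the right set of platform tags is the part that would still need to be worked out inside pip.pep425tags.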
>>>>>> >>>>>> Checking the code, it looks like pip is picking up the platform tag >>>>>> through `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' >>>>>> on this machine. At root, I think this comes from >>>>>> the MACOSX_DEPLOYMENT_TARGET=10.5 entry in the Makefile at >>>>>> `python3.5/config-3.5m/Makefile`. I know that this value is used by >>>>>> distutils compiling python extension modules -- presumably so that they can >>>>>> be distributed to any target machine with OS X >=10.5 -- so that's good. >>>>>> But is this the right thing for pip to be using when checking whether a >>>>>> binary wheel is compatible? I see it mentioned >>>>>> in PEP 425, so >>>>>> perhaps this was already hashed out on the list. >>>>>> >>>>>> Best, >>>>>> Robert >>>>>> _______________________________________________ >>>>>> Distutils-SIG maillist - Distutils-SIG at python.org >>>>>> https://mail.python.org/mailman/listinfo/distutils-sig >>>>>> >>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Fri Nov 6 14:36:55 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 7 Nov 2015 08:36:55 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 7 November 2015 at 07:22, Daniel Holth wrote: > LGTM > > To clarify in this spec to specify a couple of requirements for the [foo] > extra would you have to say > > [foo] requests > [foo] sqlalchemy Oh, so this spec describes how to specify a dependency, not how to provide the metadata to e.g. setuptools. I may be misinterpreting your question, but assuming you are referring to the setup.py interface... it should be insertable in a compatible way with the existing setuptools idioms. e.g. in setuptools today you'd say extras_require={'foo': ["requests", "sqlalchemy"]} And I think you'd keep doing that. What you might want to do though is say that for the foo extra that one of the dependencies only applies sometimes. Today you do this manual separation thing: extras_require={ 'foo': ["requests", "sqlalchemy"], 'foo:python_version<"2.7"': ["requests[security]"]} A patch to setuptools to allow dependencies per this PEP would allow: extras_require={ 'foo': ["requests", "sqlalchemy", "requests[security];python_version<'2.7'"]} > Compare to requires.txt from setuptools which IIRC is a plain text file like > so, with normal requirements not in a section, and extra or conditional > requirements in sections named [extra_name;marker]: >... OTOH you might be talking about how we'd using this in an PKG_INFO / egg-info / dist-info metadata serialisation format. We haven't specified that yet - since there is strictly no new capabilities over those existing on-disk formats, they are just different, I think we have an ok status quo. It's not as great as having a fully specified consistent thing end to end format though :(. So taking a higher level view, we currently have three formats to describe this: - requires.txt, which can't specify markers at the granularity of a dependency, only by having larger structure which also defines extras and uses the empty extra to define marker based rules - PEP-345 PKG-INFO, which nothing outputs as PKG-INFO, but wheel outputs as dist-info/METADATA - and is very nearly compatible with this PEP (it just adds brackets around version specificiers). I think adding those in as optional but not recommended elements, would permit a single unified parser for METADATA specifiers and the ones in this PEP. 
- requirements.txt files which I decided to use as the basis because its been hammered on most IMO :). I don't have a PEP yet to specify the *export* of distribution metadata using this format. If we adopt 'what is in egg-info' as the interop basis for now, then such a PEP can come later, but at the cost of having a non-PEP defined format included by reference. If we adopt what is in dist-info as the format, we get a define thing, but out of date (because markers have evolved since PEP-345) - but we should be able to do a very minimal update to 345 that solely references this new PEP to update the dependency specification (if we do the brackets thing above), ... which gives us sufficient coverage I think. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Fri Nov 6 15:11:41 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sat, 7 Nov 2015 09:11:41 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Here's an update I just pushed into git that addresses the defects found so far. diff --git a/dependency-specification.rst b/dependency-specification.rst index f8171d8..486636d 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -73,7 +73,7 @@ the dependency can be made conditional using environment markers. Grammar ------- -We first cover the grammar briefly and the drill into the semantics of each +We first cover the grammar briefly and then drill into the semantics of each section later. A distribution specification is written in ASCII text. We use ABNF [#abnf]_ to @@ -92,7 +92,7 @@ URI is defined in std-66 [#std66]_:: version-cmp = "<" / "<=" / "!=" / "==" / ">=" / ">" / "~=" / "===" version = 1*( DIGIT / ALPHA / "-" / "_" / "." / "*" ) - versionspec = version-cmp version *(',' version-cmp version) + versionspec = ["("] version-cmp version *(',' version-cmp version) [")"] urlspec = "@" URI Environment markers allow making a specification only take effect in some @@ -179,7 +179,9 @@ Versions See PEP-440 [#pep440]_ for more detail on both version numbers and version comparisons. Version specifications limit the versions of a distribution that can be used. They only apply to distributions looked up by name, rather than -via a URL. Version comparison are also used in the markers feature. +via a URL. Version comparison are also used in the markers feature. The +optional brackets around a version are present for compatibility with PEP-345 +[#pep345]_ but should not be generated, only accepted. Environment Markers ------------------- @@ -302,7 +304,7 @@ Backwards Compatibility Most of this PEP is already widely deployed and thus offers no compatibiltiy concerns. -There are however two key points where the PEP differs from the deployed base. +There are however a few points where the PEP differs from the deployed base. Firstly, PEP-440 direct references haven't actually been deployed in the wild, but they were designed to be compatibly added, and there are no known @@ -320,15 +322,21 @@ permitting reasonably graceful upgrade. The new version comparisons will cause errors, so adoption may require waiting some time for deployment to be widespread. +Thirdly, PEP-345 required brackets around version specifiers. In order to +accept PEP-345 dependency specifications, brackets are accepted, but they +should not be generated. + Rationale ========= In order to move forward with any new PEPs that depend on environment markers, -we needed a specification that included them. 
+we needed a specification that included them in their modern form. This PEP +brings together all the currently unspecified components into a specified +form. The requirement specifier EBNF is lifted from setuptools pkg_resources -documentation, since we can't sensible depend on a defacto standard. - +documentation, since we wish to avoid depending on a defacto, vs PEP +specified, standard. References ========== -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Fri Nov 6 16:13:02 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Nov 2015 16:13:02 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: > On Nov 6, 2015, at 1:33 PM, Ionel Cristian M?rie? wrote: > > >> On Thu, Nov 5, 2015 at 11:08 PM, Donald Stufft wrote: >> Currently pip installs a number of commands like ``pip``, ``pipX`` and >> ``pipX.Y`` where the X and X.Y corresponds to the version of Python that pip >> is installed into. Pip installs into whatever Python is currently executing it >> so this gives some ability to control which version of Python you're installing >> into (``pip2.7`` for Python 2.7 etc). > > ?Why not consider having a "pip" launcher?? Seems the obvious thing to me - python has the "py" launcher on windows and it works great! > > Eg: "pip -3" to launch pip using python3, "pip -3.5" to launch pip using python3.5 - just like the "py" Isn't this basically what the third option is? Just the launcher is also the entire program. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri Nov 6 16:56:45 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 Nov 2015 13:56:45 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: Message-ID: On Mon, Nov 2, 2015 at 5:57 PM, Nathaniel Smith wrote: > On Sun, Nov 1, 2015 at 3:16 PM, Ralf Gommers wrote: >> 2. ``pip install .`` silences build output, which may make sense for some >> usecases, but for numpy it just sits there for minutes with no output after >> printing "Running setup.py install for numpy". Users will think it hangs and >> Ctrl-C it. https://github.com/pypa/pip/issues/2732 > > I tend to agree with the commentary there that for end users this is > different but no worse than the current situation where we spit out > pages of "errors" that don't mean anything :-). I posted a suggestion > on that bug that might help with the apparent hanging problem. For the record, this is now fixed in pip's "develop" branch and should be in the next release. For commands like 'setup.py install', pip now displays a spinner that ticks over whenever the underlying process prints to stdout/stderr. So if the underlying process hangs, then the spinner will stop (it's not just lying to you), but normally it works nicely. https://github.com/pypa/pip/pull/3224 -n -- Nathaniel J. Smith -- http://vorpus.org From contact at ionelmc.ro Fri Nov 6 16:59:15 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 6 Nov 2015 23:59:15 +0200 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Fri, Nov 6, 2015 at 11:13 PM, Donald Stufft wrote: > Eg: "pip -3" to launch pip using python3, "pip -3.5" to launch pip using > python3.5 - just like the "py" > > > Isn't this basically what the third option is? Just the launcher is also > the entire program. 
> ?If you mean the initial mail you have send, I only saw two proposals: `python -mpip` and something with zipfiles. Not sure if having a zipfile around counts as a launcher, you wouldn't call something a launcher if it contains the target completely no? Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From lkraider at gmail.com Fri Nov 6 17:20:01 2015 From: lkraider at gmail.com (Paul Eipper) Date: Fri, 6 Nov 2015 20:20:01 -0200 Subject: [Distutils] Why github and bitbucket? In-Reply-To: References: <563A6593.1010601@thomas-guettler.de> <563BAD93.6000500@thomas-guettler.de> <563BB972.2030501@thomas-guettler.de> Message-ID: On Thu, Nov 5, 2015 at 6:40 PM, Marcus Smith wrote: >> >> >> Basically: Historical reasons. The name ?PyPA? was a joke by the >> pip/virtualenv developers and it was only pip and virtualenv so it was on >> Github. > > > here's an anecdote.... per the pypa.io history page, 'Other proposed names > were ?ianb-ng?, ?cabal?, ?pack? and ?Ministry of Installation?' > > https://www.pypa.io/en/latest/history/ > > maybe even funnier that we have a history page, but it's easy to forget all > that's happened, so I made one awhile back... > Oh man, too bad ?Ministry of Installation? was not chosen, my favorite! -- Paul Eipper From donald at stufft.io Fri Nov 6 17:24:11 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 Nov 2015 17:24:11 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On November 6, 2015 at 4:59:39 PM, Ionel Cristian M?rie? (contact at ionelmc.ro) wrote: > On Fri, Nov 6, 2015 at 11:13 PM, Donald Stufft wrote: > > > Eg: "pip -3" to launch pip using python3, "pip -3.5" to launch pip using > > python3.5 - just like the "py" > > > > > > Isn't this basically what the third option is? Just the launcher is also > > the entire program. > > > > If you mean the initial mail you have send, I only saw two proposals: > `python -mpip` and something with zipfiles. Not sure if having a zipfile > around counts as a launcher, you wouldn't call something a launcher if it > contains the target completely no? > Well, it?s not really a launcher no, but you?d do ``pip -p python2 install foo`` or something like that. It?s the same UI. Having just a ?launcher? I think is actually more confusing (and we already had that in the past with -E and removed it because it was confusing). Since you?ll have different versions of pip in different environments (Python or virtual) things break or act confusingly. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From xav.fernandez at gmail.com Fri Nov 6 17:32:02 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Fri, 6 Nov 2015 23:32:02 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <85bnb7kgug.fsf@benfinney.id.au> Message-ID: On Fri, Nov 6, 2015 at 4:52 AM, Glyph Lefkowitz wrote: > > Rather than trying to figure out what the "right" way for users to invoke > `pip? to begin with is, why not just have Pip start providing more > *information* about potential problems when you invoke it? > > If you invoke 'pip[X.Y]' and it matches 'python -m pip' in your current > virtualenv, don't say anything; similarly if you invoke 'python -m pip' and > 'which pip' matches. But if there's a mismatch, pip can print information > in both cases. 
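A rough sketch of the check Glyph describes, assuming Python 3 and a Unix-style console script (none of this is existing pip code; it only illustrates the comparison):

    import shutil
    import sys

    def warn_on_mismatch():
        script = shutil.which("pip")
        if script is None:
            return
        with open(script, "rb") as f:
            shebang = f.readline().decode("utf-8", "replace").strip()
        # a console script's shebang names the interpreter that owns it
        if shebang.startswith("#!") and sys.executable not in shebang:
            sys.stderr.write(
                "warning: %s runs under %s, but this pip is running under %s\n"
                % (script, shebang[2:], sys.executable))

The Windows case (pip.exe wrappers) and virtualenv-relative comparisons would need more care, but the shape of the warning is the same.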
This would go a long way to alleviating the confusion that > occurs when users back themselves into one of these corners, and would > alert users to potential issues before they become a problem; right now you > have to be a dogged investigative journalist to figure out why pip is doing > the wrong thing in some cases. > I like this solution which is easy to implement and should directly help the user without any deprecation process. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Fri Nov 6 17:50:59 2015 From: nad at acm.org (Ned Deily) Date: Fri, 06 Nov 2015 18:50:59 -0400 Subject: [Distutils] Platform tags for OS X binary wheels References: Message-ID: In article , Robert McGibbon wrote: > I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it > proceeds to download and compile the tarball from PyPI from source (very > slow). I see, however, that pre-compiled OS X wheel files are available on > PyPI for OS X 10.6 and later. > > Checking the code, it looks like pip is picking up the platform tag through > `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' on this > machine. At root, I think this comes from the MACOSX_DEPLOYMENT_TARGET=10.5 > entry in the Makefile at `python3.5/config-3.5m/Makefile`. I know that this > value is used by distutils compiling python extension modules -- presumably > so that they can be distributed to any target machine with OS X >=10.5 -- > so that's good. But is this the right thing for pip to be using when > checking whether a binary wheel is compatible? I see it mentioned > in PEP 425, so perhaps > this was already hashed out on the list. Are you using an OS X Python installed from a python.org installer? If so, be aware that there are two different OS X installers on Python.org for each current release. One is intended for 10.5 systems, although it will work on later OS X systems. The other is for 10.6 and later systems. Unless you have a need to run on 10.5 or build something that works on 10.5, download and use the 10.6+ installers instead. Then the existing whls for products like Numpy should work just fine. -- Ned Deily, nad at acm.org From rmcgibbo at gmail.com Fri Nov 6 18:57:19 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 15:57:19 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: I'm using the Python from the Miniconda installer with py35 released last week. What does the python.org installer build for 10.6+ return for `distutils.util.get_platform()`? -Robert On Fri, Nov 6, 2015 at 2:50 PM, Ned Deily wrote: > In article > , > Robert McGibbon wrote: > > I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it > > proceeds to download and compile the tarball from PyPI from source (very > > slow). I see, however, that pre-compiled OS X wheel files are available > on > > PyPI for OS X 10.6 and later. > > > > Checking the code, it looks like pip is picking up the platform tag > through > > `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' on > this > > machine. At root, I think this comes from the > MACOSX_DEPLOYMENT_TARGET=10.5 > > entry in the Makefile at `python3.5/config-3.5m/Makefile`. I know that > this > > value is used by distutils compiling python extension modules -- > presumably > > so that they can be distributed to any target machine with OS X >=10.5 -- > > so that's good. 
But is this the right thing for pip to be using when > > checking whether a binary wheel is compatible? I see it mentioned > > in PEP 425, so perhaps > > this was already hashed out on the list. > > Are you using an OS X Python installed from a python.org installer? If > so, be aware that there are two different OS X installers on Python.org > for each current release. One is intended for 10.5 systems, although it > will work on later OS X systems. The other is for 10.6 and later > systems. Unless you have a need to run on 10.5 or build something that > works on 10.5, download and use the 10.6+ installers instead. Then the > existing whls for products like Numpy should work just fine. > > -- > Ned Deily, > nad at acm.org > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Fri Nov 6 19:02:11 2015 From: nad at acm.org (Ned Deily) Date: Fri, 06 Nov 2015 19:02:11 -0500 Subject: [Distutils] Platform tags for OS X binary wheels References: Message-ID: In article , Robert McGibbon wrote: > I'm using the Python from the Miniconda installer with py35 released last > week. > > What does the python.org installer build for 10.6+ return for > `distutils.util.get_platform()`? $ /usr/local/bin/python3.5 Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import distutils.util >>> distutils.util.get_platform() 'macosx-10.6-intel' I can only assume they built their Python with a deployment target of 10.5 (MACOSX_DEPLOYMENT_TARGET=10.5). -- Ned Deily, nad at acm.org From chris.barker at noaa.gov Fri Nov 6 19:04:21 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 6 Nov 2015 16:04:21 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: References: Message-ID: <72999195152837618@unknownmsgid> On Nov 6, 2015, at 3:57 PM, Robert McGibbon wrote: I'm using the Python from the Miniconda installer with py35 released last week. Then you should not expect it to be able to find compatible binary wheels on PyPi. Pretty much the entire point of conda is to support Numpy and friends. It's actually really good that it DIDN'T go and install a binary wheel. You want: conda install numpy Trust me on that :-) There are some cases where pip installing a source package into a conda Python is fine -- but mostly only pure-Python packages. -CHB What does the python.org installer build for 10.6+ return for `distutils.util.get_platform()`? -Robert On Fri, Nov 6, 2015 at 2:50 PM, Ned Deily wrote: > In article > , > Robert McGibbon wrote: > > I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it > > proceeds to download and compile the tarball from PyPI from source (very > > slow). I see, however, that pre-compiled OS X wheel files are available > on > > PyPI for OS X 10.6 and later. > > > > Checking the code, it looks like pip is picking up the platform tag > through > > `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' on > this > > machine. At root, I think this comes from the > MACOSX_DEPLOYMENT_TARGET=10.5 > > entry in the Makefile at `python3.5/config-3.5m/Makefile`. 
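If you want to see what your own interpreter reports, both values Ned mentions can be queried directly from the standard library:

    $ python -c "import distutils.util; print(distutils.util.get_platform())"
    $ python -c "import sysconfig; print(sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET'))"

The first is the string pip derives its platform tag from; the second is the deployment target the interpreter was configured with, which is where the 10.5 vs 10.6 difference comes from.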
I know that > this > > value is used by distutils compiling python extension modules -- > presumably > > so that they can be distributed to any target machine with OS X >=10.5 -- > > so that's good. But is this the right thing for pip to be using when > > checking whether a binary wheel is compatible? I see it mentioned > > in PEP 425, so perhaps > > this was already hashed out on the list. > > Are you using an OS X Python installed from a python.org installer? If > so, be aware that there are two different OS X installers on Python.org > > for each current release. One is intended for 10.5 systems, although it > will work on later OS X systems. The other is for 10.6 and later > systems. Unless you have a need to run on 10.5 or build something that > works on 10.5, download and use the 10.6+ installers instead. Then the > existing whls for products like Numpy should work just fine. > > -- > Ned Deily, > nad at acm.org > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Fri Nov 6 19:08:46 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Fri, 6 Nov 2015 16:08:46 -0800 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: <72999195152837618@unknownmsgid> References: <72999195152837618@unknownmsgid> Message-ID: Hi, > Trust me on that :-) That's not really the point -- I use both conda and pip, maintain https://github.com/omnia-md/conda-recipes, and have made multiple upstream contributions to conda-build. The point of this thread, from my perspective, was to confirm that there's a small bug in pip in the way it determines the supported pep425 tags. I think I've confirmed that, and I'll file a PR shortly. -Robert On Fri, Nov 6, 2015 at 4:04 PM, Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > On Nov 6, 2015, at 3:57 PM, Robert McGibbon wrote: > > I'm using the Python from the Miniconda installer with py35 released last > week. > > > Then you should not expect it to be able to find compatible binary wheels > on PyPi. > > Pretty much the entire point of conda is to support Numpy and friends. > It's actually really good that it DIDN'T go and install a binary wheel. > > You want: > > conda install numpy > > Trust me on that :-) > > There are some cases where pip installing a source package into a conda > Python is fine -- but mostly only pure-Python packages. > > -CHB > > > > What does the python.org installer build for 10.6+ return for > `distutils.util.get_platform()`? > > -Robert > > On Fri, Nov 6, 2015 at 2:50 PM, Ned Deily wrote: > >> In article >> , >> Robert McGibbon wrote: >> > I just tried to run `pip install numpy` on my OS X 10.10.3 box, and it >> > proceeds to download and compile the tarball from PyPI from source (very >> > slow). I see, however, that pre-compiled OS X wheel files are available >> on >> > PyPI for OS X 10.6 and later. >> > >> > Checking the code, it looks like pip is picking up the platform tag >> through >> > `distutils.util.get_platform()`, which returns 'macosx-10.5-x86_64' on >> this >> > machine. At root, I think this comes from the >> MACOSX_DEPLOYMENT_TARGET=10.5 >> > entry in the Makefile at `python3.5/config-3.5m/Makefile`. 
I know that >> this >> > value is used by distutils compiling python extension modules -- >> presumably >> > so that they can be distributed to any target machine with OS X >=10.5 >> -- >> > so that's good. But is this the right thing for pip to be using when >> > checking whether a binary wheel is compatible? I see it mentioned >> > in PEP 425, so perhaps >> > this was already hashed out on the list. >> >> Are you using an OS X Python installed from a python.org installer? If >> so, be aware that there are two different OS X installers on Python.org >> >> for each current release. One is intended for 10.5 systems, although it >> will work on later OS X systems. The other is for 10.6 and later >> systems. Unless you have a need to run on 10.5 or build something that >> works on 10.5, download and use the 10.6+ installers instead. Then the >> existing whls for products like Numpy should work just fine. >> >> -- >> Ned Deily, >> nad at acm.org >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Fri Nov 6 19:20:22 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 7 Nov 2015 01:20:22 +0100 Subject: [Distutils] Platform tags for OS X binary wheels In-Reply-To: <72999195152837618@unknownmsgid> References: <72999195152837618@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 1:04 AM, Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > On Nov 6, 2015, at 3:57 PM, Robert McGibbon wrote: > > I'm using the Python from the Miniconda installer with py35 released last > week. > > > Then you should not expect it to be able to find compatible binary wheels > on PyPi. > > Pretty much the entire point of conda is to support Numpy and friends. > It's actually really good that it DIDN'T go and install a binary wheel. > > You want: > > conda install numpy > > Trust me on that :-) > > There are some cases where pip installing a source package into a conda > Python is fine -- but mostly only pure-Python packages. > Actually, the situation with pip on OS X is quite good. This should work with at least python.org Python, MacPython and Homebrew (using wheels): pip install numpy scipy matplotlib pandas scikit-image scikit-learn Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Nov 6 20:26:13 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 6 Nov 2015 17:26:13 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> Message-ID: <-4238370622781470465@unknownmsgid> > While I understand what you're trying to achieve (and I'm in favour, > in general) it should be remembered that pip's core goal is installing > packages - not being a component of a development workflow. Yes -- clear separation of concerns here! So what IS supposed to be used in the development workflow? The new mythical build system? This brings. 
me back to my setuptools-lite concept -- while we are waiting for a new build system, you can use setuptools-lite, and get a setup.py install or setup.py develop that does what it's supposed to do and nothing else.... OK, I'll go away now :-) -Chris > We absolutely need to make pip useful in the development workflow type > of situation (that's why pip install -e exists, after all). But I > don't think it's so much pip "trying to be too clever" as incremental > rebuilds wasn't the original use case that "pip install ." was > designed for. What we'll probably have to do is be *more* clever to > special case out the situations where a development-style support for > incremental rebuilds is more appropriate than the current behaviour. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From njs at pobox.com Sat Nov 7 00:03:07 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 Nov 2015 21:03:07 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On Thu, Nov 5, 2015 at 7:32 PM, Robert Collins wrote: > Since we ended up with a hard dependency on this for the bootstrap > thing (regardless of 'smaller step' or not) - I've broken this out of > PEP 426, made it an encoding of the current status quo rather than an > aspirational change. Since it has a dependency on markers, I had to > choose whether to block on James' marker PEP, contribute to that, or > include it. I think on balance it makes sense to have it in one > document since the markers bit is actually quite shallow, so I've done > that (after discussing with James). This could perhaps replace PEP 496 > then, or be given a new number. > > Donald has graciously agreed to be a BDFL-delegate for it. > > The PR for it is https://github.com/pypa/interoperability-peps/pull/56 Thanks, this is really great! I made a bunch of fiddly comments inline on the PR, but some more general comments: 1) Also mentioned this in the PR, but it's probably worth putting up for more general discussion here: do we want to define some graceful degradation for how to handle unrecognized variable names in the environment marker syntax, so as to allow more easily for future extensions? In the current PEP draft, an unrecognized variable name is simply a syntax error (all the variable names are effectively keywords). One option would be to declare that any expression that contains an unrecognized variable simply evaluates to False. The downside of this is that it's somewhat accident prone: "os_name == posix" would silently be always false (because "posix" should be quoted as a string, not left unquoted and treated as a variable name), and "os_nmae == 'posix'" would also be silently accepted. 2) The PEP seems a little uncertain about whether it wants to talk about the "framing protocol" or not -- like whether there's a higher-level structure defining the edges of each individual requirement or not. In setup.py and in existing METADATA files, you have some higher level list-of-strings syntax and then each string is parsed as a single individual requirement, and comments don't make much sense; in requirements.txt then newlines are meaningful and comments are important. The PEP worries about newlines and newlines, but doesn't quite want to come out and say that it's defining requirements.txt -- it wants to be more general. 
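To make the failure mode in (1) concrete: under an "unrecognized variable makes the whole expression False" rule, all three of the following would be accepted, and only the first does what the author meant (``evaluate_marker`` here is a hypothetical helper, not an existing API):

    evaluate_marker('os_name == "posix"')   # True on a posix system
    evaluate_marker('os_name == posix')     # bare ``posix`` parses as an unknown variable -> silently False
    evaluate_marker('os_nmae == "posix"')   # typo in the variable name -> also silently False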
Maybe it would be clearer to drop the comment and newline handling stuff from the core requirement specifier syntax (declaring that newlines are simply a syntax error), and assume that there's some higher-level framing protocol taking care of that stuff? So METADATA files would use whatever rules they use to pick out a single requirement specifier string (probably not allowing e.g. comments) and then parse it using this PEP's rules, and separately we'd have a definition for requirements.txt which is basically "a text file where you strip out comments, delete newlines that are preceded by backslash, split into lines, discard empty lines, and each resulting string is parsed as a requirement specifier". 3) The way extras are specified in METADATA files currently (ab)uses the environment marker syntax. E.g. here's the METADATA files from two popular packages: https://gist.github.com/njsmith/ed74851c0311e858f0f7 (Nice to see Metadata-Version: 2.0 getting some real-world use! I guess??? I thought I was starting to understand what is going on with python packaging standards but now I am baffled again. Anyway!) So apparently this is how you say that there is an extra called "doc" and that this extra adds a dependency on Sphinx: Provides-Extra: doc Requires-Dist: Sphinx (>=1.1); extra == 'doc' (https://gist.github.com/njsmith/ed74851c0311e858f0f7#file-ipython-4-0-0-wheel-metadata-L39) And here's how you say that the "terminal" extra adds a dependency on pyreadline (but only on windows): Requires-Dist: pyreadline (>=2); sys_platform == "win32" and extra == 'terminal' (https://gist.github.com/njsmith/ed74851c0311e858f0f7#file-ipython-4-0-0-wheel-metadata-L64) I'm not sure this is the syntax that I would have come up with, but I guess it's too late to put the genie back in the bottle, so this PEP should have some way to cope with these things? Currently they are simply syntax errors according to the PEP's grammar. -n -- Nathaniel J. Smith -- http://vorpus.org From p.f.moore at gmail.com Sat Nov 7 08:02:56 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 13:02:56 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: <-4238370622781470465@unknownmsgid> References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 01:26, Chris Barker - NOAA Federal wrote: > So what IS supposed to be used in the development workflow? The new > mythical build system? Fair question. Unfortunately, the answer is honestly that there's no simple answer - pip is not a bad option, but it's not its core use case so there are some rough edges. I'd argue that the best way to use pip is with pip install -e, but others in this thread have said that doesn't suit their workflow, which is fine. I don't know of any other really good options, though. I think it would be good to see if we can ensure pip is useful for this use case as well, all I was pointing out was that people shouldn't assume that it "should" work right now, and that changing it to work might involve some trade-offs that we don't want to make, if it compromises the core functionality of installing packages. Paul From ralf.gommers at gmail.com Sat Nov 7 08:55:34 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 7 Nov 2015 14:55:34 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' 
instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore wrote: > On 7 November 2015 at 01:26, Chris Barker - NOAA Federal > wrote: > > So what IS supposed to be used in the development workflow? The new > > mythical build system? > I'd like to point out again that this is not just about development workflow. This is just as much about simply *installing* from a local git repo, or downloaded sources/sdist. The "pip install . should reinstall" discussion in https://github.com/pypa/pip/issues/536 is also pretty much the same argument. Fair question. Unfortunately, the answer is honestly that there's no > simple answer - pip is not a bad option, but it's not its core use > case so there are some rough edges. My impression is that right now pip's core use-case is not "installing", but "installing from PyPi (and similar repos". There are a lot of rough edges around installing from anything on your own hard drive. > I'd argue that the best way to use > pip is with pip install -e, but others in this thread have said that > doesn't suit their workflow, which is fine. I don't know of any other > really good options, though. > > I think it would be good to see if we can ensure pip is useful for > this use case as well, all I was pointing out was that people > shouldn't assume that it "should" work right now, and that changing it > to work might involve some trade-offs that we don't want to make, if > it compromises the core functionality of installing packages. > It might be helpful to describe the actual trade-offs then, because as far as I can tell no one has actually described how this would either hurt another use-case or make pip internals much more complicated. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From guettliml at thomas-guettler.de Sat Nov 7 09:37:00 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Sat, 7 Nov 2015 15:37:00 +0100 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server Message-ID: <563E0C8C.9070609@thomas-guettler.de> I wrote down a tought about Serverside Dependency Resolution and Virtualenv Build Server What do you think? Latest version: https://github.com/guettli/virtualenv-build-server virtualenv-build-server ####################### Rough roadmap how a server to build virtualenvs for the python programming language could be implemented. Highlevel goal -------------- Make creating new virtual envionments for the Python programming language easy and fast. Input: fuzzy requirements like this: django>=1.8, requests=>2.7 Output: virtualenv with packages installed. Two APIs ------------ #. Resolve fuzzy requirements to a fixed set of packages with exactly pinned versions. #. Read fixed set of packages. Build virtualenv according to given platform. Steps ----- Steps: #. Client sends list of fuzzy requirements to server: * I need: django>=1.8, requests=>2.7, ... #. Server solves the fuzzy requirements to a fixed set of requirememnts: django==1.8.2, requests==2.8.1, ... #. Client reads the fixed set of requirements. #. Optional: Client sends fixed set of requirements to the server. Telling him the plattform * My platform: sys.version==2.7.6 and sys.platform=linux2 #. Server builds a virtualenv according to the fixed set of requirements. #. Server sends the environment to the client #. 
Client unpacks the data and has a usable virtualenv Benefits -------- Speed: * There is only one round-trip from client to server. If the dependencies get resolved on the client the client would need to download the available version information. * Caching: If the server gets input parameters (fuzzy requirements and platform information) which he has seen before, he can return the cached result from the previous request. Possible Implementations ------------------------ APIs ==== Both APIs could be implementated by a webservice/Rest interface passing json or yaml. Serverside ========== Implementation Strategie "PostgreSQL" ..................................... Since the API is de-coupled from the internals the implementation could be exchanged without the need for changes at the client side. I suggest using the PostgreSQL und resolving the dependcy graph using SQL (WITH RECURSIVE). The package and version data gets stored in PostgreSQL via ORM (Django or SQLAlchemy). The version numbers need to be normalized to ascii to allow fast comparision. Related: https://www.python.org/dev/peps/pep-0440/ Implementation Strategie "Node.js" .................................. I like python, but I am not married with it. Why not use a different tools that is already working? Maybe the node package manager: https://www.npmjs.com/ Questions --------- Are virtualenv relocatable? AFAIK they are not. General Thoughts ---------------- * Ignore Updates. Focus on creating new virtualenvs. The server can do caching and that's why I prefer creating virtualenvs which never get updated. They get created and removed (immutable). I won't implement it -------------------- This idea is in the public domain. If you are young and brave or old and wise: Go ahead, try to implement it. Please communicate early and often. Ask on mailing-lists or me for feedback. Good luck :-) I love feedback --------------- Please tell me want you like or dislike: * typos and spelling stuff (I am not a native speaker) * alternative implementation strategies. * existing software which does this (even if implemented in a different programming language). * ... -- http://www.thomas-guettler.de/ From donald at stufft.io Sat Nov 7 09:49:55 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 09:49:55 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 8:56:02 AM, Ralf Gommers (ralf.gommers at gmail.com) wrote: > On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore wrote: > > > On 7 November 2015 at 01:26, Chris Barker - NOAA Federal > > wrote: > > > So what IS supposed to be used in the development workflow? The new > > > mythical build system? > > > > I'd like to point out again that this is not just about development > workflow. This is just as much about simply *installing* from a local git > repo, or downloaded sources/sdist. > > The "pip install . should reinstall" discussion in > https://github.com/pypa/pip/issues/536 is also pretty much the same > argument. I think that everyone on that ticket has agreed that ``pip install .`` (where . is any local path) should reinstall. I think the thing that is being asked for here though is for pip to use that directory as the build directory, rather than copying everything to a temporary directory and using that. 
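For concreteness, the contrast being discussed is roughly this (a sketch of current behaviour as described in this thread; details may vary between pip versions):

    pip install .      # pip copies the source tree to a temporary directory and does a clean build there
    pip install -e .   # runs the build in place (via ``setup.py develop``), so compiled extensions get reused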
I?m hesitant to do that because it?s going to add another slightly different way that things could be installed and I?m trying to reduce those (and instead have two ?paths? for installation, the normal one and the development one). IOW, I think in development ``-e`` is the right answer if you want to build and use the local directory. Otherwise you shouldn?t expect it to modify your current directory or the tarball at all. I do think we can make sure that specifying a build directory and instructing us not to clean it will function to have incremental builds though. > > Fair question. Unfortunately, the answer is honestly that there's no > > simple answer - pip is not a bad option, but it's not its core use > > case so there are some rough edges. > > > My impression is that right now pip's core use-case is not "installing", > but "installing from PyPi (and similar repos". There are a lot of rough > edges around installing from anything on your own hard drive. This is probably true just in the fact that the bulk of the time when people use it, they are using it to install from a remote repository. There are rough edges for stuff on your own hard drive, but I think we can clean them up though, we just need to figure out what the answer is for each of those rough cases. > > > > I'd argue that the best way to use > > pip is with pip install -e, but others in this thread have said that > > doesn't suit their workflow, which is fine. I don't know of any other > > really good options, though. > > > > I think it would be good to see if we can ensure pip is useful for > > this use case as well, all I was pointing out was that people > > shouldn't assume that it "should" work right now, and that changing it > > to work might involve some trade-offs that we don't want to make, if > > it compromises the core functionality of installing packages. > > > > It might be helpful to describe the actual trade-offs then, because as far > as I can tell no one has actually described how this would either hurt > another use-case or make pip internals much more complicated. > > Ralf > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Sat Nov 7 09:57:24 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 14:57:24 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 13:55, Ralf Gommers wrote: > On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore wrote: >> >> On 7 November 2015 at 01:26, Chris Barker - NOAA Federal >> wrote: >> > So what IS supposed to be used in the development workflow? The new >> > mythical build system? > > I'd like to point out again that this is not just about development > workflow. This is just as much about simply *installing* from a local git > repo, or downloaded sources/sdist. Possibly I'm misunderstanding here. > The "pip install . should reinstall" discussion in > https://github.com/pypa/pip/issues/536 is also pretty much the same > argument. Well, that one is about pip reinstalling if you install from a local directory, and not skipping the install if the local directory version is the same as the installed version. 
As I noted there, I'm OK with this, it seems reasonable to me to say that if someone has a directory of files, they may have updated something but not (yet) bumped the version. The debate over there has gone on to whether we force reinstall for a local *file* (wheel or sdist) which I'm less comfortable with. But that's is being covered over there. The discussion *here* is, I thought, about skipping build steps when possible because you can reuse build artifacts. That's not "should pip do the install?", but rather "*how* should pip do the install?" Specifically, to reuse build artifacts it's necessaryto *not* do what pip currently does for all (non-editable) installs, which is to isolate the build in a temporary directory and do a clean build. That's a sensible debate to have, but it's very different from the issue you referenced. IMO, the discussions currently are complex enough that isolating independent concerns is crucial if anyone is to keep track. (It certainly is for me!) >> Fair question. Unfortunately, the answer is honestly that there's no >> simple answer - pip is not a bad option, but it's not its core use >> case so there are some rough edges. > > My impression is that right now pip's core use-case is not "installing", but > "installing from PyPi (and similar repos". There are a lot of rough edges > around installing from anything on your own hard drive. Not true. The rough edges are around installing things where (a) you don't want to rely in the invariant that name and version uniquely identify an installation (that's issue 536) and (b) where you don't want to do a clean build, because building is complex, slow, or otherwise something you want to optimise (that's this discussion). I routinely download wheels and use them to install. I also sometimes download sdists and install from them, although 99.99% of the time, I download them, build them into wheels and install them from wheels. It *always* works exactly as I'd expect. But if I'm doing development, I use -e. That seems to be the problem here, there are rough edges if you want a development workflow that doesn't rely on editable installs. I think that's what I already said :-) >> I'd argue that the best way to use >> pip is with pip install -e, but others in this thread have said that >> doesn't suit their workflow, which is fine. I don't know of any other >> really good options, though. >> >> I think it would be good to see if we can ensure pip is useful for >> this use case as well, all I was pointing out was that people >> shouldn't assume that it "should" work right now, and that changing it >> to work might involve some trade-offs that we don't want to make, if >> it compromises the core functionality of installing packages. > > It might be helpful to describe the actual trade-offs then, because as far > as I can tell no one has actually described how this would either hurt > another use-case or make pip internals much more complicated. 1. (For issue 536, not this thread) Pip and users can't rely on the invariant that name and version uniquely identify a release. You could have version 1.2dev4 installed, and it may have come from your local working directory (with changes you made) or from a wheel that's on your local hard drive that you built last week, or from the release on PyPI you made last month. All 3 may behave differently. Also wheel caching is based on name/version - it would need to be switched off in cases where name/version doesn't guarantee repeatable code. 2. 
(For here) Builds are not isolated from what's in the development directory. So if you have your sdist definition wrong, what you build locally may work, but when you release it it may fail. Obviously that can be fixed by proper development and testing practices, but pip is designed currently to isolate builds to protect against mistakes like this, we'd need to remove that protection for cases where we wanted to do in-place builds. 3. The logic inside pip for doing builds is already pretty tricky. Adding code to sometimes build in place and sometimes in a temporary directory is going to make it even more complex. That might not be a concern for end users, but it makes maintaining pip harder, and risks there being subtle bugs in the logic that could bite end users. If you want specifics, I can't give them at the moment, because I don't know what the code to do the proposed in-place building would look like. I hope that helps. It's probably not as specific or explicit as you'd like, but to be fair, nor is the proposal. What we currently have on the table is "If 'pip (install/wheel) .' is supposed to become the standard way to build things, then it should probably build in-place by default." For my personal use cases, I don't actually agree with any of that, but my use cases are not even remotely like those of numpy developers, so I don't want to dismiss the requirement. But if it's to go anywhere, it needs to be better explained. Just to be clear, *my* position (for projects simpler than numpy and friends) is: 1. The standard way to install should be "pip install ". 2. The standard way to build should be "pip wheel ". The directory should be a clean checkout of something you plan to release, with a unique version number. 3. The standard way to develop should be "pip install -e ." 4. Builds (pip wheel) should always unpack to a temporary location and build there. When building from a directory, in effect build a sdist and unpack it to the temporary location. I hear the message that for things like numpy these rules won't work. But I'm completely unclear on why. Sure, builds take ages unless done incrementally. That's what pip install -e does, I don't understand why that's not acceptable. If the discussion needs to go to the next level of detail, maybe that applies to the requirements as well as to the objections? Paul PS Alternatively, feel free to ignore my comments. I'm not likely to ever have the time to code any of the proposals being discussed here, but I won't block other pip developers either doing so or merging code, so my comments are not intended as anything more than input from someone who knows a bit about how pip is coded, how it's currently used, and what issues our users currently encounter. Seriously - I'm happy to say my piece and leave it at that if you prefer. From qwcode at gmail.com Sat Nov 7 10:44:01 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 7 Nov 2015 07:44:01 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > Maybe it would be clearer to drop the comment and newline handling > stuff from the core requirement specifier syntax (declaring that > newlines are simply a syntax error), and assume that there's some > higher-level framing protocol taking care of that stuff? that sounds right to me. > https://gist.github.com/njsmith/ed74851c0311e858f0f7 > > (Nice to see Metadata-Version: 2.0 getting some real-world use! I > guess??? 
the use of 2.0 already is odd; see here for old discussion in the wheel tracker: https://bitbucket.org/pypa/wheel/issues/96/metadata-version-20 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From qwcode at gmail.com Sat Nov 7 11:08:59 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 7 Nov 2015 08:08:59 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > I'm not sure this is the syntax that I would have come up with, but I > guess it's too late to put the genie back in the bottle, so this PEP > should have some way to cope with these things? why would this PEP deal with this? the higher level PEP that builds on top of this would bump the wheel metadata version (probably to 3.0, due to the jump to 2.0 already) *then* tools will cope with this based on the metadata version -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ralf.gommers at gmail.com Sat Nov 7 11:33:10 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sat, 7 Nov 2015 17:33:10 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 3:57 PM, Paul Moore wrote: > On 7 November 2015 at 13:55, Ralf Gommers wrote: > > On Sat, Nov 7, 2015 at 2:02 PM, Paul Moore wrote: > >> > >> On 7 November 2015 at 01:26, Chris Barker - NOAA Federal > >> wrote: > >> > So what IS supposed to be used in the development workflow? The new > >> > mythical build system? > > > > I'd like to point out again that this is not just about development > > workflow. This is just as much about simply *installing* from a local git > > repo, or downloaded sources/sdist. > > Possibly I'm misunderstanding here. > I had an example above of installing into different venvs. Full rebuilds for that each time are very expensive. And this whole thread is basically about `pip install .`, not about in-place builds for development. As another example of why even for a single build/install it's helpful to just let the build system do what it wants to do instead of first copying stuff over, here are some timing results. This is for PyWavelets, which isn't that complicated a build (mostly pure Python, 1 Cython extension):
1. python setup.py install: 40 s
2. pip install . --upgrade --no-deps: 58 s
# OK, (2) is slow due to using shutil, to be fixed to work like (3):
3. python setup.py sdist: 8 s
   pip install dist/PyWavelets-0.4.0.dev0+da1c6b4.tar.gz: 41 s
# so total time for (3) will be 41 + 8 = 49 s
# and a better alternative to (1)
4. python setup.py bdist_wheel: 34 s
   pip install dist/PyWavelets-xxx.whl: 6 s
# so total time for (4) will be 34 + 6 = 40 s
Not super-scientific, but the conclusion is clear: what pip does is a lot slower than what for me is the expected behavior. And note that without the Cython compile, the difference in timing will get even larger. That expected behavior is:
a) Just ask the build system to spit out a wheel (without any magic)
b) Install that wheel (always)
> > The "pip install . should reinstall" discussion in > > https://github.com/pypa/pip/issues/536 is also pretty much the same > > argument. > > Well, that one is about pip reinstalling if you install from a local > directory, and not skipping the install if the local directory version > is the same as the installed version.
As I noted there, I'm OK with > this, it seems reasonable to me to say that if someone has a directory > of files, they may have updated something but not (yet) bumped the > version. > > The debate over there has gone on to whether we force reinstall for a > local *file* (wheel or sdist) which I'm less comfortable with. But > that's is being covered over there. > > The discussion *here* is, I thought, about skipping build steps when > possible because you can reuse build artifacts. That's not "should pip > do the install?", but rather "*how* should pip do the install?" > Specifically, to reuse build artifacts it's necessaryto *not* do what > pip currently does for all (non-editable) installs, which is to > isolate the build in a temporary directory and do a clean build. > That's a sensible debate to have, but it's very different from the > issue you referenced. > > IMO, the discussions currently are complex enough that isolating > independent concerns is crucial if anyone is to keep track. (It > certainly is for me!) > Agreed that the discussions are complex now. But imho they're mostly complex because the basic principles of what pip should be doing are not completely clear, at least to me. If it's "build a wheel, install the wheel" then a lot of things become simpler. >> Fair question. Unfortunately, the answer is honestly that there's no > >> simple answer - pip is not a bad option, but it's not its core use > >> case so there are some rough edges. > > > > My impression is that right now pip's core use-case is not "installing", > but > > "installing from PyPi (and similar repos". There are a lot of rough edges > > around installing from anything on your own hard drive. > > Not true. The rough edges are around installing things where (a) you > don't want to rely in the invariant that name and version uniquely > identify an installation (that's issue 536) and (b) where you don't > want to do a clean build, because building is complex, slow, or > otherwise something you want to optimise (that's this discussion). > > I routinely download wheels and use them to install. I also sometimes > download sdists and install from them, although 99.99% of the time, I > download them, build them into wheels and install them from wheels. It > *always* works exactly as I'd expect. But if I'm doing development, I > use -e. That seems to be the problem here, there are rough edges if > you want a development workflow that doesn't rely on editable > installs. I think that's what I already said :-) > It always works as you expect because you're very familiar with how things work I suspect. I honestly started working on docs/code to make people use `pip install .` and immediately ran into 3 issues (start of this thread). This build caching is #4. And that doesn't even count --upgrade (that was issue #0). There are a vast amount of users that are used to `setup.py install`. They'll be downloading a released/dev version or do a git/hg clone, and run that `setup.py install` command. If we'll tell them to replace that by `pip install .`, then at the moment there's a lot of rough edges that they are going to run into. Now some of those rough edges are bugs, some are things like "does pip build from where you run it or in an isolated tmpdir". 
I'd like to get to the situation where: - the bugs are fixed - the behavior/performance is >= `setup.py install` - with the difference then being some UI tweaks like by default hiding the build log >> > >> I think it would be good to see if we can ensure pip is useful for > >> this use case as well, all I was pointing out was that people > >> shouldn't assume that it "should" work right now, and that changing it > >> to work might involve some trade-offs that we don't want to make, if > >> it compromises the core functionality of installing packages. > > > > It might be helpful to describe the actual trade-offs then, because as > far > > as I can tell no one has actually described how this would either hurt > > another use-case or make pip internals much more complicated. > > > 2. (For here) Builds are not isolated from what's in the development > directory. So if you have your sdist definition wrong, what you build > locally may work, but when you release it it may fail. Obviously that > can be fixed by proper development and testing practices, but pip is > designed currently to isolate builds to protect against mistakes like > this, we'd need to remove that protection for cases where we wanted to > do in-place builds. > Now this is an actual development work feature/choice. "sdist definition wrong" may help developers which don't test install via sdist in their CI. It doesn't really help end users directly. > 3. The logic inside pip for doing builds is already pretty tricky. > Adding code to sometimes build in place and sometimes in a temporary > directory is going to make it even more complex. That might not be a > concern for end users, but it makes maintaining pip harder, and risks > there being subtle bugs in the logic that could bite end users. If you > want specifics, I can't give them at the moment, because I don't know > what the code to do the proposed in-place building would look like. > > I hope that helps. It's probably not as specific or explicit as you'd > like, but to be fair, nor is the proposal. > It does help, thanks. I don't think I can make the proposal much more concrete than "build a wheel, install a wheel (without magic)" though. At least without starting to implement that proposal. > What we currently have on the table is "If 'pip (install/wheel) .' is > supposed to become the standard way to build things, then it should > probably build in-place by default." For my personal use cases, I > don't actually agree with any of that, but my use cases are not even > remotely like those of numpy developers, so I don't want to dismiss > the requirement. But if it's to go anywhere, it needs to be better > explained. > > Just to be clear, *my* position (for projects simpler than numpy and > friends) is: > > 1. The standard way to install should be "pip install wheel>". > 2. The standard way to build should be "pip wheel directory>". The directory should be a clean checkout of something you > plan to release, with a unique version number. > 3. The standard way to develop should be "pip install -e ." > Agree with all of those. > 4. Builds (pip wheel) should always unpack to a temporary location and > build there. When building from a directory, in effect build a sdist > and unpack it to the temporary location. > Here we seem to disagree. Your only concrete argument for it so far is aimed at developers, and I think it (a) is an extra step that adds complexity to the implementation, and (b) is inherently slower. 
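(To make that "build a wheel, then install that wheel" flow concrete, here is a minimal sketch -- essentially option (4) from my timings above. It assumes a setuptools-based project in the current directory with the `wheel` package installed, and it only illustrates the shape of the flow, not pip's internals:)

    import glob
    import os
    import subprocess
    import sys
    import tempfile

    def build_and_install(source_dir="."):
        # Ask the build system for a wheel, without copying the source tree
        # anywhere first (so it can reuse earlier build artifacts).
        dist_dir = tempfile.mkdtemp(prefix="wheel-dist-")
        subprocess.check_call(
            [sys.executable, "setup.py", "bdist_wheel", "--dist-dir", dist_dir],
            cwd=source_dir,
        )
        wheels = glob.glob(os.path.join(dist_dir, "*.whl"))
        if len(wheels) != 1:
            raise RuntimeError("expected exactly one wheel, got %r" % (wheels,))
        # Install the wheel we just built, through the normal wheel-install path.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--upgrade", wheels[0]]
        )

    if __name__ == "__main__":
        build_and_install()

Nothing in that flow copies the source tree anywhere, so the build system is free to reuse whatever build artifacts it already has lying around.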
I hear the message that for things like numpy these rules won't work. > But I'm completely unclear on why. Sure, builds take ages unless done > incrementally. That's what pip install -e does, I don't understand why > that's not acceptable. > I hope my replies above make clear why -e isn't too relevant here. > > If the discussion needs to go to the next level of detail, maybe that > applies to the requirements as well as to the objections? > Maybe this isn't 100% correct because I'm not that familiar with pip internals yet, but I'll give it a try: For `pip install `,: - Avoid using anything in pip/download.py - Instead, construct the cmdoptions and pass them to WheelBuilder - Install the built wheel - Optionally: store a pip log somewhere so it knows what it did in . Might come in handy for something. Could be that that adds complexity instead of reduces it, but I don't yet see it. > Paul > > PS Alternatively, feel free to ignore my comments. I won't, your detailed reply was quite helpful. Ralf > I'm not likely to > ever have the time to code any of the proposals being discussed here, > but I won't block other pip developers either doing so or merging > code, so my comments are not intended as anything more than input from > someone who knows a bit about how pip is coded, how it's currently > used, and what issues our users currently encounter. Seriously - I'm > happy to say my piece and leave it at that if you prefer. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sat Nov 7 15:32:43 2015 From: robertc at robertcollins.net (Robert Collins) Date: Sun, 8 Nov 2015 09:32:43 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On unknown names, current pkg_resources raises SyntaxError. So I think we need to update the spec from that perspective to be clear. All PEP 426 defined variables are handled without error by current pkg_resources/markerlib implementation. Perhaps the new variables should raise rather than evaluating to '' / 0 ? Some discussion / thought would be good. Certainly when used and evaluated by an existing pkg_resources they will error - so perhaps we should not add any new variables at this point? So, if we don't unify this with the wheel encoding of extras, it will require multiple parsers indefinitely. I need to think more about whether it makes sense or not. Wheel certainly needs a way to say 'this distribution exports extras X, Y, Z (and their respective dependencies)'. flit and other tools producing wheels need the same facility. https://www.python.org/dev/peps/pep-0427 doesn't define this use of markers; but pip and wheel have collaborated on it. PEP-345 doesn't describe Provides-Extra, which pkg_resources uses when parsing .dist-info directories as well (it determines which extra variables get substituted into the set of requires to determine the values of the extras...). So there's basically still a bunch of underspecified behaviours out there in the wild, and since my strategy is to minimal variation vs whats there, we need to figure out the right places to split things to layer this well. 
Specifying a new variable of 'extra' is fairly easy: we need to decide on the values it will take, and that's well defined but layer crossing: when processing a dependency with one or more extras, you need to loop over all the dependency specifications once with each extra defined (including I suspect for completeness '' for the non-extras) and then union together the results. So at this layer I think we could say that:
- extra is only valid if the context that is interpreting the specification defines it
- when invalid it will raise SyntaxError
This allows a single implementation to handle .dist-info directories as it does today, while specifying it more fully. It leaves it open in future for us to continue modelling exported extras as marker-filtered-specifications + a Provides-Extra, or to move to exported extras as something in a hash in a richer serialisation format, or some third thing. This is good I think. I do like the idea of comments and line continuations being removed. We can then explicitly say that this DSL is going to be embedded in a larger context such as requirements.txt files, requires headers etc, and that those contexts may provide multi line handling as desired. I'll apply Nathaniel's excellent review details + this on Monday and issue an update. -Rob On 8 November 2015 at 05:08, Marcus Smith wrote: >> >> I'm not sure this is the syntax that I would have come up with, but I >> guess it's too late to put the genie back in the bottle, so this PEP >> should have some way to cope with these things? > > > why would this PEP deal with this? > the higher level PEP that builds on top of this would bump the wheel > metadata version (probably to 3.0, due to the jump to 2.0 already) > *then* tools will cope with this based on the metadata version > -- Robert Collins Distinguished Technologist HP Converged Cloud
From qwcode at gmail.com Sat Nov 7 16:11:07 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sat, 7 Nov 2015 13:11:07 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: > > > PEP-345 doesn't > describe Provides-Extra, which pkg_resources uses when parsing > .dist-info directories as well fwiw, this provides a bit of history on the "Provides-Extra": https://github.com/pypa/interoperability-peps/issues/44 -------------- next part -------------- An HTML attachment was scrubbed... URL:
From solipsis at pitrou.net Sat Nov 7 17:21:34 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 7 Nov 2015 23:21:34 +0100 Subject: [Distutils] The future of invoking pip References: Message-ID: <20151107232134.5190af14@fsol> The actual question is: which problem are you trying to solve *that current users are actually experiencing*? I'm -1 on removing the "pip" command. "python -m pip" is frankly not a reasonable substitution if we want to *promote* pip. > * The above gets *really* confusing when ``pipX`` or ``pip`` do not agree with > what ``pythonX`` and ``python`` point to. That's a problem for foreign package managers and distributions. Let them deal with it. Regards Antoine.
From p.f.moore at gmail.com Sat Nov 7 17:44:55 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 22:44:55 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .'
instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 16:33, Ralf Gommers wrote: > I had an example above of installing into different venvs. Full rebuilds for > that each time are very expensive. Why doesn't wheel caching solve this problem? That's what it's *for*, surely? Paul From donald at stufft.io Sat Nov 7 17:46:07 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 17:46:07 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 5:45:24 PM, Paul Moore (p.f.moore at gmail.com) wrote: > On 7 November 2015 at 16:33, Ralf Gommers wrote: > > I had an example above of installing into different venvs. Full rebuilds for > > that each time are very expensive. > > Why doesn't wheel caching solve this problem? That's what it's *for*, surely? > I?m pretty sure we don?t cache wheels for local file paths. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Sat Nov 7 17:46:50 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 22:46:50 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 22:46, Donald Stufft wrote: > On November 7, 2015 at 5:45:24 PM, Paul Moore (p.f.moore at gmail.com) wrote: >> On 7 November 2015 at 16:33, Ralf Gommers wrote: >> > I had an example above of installing into different venvs. Full rebuilds for >> > that each time are very expensive. >> >> Why doesn't wheel caching solve this problem? That's what it's *for*, surely? >> > > I?m pretty sure we don?t cache wheels for local file paths. So is this an argument that we should? Paul From donald at stufft.io Sat Nov 7 17:47:33 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 17:47:33 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 5:46:53 PM, Paul Moore (p.f.moore at gmail.com) wrote: > On 7 November 2015 at 22:46, Donald Stufft wrote: > > On November 7, 2015 at 5:45:24 PM, Paul Moore (p.f.moore at gmail.com) wrote: > >> On 7 November 2015 at 16:33, Ralf Gommers wrote: > >> > I had an example above of installing into different venvs. Full rebuilds for > >> > that each time are very expensive. > >> > >> Why doesn't wheel caching solve this problem? That's what it's *for*, surely? > >> > > > > I?m pretty sure we don?t cache wheels for local file paths. > > So is this an argument that we should? > Paul >? Only if we think we can trust the version numbers to be unique from random paths on the file system. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Sat Nov 7 18:07:44 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 23:07:44 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' 
instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 22:47, Donald Stufft wrote: > Only if we think we can trust the version numbers to be unique from random paths on the file system. Precisely. And that's the sort of trade-off that Ralf was asking to be clarified. Here, the trade-off is that if we *are* allowed to rely on the fact that name/version uniquely identifies the build, then we can optimise build times via wheel caching. If we can't make that assumption, we can't do the optimisation. The request here seems to be that we provide the best of both worlds - provide optimal builds *without* making the assumptions we use for the "install a released version" case. Paul
From njs at pobox.com Sat Nov 7 18:15:06 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 15:15:06 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 2:44 PM, Paul Moore wrote: > On 7 November 2015 at 16:33, Ralf Gommers wrote: >> I had an example above of installing into different venvs. Full rebuilds for >> that each time are very expensive. > > Why doesn't wheel caching solve this problem? That's what it's *for*, surely? The wheel cache maps (name, version) -> wheel. If I hand you a source directory, you may not even be able to determine the (name, version) except via building a wheel (depending on the resolution to that other thread about egg_info/dist-info commands). And it's certainly not true in general that you can trust the (name, version) from a working directory to indicate anything meaningful -- e.g. every commit to pip mainline right now creates a new different "pip 8.0.0.dev0". So what would you even use for your cache key? I don't see how wheel caching can really help here. -n -- Nathaniel J. Smith -- http://vorpus.org
From donald at stufft.io Sat Nov 7 18:16:34 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 18:16:34 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 6:07:47 PM, Paul Moore (p.f.moore at gmail.com) wrote: > On 7 November 2015 at 22:47, Donald Stufft wrote: > > Only if we think we can trust the version numbers to be unique from random paths on the > file system. > > Precisely. And that's the sort of trade-off that Ralf was asking to be > clarified. Here, the trade-off is that if we *are* allowed to rely on > the fact that name/version uniquely identifies the build, then we can > optimise build times via wheel caching. If we can't make that > assumption, we can't do the optimisation. > > The request here seems to be that we provide the best of both worlds - > provide optimal builds *without* making the assumptions we use for the > "install a released version" case. > Paul >
Well, you can get the optimized builds by not copying the path into a temporary location when you do ``pip install .`` and just letting the build system handle whether or not it caches the build output between multiple runs. I don't want to start doing this, because I want to make a different change that will make it harder (impossible?) to do that. I want to reduce the "paths"
that an installation can go down. Right now we have:
1. I have a wheel and pip installs it.
2. I have an sdist and pip turns it into a wheel and then pip installs it.
3. I have an sdist and pip installs it.
4. I have a directory and pip installs it.
5. I have a directory and pip installs it in editable mode.
The outcome of all of these types of installs is subtly different and we've had a number of users regularly get confused when they act differently over the years. I do not think it's possible to make (5) act like anything else because it is inherently different, however I think we can get to the point that 1-4 all act the exact same way. And I think the way to do it is to change these so instead it is like:
1. I have a wheel and pip installs it.
2. I have an sdist and pip turns it into a wheel and then pip installs it.
3. I have a directory and pip turns it into a sdist and then pip turns that sdist into a wheel and then pip installs it.
4. I have a directory and pip installs it in editable mode.
Essentially, this is removing two "different" types of installations, one where we install directly from a sdist (without ever going through a wheel) and one where we install directly from a path (without ever going through a sdist or a wheel). Omitting the whole editable mode from the consideration, we get to a point where installs ONLY ever happen to go from an "Arbitrary Directory" to an Sdist to a Wheel to installation and the only real differences are at what point in that process the item we're trying to install is already at. Of course development/editable installs are always going to be weird because they are in-place. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From njs at pobox.com Sat Nov 7 18:37:53 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 15:37:53 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 6:57 AM, Paul Moore wrote: > 2. (For here) Builds are not isolated from what's in the development > directory. So if you have your sdist definition wrong, what you build > locally may work, but when you release it it may fail. Obviously that > can be fixed by proper development and testing practices, but pip is > designed currently to isolate builds to protect against mistakes like > this, we'd need to remove that protection for cases where we wanted to > do in-place builds. I agree that it would be nice to make sdist generation more reliable and tested by default, but I don't think this quite works as a solution. 1) There's no guarantee that building an sdist from some dirty working tree will produce anything like what you'd have for a release sdist, or even a clean isolated build. (E.g. a very common mistake is adding a new file to the working directory but forgetting to run 'git/hg add'. To protect against this, you either have to have a build system that's smart enough to talk to the VCS when figuring out what files to include, or better yet you have to work from a clean checkout.) And as currently specified these "isolated" build trees might even end up including partial build detritus from previous in-place builds, copied from the source directory into the temporary directory. 2) Sometimes people will want to download an sdist, unpack it, and then run 'pip install .' from it.
In your proposal this would require first building a new sdist from the unpacked working tree. But there's no guarantee that you can generate an sdist from an sdist. None of the proposals for a new build system interface have contemplated adding an "sdist" command, and even if they did, then a clever sdist command might well fail, e.g. because it is only designed to build sdists from a checkout with full VCS metadata that it can use to figure out what files to include :-). 3) And anyway, it's pretty weird logically to include a mandatory sdist command inside an interface that 99% of the time will be working *from* an sdist :-). The rule of thumb I've used for the build interface stuff so far is that it should be the minimal stuff that is needed to provide a convenient interface for people who just want to install packages, because the actual devs on a particular project can use whatever project/build-system-specific interfaces make sense for their workflow. And end-users don't build sdists. But for the operations that pip does provide, like 'pip wheel' and 'pip install', they should be usable by devs, because devs will use them. > 3. The logic inside pip for doing builds is already pretty tricky. > Adding code to sometimes build in place and sometimes in a temporary > directory is going to make it even more complex. That might not be a > concern for end users, but it makes maintaining pip harder, and risks > there being subtle bugs in the logic that could bite end users. If you > want specifics, I can't give them at the moment, because I don't know > what the code to do the proposed in-place building would look like. Yeah, this is always a concern for any change. The tradeoff is that you get to delete the code for "downloading" unpacked directories into a temporary directory (which currently doesn't even use sdist -- it just blindly copies everything, including e.g. the full git history). And you get to skip specifying a standard build-an-sdist interface that pip and every build system backend would all have to support and interoperate on. Basically AFAICT the logic should be: 1) Arrange for the existence of a build directory: If building from a directory: great, we have one, use that else if building from a file/url: download it and unpack it, then use that 2) do the build using the build directory 3) if it's a temporary directory and the build succeeded, clean up (Possibly with some complications like providing options for people to specify a non-temporary directory to use for unpacking downloaded sdists.) It might need a bit of refactoring so that the "arrange for the existence of a build directory" step returns the chosen build directory instead of taking it as a parameter like I assume it does now, but it doesn't seem like the intrinsic complexity is very high. > I hope that helps. It's probably not as specific or explicit as you'd > like, but to be fair, nor is the proposal. > > What we currently have on the table is "If 'pip (install/wheel) .' is > supposed to become the standard way to build things, then it should > probably build in-place by default." For my personal use cases, I > don't actually agree with any of that, but my use cases are not even > remotely like those of numpy developers, so I don't want to dismiss > the requirement. But if it's to go anywhere, it needs to be better > explained. > > Just to be clear, *my* position (for projects simpler than numpy and > friends) is: > > 1. The standard way to install should be "pip install ". > 2. 
The standard way to build should be "pip wheel directory>". The directory should be a clean checkout of something you > plan to release, with a unique version number. > 3. The standard way to develop should be "pip install -e ." > 4. Builds (pip wheel) should always unpack to a temporary location and > build there. When building from a directory, in effect build a sdist > and unpack it to the temporary location. > > I hear the message that for things like numpy these rules won't work. > But I'm completely unclear on why. Sure, builds take ages unless done > incrementally. That's what pip install -e does, I don't understand why > that's not acceptable. To me this feels like mixing two orthogonal issues. 'pip install' and 'pip install -e' have different *semantics* -- one installs a snapshot into an environment, and one installs a symlink-like-thing into an environment -- and that's orthogonal to the question of whether you want to implement that using a "clean build" or not. (Also, it's totally reasonable to want partial builds in 'pip wheel': 'pip wheel .', get a compiler error, fix it, try again...) Furthermore, I actually really dislike 'pip install -e' and am surprised to see so many people talking about it as if it were the obvious choice for all development :-). I understand it takes all kinds, etc., I'm not arguing that it should be removed or anything (though I probably would if I thought it had any chance of getting consensus :-)). But from my point of view, 'pip install -e' is a weird intrinsically-kinda-broken wart that provides no value outside of some rare use cases that most people never encounter. I say "intrinsically-kinda-broken" because as soon as you do an editable install, the metadata in .egg/dist-info starts to drift out of sync from your actual source tree, so that it necessarily makes the installed package database less reliable, undermining a lot of the work that's being done to make installation and resolution more robust. I also am really unsure about why people use it. I generally don't *want* to install code-under-development into a full-fledged virtualenv. I see lots of people who have a primary virtualenv that they use for day-to-day work, and they 'pip install -e' all the packages that they work on into this environment, and then run into all kinds of weird problems because they're using a bunch of untested code together, or they switch to a different branch of one package to check something and then forget about it when they context switch to some other project and everything is broken. And then they try to install some other package, and it depends on foo >= 1.2, and they have an editable install of foo that claims to be 1.1 (because that was the last time the .egg-info was regenerated) but really it's 1.3 and all kinds of weird things happen. And for packages with binary extensions, it doesn't really work, anyway, because you still have to rebuild every time (and you can get extra bonus forms of weird skew, where when you import the package then you get the up-to-date version of some source files -- the .py ones -- combined with out-of-date versions of others -- the .pyx / .c / .cpp ones). Even if I do decide that I want to install a non-official release into some virtualenv, I'd like to install a consistent snapshot that gets upgraded or uninstalled all together as an atomic unit. What I actually do when working on NumPy is that I use a little script [1] that does the equivalent of: $ rm -rf ./.tmpdir $ pip install . 
-d ./.tmpdir $ cd ./.tmpdir $ python -c 'import numpy; numpy.test()' OTOH, for packages without binary extensions, I just run my tests or start a REPL from the root of my source dir, and that works fine without the hassle of creating and activating a virtualenv, or polluting my normal environment with untested code. Also, 'pip install -e' intrinsically pollutes your source tree with build artifacts. I come from the build system tradition that says that build artifacts should all be shunted to the side and leave the actual directories uncluttered: https://www.gnu.org/software/automake/manual/html_node/VPATH-Builds.html and I think that a valid approach that build system authors might want to make is to enforce the invariant that the build system never writes to anywhere outside of $srcdir/build/ or similar. If we insist that editable installs are the only way to work, then we take this option away from projects. So there simply isn't any problem I have where editable installs are the best solution, and I see them causing problems for people all the time. That said, there are two theoretical advantages I can see to editable installs: 1) Unlike starting an interpreter from the root of your source tree, they trigger the install of runtime dependencies. I solve this by just installing those into my working environment myself, but for projects with complex dependencies I guess 'install -e' might ATM be the most convenient way to get this set up. This isn't a very compelling argument, though, because one could trivally provide better support for just this ('pip install-dependencies .' or something) without bringing along the intrinsically tricky bits of editable installs. 2) For people working on complex projects that involve multiple pure-python packages that are distributed separately but that require coordinated changes in sync (maybe OpenStack is like this?), so each round of your edit/test cycle involves edits to multiple different projects, then 'pip install -e' kinda solves a genuine problem, because it lets you assemble a single working environment that contains the editable versions of everything together. This seems like a genuine use case -- but it's what I meant at the top about how they seem like a very specialized tool for rare cases, because very few people are working on meta-projects composed of multiple pure-python sub-projects evolving in lock-step. Anyway, like I said, I'm not trying to argue that 'pip install -e' should be deprecated -- I understand that many people love it for reasons that I don't fully understand. My goal is just to help those who think 'pip install -e' is obviously the one-and-only way to do python development to understand my perspective, and why we might want to support other options as well. I think the actual bottom line for pip as a project is: we all agree that sooner or later we have to move users away from running 'setup.py install'. Practically speaking, that's only going to happen if 'pip install' actually functions as a real replacement, and doesn't create regressions in people's workflows. Right now it does. The thing that started this whole thread is that numpy had actually settled on going ahead and making the switch to requiring pip install, but then got derailed by issues like these... -n [1] https://github.com/numpy/numpy/blob/master/runtests.py -- Nathaniel J. 
Smith -- http://vorpus.org
From donald at stufft.io Sat Nov 7 18:38:49 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 18:38:49 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 6:16:34 PM, Donald Stufft (donald at stufft.io) wrote: > > I want to reduce the "paths" that an installation can go down. I decided I'd make a little visual aid to help explain what I mean here (omitting development/editable installs because they are weird and will always be weird)! Here's essentially the way that installs can happen right now: https://caremad.io/s/Ol1TuV6R9K/. Each of these types of installations act subtly different in ways that are not very obvious to most people. Here's what I want it to be: https://caremad.io/s/uJYeVzBlQG/. In this way no matter what a user is installing from (Wheel, Source Dist, Directory) the outcome will be the same and there won't be subtly different behaviors based on what is being provided. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From p.f.moore at gmail.com Sat Nov 7 18:41:25 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 Nov 2015 23:41:25 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151107232134.5190af14@fsol> References: <20151107232134.5190af14@fsol> Message-ID: On 7 November 2015 at 22:21, Antoine Pitrou wrote: > The actual question is: which problem are you trying to solve *that > current users are actually experiencing*? Typically, people using "pip" to install stuff, and finding it gets installed into the "wrong" Python installation (i.e., not the one they expected). I'm not clear myself on how this happens, but it seems to be common on some Linux distros (and I think on OSX as well) where system and user-installed Pythons get confused. Whether removing the pip command in favour of explicitly using the name of the python you want to install into is a reasonable solution, or an over-reaction, is what we're trying to establish. But it is a very real problem and we see a fair number of bug reports based on it. Paul
From njs at pobox.com Sat Nov 7 18:43:46 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 15:43:46 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID:
I wrote some more detailed comments on this idea in the reply I just posted to Paul's message, but briefly, the alternative way to approach this would be:
1. I have a wheel and pip installs it
2. I have an sdist and pip unpacks it into a directory and builds a wheel from that directory and then pip installs it.
3. I have a directory and pip builds a wheel from that directory and then pip installs it.
4. I have a directory and pip installs it in editable mode.
This is actually simpler, because we've eliminated the "create an sdist" operation and replaced it with the far-more-trivial "unpack an sdist". And it isn't even a replacement, because your 2 and my 2 are actually identical when you look at what it means to turn an sdist into a wheel :-). -n -- Nathaniel J. Smith -- http://vorpus.org
From donald at stufft.io Sat Nov 7 19:02:04 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 19:02:04 -0500 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On November 7, 2015 at 6:43:50 PM, Nathaniel Smith (njs at pobox.com) wrote: > On Sat, Nov 7, 2015 at 3:16 PM, Donald Stufft wrote: > [...] > > The outcome of all of these types of installs is subtly different and we've had a number > of users regularly get confused when they act differently over the years. I do not think > it's possible to make (5) act like anything else because it is inherently different, > however I think we can get to the point that 1-4 all act the exact same way. And I think the > way to do it is to change these so instead it is like: > > > > 1. I have a wheel and pip installs it. > > 2. I have an sdist and pip turns it into a wheel and then pip installs it. > > 3. I have a directory and pip turns it into a sdist and then pip turns that sdist into a wheel > and then pip installs it. > > 4. I have a directory and pip installs it in editable mode. > > I wrote some more detailed comments on this idea in the reply I just > posted to Paul's message, but briefly, the alternative way to approach > this would be: > > 1. I have a wheel and pip installs it > 2. I have an sdist and pip unpacks it into a directory and builds a > wheel from that directory and then pip installs it. > 3. I have a directory and pip builds a wheel from that directory and > then pip installs it. > 4. I have a directory and pip installs it in editable mode. > > This is actually simpler, because we've eliminated the "create an > sdist" operation and replaced it with the far-more-trivial "unpack an > sdist". And it isn't even a replacement, because your 2 and my 2 are > actually identical when you look at what it means to turn an sdist > into a wheel :-). >
The problem is that an sdist and a directory are not the same things even though they may trivially appear to be. A very common problem people run into right now is that they don't adjust their MANIFEST.in so that some new file they've added gets included in the sdist. In the current system and your proposed system if someone types ``pip install .`` that just silently works. Then they go "Ok great, my package works" and they create a sdist and send that off... except the sdist is broken because it's missing that file they needed.
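(As an aside, a rough way to catch that class of mistake before uploading is to build the sdist and diff its contents against what the VCS tracks. This is only an illustrative sketch -- it assumes a git checkout with a setuptools ``setup.py``, and the helper name is made up:)

    import os
    import subprocess
    import sys
    import tarfile

    def files_missing_from_sdist(source_dir="."):
        # Build an sdist exactly the way a release would.
        subprocess.check_call([sys.executable, "setup.py", "sdist"], cwd=source_dir)
        dist_dir = os.path.join(source_dir, "dist")
        sdists = sorted(
            os.path.join(dist_dir, name)
            for name in os.listdir(dist_dir)
            if name.endswith(".tar.gz")
        )
        with tarfile.open(sdists[-1]) as tar:
            # Member names look like "<name>-<version>/relative/path".
            in_sdist = set(
                member.name.split("/", 1)[1]
                for member in tar.getmembers()
                if "/" in member.name
            )
        tracked = subprocess.check_output(
            ["git", "ls-files"], cwd=source_dir
        ).decode("utf-8").splitlines()
        return [path for path in tracked if path not in in_sdist]

    if __name__ == "__main__":
        for path in files_missing_from_sdist():
            print("not in sdist: %s" % path)

It will report false positives for files you deliberately keep out of the sdist (CI config and so on), but it does catch the "forgot to update MANIFEST.in" case that ``pip install .`` happily hides.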
Since we've disabled the ability to delete + reupload files to PyPI I get probably once or twice a week someone contacting me asking if I can let them re-upload a file because they created an sdist that was missing a file. A decent number of those told me that they had "tested" it by running ``pip install .`` or ``setup.py install`` into a fresh virtual environment and that it had worked. It's true that the MANIFEST.in system exacerbates this problem by being a pretty crummy and error-prone system to begin with, however the same thing is going to exist for any system where you have a path that builds a sdist (and may or may not include a file in that sdist) and a path that goes direct to wheel. It might not only be files that didn't get to be included because of a mistake either. Some files might not get generated until sdist build time; something like LXML generates .c files from Cython sources at sdist creation time and then they build a Wheel from those .c files. They do this to prevent people from needing to have Cython available on a machine other than a development machine. In your proposed workflow their "build wheel" command needs to be able to deal with the fact that the .c files may or may not be available (and will need to figure out a way to indicate that Cython is a build dependency if they are not). In my proposed workflow their wheel build command gets to be simpler, it only needs to deal with .c files and their sdist command gets used to create the .c files while the sdist is being generated. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From p.f.moore at gmail.com Sat Nov 7 19:03:31 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 Nov 2015 00:03:31 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 7 November 2015 at 16:33, Ralf Gommers wrote: > Your only concrete argument for it so far is aimed at developers I feel that there's some confusion over the classes of people involved here ("developers", "users", etc). For me, the core user base for pip is people who use "pip install" to install *released* distributions of packages. For those people, name and version uniquely identifies a build, they often won't have a build environment installed, etc. These people *do* however sometimes download wheels manually and install them locally (the main example of this is Christoph Gohlke's builds, which are not published as a custom PyPI-style index, and so have to be downloaded and installed from a local directory). The other important category of user is people developing those released distributions. They often want to do "pip install -e", they install their own package from a working directory where the code may change without a corresponding version change, they expect to build from source and want that build cycle to be fast. Historically, they have *not* used pip, they have used setup.py directly (or setup.py develop, or maybe custom build tools like bento). So pip is not optimised for their use cases. Invocations like "pip install " cater for the first category. Invocations like "pip install " cater for the second (although currently, mostly by treating the local directory as an unpacked sdist, which as I say is not optimised for this use case).
Invocations like "pip install " are in the grey area - but I'd argue that it's more often used by the first category of users, I can't think of a development workflow that would need it. Regarding the point you made this comment about: >> 4. Builds (pip wheel) should always unpack to a temporary location and >> build there. When building from a directory, in effect build a sdist >> and unpack it to the temporary location. I see building a wheel as a release activity. As such, it should produce a reproducible result, and so should not be affected by arbitrary state in the development directory. I don't know whether you consider "ensuring the wheels aren't wrong" as aimed at developers or at end users, it seems to me that both parties benefit. Personally, I'm deeply uncomfortable about *ever* encountering, or producing (as a developer) sdists or wheels with the same version number but functional differences. I am OK with installing a development version (i.e., direct from a development directory into a site-packages, either as -e or as a normal install) where the version number doesn't change even though the code does, but for me the act of producing release artifacts (wheels and sdists) should freeze the version number. I've been bitten too often by confusion caused by trying to install something with the same version but different code, to want to see that happen. Paul From solipsis at pitrou.net Sat Nov 7 18:53:37 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 8 Nov 2015 00:53:37 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> Message-ID: <20151108005337.285d2f9e@fsol> On Sat, 7 Nov 2015 23:41:25 +0000 Paul Moore wrote: > On 7 November 2015 at 22:21, Antoine Pitrou wrote: > > The actual question is: which problem are you trying to solve *that > > current users are actually experiencing*? > > Typically, people using "pip" to install stuff, and finding it gets > installed into the "wrong" Python installation (i.e., not the one they > expected). I'm not clear myself on how this happens, but it seems to > be common on some Linux distros (and I think on OSX as well) where > system and user-installed Pythons get confused. Well, the problem is that "python -m pip" isn't any better. If you don't know what the current "pip" is, then chances are you don't know what the current "python" is, either. (I'm not trying to deny the issue, I sometimes wonder what "pip" will install into exactly, but removing the command in favour of a "-m" switch wouldn't do any any good IMO, and it would make Python package management "even more baroque" than it currently is) Regarrds Antoine. From donald at stufft.io Sat Nov 7 19:16:55 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 19:16:55 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151108005337.285d2f9e@fsol> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On November 7, 2015 at 7:12:59 PM, Antoine Pitrou (solipsis at pitrou.net) wrote: > On Sat, 7 Nov 2015 23:41:25 +0000 > Paul Moore wrote: > > On 7 November 2015 at 22:21, Antoine Pitrou wrote: > > > The actual question is: which problem are you trying to solve *that > > > current users are actually experiencing*? > > > > Typically, people using "pip" to install stuff, and finding it gets > > installed into the "wrong" Python installation (i.e., not the one they > > expected). 
I'm not clear myself on how this happens, but it seems to > be common on some Linux distros (and I think on OSX as well) where > system and user-installed Pythons get confused. Well, the problem is that "python -m pip" isn't any better. If you don't know what the current "pip" is, then chances are you don't know what the current "python" is, either. (I'm not trying to deny the issue, I sometimes wonder what "pip" will install into exactly, but removing the command in favour of a "-m" switch wouldn't do any good IMO, and it would make Python package management "even more baroque" than it currently is) Regards Antoine.
From donald at stufft.io Sat Nov 7 19:16:55 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 19:16:55 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151108005337.285d2f9e@fsol> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On November 7, 2015 at 7:12:59 PM, Antoine Pitrou (solipsis at pitrou.net) wrote: > On Sat, 7 Nov 2015 23:41:25 +0000 > Paul Moore wrote: > > On 7 November 2015 at 22:21, Antoine Pitrou wrote: > > > The actual question is: which problem are you trying to solve *that > > > current users are actually experiencing*? > > > > Typically, people using "pip" to install stuff, and finding it gets > > installed into the "wrong" Python installation (i.e., not the one they > > expected). I'm not clear myself on how this happens, but it seems to > > be common on some Linux distros (and I think on OSX as well) where > > system and user-installed Pythons get confused. > > Well, the problem is that "python -m pip" isn't any better. If you > don't know what the current "pip" is, then chances are you don't know > what the current "python" is, either. > > (I'm not trying to deny the issue, I sometimes wonder what "pip" will > install into exactly, but removing the command in favour of a "-m" > switch wouldn't do any good IMO, and it would make Python package > management "even more baroque" than it currently is) >
The largest problem comes when ``python`` and ``pip`` disagree about which Python is being invoked. A further problem is that we also need a way beyond just X.Y to differentiate what version of Python something is being installed into. What should the command be to install into PyPy 2.4.0? What about PyPy3 2.4.0? What if someone has /usr/bin/python2.7 and /usr/bin/pip2.7 and they then install another Python 2.7 into /usr/local/bin/python2.7 but they don't have pip installed there? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
From solipsis at pitrou.net Sat Nov 7 19:22:04 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 8 Nov 2015 01:22:04 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: <20151108012204.35939337@fsol> On Sat, 7 Nov 2015 19:16:55 -0500 Donald Stufft wrote: > > The largest problem comes when ``python`` and ``pip`` disagree about which Python is being invoked. As I said, this is a problem for package managers and distributions. "pip" isn't the only affected command, e.g. "pydoc" is as well. > What should the command be to install into PyPy 2.4.0? If you are using a virtualenv (or a conda environment, assuming you did a conda package for pypy), just "pip". > What if someone has /usr/bin/python2.7 > and /usr/bin/pip2.7 and they then install another Python 2.7 > into /usr/local/bin/python2.7 but they don't have pip installed there? Why wouldn't they? I thought the plan is to have "pip" bundled with every recent Python version? AFAIR someone even said it was a bug if pip wasn't installed together with Python... Regards Antoine.
From donald at stufft.io Sat Nov 7 19:37:03 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 Nov 2015 19:37:03 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151108012204.35939337@fsol> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> Message-ID: On November 7, 2015 at 7:22:23 PM, Antoine Pitrou (solipsis at pitrou.net) wrote: > On Sat, 7 Nov 2015 19:16:55 -0500 > Donald Stufft wrote: > > > > The largest problem comes when ``python`` and ``pip`` disagree about which Python > is being invoked. > > As I said, this is a problem for package managers and distributions. > "pip" isn't the only affected command, e.g. "pydoc" is as well. Package managers and distributions aren't the only place that Python comes from. People might have installed their own Python (such as via pyenv or even manually) or they may be using two different package managers, one that comes with pip by default and one that doesn't. This probably does also affect pydoc but I would suggest that more people are invoking pip in various "weird" situations than are invoking pydoc.
I know personally I?ve *never* invoked pydoc. In fact, the pyvenv script has been deprecated and is going to be removed in Python 3.8 in favor of `python -m venv` for similar reasons that I've described here. > > > What should the command be to install into PyPy 2.4.0? > > If you are using a virtualenv (or a conda environment, assuming you > did a conda package for pypy), just "pip?. And if you?re not using a virtual environment or a conda environment? If you have /usr/bin/python and /usr/bin/pypy how should I install something into PyPy? We can?t just pretend that the only time someone wants to install into PyPy is inside of a virtual environment. > > > What if someone has /usr/bin/python2.7 > > and /usr/bin/pip2.7 and they then install another Python 2.7 > > into /usr/local/bin/python2.7 but they don?t have pip installed there? > > Why wouldn't they? I thought the plan is to have "pip" bundled with > every recent Python version? AFAIR someone even said it was a bug if pip > wasn't installed together with Python? Python 2.7.9+ and Python 3.4+ come with ensurepip which can be used to install pip into an environment. In Python 3.4+ it will be installed by default by the Makefile and by the OSX/Windows installers. In Python 2.7.9+ it?s only installed by default in the OSX/Windows installers but it is not in the Makefile. So how might they not get pip? * They installed from a Makefile (or their distribution did) and they either accepted the 2.7 default or they disabled it when installing on 3.4. * They are using a version of Python that didn?t come with pip. * They uninstalled pip (because pip isn?t part of the standard library, it?s just another Python package that can be uninstalled or upgraded). * They installed into a virtual environment that was created without pip being installed. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From madewokherd at gmail.com Sat Nov 7 19:53:26 2015 From: madewokherd at gmail.com (Vincent Povirk) Date: Sat, 7 Nov 2015 18:53:26 -0600 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> Message-ID: > Typically, people using "pip" to install stuff, and finding it gets > installed into the "wrong" Python installation (i.e., not the one they > expected). I'm not clear myself on how this happens, but it seems to > be common on some Linux distros (and I think on OSX as well) where > system and user-installed Pythons get confused. FWIW, the approach I'm taking for the oneget provider is that, if there's more than one version of python into which you "can" install a package (the definition of "can install" is a bit hairy, but for right now it means the versions with wheels if there are any wheels), it'll print a message listing the installs you can use. You can then specify an install using -PythonVersion 3.4 or -PythonLocation c:\python34\python.exe. The -PythonVersion switch implicitly uses a wildcard compare (3.4.*) even though the wildcard is not specified. From dmertz at continuum.io Sat Nov 7 20:14:26 2015 From: dmertz at continuum.io (David Mertz) Date: Sat, 7 Nov 2015 17:14:26 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> Message-ID: I found I really did typically have that problem that Paul describes pretty often until I switched to using predominantly conda. I would always make symlinks for pip2 and pip3 (and maybe for pip3.3 vs. pip3.4) to make sure things went the right places. 
I suppose this problem was largely because I didn't really use virtualenv much, and now that I'm a conda person the environments come "for free" with the installation. On Sat, Nov 7, 2015 at 3:41 PM, Paul Moore wrote: > On 7 November 2015 at 22:21, Antoine Pitrou wrote: > > The actual question is: which problem are you trying to solve *that > > current users are actually experiencing*? > > Typically, people using "pip" to install stuff, and finding it gets > installed into the "wrong" Python installation (i.e., not the one they > expected). I'm not clear myself on how this happens, but it seems to > be common on some Linux distros (and I think on OSX as well) where > system and user-installed Pythons get confused. > > Whether removing the pip command in favour of explicitly using the > name of the python you want to install into is a reasonable solution, > or an over-reaction, is what we're trying to establish. But it is a > very real problem and we see a fair number of bug reports based on it. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -- *David Mertz, Ph.D.* *Senior Software Engineer and Senior Trainer* -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Sat Nov 7 22:30:18 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 19:30:18 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Thu, Nov 5, 2015 at 1:08 PM, Donald Stufft wrote: > Another possible option is to modify pip so that instead of installing into > site-packages we instead create an "executable zip file" which is a simple zip > file that has all of pip inside of it and a top level __main__.py. Python can > directly execute this as if it were installed. We would no longer have any > command except for ``pip`` (which is really this executable zip file). This > script would default to installing into ``python``, and you can direct it to > install into a different python using a (new) flag of ``-p`` or ``--python`` > where you can specify either a full path or a command that we'll search $PATH > for. I'm not sure if I like this idea or not, but I think it's an interesting one. I'd frame it differently, though. It seems to me that the core idea is: A basic problem that pip has to solve, before it can do anything else, is that it has to identify the python environment that it's supposed to be working on. Right now, the standard way it does this is by looking at sys.executable -- so pip is always doing an odd dance where it's rearranging the universe around itself, and conversely, every environment needs to have its own copy of pip inside itself. An alternative approach would be to totally decouple the host python used to execute pip from the target python that pip acts upon, on the grounds that these are logically distinct things. (As a thought experiment you can even imagine a package manager called, say, 'cpip', which is a standalone program written in C or some other non-Python-language, but that happens to know how to manipulate Python environments. I'm not saying porting pip to another language makes is in any way a good idea, just that imagining such a thing is a useful exercise to clarify the difference between the host environment and the target environment.) 
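To make the host/target split concrete, a launcher along these lines might look roughly like the sketch below. It assumes the name-based dispatch rule described next ("pip3.4" looks for "python3.4" on PATH) plus a query of the target interpreter for its install location; none of this is an actual pip interface today, it is purely illustrative:

    import os
    import shutil
    import subprocess
    import sys

    def target_interpreter(argv0):
        # "pip" -> "python", "pip3" -> "python3", "pip3.4" -> "python3.4"
        name = os.path.basename(argv0)
        suffix = name[3:] if name.startswith("pip") else ""
        found = shutil.which("python" + suffix)
        if found is None:
            raise SystemExit("no python%s found on PATH" % suffix)
        return found

    if __name__ == "__main__":
        target = target_interpreter(sys.argv[0])
        # Ask the *target* interpreter, not the host running this script,
        # where it would put installed packages.
        purelib = subprocess.check_output(
            [target, "-c",
             "import sysconfig; print(sysconfig.get_paths()['purelib'])"])
        print("target interpreter  :", target)
        print("target site-packages:", purelib.decode().strip())

Invoked as "pip3.5" such a tool would report on whatever "python3.5" is found on PATH, regardless of which Python happens to be executing the launcher itself.
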
In this approach, you'd have a program called 'pip', and when run it uses some set of rules to figure out which environment it should work on, consulting command line arguments, environment variables, the current PATH, whatever. (Windows has its own complicated system that I don't understand, but for Unix-likes, the default rule would probably be: search the path for "python" + sys.argv[0][3:], and use that. So you could install a main pip executable + symlinks named pip3, pip3.4, etc., and invoking each of them would automatically target the same environments you get when you run python, python3, python3.4, etc.) Having decoupled things like this, then it doesn't really matter how pip is distributed, so long as there is a pip program and it works. It'd be in a similar position to, say, how mercurial is shipped. You could use pyinstaller to make a fully standalone package for Windows that didn't even depend on the system python install, and Debian could ship a version that's hosted by the system default python, and if you want to have multiple pip's installed then you could do that just like you do now by installing them into their own venvs. Other interesting consequences: - pip could get more aggressive about dropping *host* support for older versions of python -- e.g. eventually pip itself could be written in pure python 3 while still preserving support for installing stuff into a python 2 target environment. Or less aggressively, you could imagine dropping host support for 2.6 ahead of dropping target support for 2.6. - If pip is a thing that lives "outside" environments instead of "inside" them, then it might be natural for it to grow some interface tools for working with environments directly, becoming kinda the one-stop-friendly-UI for Python developers. E.g. pip new-env my-env/ -p pypy3 -r requirements.txt to create a new virtual environment and install stuff into it in one go. (Probably pip is going to have to gain some related functionality anyway to install setup-requires into isolated build environments...) Obviously there's a desire not to shove everything in the world into pip, but having a single friendly frontend to installation/virtualenv/venv is *really* nice for newbies. And obviously this would a be "someday maybe" thing at best, just an interesting possibility down the road. - The eventual interface seems nice enough... path/to/python -m pip -> pip -p path/to/python pip3, pip3.5 -> still would work in my suggested interface path/to/venv/bin/pip -> pip -p path/to/venv/bin/python or pip -E path/to/venv though the transition would be tricky/painful. I don't have any conclusion, I just think it's just an interesting idea to think about. -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Sat Nov 7 22:42:19 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 19:42:19 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 4:03 PM, Paul Moore wrote: > I see building a wheel as a release activity. As such, it should > produce a reproducible result, and so should not be affected by > arbitrary state in the development directory. I don't know whether you > consider "ensuring the wheels aren't wrong" as aimed at developers or > at end users, it seems to me that both parties benefit. 
> > Personally, I'm deeply uncomfortable about *ever* encountering, or > producing (as a developer) sdists or wheels with the same version > number but functional differences. I am OK with installing a > development version (i.e., direct from a development directory into a > site-packages, either as -e or as a normal install) where the version > number doesn't change even though the code does, but for me the act of > producing release artifacts (wheels and sdists) should freeze the > version number. The problem with this is that we want to get rid of "direct installs" entirely, and move to doing wheel-based installs always -- direct installs require that every build system has to know about every possible install configuration, and it's just not viable. I think the way to approach this is to assume that 'pip install ' will always 100% of the time involve a wheel; the distinction is that sometimes that wheel is treated as a reliable artifact that can be cached etc., and that sometimes it's treated as a temporary intermediate format that's immediately discarded. (As a separate point I do think it would be good to encourage people to use + versions like 1.2+dev for VCS trees, or better yet 1.2+dev., to emphasize that these are not real reliable version numbers. (Recall that PEP 440 defines + as defining a "local version" that's explicitly somewhat unreliable, not allowed on index servers, etc.) But even pip itself doesn't follow this rule right now so for the foreseeable future we'll have to assume that source directories have unreliable version numbers.) -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Sat Nov 7 23:03:27 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 20:03:27 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sat, Nov 7, 2015 at 4:02 PM, Donald Stufft wrote: > On November 7, 2015 at 6:43:50 PM, Nathaniel Smith (njs at pobox.com) wrote: >> On Sat, Nov 7, 2015 at 3:16 PM, Donald Stufft wrote: >> [...] >> > The outcome of all of these types of installs are subtly different and we?ve had a number >> of users regularly get confused when they act differently over the years. I do not think >> it?s possible to make (5) act like anything else because it is inherently different, >> however I think we can get to the point that 1-4 all act the exact same way. and I think the >> way to do it is to change these so instead it is like: >> > >> > 1. I have a wheel and pip installs it. >> > 2. I have an sdist and pip turns it into a wheel and then pip installs it. >> > 3. I have a directory and pip turns it into a sdist and then pip turns that sdist into a wheel >> and then pip installs it. >> > 4. I have a directory and pip installs it in editable mode. >> >> I wrote some more detailed comments on this idea in the reply I just >> posted to Paul's message, but briefly, the alternative way to approach >> this would be: >> >> 1. I have a wheel and pip installs it >> 2. I have an sdist and pip unpacks it into a directory and builds a >> wheel from that directory and then pip installs it. >> 3. I have a directory and pip builds a wheel from that directory and >> then pip installs it. >> 4. I have a directory and pip installs it in editable mode. 
>> >> This is actually simpler, because we've eliminated the "create an >> sdist" operation and replaced it with the far-more-trivial "unpack an >> sdist". And it isn't even a replacement, because your 2 and my 2 are >> actually identical when you look at what it means to turn an sdist >> into a wheel :-). >> > > The problem is that an sdist and a directory are not the same things even though they may trivially appear to be. A very common problem people run into right now is that they don?t adjust their MANIFEST.in so that some new file they?ve added gets included in the sdist. In the current system and your proposed system if someone types ``pip install .`` that just silently works. Then they go ?Ok great, my package works? and they create a sdist and send that off? except the sdist is broken because it?s missing that file they needed. > > Since we?ve disabled the ability to delete + reupload files to PyPI I get probably once or twice a week someone contacting me asking if I can let them re-upload a file because they created an sdist that was missing a file. A decent number of those told me that they had ?tested? it by running ``pip install .`` or ``setup.py install`` into a fresh virtual environment and that it had worked. > > It?s true that the MANIFEST.in system exacerbates this problem by being a pretty crummy and error prone system to begin with, however the same thing is going to exist for any system where you have a path that builds a sdist (and may or may not include a file in that sdist) and a path that goes direct to wheel. > > It might not only be files that didn?t get to be included because of a mistake either. Some files might not get generated until sdist build time, something like LXML generates .c files from Cython sources at sdist creation time and then they build a Wheel from those .c files. They do this to prevent people from needing to have Cython available on a machine other than a development machine. In your proposed work flow their ?build wheel? command needs to be able to deal with the fact that the .c files may or may not be available (and will need to figure out a way to indicate that Cython is a build dependency if they are not). In my proposed workflow their wheel build command gets to be simpler, it only needs to deal with .c files and their sdist command gets used to create the .c files while the sdist is being generated. I'm not sure how to respond, because I sympathize and agree with all of these points, but I just think that the trade-offs are such that pip is the wrong place to try and fix this. Even if pip always copies the source tree to a temp dir, or even builds an sdist and unpacks it to a temp dir, then this doesn't actually guarantee that the final distribution will work, because of the reasons I mentioned in my other email -- you can still forget to check things in, have random detritus in your working directory (orphaned .pyc files create all kinds of fun, since python will happily import them even if the corresponding .py file has been deleted), etc. Which isn't to say that it's hopeless to try and improve matters, but I don't think we should do so at the expense of adding otherwise unneeded complexity to the pip <-> project-build-system interface. ("Otherwise unneeded" because nothing else in pip cares about generating sdists.) 
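One way to catch the MANIFEST.in problem described above without involving pip at all is a developer-side check: build an sdist and compare its contents against the files the repository actually tracks. A rough sketch, assuming a setuptools project in a git checkout that produces a .tar.gz sdist (none of this is an established pip or setuptools interface):

    import glob
    import os
    import subprocess
    import sys
    import tarfile

    # Build an sdist with the project's own build system (run from the
    # project root).
    subprocess.check_call([sys.executable, "setup.py", "sdist"])
    sdist_path = sorted(glob.glob(os.path.join("dist", "*.tar.gz")))[-1]

    # Collect the file names inside the sdist, stripping the top-level
    # "project-version/" directory.
    with tarfile.open(sdist_path) as tf:
        in_sdist = set()
        for member in tf.getmembers():
            if member.isfile() and "/" in member.name:
                in_sdist.add(member.name.split("/", 1)[1])

    # Anything reported here is a candidate for a missing MANIFEST.in entry
    # (some files, like CI configuration, may be excluded on purpose).
    tracked = set(subprocess.check_output(["git", "ls-files"]).decode().splitlines())
    for path in sorted(tracked - in_sdist):
        print("not in sdist:", path)

The third-party check-manifest tool automates essentially this comparison.
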
And just in general, your plan to improve matters has an ocean-boiling feel to it to me, because you're pushing against a huge weight of history and conventions that expect builds to happen inside the source tree and to re-use partial build results etc. Convincing people to lengthen their edit/compile/test cycle is almost always a losing proposition, no matter how good your reason is. If people just refuse to use pip in favor of setup.py install, then what have we really gained? Paving cow paths, remember... So I think that on balance, the right place to tackle this problem is within the build system itself. Heck, bdist_wheel could probably today be modified to call sdist, unpack the resulting sdist, and then perform the resulting build, right? It'd be slow, but it'd work. And it would leave our options open for later, without enshrining this into the Standard Build System Interface that can't be changed without PEPs and elaborate transition plans. And there are all sorts of strategies that a new build system could use to guarantee reliable sdists (e.g. using the same source-of-truth for locating files to build as it uses for locating files to include in the sdist!) without giving up on incremental rebuilds, so long as pip doesn't just rule out incremental builds entirely. LXML's build system can use the sdist->wheel strategy if they think it makes sense to them -- they don't need anything from pip to do that. -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Sat Nov 7 23:10:46 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 Nov 2015 20:10:46 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On Sat, Nov 7, 2015 at 12:32 PM, Robert Collins wrote: > On unknown names, current pkg_resources raises SyntaxError. So I think > we need to update the spec from that perspective to be clear. All PEP > 426 defined variables are handled without error by current > pkg_resources/markerlib implementation. Perhaps the new variables > should raise rather than evaluating to '' / 0 ? Some discussion / > thought would be good. Certainly when used and evaluated by an > existing pkg_resources they will error - so perhaps we should not add > any new variables at this point? I just thought of one possible strategy for allowing future extensions without opening the door wide to every typo: add a variable which contains the version number of the environment marker evaluation system, define the and/or operators as being short-circuiting, and keep the rule that trying to access an unknown variable name raises an error. Then you could use write things like: # 'foo is only really needed if some_new_attr > "2", but doesn't hurt otherwise. # So when using an old installation tool that doesn't know about some_new_attr, # install it unconditionally. Requires-Dist: foo; marker_handling >= "2" and some_new_attr > 2 Requires-Dist: foo; marker_handling < "2" > So, if we don't unify this with the wheel encoding of extras, it will > require multiple parsers indefinitely. I need to think more about > whether it makes sense or not. > > Wheel certainly needs a way to say 'this distribution exports extras > X, Y, Z (and their respective dependencies)'. flit and other tools > producing wheels need the same facility. > > https://www.python.org/dev/peps/pep-0427 doesn't define this use of > markers; but pip and wheel have collaborated on it. 
PEP-345 doesn't > describe Provides-Extra, which pkg_resources uses when parsing > .dist-info directories as well (it determines which extra variables > get substituted into the set of requires to determine the values of > the extras...). So there's basically still a bunch of underspecified > behaviours out there in the wild, and since my strategy is to minimal > variation vs whats there, we need to figure out the right places to > split things to layer this well. > > Specifying a new variable of 'extra' is fairly easy: we need to decide > on the values it will take, and thats well defined but layer crossing: > when processing a dependency with one or more extras, you need to loop > over all the dependency specifications once with each extra defined > (including I suspect for completeness '' for the non-extras) and then > union together the results. > > So at this layer I think we could say that: > - extra is only valid if the context that is interpreting the > specification defines it > - when invalid it will raise SyntaxError > > This allows a single implementation to handle .dist-info directories > as it does today, while specifying it more fully. It leaves it open in > future for us to continue modelling exported extras as > marker-filtered-specifications + a Provides-Extra, or to move to > exported extras as something in a hash in a richer serialisation > format, or some third thing. This is good I think. This seems like a good strategy to me. -n -- Nathaniel J. Smith -- http://vorpus.org From dmertz at continuum.io Sun Nov 8 00:34:46 2015 From: dmertz at continuum.io (David Mertz) Date: Sat, 7 Nov 2015 21:34:46 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Nov 7, 2015 7:30 PM, "Nathaniel Smith" wrote: > alternative approach would be to totally decouple the host python used > to execute pip from the target python that pip acts upon, on the > grounds that these are logically distinct things. (As a thought > experiment you can even imagine a package manager called, say, 'cpip', > which is a standalone program written in C or some other > non-Python-language, but that happens to know how to manipulate Python > environments. This is a great idea! It exists, and it is spelled "conda". Well, it's written in some particular version of Python, but the package and environment management is completely decoupled. You can in install R packages or Julia packages or different Python versions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 8 06:13:59 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Nov 2015 12:13:59 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 1:03 AM, Paul Moore wrote: > On 7 November 2015 at 16:33, Ralf Gommers wrote: > > Your only concrete argument for it so far is aimed at developers > > I feel that there's some confusion over the classes of people involved > here ("developers", "users", etc). > Good point. I meant your second category below. > For me, the core user base for pip is people who use "pip install" to > install *released* distributions of packages. For those people, name > and version uniquely identifies a build, they often won't have a build > environment installed, etc. 
These people *do* however sometimes > download wheels manually and install them locally (the main example of > this is Christoph Gohlke's builds, which are not published as a custom > PyPI-style index, and so have to be downloaded and installed from a > local directory). > > The other important category of user is people developing those > released distributions. They often want to do "pip install -e", they > install their own package from a working directory where the code may > change without a corresponding version change, they expect to build > from source and want that build cycle to be fast. Historically, they > have *not* used pip, they have used setup.py directly (or setup.py > develop, or maybe custom build tools like bento). So pip is not > optimised for their use cases. > You only have two categories? I'm missing at least one important category: users who install things from a vcs or manually downloaded code (pre-release that's not on pypi for example). This category is probably a lot larger that than that of developers. > Invocations like "pip install " cater for the first > category. Invocations like "pip install " cater for > the second (although currently, mostly by treating the local directory > as an unpacked sdist, which as I say is not optimised for this use > case). Invocations like "pip install " are in the grey > area - but I'd argue that it's more often used by the first category > of users, I can't think of a development workflow that would need it. > > Regarding the point you made this comment about: > > >> 4. Builds (pip wheel) should always unpack to a temporary location and > >> build there. When building from a directory, in effect build a sdist > >> and unpack it to the temporary location. > > I see building a wheel as a release activity. It's not just that. My third category of users above is building wheels all the time. Often without even realizing it, if they use pip. > As such, it should > produce a reproducible result, and so should not be affected by > arbitrary state in the development directory. I don't know whether you > consider "ensuring the wheels aren't wrong" as aimed at developers or > at end users, it seems to me that both parties benefit. > Ensuring wheels aren't wrong is something that developers need to do. End users may benefit, but they benefit from many things developers do. Personally, I'm deeply uncomfortable about *ever* encountering, or > producing (as a developer) sdists or wheels with the same version > number but functional differences. As soon as you produce a wheel with any compiled code inside, it matters with which compiler (and build flags, etc.) you build it. There are typically subtle, and sometimes very obvious, functional differences. Same for sdists, contents for example depend on the Cython version you have installed when you generate it. > I am OK with installing a > development version (i.e., direct from a development directory into a > site-packages, either as -e or as a normal install) where the version > number doesn't change even though the code does, but for me the act of > producing release artifacts (wheels and sdists) should freeze the > version number. I've been bitten too often by confusion caused by > trying to install something with the same version but different code, > to want to see that happen. > "wheels and sdists" != "release artifacts" I fully agree of course that we want things on PyPi (which are release artifacts) to have unique version numbers etc. 
But wheels and sdists are produced all the time, and only sometimes are they release artifacts. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 8 07:00:05 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Nov 2015 13:00:05 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 12:38 AM, Donald Stufft wrote: > On November 7, 2015 at 6:16:34 PM, Donald Stufft (donald at stufft.io) wrote: > > > I want to reduce the ?paths? that an installation can go down. > > I decided I?d make a little visual aid to help explain what I mean here > (omitting development/editable installs because they are weird and will > always be weird)! > > Here?s essentially the way that installs can happen right now > https://caremad.io/s/Ol1TuV6R9K/. Each of these types of installations > act subtly different in ways that are not very obvious to most people. > > Here?s what I want it to be: https://caremad.io/s/uJYeVzBlQG/. In this > way no matter what a user is installing from (Wheel, Source Dist, > Directory) the outcome will be the same and there won?t be subtly different > behaviors based on what is being provided. > Thanks, clear figures. Your final situation is definitely way better than what it's now. Here is what I proposed in a picture: https://github.com/pypa/pip/pull/3219#issuecomment-154810578 Comparison: - same number of arrows in flowchart - total path length in my proposal is 1 shorter - my proposal requires one less build system interface to be specified (sdist) Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From waynejwerner at gmail.com Sun Nov 8 07:48:01 2015 From: waynejwerner at gmail.com (Wayne Werner) Date: Sun, 08 Nov 2015 12:48:01 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> Message-ID: On Sat, Nov 7, 2015, 5:41 PM Paul Moore wrote: On 7 November 2015 at 22:21, Antoine Pitrou wrote: > The actual question is: which problem are you trying to solve *that > current users are actually experiencing*? Typically, people using "pip" to install stuff, and finding it gets installed into the "wrong" Python installation (i.e., not the one they expected). I'm not clear myself on how this happens, but it seems to be common on some Linux distros (and I think on OSX as well) where system and user-installed Pythons get confused. I've actually stopped using "pip install" in favor of "python -m pip install" for this very reason. At the very least I know that if my next command is "python " I'll have those packages installed in that same python. On windows I used "py -3.4 -m pip". -W -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Nov 8 08:23:34 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 Nov 2015 13:23:34 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 8 November 2015 at 11:13, Ralf Gommers wrote: > You only have two categories? 
I'm missing at least one important category: > users who install things from a vcs or manually downloaded code (pre-release > that's not on pypi for example). This category is probably a lot larger that > than that of developers. Hmm, I very occasionally will install the dev version of pip to get a fix I need. But I don't consider myself in that role as someone who pip should cater for - rather I expect to manage doing so myself, whether that's by editing the pip code to add a local version ID, or just by dealing with the odd edge cases. I find it hard to imagine that there are a significant number of users who install from development sources but who aren't developers (at least to the extent that testers of pre-release code are also developers). > As soon as you produce a wheel with any compiled code inside, it matters with > which compiler (and build flags, etc.) you build it. There are typically subtle, and > sometimes very obvious, functional differences. Same for sdists, contents for > example depend on the Cython version you have installed when you generate it. By "functional differences" I mean code changes, not build flag or compiler version changes. My rule is that if 2 wheels have the same version, they should be buildable from the exact same source code. (Ideally I'd like to say sdist here, but I don't want to get sucked into the distinction between source directory and sdist again, which is yet another completely independent debate). > "wheels and sdists" != "release artifacts" Please explain. All you've done here is state that you don't agree with me, but given no reasons. Let me restate my comment, without using the disputed term: "for me the act of producing wheels and sdists should freeze the version number" I find it hard to understand what the point of a version number *is*, if it's not to identify a specific set of source code that has been used to generate the wheels and sdists that are tagged with that version number. Temporary development builds can be in all sorts of inconsistent states, and one of those states might be "the code has been changed but the version number hasn't" but as soon as you give a wheel or sdist to anyone else, you have a responsibility to identify the source code that the wheel/sdist came from, and the version number is how you do that. Personally, I think the issue here is that there are a lot of people in the scientific community who people outside that community would class as "developers", but who aren't considered that way from within the community. I tend to try to assign to these people the expertise and responsibilities that I would expect of a developer, not of an end user. If in fact they are a distinct class of user, then I think the scientific community need to explain more clearly what expertise and responsibilities pip can expect of these users. And why treating them as developers isn't reasonable. Paul From ralf.gommers at gmail.com Sun Nov 8 08:34:21 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Nov 2015 14:34:21 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore wrote: > On 8 November 2015 at 11:13, Ralf Gommers wrote: > > > "wheels and sdists" != "release artifacts" > > Please explain. All you've done here is state that you don't agree > with me, but given no reasons. 
Come on, I elaborated in the sentence right below it. Which you cut out in your reply. Here it is again: "I fully agree of course that we want things on PyPi (which are release artifacts) to have unique version numbers etc. But wheels and sdists are produced all the time, and only sometimes are they release artifacts." Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Nov 8 08:45:40 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 Nov 2015 13:45:40 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 8 November 2015 at 13:34, Ralf Gommers wrote: > On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore wrote: >> >> On 8 November 2015 at 11:13, Ralf Gommers wrote: >> >> > "wheels and sdists" != "release artifacts" >> >> Please explain. All you've done here is state that you don't agree >> with me, but given no reasons. > > Come on, I elaborated in the sentence right below it. Which you cut out in > your reply. Here it is again: > > "I fully agree of course that we want things on PyPi (which are release > artifacts) to have unique version numbers etc. But wheels and sdists are > produced all the time, and only sometimes are they release artifacts." Sorry, my mistake. I didn't see how this part related (and still don't). What are wheels and sdists if they are not "release artifacts"? Are we just quibbling about what the term "release artifact" means? If so, I'll revert to using "wheels and sdists" as I did in my response. I thought it was obvious that wheels and sdists *are* the release artifacts in the process of producing Python packages. It doesn't matter where they are released *to*, it can be to PyPI, or a local server, or just to a wheelhouse or other directory on your PC that you keep for personal use only. Once they are created by you as anything other than a temporary file in a multi-step install process they are "release artifacts" as I understand/mean the term. But terminology's not a big deal, as long as we understand each other. Paul From ralf.gommers at gmail.com Sun Nov 8 08:51:05 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Nov 2015 14:51:05 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore wrote: > On 8 November 2015 at 11:13, Ralf Gommers wrote: > > You only have two categories? I'm missing at least one important > category: > > users who install things from a vcs or manually downloaded code > (pre-release > > that's not on pypi for example). This category is probably a lot larger > that > > than that of developers. > > Hmm, I very occasionally will install the dev version of pip to get a > fix I need. But I don't consider myself in that role as someone who > pip should cater for - rather I expect to manage doing so myself, > whether that's by editing the pip code to add a local version ID, or > just by dealing with the odd edge cases. > > I find it hard to imagine that there are a significant number of users > who install from development sources but who aren't developers There are way more of those users than actual developers, I'm quite sure of that.
See below for numbers. > (at least to the extent that testers of pre-release code are also > developers). > ... > That's not a very helpful way to look at it from my point of view. Those users may just want to check that their code still works, or they need a bugfix that's not in the released version, or .... > Personally, I think the issue here is that there are a lot of people > in the scientific community who people outside that community would > class as "developers", Then I guess those "outside" would be web/app developers? For anyone developing a library or some other infrastructure to be used somewhere other than via a graphical or command line UI, I think the distinction I make (I'll elaborate below) will be clear. > but who aren't considered that way from within > the community. I tend to try to assign to these people the expertise > and responsibilities that I would expect of a developer, not of an end > user. If in fact they are a distinct class of user, then I think the > scientific community need to explain more clearly what expertise and > responsibilities pip can expect of these users. And why treating them > as developers isn't reasonable. > To give an example for Numpy: - there are 5-10 active developers with commit rights - there are 50-100 contributors who submit PRs - there are O(1000) people who read the mailing list - there are O(1 million) downloads/installs per year Downloads/users are hard to count correctly, but there are at least 1000x more users than developers (this will be the case for many popular packages). Those users are often responsible for installing the package themselves. They aren't trained programmers, only know Python to the extent that they can get their work done, and they don't know much (if anything) about packaging, wheels, etc. All they know may be "I have to execute `python setup.py install`". Those are the users I'm concerned about. There's no reasonable way you can classify/treat them as developers I think. By the way, everything we discuss here has absolutely no impact on what you defined as "user" (the released-version only PyPi user). While it's critical for what I defined as "the second kind of user". Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralf.gommers at gmail.com Sun Nov 8 08:59:21 2015 From: ralf.gommers at gmail.com (Ralf Gommers) Date: Sun, 8 Nov 2015 14:59:21 +0100 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 2:45 PM, Paul Moore wrote: > On 8 November 2015 at 13:34, Ralf Gommers wrote: > > On Sun, Nov 8, 2015 at 2:23 PM, Paul Moore wrote: > >> > >> On 8 November 2015 at 11:13, Ralf Gommers > wrote: > >> > >> > "wheels and sdists" != "release artifacts" > >> > >> Please explain. All you've done here is state that you don't agree > >> with me, but given no reasons. > > > > Come on, I elaborated in the sentence right below it. Which you cut out > in > > your reply. Here it is again: > > > > "I fully agree of course that we want things on PyPi (which are release > > artifacts) to have unique version numbers etc. But wheels and sdists are > > produced all the time, and only sometimes are they release artifacts." > > Sorry, my mistake. I didn't see how this part related (and still > don't). What are wheels and sdists if they are not not "release > artifacts"? 
Are we just quibbling about the what term "release > artifact" means? I'm not sure about that, I don't think it's just terminology (see below). They obviously can be release artifacts, but they don't have to be - that's what I meant with !=. > If so, I'll revert to using "wheels and sdists" as I > did in my repsonse. I thought it was obvious that wheels and sdists > *are* the release artifacts in the process of producing Python > packages. It doesn't matter where they are released *to*, it can be to > PyPI, or a local server, or just to a wheelhouse or other directory on > your PC that you keep for personal use only. Once they are created by > you as anything other than a temporary file in a multi-step install > process they are "release artifacts" as I understand/mean the term. > To me there's a fairly fundamental difference between things that are actually released (by the release manager of a project usually, or maybe someone building a local wheelhouse) and things that are produced under the hood by pip. For someone typing `pip install .`, sdist/wheel is an implementation detail that is invisible to him/her and he/she shouldn't have to care about imho. > But terminology's not a big deal, as long as we understand each other. > Agreed. Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Nov 8 08:59:53 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 Nov 2015 13:59:53 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> Message-ID: On 8 November 2015 at 00:37, Donald Stufft wrote: > This probably does also affect pydoc but I would suggest that more people are invoking pip in various ?weird? situation than are invoking pydoc. I know personally I?ve *never* invoked pydoc. In fact, the pyvenv script has been deprecated and is going to be removed in Python 3.8 in favor of `python -m venv` for similar reasons that I've described here. It applies to pretty much everything. It's basically why the -m flag was added to Python. It's not taken off because it's not exactly ideal, but the alternative is a bit of a mess too (more or less so depending on your OS, distribution and/or working practices). Unfortunately, pip is orders of magnitude more frequently used than the other commands that suffer from this problem, so it *feels* like a pip issue. But it isn't really, it's either a Python issue or an OS issue, depending on how you view it. Java has a very similar issue. The "java -jar myproject.jar" syntax is basically the same as "python -m". The alternative used in the Java world is a plethora of Windows batch files and Unix shell scripts, usually combined with an installation process that goes something like "here's the driver scripts, work out for yourself how to get them on your PATH". Plus environment variables to say which is your preferred Java installation. (Java doesn't have anything like virtual environments, so actually the problem they face is *simpler* than Python's). 
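A quick way to see what ``-m`` buys you is to ask a given interpreter which pip it would run; the module is resolved from that interpreter's own ``sys.path``, so no separate PATH lookup is involved. A small sketch, assuming Python 3.4+ for ``importlib.util.find_spec``:

    import importlib.util
    import sys

    # The pip that "<this interpreter> -m pip" would execute is whatever this
    # interpreter itself can import, independent of any pip script on $PATH.
    spec = importlib.util.find_spec("pip")
    print("interpreter:", sys.executable)
    print("pip module :", spec.origin if spec else "not installed for this interpreter")

Running those lines under a different interpreter (or inside a virtualenv) gives a different answer, which is exactly the disambiguation being discussed.
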
Paul From solipsis at pitrou.net Sun Nov 8 09:13:28 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 8 Nov 2015 15:13:28 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> Message-ID: <20151108151328.65528550@fsol> On Sat, 7 Nov 2015 19:37:03 -0500 Donald Stufft wrote: > In fact, the pyvenv script has been deprecated and is going to be > removed in Python 3.8 in favor of `python -m venv` for similar > reasons that I've described here. That's not an argument, since the decision was taken by exactly the same people. Just because you do something twice doesn't mean it was a good thing to do, especially when no sizable data was brought in support. The fact that you decided to deprecate "pyvenv" while its ancestor - the "virtualenv" script itself - was not deprecated despite existing for a much longer time hints that the issue may be blown out of proportion. In the end, I like "python -m " for many user interfaces; but for managing package installations I think it's really much too wordy. Regards Antoine. From donald at stufft.io Sun Nov 8 09:20:42 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 8 Nov 2015 09:20:42 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151108151328.65528550@fsol> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <20151108151328.65528550@fsol> Message-ID: On November 8, 2015 at 9:13:48 AM, Antoine Pitrou (solipsis at pitrou.net) wrote: > On Sat, 7 Nov 2015 19:37:03 -0500 > Donald Stufft wrote: > > In fact, the pyvenv script has been deprecated and is going to be > > removed in Python 3.8 in favor of `python -m venv` for similar > > reasons that I've described here. > > That's not an argument, since the decision was taken by exactly the > same people. Just because you do something twice doesn't mean it was a > good thing to do, especially when no sizable data was brought in > support. > > The fact that you decided to deprecate "pyvenv" while its ancestor - the > "virtualenv" script itself - was not deprecated despite existing for a > much longer time hints that the issue may be blown out of proportion. > > In the end, I like "python -m " for many user interfaces; > but for managing package installations I think it's really much too > wordy. > Uhhhh I had nothing to do with deprecating the pyvenv script. Brett Cannon suggested it. The virtualenv script wasn?t deprecated (and I haven?t suggested doing it) because the virtualenv script already functions similarly to how I suggested in my last option. You aren?t expected to install virtualenv into every Python and then invoke the correct one based on which version of Python you want to interact with (as pyvenv and pip require you to do). Instead you tell it which version of Python you want to interact with using the ``-p`` flag (which accepts things like ``python2`` or full paths). So there is no ambiguity about which version of Python you?re going to be interacting with. The only thing that matters at all for which version of Python virtualenv is installed into is that it controls the *default*, but that?s just the default and on some systems (like Debian) virtualenv is installed into Python 3.x and the default was switched to 2.x still. 
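The ``-p`` behaviour described here comes down to resolving a name or path to one concrete interpreter before doing anything else, so there is no ambiguity about the target. A minimal sketch of that resolution step (illustrative only; virtualenv's real implementation is more involved):

    import os
    import shutil
    import subprocess

    def resolve_python(spec):
        # Accept either a bare name ("python2", "pypy") or a full path.
        if os.path.sep in spec:
            return spec if os.path.exists(spec) else None
        return shutil.which(spec)

    target = resolve_python("python2")  # e.g. the value given to -p/--python
    if target is None:
        raise SystemExit("requested interpreter not found")
    # Python 2 prints its version to stderr, hence the redirect.
    print(subprocess.check_output([target, "--version"],
                                  stderr=subprocess.STDOUT).decode().strip())
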
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From p.f.moore at gmail.com  Sun Nov 8 10:40:05 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Sun, 8 Nov 2015 15:40:05 +0000
Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead
In-Reply-To: 
References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid>
Message-ID: 

On 8 November 2015 at 13:51, Ralf Gommers wrote:
> To give an example for Numpy:
> - there are 5-10 active developers with commit rights
> - there are 50-100 contributors who submit PRs
> - there are O(1000) people who read the mailing list
> - there are O(1 million) downloads/installs per year
> Downloads/users are hard to count correctly, but there are at least 1000x
> more users than developers (this will be the case for many popular
> packages). Those users are often responsible for installing the package
> themselves. They aren't trained programmers, only know Python to the extent
> that they can get their work done, and they don't know much (if anything)
> about packaging, wheels, etc. All they know may be "I have to execute
> `python setup.py install`". Those are the users I'm concerned about. There's
> no reasonable way you can classify/treat them as developers I think.

Agreed. But they (by which I assume you mean the 3rd and 4th categories in your list) should be using released versions, surely? So they should be using "pip install ", not downloading source and building it. Maybe they have to download and build right now, but that's precisely what we're trying to move away from, surely? Only the 100 or so developers and contributors (plus maybe as many more redistributors who build platform-specific wheels for platforms the project doesn't support directly) need to build from source, everyone else just installs prereleased wheels.

Paul

From contact at ionelmc.ro  Sun Nov 8 11:01:47 2015
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Sun, 8 Nov 2015 18:01:47 +0200
Subject: [Distutils] The future of invoking pip
In-Reply-To: 
References: 
Message-ID: 

On Sat, Nov 7, 2015 at 12:24 AM, Donald Stufft wrote:

> Well, it's not really a launcher no, but you'd do ``pip -p python2 install
> foo`` or something like that. It's the same UI. Having just a "launcher" I
> think is actually more confusing (and we already had that in the past with
> -E and removed it because it was confusing). Since you'll have different
> versions of pip in different environments (Python or virtual) things break
> or act confusingly.
>

That can't be worse than the current situation. And I'm not asking to bring `-E` back. The idea is that the pip bin becomes a launcher file, just like py.exe - it would just try to discover an appropriate python and run `-mpip` with it. This doesn't even need to be implemented in pip - linux distributions can do this. For windows it's more tricky - but if Python on windows has getpip, why can't it bundle a pip.exe, just like py.exe?

Another issue that is being conflated here is the most frequent scenario: using virtualenv activation. We should really be talking about deprecating the activation shell scripts - messing with $PATH is what we should really look at - not deprecating `pip` bin over to the overly tedious `python -mpip`.

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From njs at pobox.com Sun Nov 8 12:42:32 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 8 Nov 2015 09:42:32 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Nov 8, 2015 5:23 AM, "Paul Moore" wrote: > [...] > I find it hard to imagine that there are a significant number of users > who install from development sources but who aren't developers (at > least to the extent that testers of pre-release code are also > developers). I'm not sure exactly what's at stake in this terminological/ontological debate, but it certainly is fairly common for developers to have conversations like "thanks for reporting that issue, I think it's fixed in master but can't reproduce myself so can you try 'pip install https://github.com/pydata/patsy/archive/master.zip' and report back whether it helps?" And often the person on the other end of this conversation knows absolutely nothing about python packaging, might have started learning python last week, etc. (Or maybe more to the point, you as a developer have absolutely no idea how much they know or what reasonable or unreasonable things they'll try if something goes wrong, and don't have time to have a long tutorial discussion to figure it out, so you need to be able to give instructions that are robust enough to work regardless of your interlocutor's actual knowledge level.) Probably the absolute numbers aren't large, but when you're one of the 5-10 people maintaining a package that has complicated build/install/OS issues and is used by O(a million) people, many of whom are learning programming for the first time via your package and immediately using it for their real work, then these kinds of fuzzy middle cases take up a lot of time :-). (Maybe this should become an official ui metric. "How easy is your tool to use interactively", "how easy is your tool to use in an automated way via shell script", "how easy is your tool to use in a semi-automated way where we've replaced the shell with a human being who first learned what the terminal was one week ago".) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Sun Nov 8 15:41:01 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 09:41:01 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 8 November 2015 at 17:10, Nathaniel Smith wrote: > On Sat, Nov 7, 2015 at 12:32 PM, Robert Collins > wrote: >> On unknown names, current pkg_resources raises SyntaxError. So I think >> we need to update the spec from that perspective to be clear. All PEP >> 426 defined variables are handled without error by current >> pkg_resources/markerlib implementation. Perhaps the new variables >> should raise rather than evaluating to '' / 0 ? Some discussion / >> thought would be good. Certainly when used and evaluated by an >> existing pkg_resources they will error - so perhaps we should not add >> any new variables at this point? > > I just thought of one possible strategy for allowing future extensions > without opening the door wide to every typo: add a variable which > contains the version number of the environment marker evaluation > system, define the and/or operators as being short-circuiting, and > keep the rule that trying to access an unknown variable name raises an > error. 
Then you could use write things like: > > # 'foo is only really needed if some_new_attr > "2", but doesn't hurt otherwise. > # So when using an old installation tool that doesn't know about some_new_attr, > # install it unconditionally. > Requires-Dist: foo; marker_handling >= "2" and some_new_attr > 2 > Requires-Dist: foo; marker_handling < "2" It's a decent strategy, but unfortunately a breaking change to introduce it. I think right now we should just not introduce new symbols, since we have such a long runway before they can be used, and instead look for an opportunity where another break is required, and we can bundle them together. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Sun Nov 8 15:52:54 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 Nov 2015 20:52:54 +0000 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On 8 November 2015 at 17:42, Nathaniel Smith wrote: > I'm not sure exactly what's at stake in this terminological/ontological > debate, but it certainly is fairly common for developers to have > conversations like "thanks for reporting that issue, I think it's fixed in > master but can't reproduce myself so can you try 'pip install > https://github.com/pydata/patsy/archive/master.zip' and report back whether > it helps?" Well, reviewing this scenario is probably much more useful than the endless terminology debates that I seem to be forever starting, so thanks for stopping me! It seems to me that in this situation, optimising rebuild times probably isn't too important. The user is likely to only be building once or twice, so reusing object files from a previous build isn't likely to be a killer benefit. However, if the user does as you asked here, they'd likely be pretty surprised (and it'd be a nasty situation for you to debug) if pip didn't install what the user asked. In all honesty, You could argue that this implies that pip should unconditionally install files specified on the command line, but I'd suggest that you should actually be asking the user to run 'pip install --ignore-installed https://github.com/pydata/patsy/archive/master.zip'. That avoids any risk that whatever the user has currently installed could mess things up, and is explicit that it's doing so (and equally, it's explicit that it'll overwrite the currently installed version, which the user might not want to do in his main environment). Maybe you could argue that you want --ignore-installed to be the default (probably only when a file is specified rather than a requirement, assuming that distinguishing between a file and a requirement is practical). But if we did that, we'd still need a --dont-ignore-installed flag to restore the current behaviour. For example, because Christoph Gohlke's builds must be manually downloaded, I find it's quite common to download a wheel from his site and "pip install" it in a number of environments, with the meaning "only if it'd be an upgrade to whatever is currently installed". So this specific example seems to me to be entirely covered by current pip behaviour. 
Paul From wolfgang.maier at biologie.uni-freiburg.de Sun Nov 8 16:03:49 2015 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Sun, 8 Nov 2015 22:03:49 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On 05.11.2015 22:08, Donald Stufft wrote: > > * Having pip need to be installed into each Python means you end up with a > bunch of independent pip installations which all need to be independently > updated. We've made this better by having recent pips warn you if you're not > on the latest version, but it's not unusual for me personally to have 30+ > different installations of pip on my personal desktop. That isn't great. > For the record, it is possible to have just one pip accessed by multiple installed Pythons right now. Just add a .pth file to site-packages of each Python with an entry to the same shared directory. Install pip into one Python, then mv the pip and the accompanying dist-info folder to that shared directory. It's a bit of effort to set up the first time, but you'll only have one pip to update afterwards. From ben+python at benfinney.id.au Sun Nov 8 16:31:36 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 09 Nov 2015 08:31:36 +1100 Subject: [Distutils] The future of invoking pip References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> Message-ID: <85h9kwi3ev.fsf@benfinney.id.au> Paul Moore writes: > Unfortunately, pip is orders of magnitude more frequently used than > the other commands that suffer from this problem, so it *feels* like a > pip issue. But it isn't really, it's either a Python issue or an OS > issue, depending on how you view it. +1. Addressing this by insisting on 'python -m foo' is not a solution. It's a plaster over a problem that will remain until the underlying conflict is resolved. That's not to say PyPA should ignore the issue; certainly there are things that can be done to help. But 'python -m foo' is an ugly wart, and I really want the rhetoric to acknowledge that instead of considering it a satisfactory end point. -- \ "We cannot solve our problems with the same thinking we used | `\ when we created them." --Albert Einstein | _o__) | Ben Finney From baptiste at bitsofnetworks.org Sun Nov 8 15:04:42 2015 From: baptiste at bitsofnetworks.org (Baptiste Jonglez) Date: Sun, 8 Nov 2015 21:04:42 +0100 Subject: [Distutils] Missing IPv6 support on pypi.python.org Message-ID: <20151108200441.GB28216@lud.polynome.dn42> Hi, pypi.python.org is currently not reachable over IPv6. I know this issue was brought up before [1,2]. This is a real issue for us, because our backend servers are IPv6-only (clients never need to talk to backend servers, they go through IPv4-enabled HTTP frontends). So, deploying packages from pypi on the IPv6-only servers is currently a pain. What is the roadmap to add IPv6 support? It seems that Fastly has already deployed IPv6 [3]. Thanks, Baptiste [1] https://mail.python.org/pipermail/distutils-sig/2014-June/024465.html [2] https://bitbucket.org/pypa/pypi/issues/90/missing-ipv6-connectivity [3] http://bgp.he.net/AS54113#_prefixes6 -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From robertc at robertcollins.net Sun Nov 8 16:41:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 10:41:57 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 7 November 2015 at 18:03, Nathaniel Smith wrote: > Thanks, this is really great! > > I made a bunch of fiddly comments inline on the PR, but some more > general comments: > > 1) Also mentioned this in the PR, but it's probably worth putting up > for more general discussion here: do we want to define some graceful > degradation for how to handle unrecognized variable names in the > environment marker syntax, so as to allow more easily for future > extensions? In the current PEP draft, an unrecognized variable name is > simply a syntax error (all the variable names are effectively > keywords). I don't think we can, because all the implementations today error. My goal is to document what is, rather than what might be, except where variation will be safe or gated behind other checks (such as urls - they are not legal in PyPI even in PEP-440, so including them as a desired feature with the PEP-440 syntax does not impede adoption). > 2) The PEP seems a little uncertain about whether it wants to talk > about the "framing protocol" or not -- like whether there's a > higher-level structure defining the edges of each individual > requirement or not. In setup.py and in existing METADATA files, you > have some higher level list-of-strings syntax and then each string is > parsed as a single individual requirement, and comments don't make > much sense; in requirements.txt then newlines are meaningful and > comments are important. The PEP worries about newlines and newlines, > but doesn't quite want to come out and say that it's defining > requirements.txt -- it wants to be more general. Being lower level was an explicit goal. I've dropped comments and newline continuations and said embedding it should include such features if desired. > 3) The way extras are specified in METADATA files currently (ab)uses > the environment marker syntax. E.g. here's the METADATA files from two > popular packages: > > https://gist.github.com/njsmith/ed74851c0311e858f0f7 > > (Nice to see Metadata-Version: 2.0 getting some real-world use! I > guess??? I thought I was starting to understand what is going on with > python packaging standards but now I am baffled again. Anyway!) > > So apparently this is how you say that there is an extra called "doc" > and that this extra adds a dependency on Sphinx: > > Provides-Extra: doc > Requires-Dist: Sphinx (>=1.1); extra == 'doc' > > (https://gist.github.com/njsmith/ed74851c0311e858f0f7#file-ipython-4-0-0-wheel-metadata-L39) > > And here's how you say that the "terminal" extra adds a dependency on > pyreadline (but only on windows): > > Requires-Dist: pyreadline (>=2); sys_platform == "win32" and extra == 'terminal' > > (https://gist.github.com/njsmith/ed74851c0311e858f0f7#file-ipython-4-0-0-wheel-metadata-L64) > > I'm not sure this is the syntax that I would have come up with, but I > guess it's too late to put the genie back in the bottle, so this PEP > should have some way to cope with these things? Currently they are > simply syntax errors according to the PEP's grammar. As we discussed, I've made the specification support this, but limited to specific contexts such as wheel, not present-by-default. 
This happens to make markerlib instant-buggy (because it defaults extra to None, but on Python3 it will blow up anyhow in various situations because e.g. "foo" < None :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Sun Nov 8 16:43:06 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 10:43:06 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Here is the full diff vs the last push to github. diff --git a/dependency-specification.rst b/dependency-specification.rst index 6f85b98..03f2a0d 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -52,10 +52,7 @@ Examples All features of the language shown with a name based lookup:: - requests \ - [security] >= 2.8.1, == 2.8.* \ - ; python_version < "2.7.10" \ - # Fix HTTPS on older Python versions. + requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10" A minimal URL based lookup:: @@ -77,29 +74,25 @@ We first cover the grammar briefly and then drill into the semantics of each section later. A distribution specification is written in ASCII text. We use ABNF [#abnf]_ to -provide a precise grammar. Specifications may have comments starting with a -'#' and running to the end of the line:: - - comment = "#" *(WSP / VCHAR) - -Specifications may be spread across multiple lines if desired using -continuations - a single backslash followed by a new line ('\\\\n'):: - - CSP = 1*(WSP / ("\\" LF)) +provide a precise grammar. The grammar covers just the specification itself. +The expectation is that the specification will be embedded into a larger +system which offers framing such as comments, multiple line support via +continuations, or other such features. Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: URI is defined in std-66 [#std66]_:: version-cmp = "<" / "<=" / "!=" / "==" / ">=" / ">" / "~=" / "===" version = 1*( DIGIT / ALPHA / "-" / "_" / "." / "*" ) - versionspec = ["("] version-cmp version *(',' version-cmp version) [")"] + version-inner = version-cmp version *(',' version-cmp version) + versionspec = ("(" version-inner ")") / version-inner urlspec = "@" URI Environment markers allow making a specification only take effect in some environments:: marker-op = version-cmp / "in" / "not in" - python-str-c = (WSP / ALPHA / DIGIT / "(" / ")" / "." / "{" / "}" / + python-str-c = WSP / ALPHA / DIGIT / "(" / ")" / "." 
/ "{" / "}" / "-" / "_" / "*" python-str = "'" *(python-str-c / DQUOTE) "'" python-str =/ DQUOTE *(python-str-c / "'") DQUOTE @@ -109,10 +102,12 @@ environments:: "platform_python_implementation" / "implementation_name" / "implementation_version" / "platform_dist_name" "platform_dist_version" / "platform_dist_id" - marker-expr = "(" marker ")" / (marker-var [marker-op marker-var]) - marker = marker-expr *( ("and" / "or") marker-expr) - name-marker = ";" *CSP marker - url-marker = ";" 1*CSP marker + marker-var =/ "extra" ; ONLY when defined by a containing layer + marker-expr = "(" *WSP marker *WSP ")" + =/ (marker-var [*WSP marker-op *WSP marker-var]) + marker = *WSP marker-expr *( *WSP ("and" / "or") *WSP marker-expr) + name-marker = ";" *WSP marker + url-marker = ";" 1*WSP marker Optional components of a distribution may be specified using the extras field:: @@ -123,31 +118,21 @@ field:: Giving us a rule for name based requirements:: - name_req = name [CSP extras] [CSP versionspec] [CSP name-marker] + name_req = name [*WSP extras] [*WSP versionspec] [*WSP name-marker] And a rule for direct reference specifications:: - url_req = name [CSP extras] urlspec [CSP url-marker] + url_req = name [*WSP extras] urlspec [*WSP url-marker] Leading to the unified rule that can specify a dependency:: - specification = (name_req / location_req) [CSP comment] + specification = name_req / location_req Whitespace ---------- Non line-breaking whitespace is optional and has no semantic meaning. -A line break indicates the end of a specification. Specifications can be -continued across multiple lines using a continuation. - -Comments --------- - -A specification can have a comment added to it by starting the comment with a -"#". After a "#" the rest of the line can contain any text whatsoever. -Continuations within a comment are ignored. - Names ----- @@ -209,42 +194,45 @@ either side is not a valid version, then the comparsion falls back to the same behaviour as in for string in Python if the operator exists in Python. For those operators which are not defined in Python, the result should be False. -The variables in the marker grammar such as "os_name" resolve to values looked -up in the Python runtime. If a particular value is not available (such as -``sys.implementation.name`` in versions of Python prior to 3.3, or -``platform.dist()`` on non-Linux systems), the default value will be used. +User supplied constants are always encoded as strings with either ``'`` or +``"`` quote marks. Note that backslash escapes are not defined, but existing +implementations do support them them. They are not included in this +specification because none of the variables to which constants can be compared +contain quote-requiring values. +The variables in the marker grammar such as "os_name" resolve to values looked +up in the Python runtime. With the exception of "extra" all values are defined +on all Python versions today - it is an error in the implementation of markers +if a value is not defined. + +Unknown variables must raise a SyntaxError rather than resulting in a +comparison that evaluates to True or False. + +The "extra" variable is special. It is used by wheels to signal which +specifications apply to a given extra in the wheel ``METADATA`` file, but +since the ``METADATA`` file is based on a draft version of PEP-426, there is +no current specification for this. 
Regardless, outside of a context where this +special handling is taking place, the "extra" variable should result in a +SyntaxError like all other unknown variables. + .. list-table:: :header-rows: 1 * - Marker - Python equivalent - Sample values - - Default if unavailable * - ``os_name`` - ``os.name`` - ``posix``, ``java`` - - "" * - ``sys_platform`` - ``sys.platform`` - ``linux``, ``darwin``, ``java1.8.0_51`` - - "" - * - ``platform_release`` - - ``platform.release()`` - - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` - - "" * - ``platform_machine`` - ``platform.machine()`` - ``x86_64`` - - "" - * - ``platform_python_implementation`` + * - ``python_implementation`` - ``platform.python_implementation()`` - ``CPython``, ``Jython`` - - "" - * - ``implementation_name`` - - ``sys.implementation.name`` - - ``cpython`` - - "" * - ``platform_version`` - ``platform.version()`` - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` @@ -252,51 +240,16 @@ up in the Python runtime. If a particular value is not available (such as ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; root:xnu-2782.40.9~2/RELEASE_X86_64`` - - "" - * - ``platform_dist_name`` - - ``platform.dist()[0]`` - - ``Ubuntu`` - - "" - * - ``platform_dist_version`` - - ``platform.dist()[1]`` - - ``14.04`` - - "" - * - ``platform_dist_id`` - - ``platform.dist()[2]`` - - ``trusty`` - - "" * - ``python_version`` - ``platform.python_version()[:3]`` - ``3.4``, ``2.7`` - - "0" * - ``python_full_version`` - see definition below - ``3.4.0``, ``3.5.0b1`` - - "0" - * - ``implementation_version`` - - see definition below - - ``3.4.0``, ``3.5.0b1`` - - "0" - -The ``python_full_version`` and ``implementation_version`` marker variables -are derived from ``sys.version_info`` and ``sys.implementation.version`` -respectively, in accordance with the following algorithm:: - - def format_full_version(info): - version = '{0.major}.{0.minor}.{0.micro}'.format(info) - kind = info.releaselevel - if kind != 'final': - version += kind[0] + str(info.serial) - return version - - python_full_version = format_full_version(sys.version_info) - implementation_version = format_full_version(sys.implementation.version) - -``python_full_version`` will typically correspond to ``sys.version.split()[0]``. - -If a particular version number value is not available (such as -``sys.implementation.version`` in versions of Python prior to 3.3) the -corresponding marker variable returned by setuptools will be set to ``0`` + * - ``extra`` + - A SyntaxError except when defined by the context interpreting the + specification. + - ``test`` Backwards Compatibility ======================= From robertc at robertcollins.net Sun Nov 8 16:48:05 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 10:48:05 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Bah, just remembered to trim the marker operators too. Doing so now, I won't include a diff for that. 
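As an aside, to make the trimmed-down marker table above a bit more concrete: every variable that survives in this draft can be populated from the standard library. A rough, non-normative sketch (this is not the pkg_resources/markerlib code) might be:

    import os
    import platform
    import sys

    def default_marker_environment():
        # Marker variable -> value on the running interpreter, per the table above.
        return {
            "os_name": os.name,                              # e.g. 'posix'
            "sys_platform": sys.platform,                    # e.g. 'linux', 'darwin'
            "platform_machine": platform.machine(),          # e.g. 'x86_64'
            "platform_version": platform.version(),
            "python_implementation": platform.python_implementation(),  # e.g. 'CPython'
            "python_version": platform.python_version()[:3],   # e.g. '3.4'
            "python_full_version": platform.python_version(),  # approximation, e.g. '3.4.0'
            # "extra" is deliberately absent: it only exists when a containing
            # context such as a wheel METADATA file defines it.
        }

A marker like python_version < "2.7.10" would then be evaluated against this mapping, and any variable missing from it raises SyntaxError as the draft requires.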
-Rob From donald at stufft.io Sun Nov 8 20:13:41 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 8 Nov 2015 20:13:41 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On November 5, 2015 at 4:08:56 PM, Donald Stufft (donald at stufft.io) wrote: > > Another possible option is to modify pip so that instead of installing into > site-packages we instead create an "executable zip file" which is a simple zip > file that has all of pip inside of it and a top level __main__.py. Python can > directly execute this as if it were installed. We would no longer have any > command except for ``pip`` (which is really this executable zip file). This > script would default to installing into ``python``, and you can direct it to > install into a different python using a (new) flag of ``-p`` or ``--python`` > where you can specify either a full path or a command that we'll search $PATH > for. > I?ve implemented a proof of concept of this which can be found at?https://github.com/pypa/pip/pull/3234. It?s quick and dirty so probably will break down in edge cases. I also know it currently doesn?t generate a ``pip.exe`` on Windows, so if someone runs it on Windows they?ll need to execute it as ``python path/to/Scripts/pip``. I?ve not tested trying to install it via wheel or even installing it via pip, so you?ll want to check out the branch or download a tarball and run ``python setup.py install`` (or you can run ``python setup.py build`` and just execute ``build/script-X.Y/pip``). You might see errors if you don?t build it in a clean virtual environment. That?s just because I haven?t bothered to try and isolate the build by default yet. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From robertc at robertcollins.net Sun Nov 8 20:28:26 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 14:28:26 +1300 Subject: [Distutils] build system abstraction PEP In-Reply-To: References: Message-ID: Now that the dependent spec is up for review, I've refreshed this build system abstraction one. Key changes: - pep-426 -> wheel METADATA files as the distribution description interface - dropped yaml for JSON as its in the stdlib I've kept the indirection via a build system rather than adopting setup.py as-is: I judge the risk for user confusion to be large enough that its worth doing that - even though we recommend a setuptools shim, forcing it on everyone is unpleasant. commit 6de544a6e6aa2d0505e2ac8eb364b8b95e27790d Author: Robert Collins Date: Wed Oct 28 19:21:48 2015 +1300 PEP for build system abstraction. diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst new file mode 100644 index 0000000..b8232c8 --- /dev/null +++ b/build-system-abstraction.rst @@ -0,0 +1,419 @@ +:PEP: XX +:Title: Build system abstraction for pip/conda etc +:Version: $Revision$ +:Last-Modified: $Date$ +:Author: Robert Collins , + Nathaniel Smith +:BDFL-Delegate: Donald Stufft +:Discussions-To: distutils-sig +:Status: Draft +:Type: Standards Track +:Content-Type: text/x-rst +:Created: 26-Oct-2015 +:Post-History: XX +:Requires: The new dependency-specification PEP. + + +Abstract +======== + +This PEP specifies a programmatic interface for pip [#pip]_ and other +distribution or installation tools to use when working with Python +source trees (both the developer tree - e.g. the git tree - and source +distributions). 
+ +The programmatic interface allows decoupling of pip from its current +hard dependency on setuptools [#setuptools]_ able for two +key reasons: + +1. It enables new build systems that may be much easier to use without + requiring them to even appear to be setuptools. + +2. It facilitates setuptools itself changing its user interface without + breaking pip, giving looser coupling. + +The programmatic interface also enables pip to install build time requirements +for packages which is an important step in getting pip to full feature parity +with the installation components of easy-install. + +As PEP-426 [#pep426]_ is draft, we cannot utilise the metadata format it +defined. However PEP-427 wheels are in wide use and fairly well specified, so +we have adopted the METADATA format from that for specifying distribution +dependencies. However something was needed for communicating bootstrap +requirements and build requirements - but a thin JSON schema is sufficient. + +Motivation +========== + +There is significant pent-up frustration in the Python packaging ecosystem +around the current lock-in between build system and pip. Breaking that lock-in +is better for pip, for setuptools, and for other build systems like flit +[#flit]_. + +Specification +============= + +Overview +-------- + +Build tools will be located by reading a file ``pypa.json`` from the root +directory of the source tree. That file describes how to get the build tool. +The build tool is then self-describing. This avoids having stale descriptions +of build tools encoded into sdists that have been uploaded to PyPI [#pypi]_. + +The interface involves executing processes rather than loading Python modules, +because tools like pip will often interact with many different versions of the +same build tool during a single invocation, and in-process APIs are much +harder to manage in that situation. + +Process interface +----------------- + +Where a process needs to be run, we need to be able to assemble the subprocess +call to make. For this we use a simple command string with in-Python variable +interpolation. Only variables defined in this PEP will be honoured: the use of +additional variables is an error. Additional variables can only be added with +a schema version increase. Basic variable expressions such as ``${PYTHON}``, +``${PYTHON:-python}``, ``${PYTHON:+PYTHON -m coverage}`` can be used. An +implementation of this is in shellvars [#shellvars]_. + +Processes will be run with the current working directory set to the root of +the source tree. + +Available variables +------------------- + +PYTHON + The Python interpreter in use. This is important to enable calling things + which are just Python entry points:: + + ${PYTHON} -m foo + +OUTPUT_DIR + Where to create requested output. If not set, outputting to the current + working directory is appropriate. This is required by ``pip`` which needs + to have wheels output in a known location. + +pypa.json +--------- + +The yaml file has the following schema. Extra keys are ignored. + +schema + The version of the schema. This PEP defines version 1. + Defaults to 1 + +bootstrap-requires + A list of dependency specifications that must be installed before + running the build tool. For instance, if using flit, then the requirements + might be:: + + bootstrap-requires: + - flit + +build-tool + A command to run to query the build tool for its complete interface. + The build tool should output on stdout a build tool description JSON + document. 
Stdin may not be read from, and stderr may be handled however + the calling process desires. + +build tool description +---------------------- + +The build tool description schema. Extra keys are ignored. + +schema + The version of the schema. This PEP defines version 1. + Defaults to 1 + +build-requires + Command to run to query build requirements. Build requirements are + returned as a JSON document with one key ``build_requires`` consisting of + a list of dependency specifications. Additional keys must be ignored. + +metadata + Command to run to generate metadata for the project. The metadata should + be output on stdout, stdin may not be consumed, and stderr handling is up + to the discretion of the calling process. The build-requires for the + project will be present in the Python environment when the metadata + command is run. pip would run metadata just once to determine what other + packages need to be downloaded and installed. The metadata is output as a + wheel METADATA file per PEP-427 [#pep427]_. + + Note that the metadata generated by the metadata command, and the metadata + present in a generated wheel must be identical. + +wheel + Command to run to build a wheel of the project. OUTPUT_DIR will be set and + point to an existing directory where the wheel should be output. Stdin + may not be consumed, stdout and stderr handling is at the discretion of + the calling process. The build-requires for the project will be present in + the Python environment when the wheel command is run. Only one file + should be output - if more are output then pip would pick an arbitrary one + to consume. + +develop + Command to do an in-place 'development' installation of the project. + Stdin may not be consumed, stdout and stderr handling is at the discretion + of the calling process. The build-requires for the project will be + present in the Python environment when the develop command is run. + + Not all build systems will be able to perform develop installs. If a build + system cannot do develop installs, then this key can be omitted. + + Note that when it is omitted, commands like ``pip install -e foo`` + will be unable to complete. + +provided-by + Optional distribution name that provides this build system. + This is used to facilitate caching the build tool description. If absent + then the build tool description cannot be cached, which will incure an + extra subprocess per package being built (to query the build tool). + Specifically, where the resolved bootstrap-requires results in the same + version of the named distribution being installed, the build tool + description is presumed to be identical. + +Python environments and hermetic builds +--------------------------------------- + +This specification does not prescribe whether builds should be hermetic or not. +Existing build tools like setuptools will use installed versions of build time +requirements (e.g. setuptools_scm) and only install other versions on version +conflicts or missing dependencies. However its likely that better consistency +can be created by always isolation builds and using only the specified dependencies. + +However there are nuanced problems there - such as how can users force the +avoidance of a bad version of a build requirement which meets some packages +dependencies. Future PEPs may tackle this problem, but it is not currently in +scope - it does not affect the metadata required to coordinate between build +systems and things that need to do builds. 
+ +Upgrades +-------- + +Both 'pypa.json' and the build tool description are versioned to permit future +incompatible changes. There is a sequence dependency here. + +Upgrades to the schemas defined in this specification must proceed with the +consumers first. To ensure that consumers should refuse to operate when +'pypa.json' has a schema version that they do not recognise. + +Build tools listed in a 'pypa.json' with schema version 1 must not generate a +build tool description with a version other than 1 (or absent, as the default +is 1). + +Thus the sequence for upgrading either of schemas in a new PEP will be: + +1. Issue new PEP defining build tool description and 'pypa.json' schemas. The + 'pypa.yaml' schema version must change if either the 'pypa.yaml' schema has + changed or the build tool description schema has changed. The build tool + description schema version must change if the build tool description schema + has changed. +2. Consumers (e.g. pip) implement support for the new schema version. +3. Build tool authors implement the new schemas, and publish updated reference + 'pypa.json' files for their users. 4. Package authors opt into the new + schema when they are happy to introduce a dependency on the version of + 'pip' (and potentially other consumers) that introduced support for the new + schema version. + +The *same* process will take place for the initial deployment of this PEP:- +the propogation of the capability to use this PEP without a `setuptools shim`_ +will be largely gated by the adoption rate of the first version of pip that +supports it. + +Static metadata in sdists +------------------------- + +This PEP does not tackle the current inability to trust static metadata in +sdists. That is a separate problem to identifying and consuming the build +system that is in use in a source tree, whether it came from an sdist or not. + +Handling of compiler options +---------------------------- + +Handling of different compiler options is out of scope for this specification. + +pip currently handles compiler options by appending user supplied strings to +the command line it runs when running setuptools. This approach is sufficient +to work with the build system interface defined in this PEP, with the +exception that globally specified options will stop working globally as +different build systems evolve. That problem can be solved in pip (or conda or +other installers). + +In the long term, wheels should be able to express the difference between +wheels built with one compiler or options vs another. + +Examples +======== + +An example 'pypa.json' for using flit:: + + bootstrap-requires: + - flit + build-tool: flit --dump-build-description + +When 'pip' reads this it would prepare an environment with flit in it and +run `flit --dump-build-description` which would output something like:: + + build-requires: flit --dump-build-requires + metadata: flit --dump-metadata + provided-by: flit + wheel: flit wheel -d $OUTPUT_DIR + +The `--dump-` switches in this example would be needed to be added to +flit (or someone else could write an adapter). Because flit doesn't have +setup-requires support today, `--dump-build-requires` would just output a +constant string:: + + {"build_requires": []} + +`--dump-metadata` would interrogate `flit.ini` and marshal the metadata into +a wheel METADATA file and output that on stdout. + +flit wheel would need a `-d` parameter that tells it where to output the +wheel (pip needs this). 
+ +Backwards Compatibility +======================= + +Older pips will remain unable to handle alternative build systems. +This is no worse than the status quo - and individual build system +projects can decide whether to include a shim ``setup.py`` or not. + +All existing build systems that can product wheels and do develop installs +should be able to run under this abstraction and will only need a specific +adapter for them constructed and published on PyPI. + +In the absence of a ``pypa.json`` file, tools like pip should assume a +setuptools build system and use setuptools commands directly. + + +Network effects +--------------- + +Projects that adopt build systems that are not setuptools compatible - that +is that they have no setup.py, or the setup.py doesn't accept commands that +existing tools try to use - will not be installable by those existing tools. + +Where those projects are used by other projects, this effect will cascade. + +In particular, because pip does not handle setup-requires today, any project +(A) that adopts a setuptools-incompatible build system and is consumed as a +setup-requirement by a second project (B) which has not itself transitioned to +having a pypa.json will make B uninstallable by any version of pip. This is +because setup.py in B will trigger easy-install when 'setup.py egg_info' is +run by pip, and that will try and fail to install A. + +As such we recommend that tools which are currently used as setup-requires +either ensure that they keep a `setuptools shim`_ or find their consumers and +get them all to upgrade to the use of a `pypa.json` in advance of moving +themselves. Pragmatically that is impossible, so the advice is to keep a +setuptools shim indefinitely - both for projects like pbr, setuptools_scm and +also projects like numpy. + +setuptools shim +--------------- + +It would be possible to write a generic setuptools shim that looks like +``setup.py`` and under the hood uses ``pypa.json`` to drive the builds. This +is not needed for pip to use the system, but would allow package authors to +use the new features while still retaining compatibility with older pip +versions. + +Rationale +========= + +This PEP started with a long mailing list thread on distutils-sig [#thread]_. +Subsequent to that a online meeting was held to debug all the positions folk +had. Minutes from that were posted to the list [#minutes]_. + +This specification is a translation of the consensus reached there into PEP +form, along with some arbitrary choices on the minor remaining questions. + +The basic heuristic for the design has to been to focus on introducing an +abstraction without requiring development not strictly tied to the +abstraction. Where the gap is small to improvements, or the cost of using the +existing interface is very high, then we've taken on having the improvement as +a dependency, but otherwise defered such to future iterations. + +We chose wheel METADATA files rather than defining a new specification, +because pip can already handle wheel .dist-info directories which encode all +the necessary data in a METADATA file. PEP-426 can't be used as it's still +draft, and defining a new metadata format, while we should do that, is a +separate problem. Using a directory on disk would not add any value to the +interface (pip has to do that today due to limitations in the setuptools +CLI). 
+ +The use of 'develop' as a command is because there is no PEP specifying the +interoperability of things that do what 'setuptools develop' does - so we'll +need to define that before pip can take on the responsibility for doing the +'develop' step. Once thats done we can issue a successor PEP to this one. + +The use of a command line API rather than a Python API is a little +contentious. Fundamentally anything can be made to work, and Robert wants to +pick something thats sufficiently lowest common denominator that +implementation is straight forward on all sides. Picking a CLI for that makes +sense because all build systems will need a CLI for end users to use anyway. + +The choice of JSON as a file format is a compromise between several +constraints. Firstly there is no stdlib YAML interpreter, nor one for any of +the other low-friction structured file formats. Secondly, INIParser is a poor +format for a number of reasons, primarily that it has very minimal structure - +but pip's maintainers are not fond of it. JSON is in the stdlib, has +sufficient structure to permit embedding anything we want in future without +requiring embedded DSL's. + +Donald suggested using ``setup.cfg`` and the existing setuptools command line +rather than inventing something new. While that would permit interoperability +with less visible changes, it requires nearly as much engineering on the pip +side - looking for the new key in setup.cfg, implementing the non-installed +environments to run the build in. And the desire from other build system +authors not to confuse their users by delivering something that looks like but +behaves quite differently to setuptools seems like a bigger issue than pip +learning how to invoke a custom build tool. + +References +========== + +.. [#pip] pip, the recommended installer for Python packages + (http://pip.readthedocs.org/en/stable/) + +.. [#setuptools] setuptools, the defacto Python package build system + (https://pythonhosted.org/setuptools/) + +.. [#flit] flit, a simple way to put packages in PyPI + (http://flit.readthedocs.org/en/latest/) + +.. [#pypi] PyPI, the Python Package Index + (https://pypi.python.org/) + +.. [#shellvars] Shellvars, an implementation of shell variable rules for Python. + (https://github.com/testing-cabal/shellvars) + +.. [#pep426] PEP-426, Python distribution metadata. + (https://www.python.org/dev/peps/pep-0426/) + +.. [#pep427] PEP-427, Python distribution metadata. + (https://www.python.org/dev/peps/pep-0427/) + +.. [#thread] The kick-off thread. + (https://mail.python.org/pipermail/distutils-sig/2015-October/026925.html) + +.. [#minutes] The minutes. + (https://mail.python.org/pipermail/distutils-sig/2015-October/027214.html) + +Copyright +========= + +This document has been placed in the public domain. + + + +.. + Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Sun Nov 8 21:13:49 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 8 Nov 2015 21:13:49 -0500 Subject: [Distutils] Missing IPv6 support on pypi.python.org In-Reply-To: <20151108200441.GB28216@lud.polynome.dn42> References: <20151108200441.GB28216@lud.polynome.dn42> Message-ID: I?m pretty sure that PyPI will get IPv6 support as soon as Fastly supports it and not any sooner. I know they?re working on making it happen but I don?t think they have a public timeline for it yet. 
On November 8, 2015 at 4:34:32 PM, Baptiste Jonglez (baptiste at bitsofnetworks.org) wrote: > Hi, > > pypi.python.org is currently not reachable over IPv6. > > I know this issue was brought up before [1,2]. This is a real issue for > us, because our backend servers are IPv6-only (clients never need to talk > to backend servers, they go through IPv4-enabled HTTP frontends). > > So, deploying packages from pypi on the IPv6-only servers is currently a > pain. What is the roadmap to add IPv6 support? It seems that Fastly has > already deployed IPv6 [3]. > > Thanks, > Baptiste > > > [1] https://mail.python.org/pipermail/distutils-sig/2014-June/024465.html > [2] https://bitbucket.org/pypa/pypi/issues/90/missing-ipv6-connectivity > [3] http://bgp.he.net/AS54113#_prefixes6 > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From njs at pobox.com Sun Nov 8 23:55:22 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 8 Nov 2015 20:55:22 -0800 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: The new version is looking pretty good to me! My main concern still is that specification of whitespace handling is still kinda confusing/underspecified. The text says "all whitespace is optional", but the grammar says that it's mandatory in some cases (e.g. url-marker, still not sure why -- I'd understand if it were mandatory before the ";" since ";" is a valid character in URLs, but it says it's mandatory afterward?), and the grammar is still wrong about whitespace in some cases (e.g. it says ">= 1.0" is an illegal versionspec). I guess the two options are either to go through carefully sprinkling *WSP's about at all the appropriate places, or else to tackle things more systematically by adding a lexer layer... Also, unrelated: do you want to import the text for PEP 426 about the requirements for hashes in URLs? -n On Sun, Nov 8, 2015 at 1:48 PM, Robert Collins wrote: > Bah, just remembered to trim the marker operators too. Doing so now, I > won't include a diff for that. > > -Rob -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Mon Nov 9 00:20:10 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 8 Nov 2015 21:20:10 -0800 Subject: [Distutils] Update to my skeletal PEP for a new build system interface Message-ID: Hi all, Here's a quick update to my draft PEP for a new build system interface, last seen here: https://mail.python.org/pipermail/distutils-sig/2015-October/027360.html There isn't terribly much here, and Robert and I should really figure out how to reconcile what we have, but since I was rearranging some stuff anyway and prepping possible new sections, I figured I'd at least post this. I addressed all the previous comments so hopefully it is boring and non-controversial :-). Changes: - It wasn't clear that the sdist metadata stuff was really helping, so I took it out for now. So it doesn't get lost, I split it out as a standalone deferred-status PEP to the pypa repository: https://github.com/pypa/interoperability-peps/pull/57 - Rewrote stuff to deal with Paul's comments - Added new terminology: "build frontend" for something like pip, and "build backend" for the project specific hooks that pip calls. Seems helpful. -n ---- PEP: ?? 
Title: A build-system independent format for source trees Version: $Revision$ Last-Modified: $Date$ Author: Nathaniel J. Smith Status: Draft Type: Standards-Track Content-Type: text/x-rst Created: 30-Sep-2015 Post-History: 1 Oct 2015, 25 Oct 2015 Discussions-To: Abstract ======== While ``distutils`` / ``setuptools`` have taken us a long way, they suffer from three serious problems: (a) they're missing important features like usable build-time dependency declaration, autoconfiguration, and even basic ergonomic niceties like `DRY `_-compliant version number management, and (b) extending them is difficult, so while there do exist various solutions to the above problems, they're often quirky, fragile, and expensive to maintain, and yet (c) it's very difficult to use anything else, because distutils/setuptools provide the standard interface for installing packages expected by both users and installation tools like ``pip``. Previous efforts (e.g. distutils2 or setuptools itself) have attempted to solve problems (a) and/or (b). This proposal aims to solve (c). The goal of this PEP is get distutils-sig out of the business of being a gatekeeper for Python build systems. If you want to use distutils, great; if you want to use something else, then that should be easy to do using standardized methods. The difficulty of interfacing with distutils means that there aren't many such systems right now, but to give a sense of what we're thinking about see `flit `_ or `bento `_. Fortunately, wheels have now solved many of the hard problems here -- e.g. it's no longer necessary that a build system also know about every possible installation configuration -- so pretty much all we really need from a build system is that it have some way to spit out standard-compliant wheels and sdists. We therefore propose a new, relatively minimal interface for installation tools like ``pip`` to interact with package source trees and source distributions. Terminology and goals ===================== A *source tree* is something like a VCS checkout. We need a standard interface for installing from this format, to support usages like ``pip install some-directory/``. A *source distribution* is a static snapshot representing a particular release of some source code, like ``lxml-3.4.4.zip``. Source distributions serve many purposes: they form an archival record of releases, they provide a stupid-simple de facto standard for tools that want to ingest and process large corpora of code, possibly written in many languages (e.g. code search), they act as the input to downstream packaging systems like Debian/Fedora/Conda/..., and so forth. In the Python ecosystem they additionally have a particularly important role to play, because packaging tools like ``pip`` are able to use source distributions to fulfill binary dependencies, e.g. if there is a distribution ``foo.whl`` which declares a dependency on ``bar``, then we need to support the case where ``pip install bar`` or ``pip install foo`` automatically locates the sdist for ``bar``, downloads it, builds it, and installs the resulting package. Source distributions are also known as *sdists* for short. A *build frontend* is a tool that users might run that takes arbitrary source trees or source distributions and builds wheels from them. The actual building is done by each source tree's *build backend*. In a command like ``pip wheel some-directory/``, pip is acting as a build frontend. An *integration frontend* is a tool that users might run that takes a set of package requirements (e.g. 
a requirements.txt file) and attempts to update a working environment to satisfy those requirements. This may require locating, building, and installing a combination of wheels and sdists. In a command like ``pip install lxml==2.4.0``, pip is acting as an integration frontend. Source trees ============ We retroactively declare the legacy source tree format involving ``setup.py`` to be "version 0". We don't try to specify it further; its de facto specification is encoded in the source code and documentation of ``distutils``, ``setuptools``, ``pip``, and other tools. A "version 1" (or greater) source tree is any directory which contains a file named ``pypackage.cfg``, which will -- in some manner whose details are TBD -- describe the package's build dependencies and how to invoke the project-specific build backend. This mechanism: - Will allow for both static and dynamic specification of build dependencies - Will have some degree of isolation of different builds from each other, so that it will be possible for a single run of pip to install one package that build-depends on ``foo == 1.1`` and another package that build-depends on ``foo == 1.2``. - Will leave the actual installation of the package in the hands of a specialized installation tool like pip (i.e. individual package build systems will not need to know about things like --user versus --global or make decisions about when and how to modify .pth files) [TBD: the exact set of operations to be supported and their detailed semantics] [TBD: should builds be performed in a fully isolated environment, or should they get access to packages that are already installed in the target install environment? The former simplifies a number of things, but Robert was skeptical it would be possible.] [TBD: the form of the communication channel between an installation tool like ``pip`` and the build system, over which these operations are requested] [TBD: the syntactic details of the configuration file format itself. We can change the name too if we want, I just think it's useful to have a single name to refer to it for now, and this is the last and least interesting thing to figure out.] Source distributions ==================== For now, we continue with the legacy sdist format which is mostly undefined, but basically comes down to: a file named {NAME}-{PACKAGE}.{EXT}, which unpacks into a buildable source tree. Traditionally these have always contained "version 0" source trees; we now allow them to also contain version 1+ source trees. Integration frontends require that an sdist named {NAME}-{PACKAGE}.{EXT} will generate a wheel named {NAME}-{PACKAGE}-{COMPAT-INFO}.whl. [TBD: whether we want to adopt a new sdist format along with this -- my read of the room is that it's sounding like people are leaning towards deferring that for a separate round of standardization, but we'll see what we think once some of the important details above have been hammered out] Evolutionary notes ================== A goal here is to make it as simple as possible to convert old-style sdists to new-style sdists. (E.g., this is one motivation for supporting dynamic build requirements.) The ideal would be that there would be a single static pypackage.cfg that could be dropped into any "version 0" VCS checkout to convert it to the new shiny. This is probably not 100% possible, but we can get close, and it's important to keep track of how close we are... hence this section. 
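Purely as an illustrative aside (even the ``pypackage.cfg`` name is marked as changeable in the TBD notes above), the frontend-side classification this PEP asks for could be as small as::

    import os

    def source_tree_version(tree):
        # Sketch only: classify a source tree per the draft's definitions.
        if os.path.isfile(os.path.join(tree, "pypackage.cfg")):
            return 1  # "version 1" tree: defer to its declared build backend
        if os.path.isfile(os.path.join(tree, "setup.py")):
            return 0  # legacy "version 0" tree: fall back to distutils/setuptools
        raise ValueError("%r does not look like a Python source tree" % (tree,))

The conversion effort sketched in this section is essentially about moving projects from the second branch to the first.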
A rough plan would be: Create a build system package (``setuptools_pypackage`` or whatever) that knows how to speak whatever hook language we come up with, and convert them into calls to ``setup.py``. This will probably require some sort of hooking or monkeypatching to setuptools to provide a way to extract the ``setup_requires=`` argument when needed, and to provide a new version of the sdist command that generates the new-style format. This all seems doable and sufficient for a large proportion of packages (though obviously we'll want to prototype such a system before we finalize anything here). (Alternatively, these changes could be made to setuptools itself rather than going into a separate package.) But there remain two obstacles that mean we probably won't be able to automatically upgrade packages to the new format: 1) There currently exist packages which insist on particular packages being available in their environment before setup.py is executed. This means that if we decide to execute build scripts in an isolated virtualenv-like environment, then projects will need to check whether they do this, and if so then when upgrading to the new system they will have to start explicitly declaring these dependencies (either via ``setup_requires=`` or via static declaration in ``pypackage.cfg``). 2) There currently exist packages which do not declare consistent metadata (e.g. ``egg_info`` and ``bdist_wheel`` might get different ``install_requires=``). When upgrading to the new system, projects will have to evaluate whether this applies to them, and if so they will need to stop doing that. Copyright ========= This document has been placed in the public domain. -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Mon Nov 9 00:20:56 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 8 Nov 2015 21:20:56 -0800 Subject: [Distutils] Proposed language for how build environments work in the new build system interface Message-ID: Hi all, Following the strategy of trying to break out the different controversial parts of the new build system interface, here's some proposed text defining the environment that a build frontend like pip provides to a project-specific build backend. Robert's PEP currently disclaims all of this as out-of-scope, but I think it's good to get something down, since in practice we'll have to figure something out before any implementations can exist. And I think the text below pretty much hits the right points. What might be controversial about this nonetheless is that I'm not sure that pip *can* reasonably implement all the requirements as written without adding a dependency on virtualenv (at least for older pythons -- obviously this is no big deal for new pythons since venv is now part of the stdlib). I think the requirements are correct, so... Donald, what do you think? -n ---- The build environment --------------------- One of the responsibilities of a build frontend is to set up the environment in which the build backend will run. We do not require that any particular "virtual environment" mechanism be used; a build frontend might use virtualenv, or venv, or no special mechanism at all. But whatever mechanism is used MUST meet the following criteria: - All requirements specified by the project's build-requirements must be available for import from Python. - This must remain true even for new Python subprocesses spawned by the build environment, e.g. 
code like:: import sys, subprocess subprocess.check_call([sys.executable, ...]) must spawn a Python process which has access to all the project's build-requirements. This is necessary e.g. for build backends that want to run legacy ``setup.py`` scripts in a subprocess. [TBD: the exact wording here will probably need some tweaking depending on whether we end up using an entrypoint-like mechanism for specifying build backend hooks (in which case we can assume that hooks automatically have access to sys.executable), or a subprocess-based mechanism (in which case we'll need some other way to communicate the path to the python interpreter to the build backend, e.g. a PYTHON= envvar). But the basic requirement is pretty much the same either way.] - All command-line scripts provided by the build-required packages must be present in the build environment's PATH. For example, if a project declares a build-requirement on `flit `_, then the following must work as a mechanism for running the flit command-line tool:: import subprocess subprocess.check_call(["flit", ...]) A build backend MUST be prepared to function in any environment which meets the above criteria. In particular, it MUST NOT assume that it has access to any packages except those that are present in the stdlib, or that are explicitly declared as build-requirements. Recommendations for build frontends (non-normative) ................................................... A build frontend MAY use any mechanism for setting up a build environment that meets the above criteria. For example, simply installing all build-requirements into the global environment would be sufficient to build any compliant package -- but this would be sub-optimal for a number of reasons. This section contains non-normative advice to frontend implementors. A build frontend SHOULD, by default, create an isolated environment for each build, containing only the standard library and any explicitly requested build-dependencies. This has two benefits: - It allows for a single installation run to build multiple packages that have contradictory build-requirements. E.g. if package1 build-requires pbr==1.8.1, and package2 build-requires pbr==1.7.2, then these cannot both be installed simultaneously into the global environment -- which is a problem when the user requests ``pip install package1 package2``. Or if the user already has pbr==1.8.1 installed in their global environment, and a package build-requires pbr==1.7.2, then downgrading the user's version would be rather rude. - It acts as a kind of public health measure to maximize the number of packages that actually do declare accurate build-dependencies. We can write all the strongly worded admonitions to package authors we want, but if build frontends don't enforce isolation by default, then we'll inevitably end up with lots of packages on PyPI that build fine on the original author's machine and nowhere else, which is a headache that no-one needs. However, there will also be situations where build-requirements are problematic in various ways. 
For example, a package author might accidentally leave off some crucial requirement despite our best efforts; or, a package might declare a build-requirement on `foo >= 1.0` which worked great when 1.0 was the latest version, but now 1.1 is out and it has a showstopper bug; or, the user might decide to build a package against numpy==1.7 -- overriding the package's preferred numpy==1.8 -- to guarantee that the resulting build will be compatible at the C ABI level with an older version of numpy (even if this means the resulting build is unsupported upstream). Therefore, build frontends SHOULD provide some mechanism for users to override the above defaults. For example, a build frontend could have a ``--build-with-system-site-packages`` option that causes the ``--system-site-packages`` option to be passed to virtualenv-or-equivalent when creating build environments, or a ``--build-requirements-override=my-requirements.txt`` option that overrides the project's normal build-requirements. The general principle here is that we want to enforce hygiene on package *authors*, while still allowing *end-users* to open up the hood and apply duct tape when necessary. -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Mon Nov 9 00:45:01 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 18:45:01 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 9 November 2015 at 17:55, Nathaniel Smith wrote: > The new version is looking pretty good to me! > > My main concern still is that specification of whitespace handling is > still kinda confusing/underspecified. The text says "all whitespace is > optional", but the grammar says that it's mandatory in some cases > (e.g. url-marker, still not sure why -- I'd understand if it were > mandatory before the ";" since ";" is a valid character in URLs, but > it says it's mandatory afterward?), and the grammar is still wrong > about whitespace in some cases (e.g. it says ">= 1.0" is an illegal > versionspec). > > I guess the two options are either to go through carefully sprinkling > *WSP's about at all the appropriate places, or else to tackle things > more systematically by adding a lexer layer... I'm happy either way. You are right though that there is one spot where it is not optional. Thats how "url; marker stuff here" is defined in pip today. We could in principle define a new rule here, such as putting markers before the url. But as markers aren't self delimiting (blame PEP-345) that is a bit fugly. We could say 'url 1*WSP ";" *WSP marker', which would be a bit more consistent, but different to pip's current handling. Of course, the @ syntax is already different, so it seems reasonable to do so to me. > Also, unrelated: do you want to import the text for PEP 426 about the > requirements for hashes in URLs? No, thats a PEP-440 concern [whether it should be or not] and already documented there. If we were revising that requirement, sure, but we're not. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Mon Nov 9 01:06:17 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 9 Nov 2015 19:06:17 +1300 Subject: [Distutils] Update to my skeletal PEP for a new build system interface In-Reply-To: References: Message-ID: So I think the big thing to reconcile is the thing where I say out of scope - using static metadata in distributions. 
The dynamic/non-dynamic split may or may not be needed; but examining source trees and sdists as different only makes sense to me if you're gaining something - and pregenerated declarative dependencies might be one of those things. OTOH since we haven't solved the numpy ABI problem yet... perhaps thats premature. btw - note that this: ---- 2) There currently exist packages which do not declare consistent metadata (e.g. ``egg_info`` and ``bdist_wheel`` might get different ``install_requires=``). When upgrading to the new system, projects will have to evaluate whether this applies to them, and if so they will need to stop doing that. ---- is actually a feature today. egg_info gets you the follow-these-rules-locally with older pip versions like 1.5.4, and bdist_wheel gets you a marker-ready set of dependencies. -Rob From wolfgang.maier at biologie.uni-freiburg.de Mon Nov 9 05:44:47 2015 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Mon, 9 Nov 2015 11:44:47 +0100 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: <5640791F.6020204@biologie.uni-freiburg.de> On 09.11.2015 02:13, Donald Stufft wrote: > On November 5, 2015 at 4:08:56 PM, Donald Stufft (donald at stufft.io) wrote: >> >> Another possible option is to modify pip so that instead of installing into >> site-packages we instead create an "executable zip file" which is a simple zip >> file that has all of pip inside of it and a top level __main__.py. Python can >> directly execute this as if it were installed. We would no longer have any >> command except for ``pip`` (which is really this executable zip file). This >> script would default to installing into ``python``, and you can direct it to >> install into a different python using a (new) flag of ``-p`` or ``--python`` >> where you can specify either a full path or a command that we'll search $PATH >> for. >> > > I?ve implemented a proof of concept of this which can be found at https://github.com/pypa/pip/pull/3234. It?s quick and dirty so probably will break down in edge cases. I also know it currently doesn?t generate a ``pip.exe`` on Windows, so if someone runs it on Windows they?ll need to execute it as ``python path/to/Scripts/pip``. > > I?ve not tested trying to install it via wheel or even installing it via pip, so you?ll want to check out the branch or download a tarball and run ``python setup.py install`` (or you can run ``python setup.py build`` and just execute ``build/script-X.Y/pip``). > > You might see errors if you don?t build it in a clean virtual environment. That?s just because I haven?t bothered to try and isolate the build by default yet. > From your comments on the PR: > - We could possibly restore python -m pip and import pip with a > sufficiently magical .pth file installed into site-packages. Something like this should be done. I like the idea to have just one pip installed, but I really wouldn't like python -m pip to disappear. Something I miss in all the discussions taking place here is the fact that python -m pip is the officially documented way of invoking pip at https://docs.python.org/3/installing/index.html#basic-usage and it is not particularly helpful if that recommendation keeps changing back and forth. I know some people don't like the wordy invocation, but other people (including me) use and teach it because it works reliably. Just because a pip executable based invocation pattern looks better, I don't think it justifies the change. 
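For readers trying to picture the "executable zip file" being discussed: the idea is just a zip archive with a top-level ``__main__.py`` that the interpreter can execute directly. The following is a rough stdlib-only illustration, not the actual proof-of-concept from the pip PR; the package and file names are made up, and Python 3.5's ``zipapp`` module automates essentially the same steps::

    import os
    import zipfile

    def build_zip_cli(package_dir, target="pipz"):
        # Bundle a package plus a top-level __main__.py into a zip that the
        # interpreter can run directly, e.g. ``python pipz install ...``.
        with zipfile.ZipFile(target, "w") as zf:
            zf.writestr("__main__.py", "from mypkg.cli import main\nmain()\n")
            for root, _dirs, files in os.walk(package_dir):
                for name in files:
                    path = os.path.join(root, name)
                    arcname = os.path.relpath(path, os.path.dirname(package_dir))
                    zf.write(path, arcname)
        return target

The ``.pth`` idea quoted above would then be a one-line file in each interpreter's site-packages containing the path to that zip. Since zip archives on sys.path are importable, ``import pip`` and ``python -m pip`` keep working, assuming the archive contains a real ``pip`` package with a ``pip/__main__.py``.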
> - We could add some short hands inspired by py.exe with things like > -2, -3, 3.4, etc that will translate to -ppython2, -ppython3, > -ppython3.4, etc. Sure, but this extra-effort will be needed just to re-enable something that is already possible now via py -m pip. Your .pth file idea makes me wonder whether an alternative solution could be to share one regular pip installation between Python versions that way. Basically, ensurepip and get-pip.py could default to installing pip (possibly as an executable zip file) into a separate folder and add a .pth file to the sys.executable's site-packages. Later invocations of ensurepip/get-pip.py through a different Python could then detect presence of pip in its folder and simply add the .pth file to the current site-packages. This would solve the issue of having to manage multiple installations of pip, but preserve all current usage patterns (invocation as executable, python -m pip, import pip). > - If we drop say, Python 2.6 and someone wants to install an older > version, we might have to make it possible to override the name or > something (python setup.py install --script-suffix 2.6?) to enable > that particular special case to still work. This would only be > needed really when altinstall'ing multiple Pythons into the same > bin dir when one or more of those Pythons are not supported by the > "main" version of pip you're using. Yes, but such cases will occur more often as new Python versions are released and you drop support for old ones. Effectively, any developer who wants to support older versions of Python than the latest pip supports will face the problem of explaining users what to do. So something really convincing needs to be worked out here. From oscar.j.benjamin at gmail.com Mon Nov 9 06:16:56 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Mon, 9 Nov 2015 11:16:56 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: <5640791F.6020204@biologie.uni-freiburg.de> References: <5640791F.6020204@biologie.uni-freiburg.de> Message-ID: On 9 November 2015 at 10:44, Wolfgang Maier wrote: > > Something I miss in all the discussions taking place here is the fact that > python -m pip is the officially documented way of invoking pip at > https://docs.python.org/3/installing/index.html#basic-usage and it is not > particularly helpful if that recommendation keeps changing back and forth. > > I know some people don't like the wordy invocation, but other people > (including me) use and teach it because it works reliably. Just because a > pip executable based invocation pattern looks better, I don't think it > justifies the change. I also teach this invocation. Somehow you have to select the Python version you're interested in and I really don't see why $ pip -p ./foo/bar/python ... is better than $ ./foo/bar/python -m pip ... I already need to explain to students how to ensure that their Python executable is on PATH. Needing pip to be on PATH as well is just another possible source of confusion (even if there's only one Python installation). -- Oscar From ubernostrum at gmail.com Mon Nov 9 06:27:26 2015 From: ubernostrum at gmail.com (James Bennett) Date: Mon, 9 Nov 2015 05:27:26 -0600 Subject: [Distutils] The future of invoking pip In-Reply-To: <85h9kwi3ev.fsf@benfinney.id.au> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On Sunday, November 8, 2015, Ben Finney wrote: > > +1. 
Addressing this by insisting on ?python -m foo? is not a solution. > It's a plaster over a problem that will remain until the underlying > conflict is resolved. > > That's not to say PyPA should ignore the issue, certainly there are > things that can be done to help. But ?python -m foo? is an ugly wart, > and I really want the rhetoric to acknowledge that instead of > considering it a satisfactory end point. > I agree with this, and with the feeling that we're just kicking the failure down the line: if someone doesn't know what Python is being invoked by 'pip', they likely will have the same problem with other tools, too, and ultimately the ability to run Python scripts directly and without having to do hackery with supporting/requiring 'python -m' or similar is too useful and commonly used. So faced with either (essentially) forcing a trend of every command-line tool having to be invoked with 'python -m', or requiring people with complex multi-Python installations to be more careful, I choose the "be more careful" option (i.e., I would strenuously resist changing Django's admin script to "python -m django" if this were proposed to Django today). -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Nov 9 07:01:37 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 9 Nov 2015 12:01:37 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On 9 November 2015 at 11:27, James Bennett wrote: > I agree with this, and with the feeling that we're just kicking the failure > down the line: if someone doesn't know what Python is being invoked by > 'pip', they likely will have the same problem with other tools, too, and > ultimately the ability to run Python scripts directly and without having to > do hackery with supporting/requiring 'python -m' or similar is too useful > and commonly used. So faced with either (essentially) forcing a trend of > every command-line tool having to be invoked with 'python -m', or requiring > people with complex multi-Python installations to be more careful, I choose > the "be more careful" option (i.e., I would strenuously resist changing > Django's admin script to "python -m django" if this were proposed to Django > today). This is pretty much why I said earlier that this isn't really a pip issue. It applies just as much to Django, to pydoc, etc. I'm concerned that what is happening at the moment is that every project implements its own workaround for the issues with wrapper commands and PATH. Either that or most projects simply ignore the issue (after all, 99% of projects aren't installed and used in quite as many of a user's Python installations as pip is). The one thing that *is* special about pip is that it actually *modifies* the Python installation it runs under. So running pip with the "wrong" Python makes persistent changes somewhere you weren't expecting. Whereas running the wrong Django presumably just fires up a website you weren't expecting, which is easily fixed. That makes the issues with wrapper commands and PATH more pressing for pip than for other projects. (But I suspect, for example, that IPython may well encounter similar issues, if I run the "wrong" IPython wrapper it could start up my notebook using the wrong Python interpreter.) 
I'm no closer to having a good suggestion for a solution here, just trying to point out that by thinking about this from a pip-only perspective we might be missing better solutions that apply from a broader perspective. Paul From solipsis at pitrou.net Mon Nov 9 07:13:37 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 9 Nov 2015 13:13:37 +0100 Subject: [Distutils] The future of invoking pip References: <5640791F.6020204@biologie.uni-freiburg.de> Message-ID: <20151109131337.4d95ec82@fsol> On Mon, 9 Nov 2015 11:16:56 +0000 Oscar Benjamin wrote: > On 9 November 2015 at 10:44, Wolfgang Maier > wrote: > > > > Something I miss in all the discussions taking place here is the fact that > > python -m pip is the officially documented way of invoking pip at > > https://docs.python.org/3/installing/index.html#basic-usage and it is not > > particularly helpful if that recommendation keeps changing back and forth. > > > > I know some people don't like the wordy invocation, but other people > > (including me) use and teach it because it works reliably. Just because a > > pip executable based invocation pattern looks better, I don't think it > > justifies the change. > > I also teach this invocation. Somehow you have to select the Python > version you're interested in and I really don't see why "Selecting the Python version you're interested in", in many cases, is done /a priori/ by activating the appropriate environment (whether virtualenv-, pyvenv- or conda-based). When I'm using a conda environment I certainly don't want to type "python -m pip" when "pip" is sufficient. It would be nice if people here could acknowledge the diversity of existing workflows. Regards Antoine. From solipsis at pitrou.net Mon Nov 9 07:15:05 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 9 Nov 2015 13:15:05 +0100 Subject: [Distutils] The future of invoking pip References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: <20151109131505.0b67699d@fsol> On Mon, 9 Nov 2015 12:01:37 +0000 Paul Moore wrote: > > The one thing that *is* special about pip is that it actually > *modifies* the Python installation it runs under. Fortunately, though, if you are running the system pip without having root privileges activated, it will most certainly fail with a permission error. Regards Antoine. From donald at stufft.io Mon Nov 9 07:36:27 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 Nov 2015 07:36:27 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On November 9, 2015 at 7:01:57 AM, Paul Moore (p.f.moore at gmail.com) wrote: > > This is pretty much why I said earlier that this isn't really a pip > issue. It applies just as much to Django, to pydoc, etc. > > I'm concerned that what is happening at the moment is that every > project implements its own workaround for the issues with wrapper > commands and PATH. Either that or most projects simply ignore the > issue (after all, 99% of projects aren't installed and used in quite > as many of a user's Python installations as pip is). I don?t think every project is going to implement it?s own work around because I don?t think the problems even affect most projects. For instance, the ``django-admin`` command isn?t (typically) going to matter which particular Python is invoking it. 
For most people the only time they ever invoke ``django-admin`` is when they are creating a new project which the particular version of Python you?re using to invoke Django doesn?t really matter for that. Beyond that they tend to use ./manage.py which has a shebang at that top that they can use to adjust which version of Python is being used. You also have things like tox, twine, virtualenv, invoke, pypi-cli, supervisor, vanity, awscli, Carbon/Graphite, etc which are written in Python but the fact that they are is basically an implementation detail and what version of Python is executing them simply doesn?t matter at all. I believe that the vast majority of cases where a Python project comes with a CLI, the particular version of Python that is being used to *invoke* that CLI is typically unimportant, or if it is important is only important in edge cases. This makes it obviously the right thing to just use ``mycoolcli`` as the standard command, and maybe have a ``python -m mycoolcli`` for the weird edge cases. Projects like pip (and yes IPython too) care a lot about what version of Python is invoking them (or more specifically what version of Python they are targeting, which is currently controlled by inspecting sys.executable). I believe that these projects are the odd ones out because I?m having a problem even naming many of them at all and of the ones I can think of they are mostly aimed at Python developers vs being something that even people who don?t care about Python at all might be using. The other thing that?s different about pip is just the fact that it?s a project that is basically installed into every single Python environment and will be used to ?target? every single one of those Python environments. I think it?s very unusual for someone to have a Python environment where they aren?t using pip to install at least something into that environment. The default now is to install pip into every environment (virtual or real) which means that a lot of people will have a great many number of pips installed into a variety of environments. Someone who uses Django a lot might also have Django installed a lot into many environments, but a primary difference there is Django is going to (typically) be something that is foremost in that person?s mind about what version they are going to be using. However pip is an incidental thing, people don?t typically think about what version of pip they have, they just sort of use whatever is there. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From waynejwerner at gmail.com Mon Nov 9 07:46:51 2015 From: waynejwerner at gmail.com (Wayne Werner) Date: Mon, 9 Nov 2015 06:46:51 -0600 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On Mon, Nov 9, 2015 at 6:01 AM, Paul Moore wrote: > > The one thing that *is* special about pip is that it actually > *modifies* the Python installation it runs under. So running pip with > the "wrong" Python makes persistent changes somewhere you weren't > expecting. Whereas running the wrong Django presumably just fires up a > website you weren't expecting, which is easily fixed. That makes the > issues with wrapper commands and PATH more pressing for pip than for > other projects. 
> (But I suspect, for example, that IPython may well encounter similar > issues, if I run the "wrong" IPython wrapper it could start up my > notebook using the wrong Python interpreter.) My experience(s) with the latest IPython is that it's freaking magic - in a good way :) And by that, I mean when I've had a venv activated it says something to the effect of, "Hey, we noticed that you're running inside of a virtual environment so we've taken the pains to activate that for you. Sure, we're the system installed IPython, but we've done a bit of fiddling so there's an off chance that things go sideways on you. If that's the case, you may want to invoke ipython with this other incantation to stop this behavior". Of course, I've not had any problems with its magic default behavior, so that's a nice thing. But presumably it does the same sort of thing we're talking about wanting pip to do, vs. `python -m pip`. -W -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Nov 9 07:54:53 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 Nov 2015 07:54:53 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <5640791F.6020204@biologie.uni-freiburg.de> Message-ID: On November 9, 2015 at 6:17:41 AM, Oscar Benjamin (oscar.j.benjamin at gmail.com) wrote: > On 9 November 2015 at 10:44, Wolfgang Maier > wrote: > > > > Something I miss in all the discussions taking place here is the fact that > > python -m pip is the officially documented way of invoking pip at > > https://docs.python.org/3/installing/index.html#basic-usage and it is not > > particularly helpful if that recommendation keeps changing back and forth. > > > > I know some people don't like the wordy invocation, but other people > > (including me) use and teach it because it works reliably. Just because a > > pip executable based invocation pattern looks better, I don't think it > > justifies the change. > > I also teach this invocation. Somehow you have to select the Python > version you're interested in and I really don't see why > > $ pip -p ./foo/bar/python ... > > is better than > > $ ./foo/bar/python -m pip ... > > I already need to explain to students how to ensure that their Python > executable is on PATH. Needing pip to be on PATH as well is just > another possible source of confusion (even if there's only one Python > installation). > The primary difference is one of verbosity, particularly in common cases. You'll have situations like:

* I don't care what version of Python something is being installed into, I just want to install it to use the CLI of some project that just happens to be written in python. ``pip install`` vs ``python -m pip install``.

* I've already selected which version of Python should be used either via virtual environment, conda environment, or some other environment manager. ``pip install`` vs ``python -m pip install``.

* I just want to install into the same thing as ``python3`` or ``python2`` or ``python3.3``. ``pip -3 install`` vs ``python3 -m pip install`` or ``pip -2 install`` vs ``python2 -m pip install`` or ``pip -3.3 install`` vs ``python3.3 -m pip install``.

* I want to install into ``pypy``. ``pip -p pypy install`` vs ``pypy -m pip install``.

* I want to install into ``./foo/bar/python``. ``pip -p ./foo/bar/python install`` vs ``./foo/bar/python -m pip install`` (one way such a ``-p`` flag could resolve its target is sketched below).
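To illustrate the mechanics behind a ``-p`` / ``-X.Y`` style flag, here is one way such an option could resolve its target interpreter and delegate to it. This is only a sketch: the actual proof-of-concept adds the pip zip to the target's ``sys.path`` rather than re-invoking ``-m pip``, and the helper names here are invented::

    import os
    import shutil
    import subprocess
    import sys

    def resolve_python(spec):
        # Accepts '3' / '3.3' style shorthands, a command name, or an explicit path.
        if os.sep in spec or os.path.isfile(spec):
            return spec
        if spec.replace(".", "").isdigit():
            spec = "python" + spec
        found = shutil.which(spec)
        if found is None:
            sys.exit("could not find interpreter %r on PATH" % spec)
        return found

    def run_pip(spec, pip_args):
        python = resolve_python(spec)
        # Delegating keeps the target interpreter's own sys.path/site-packages
        # authoritative.
        return subprocess.call([python, "-m", "pip"] + list(pip_args))

    # run_pip("3.3", ["install", "requests"])   ~  python3.3 -m pip install requests
    # run_pip("pypy", ["install", "requests"])  ~  pypy -m pip install requests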
I think the main thing the list above illustrates is that the ``pip -p`` version is nicer in the common scenarios like not caring what version of Python you're installing into, having pre-selected the version of Python, or just wanting to install into a particular CPython of X or X.Y version [1]. It only really degrades into an equivalent invocation in the less common cases where you need to install into a non-CPython interpreter or your interpreter isn't one of ``pythonX`` or ``pythonX.Y`` on your path. Of course, the main benefit of ``python -m`` is that it's a standard (within Python) interface and it already works and doesn't require any backwards incompatibilities (except to possibly remove the ``pip`` or ``pipX.Y`` commands due to their footgun-ish nature). It also delays the need to ensure that the distutils bindir is on $PATH (though you'll quickly need to ensure it's there anyways if you install anything that doesn't use ``-m`` to be invoked). [1] This could be made better by shipping a ``py`` launcher on *nix too, which brings it down to ``py -3 -m pip`` which is still a bit longer than ``pip -3`` but is better than ``python3 -m pip``. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Mon Nov 9 08:26:31 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 9 Nov 2015 13:26:31 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On 9 November 2015 at 12:46, Wayne Werner wrote: > My experience(s) with the latest IPython is that it's freaking magic - in > a good way :) Nice :-) Maybe pip could learn something useful from how the IPython guys do that. Paul From donald at stufft.io Mon Nov 9 08:34:53 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 Nov 2015 08:34:53 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On November 9, 2015 at 8:26:50 AM, Paul Moore (p.f.moore at gmail.com) wrote: > On 9 November 2015 at 12:46, Wayne Werner wrote: > > My experience(s) with the latest IPython is that it's freaking magic - in > > a good way :) > > Nice :-) Maybe pip could learn something useful from how the IPython > guys do that. > Paul It's not likely to be a great option. It essentially just takes /usr/bin/ipython and if the ``VIRTUAL_ENV`` variable is defined it munges sys.path so that it also adds the ``site-packages`` from the virtual environment. This means that it totally ignores --no-site-packages because the "outside" Python sys.path is still there. The ``-p`` flag in my PoC works by munging the sys.path, but the difference is that it's adding something like /usr/bin/pip to the sys.path (which is a zip file) which doesn't otherwise pollute the sys.path with other random crap from within the site-packages. It'd be trivial to have the ``-p`` flag default to something that just automatically takes into account a virtual environment though and python -m pip would implicitly automatically take a virtual environment into account.
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From p.f.moore at gmail.com Mon Nov 9 10:34:25 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 9 Nov 2015 15:34:25 +0000 Subject: [Distutils] Update to my skeletal PEP for a new build system interface In-Reply-To: References: Message-ID: On 9 November 2015 at 05:20, Nathaniel Smith wrote: > A *source tree* is something like a VCS checkout. We need a standard > interface for installing from this format, to support usages like > ``pip install some-directory/``. I still find these two definitions unhelpful, sorry. We don't *need* an interface to install from a source tree. It's entirely feasible to have a standard interface to build a sdist from a source tree and go source tree -> sdist -> wheel -> install. That doesn't cater for editable installs, nor does it cater for reusing things like object files from previous builds, so there may be *benefits* to having a richer interface than this, but it's wrong to say it's needed. I suspect you're reluctant to require a "source tree -> sdist" interface, because the author of flit isn't comfortable with having such a thing. That's OK - if you want to note that a benefit of going direct to install (or wheel) is that tools that don't allow you to create a sdist are supported, then let's make that explicit. Expect plenty of pushback on the idea of tools that don't supply sdists though... > A *source distribution* is a static snapshot representing a particular > release of some source code, like ``lxml-3.4.4.zip``. Source > distributions serve many purposes: they form an archival record of > releases, they provide a stupid-simple de facto standard for tools > that want to ingest and process large corpora of code, possibly > written in many languages (e.g. code search), they act as the input to > downstream packaging systems like Debian/Fedora/Conda/..., and so > forth. In the Python ecosystem they additionally have a particularly > important role to play, because packaging tools like ``pip`` are able > to use source distributions to fulfill binary dependencies, e.g. if > there is a distribution ``foo.whl`` which declares a dependency on > ``bar``, then we need to support the case where ``pip install bar`` or > ``pip install foo`` automatically locates the sdist for ``bar``, > downloads it, builds it, and installs the resulting package. > > Source distributions are also known as *sdists* for short. One key feature of the current sdists that you are either overlooking or ignoring is that they can, and do, contain *built* files. The best example is projects using Cython. The sdist contains generated C files, so that users building wheels from the sdist don't need cython installed. Certainly your definition of a sdist is general enough that it doesn't preclude such things. But on the other hand, it doesn't offer any suggestion that this is an important feature of a sdist (and it is - I say that as someone who has needed to build wheels from a sdist and doesn't have Cython installed). From your definition, people will infer that zipping up a development directory makes a sdist, and so that's what they'll do. Because after all, making Cython a build requirement and generating the C at build time is *also* an option, it's just not as friendly to the average user. 
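For anyone unfamiliar with the pattern Paul is describing, the usual ``setup.py`` idiom looks roughly like this (project and module names are purely illustrative). The generated ``.c`` is shipped in the sdist, e.g. via a ``MANIFEST.in`` line such as ``include mypkg/fast.c``, so building from the sdist needs a C compiler but not Cython::

    import os
    from setuptools import setup, Extension

    if os.path.exists(os.path.join("mypkg", "fast.c")):
        # Building from an sdist: use the pre-generated C file.
        extensions = [Extension("mypkg.fast", ["mypkg/fast.c"])]
    else:
        # Building from a VCS checkout: regenerate the C from the .pyx source.
        from Cython.Build import cythonize
        extensions = cythonize([Extension("mypkg.fast", ["mypkg/fast.pyx"])])

    setup(name="mypkg", version="1.0", packages=["mypkg"],
          ext_modules=extensions)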
Paul From donald at stufft.io Mon Nov 9 10:50:27 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 Nov 2015 10:50:27 -0500 Subject: [Distutils] Update to my skeletal PEP for a new build system interface In-Reply-To: References: Message-ID: On November 9, 2015 at 10:35:24 AM, Paul Moore (p.f.moore at gmail.com) wrote: > On 9 November 2015 at 05:20, Nathaniel Smith wrote: > > A *source tree* is something like a VCS checkout. We need a standard > > interface for installing from this format, to support usages like > > ``pip install some-directory/``. > > I still find these two definitions unhelpful, sorry. > > We don't *need* an interface to install from a source tree. It's > entirely feasible to have a standard interface to build a sdist from a > source tree and go source tree -> sdist -> wheel -> install. That > doesn't cater for editable installs, nor does it cater for reusing > things like object files from previous builds, so there may be > *benefits* to having a richer interface than this, but it's wrong to > say it's needed. > > I suspect you're reluctant to require a "source tree -> sdist" > interface, because the author of flit isn't comfortable with having > such a thing. That's OK - if you want to note that a benefit of going > direct to install (or wheel) is that tools that don't allow you to > create a sdist are supported, then let's make that explicit. Expect > plenty of pushback on the idea of tools that don't supply sdists > though? Regardless of whether we end up mandating a source tree -> sdist -> wheel -> install path or if we support two paths, source tree -> sdist -> wheel -> install and source tree -> wheel -> install, I don?t think it?s likely we?re going to ever get to a place that sdists are an optional or non-standard artifact or interface. I think it is mandatory that we treat (and recognize) that a sdist is different than an arbitrary directory and that they may (or may not) have a structure that matches what it looks like on disk. It is entirely possible (and likely) that at some point in the future we will have a new sdist format that looks less like someone just zipped up their VCS checkout and more like a structured format. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From dholth at gmail.com Mon Nov 9 11:13:47 2015 From: dholth at gmail.com (Daniel Holth) Date: Mon, 09 Nov 2015 16:13:47 +0000 Subject: [Distutils] free idea: whim file Message-ID: Sometimes it might be desirable to do wheel-like installs without actually creating an archive. Instead, a whim (wheel internal manifest) file could communicate the idea of wheel, just a bunch of files in categories, without the zip file. The format would be no more than a mapping of category names 'purelib', 'platlib', 'headers', 'scripts', 'data', to a list of tuples with the file's path on the disk and its path relative to the category. { "category" : [ ('path on disk', 'path relative to category'), ... ] } The dist-info directory could be segregated into its own category 'metadata' with each target path as "distname-1.0.dist-info/FILE" i.e. its full path relative to the root of a wheel file. An installer could consume whim directly bypassing zip. A wheel archiver could consume a whim file and produce the archive with correct MANIFEST. -------------- next part -------------- An HTML attachment was scrubbed... 
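Since "whim" above is only a free idea, the following is purely a hypothetical sketch of what producing and consuming such a manifest might look like; all names, paths and layout details are invented::

    import os
    import shutil
    import zipfile

    whim = {
        "purelib":  [("build/lib/mypkg/__init__.py", "mypkg/__init__.py")],
        "scripts":  [("build/scripts/mytool", "mytool")],
        "metadata": [("build/meta/METADATA", "mypkg-1.0.dist-info/METADATA")],
    }

    def install(whim, scheme):
        # An installer can consume the manifest directly, bypassing any zip.
        for category, entries in whim.items():
            for src, rel in entries:
                dest = os.path.join(scheme[category], rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)

    def archive(whim, wheel_path):
        # An archiver can consume the same manifest to produce a wheel-shaped zip;
        # purelib and metadata land at the archive root, other categories under
        # a mypkg-1.0.data/<category>/ prefix.
        with zipfile.ZipFile(wheel_path, "w") as zf:
            for category, entries in whim.items():
                for src, rel in entries:
                    prefix = ("" if category in ("purelib", "metadata")
                              else "mypkg-1.0.data/%s/" % category)
                    zf.write(src, prefix + rel)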
URL: From njs at pobox.com Mon Nov 9 12:21:47 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 09:21:47 -0800 Subject: [Distutils] Update to my skeletal PEP for a new build system interface In-Reply-To: References: Message-ID: On Mon, Nov 9, 2015 at 7:34 AM, Paul Moore wrote: > On 9 November 2015 at 05:20, Nathaniel Smith wrote: >> A *source tree* is something like a VCS checkout. We need a standard >> interface for installing from this format, to support usages like >> ``pip install some-directory/``. > > I still find these two definitions unhelpful, sorry. > > We don't *need* an interface to install from a source tree. It's > entirely feasible to have a standard interface to build a sdist from a > source tree and go source tree -> sdist -> wheel -> install. That > doesn't cater for editable installs, nor does it cater for reusing > things like object files from previous builds, so there may be > *benefits* to having a richer interface than this, but it's wrong to > say it's needed. I am confuse. All that sentence is saying is that (a) it is useful to have the phrase "source tree" as distinct from "sdist" so we can talk about them (which I assume you agree about because you use that phrase in your response :-)), and (b) there must be *some* interface that allows people to type "pip install some-directory/" and have it work because that's a feature we have to support (which I assume you agree about because you immediately propose an interface for supporting that feature). It sounds like we do disagree about the details of what this interface should look like and thus how "pip install some-directory/" should work internally, but that's not a problem with the definition (or indeed something that this PEP's text currently takes any stance on at all :-)). > I suspect you're reluctant to require a "source tree -> sdist" > interface, because the author of flit isn't comfortable with having > such a thing. That's OK - if you want to note that a benefit of going > direct to install (or wheel) is that tools that don't allow you to > create a sdist are supported, then let's make that explicit. Expect > plenty of pushback on the idea of tools that don't supply sdists > though... I actually haven't talked to Thomas about this particular point at all, and actually part of what started all this was my looking at flit and going "this is cool, but c'mon, you can't just throw away sdists" :-). The reason I'm reluctant to require a "source tree -> sdist" interface is described here: https://mail.python.org/pipermail/distutils-sig/2015-November/027636.html and also at the very top of this long email (which for some reason I can't seem to find in the mail.python.org archives?): https://www.mail-archive.com/distutils-sig at python.org/msg23144.html The TL;DR is: obviously we need source tree -> sdist operations somewhere, and obviously we need mechanisms to increase the reliability of builds -- we all agree that there's some irreducible complexity there, those issues need to be addressed, the question is just where to put that complexity. I think putting it into the PEP for the build frontend <-> build backend interface is the wrong place, because it increases spec complexity (the worst kind of complexity) and it rules out the useful feature of incremental rebuilds. (And by "useful feature" there I mean "if we regress from distutils by failing to support this, then there's a good chance downstream devs will simply refuse to use our new design".) 
>> A *source distribution* is a static snapshot representing a particular >> release of some source code, like ``lxml-3.4.4.zip``. Source >> distributions serve many purposes: they form an archival record of >> releases, they provide a stupid-simple de facto standard for tools >> that want to ingest and process large corpora of code, possibly >> written in many languages (e.g. code search), they act as the input to >> downstream packaging systems like Debian/Fedora/Conda/..., and so >> forth. In the Python ecosystem they additionally have a particularly >> important role to play, because packaging tools like ``pip`` are able >> to use source distributions to fulfill binary dependencies, e.g. if >> there is a distribution ``foo.whl`` which declares a dependency on >> ``bar``, then we need to support the case where ``pip install bar`` or >> ``pip install foo`` automatically locates the sdist for ``bar``, >> downloads it, builds it, and installs the resulting package. >> >> Source distributions are also known as *sdists* for short. > > One key feature of the current sdists that you are either overlooking > or ignoring is that they can, and do, contain *built* files. The best > example is projects using Cython. The sdist contains generated C > files, so that users building wheels from the sdist don't need cython > installed. > > Certainly your definition of a sdist is general enough that it doesn't > preclude such things. But on the other hand, it doesn't offer any > suggestion that this is an important feature of a sdist (and it is - I > say that as someone who has needed to build wheels from a sdist and > doesn't have Cython installed). From your definition, people will > infer that zipping up a development directory makes a sdist, and so > that's what they'll do. Because after all, making Cython a build > requirement and generating the C at build time is *also* an option, > it's just not as friendly to the average user. Hmm, I certainly agree that it doesn't preclude such things, because I am very aware of this use case (I maintain projects that handle Cython in exactly the way you describe), and it never occurred to me that this could *not* be supported :-). I'm not sure what you're worried about exactly? Right now, zipping up a development directory actually is a valid way of making an sdist, and nonetheless projects actually do go to elaborate lengths to trick distutils into including generated .c files. So I don't think it's likely they'll stop because of some PEP that neglected to explicitly point out that this was possible :-). But if you think the wording could be improved I'm certainly open to that. (I guess I do have some generic preference that we not insist on PEPs serving as end-user documentation -- the intended audience here is experts, the definitions are written to mean exactly what they say, etc., and there are real trade-offs between being precise and being easily comprehensible by non-experts. But I also would like you to be happy :-).) -n -- Nathaniel J. Smith -- http://vorpus.org From p.f.moore at gmail.com Mon Nov 9 13:42:34 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 9 Nov 2015 18:42:34 +0000 Subject: [Distutils] Update to my skeletal PEP for a new build system interface In-Reply-To: References: Message-ID: On 9 November 2015 at 17:21, Nathaniel Smith wrote: > On Mon, Nov 9, 2015 at 7:34 AM, Paul Moore wrote: >> On 9 November 2015 at 05:20, Nathaniel Smith wrote: >>> A *source tree* is something like a VCS checkout. 
We need a standard >>> interface for installing from this format, to support usages like >>> ``pip install some-directory/``. >> >> I still find these two definitions unhelpful, sorry. >> >> We don't *need* an interface to install from a source tree. It's >> entirely feasible to have a standard interface to build a sdist from a >> source tree and go source tree -> sdist -> wheel -> install. That >> doesn't cater for editable installs, nor does it cater for reusing >> things like object files from previous builds, so there may be >> *benefits* to having a richer interface than this, but it's wrong to >> say it's needed. > > I am confuse. All that sentence is saying is that (a) it is useful to > have the phrase "source tree" as distinct from "sdist" so we can talk > about them (which I assume you agree about because you use that phrase > in your response :-)), Agreed. > and (b) there must be *some* interface that > allows people to type "pip install some-directory/" and have it work > because that's a feature we have to support (which I assume you agree > about because you immediately propose an interface for supporting that > feature). Are we talking at cross purposes here? The end user interface "pip install directory" is OK. What I think this PEP is saying is that we need a way for pip to *implement* that functionality in terms of primitive operations that the "source tree" must support. That, again, I'm fine with. But you're then saying (I think) that the primitive operation a source tree must provide is an "install" operation - and that's what I fundamentally disagree with. The source tree should provide a "build" primitive. If we agree on that (which I think we do, but I don't think the PEP says so), then there's still a further point, on which I think we do disagree, and that's over sdists. I think that there are *two* steps within the build process, and these need to be separated out: 1. Make a structured archive of the project's sources. This includes creation of all generated source files that can be created in a target-independent way. This would include (static) metadata, generated source files such as cython output, etc. The point about this archive is that it is fully target-independent, and does not require any tools to build it that are not fundamentally target-dependent. This is what I consider to be the "sdist". There should only ever need to be one sdist for a given name/version of a project, precisely because it's totally portable, by design. 2. Create target-dependent installable wheels. This is the "build" step, in the sense that it's when you run a compiler to create platform-specific binaries. With this model, the install process is specifically source tree ---> sdist ---> wheel ---> installed package It is possible that tools could merge some of these steps, but a generic tool like pip that manages the running of the steps in an appropriate order needs to work in terms of the fundamental building blocks. So I am strongly opposed to proposals that treat source tree ---> wheel as a primitive operation, because they hamper pip's ability to manage things at the level of the fundamental steps. One of the worst aspects of distutils, and one that pip is still far from free of, is the fact that distutils provides merged steps like source tree ---> installed package, and we (mistakenly, in hindsight) used them to "optimise" the way pip works. It did optimise things in some ways, I guess, but it makes it really hard to disentangle things when we want to modularise processing. 
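As a sketch of the modular pipeline being argued for here (the hook names are placeholders -- the real backend interface is exactly what is under discussion -- and ``backend``/``installer`` stand for whatever objects a frontend uses internally)::

    import tempfile

    def install_from_source_tree(source_tree, backend, installer, target_env):
        with tempfile.TemporaryDirectory() as work:
            # 1. Target-independent snapshot; includes generated files such as
            #    Cython output, so later steps need fewer build tools.
            sdist = backend.build_sdist(source_tree, output_dir=work)
            # 2. Target-dependent build; compilers run here.
            wheel = backend.build_wheel(sdist, output_dir=work)
            # 3. A pure unpack-and-move step.
            return installer.install_wheel(wheel, target_env)

A frontend could still offer a collapsed source-tree -> wheel path as an optimisation, but the observable result would have to match running the atomic steps above.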
The above is of course idealised. Editable installs are one example of something that simply doesn't follow this pattern, and as far as I can see they make no sense *except* as a source tree --> editable install one-step operation. Also, modularising the steps to this extent does have downsides - separating source tree --> sdist and sdist --> wheel makes it harder to do "in place rebuild" optimisations. We can agree or disagree on the trade-offs, or we can work on trying to get the best of both worlds, but I still think we should be starting (certainly when working at the spec/PEP level) from a clean conceptual model. > It sounds like we do disagree about the details of what this interface > should look like and thus how "pip install some-directory/" should > work internally, but that's not a problem with the definition (or > indeed something that this PEP's text currently takes any stance on at > all :-)). As I say, I think we're talking at cross purposes. I read the PEP as trying to specify (the wrong) primitives for pip to use. I'm not sure what you intend the PEP to say - maybe that "pip install " is the canonical install command? I don't think that needs a PEP, it's just how pip works (and other tools may choose to expose things in a different manner). >> I suspect you're reluctant to require a "source tree -> sdist" >> interface, because the author of flit isn't comfortable with having >> such a thing. That's OK - if you want to note that a benefit of going >> direct to install (or wheel) is that tools that don't allow you to >> create a sdist are supported, then let's make that explicit. Expect >> plenty of pushback on the idea of tools that don't supply sdists >> though... > > I actually haven't talked to Thomas about this particular point at > all, and actually part of what started all this was my looking at flit > and going "this is cool, but c'mon, you can't just throw away sdists" > :-). > > The reason I'm reluctant to require a "source tree -> sdist" interface > is described here: > https://mail.python.org/pipermail/distutils-sig/2015-November/027636.html > > and also at the very top of this long email (which for some reason I > can't seem to find in the mail.python.org archives?): > https://www.mail-archive.com/distutils-sig at python.org/msg23144.html > > The TL;DR is: obviously we need source tree -> sdist operations > somewhere, and obviously we need mechanisms to increase the > reliability of builds -- we all agree that there's some irreducible > complexity there, those issues need to be addressed, the question is > just where to put that complexity. I think putting it into the PEP for > the build frontend <-> build backend interface is the wrong place, > because it increases spec complexity (the worst kind of complexity) > and it rules out the useful feature of incremental rebuilds. (And by > "useful feature" there I mean "if we regress from distutils by failing > to support this, then there's a good chance downstream devs will > simply refuse to use our new design".) But here I think we have a new term that's adding confusion. Pip isn't a "build frontend". In 99% of cases pip does no building at all. Basically, pip is a manager of build and install steps, and to manage those steps successfully, it needs clear definitions of the steps involved. In the extreme case, if there's a step "take a source tree and install it" you've left nothing for pip to manage, and you may as well go back to setup.py install. 
I think that extracting and formalising the fundamental ("atomic" if you like) steps that constitute going from a source tree to an installed package, is precisely the sort of simplification a spec/PEP *must* do. In doing so, there are engineering trade-offs such as how we reintroduce incremental rebuilds without compromising the model. Such trade-offs may imply a need to add complexity to the spec (maybe in terms of optional "combined" steps such as source tree --> wheel), but it should be clear that these are (a) optional (as in, the process works fine with just the atomic steps) and (b) optimisations (as in, they can't alter the ultimate behaviour as defined in terms of atomic steps). >> Certainly your definition of a sdist is general enough that it doesn't >> preclude such things. But on the other hand, it doesn't offer any >> suggestion that this is an important feature of a sdist (and it is - I >> say that as someone who has needed to build wheels from a sdist and >> doesn't have Cython installed). From your definition, people will >> infer that zipping up a development directory makes a sdist, and so >> that's what they'll do. Because after all, making Cython a build >> requirement and generating the C at build time is *also* an option, >> it's just not as friendly to the average user. > > Hmm, I certainly agree that it doesn't preclude such things, because I > am very aware of this use case (I maintain projects that handle Cython > in exactly the way you describe), and it never occurred to me that > this could *not* be supported :-). I'm not sure what you're worried > about exactly? Right now, zipping up a development directory actually > is a valid way of making an sdist, and nonetheless projects actually > do go to elaborate lengths to trick distutils into including generated > .c files. So I don't think it's likely they'll stop because of some > PEP that neglected to explicitly point out that this was possible :-). > But if you think the wording could be improved I'm certainly open to > that. I think that we currently have so much confusion over "what a sdist is" that a new over-general definition isn't going to help. What we need to do is to *pin down* the definition of a sdist, not allow the term to continue to mean too much (and hence, ultimately, very little). Does my definition of a sdist above in terms of being target-independent but containing all files that can be generated in a target-independent way clarify what I'm intending? I'd be happy if there was wording that left it as optional how much a project needed to eliminate build dependencies by including the output of those dependencies in the sdist, but I'd much prefer it if there was a strong implication that if files could be generated without reference to the target architecture, and doing so eliminated a build dependency, then they should. (To give a specific example, I'd prefer it if it was clear that sdists should always include C sources generated by cython - even though that requirement isn't enforceable in any practical sense). > (I guess I do have some generic preference that we not insist on PEPs > serving as end-user documentation -- the intended audience here is > experts, the definitions are written to mean exactly what they say, > etc., and there are real trade-offs between being precise and being > easily comprehensible by non-experts. But I also would like you to be > happy :-).) Agreed we don't intend these things to be for end users. 
But I think it's important that the experts have something detailed and precise, as ultimately they'll have to implement code based on the PEP. And worse still, anyone wanting to implement an alternative to pip has a right to expect that everything they need is in a PEP, not in "people's understanding". I don't know if it's clear (I hope it is but it's hard to be sure :-)) but my comments are from the perspective of someone who knows the internals of pip, but would like to be able to (re-) write it without ever having to refer to pip's code in order to do so. I think that's a reasonable goal to aim for, as not being able to do that is precisely what got us into the mess where we daren't touch distutils because we don't know what it's supposed to do other than "what it does"... Thanks for considering my happiness :-) It's not too easy to make me miserable, so don't worry - the big issue is that I enjoy long complex detail-oriented debates, so you're better off not trying *too* hard to increase my happiness in that direction!!! :-) Paul From robertc at robertcollins.net Mon Nov 9 13:56:23 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 10 Nov 2015 07:56:23 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Pushed up this edit.. On 9 November 2015 at 18:45, Robert Collins wrote: > On 9 November 2015 at 17:55, Nathaniel Smith wrote: >> The new version is looking pretty good to me! >> >> My main concern still is that specification of whitespace handling is >> still kinda confusing/underspecified. The text says "all whitespace is >> optional", but the grammar says that it's mandatory in some cases >> (e.g. url-marker, still not sure why -- I'd understand if it were >> mandatory before the ";" since ";" is a valid character in URLs, but >> it says it's mandatory afterward?), and the grammar is still wrong >> about whitespace in some cases (e.g. it says ">= 1.0" is an illegal >> versionspec). >> >> I guess the two options are either to go through carefully sprinkling >> *WSP's about at all the appropriate places, or else to tackle things >> more systematically by adding a lexer layer... > > I'm happy either way. You are right though that there is one spot > where it is not optional. Thats how "url; marker stuff here" is > defined in pip today. We could in principle define a new rule here, > such as putting markers before the url. But as markers aren't self > delimiting (blame PEP-345) that is a bit fugly. We could say 'url > 1*WSP ";" *WSP marker', which would be a bit more consistent, but > different to pip's current handling. Of course, the @ syntax is > already different, so it seems reasonable to do so to me. diff --git a/dependency-specification.rst b/dependency-specification.rst index 9e95417..6afe288 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -84,8 +84,9 @@ URI is defined in std-66 [#std66]_:: version-cmp = "<" / "<=" / "!=" / "==" / ">=" / ">" version = 1*( DIGIT / ALPHA / "-" / "_" / "." 
/ "*" ) - version-inner = version-cmp version *(',' version-cmp version) - versionspec = ("(" version-inner ")") / version-inner + version-one = *WSP version-cmp *WSP version + version-many = version-one *(*WSP "," version-one) + versionspec = ("(" version-many ")") / version-many urlspec = "@" URI Environment markers allow making a specification only take effect in some @@ -107,7 +108,7 @@ environments:: =/ (marker-var [*WSP marker-op *WSP marker-var]) marker = *WSP marker-expr *( *WSP ("and" / "or") *WSP marker-expr) name-marker = ";" *WSP marker - url-marker = ";" 1*WSP marker + url-marker = WSP ";" *WSP marker Optional components of a distribution may be specified using the extras field:: @@ -131,7 +132,8 @@ Leading to the unified rule that can specify a dependency:: Whitespace ---------- -Non line-breaking whitespace is optional and has no semantic meaning. +Non line-breaking whitespace is mostly optional with no semantic meaning. The +sole exception is detecting the end of a URL requirement. Names ----- -- Robert Collins Distinguished Technologist HP Converged Cloud From chris.barker at noaa.gov Mon Nov 9 18:24:46 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 Nov 2015 15:24:46 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: wow! a really long thread here. Trying not to duplicate too much. I am coming primarily from the perspective of someone that teaches python to beginners (I'm also a user and package developer, but I, myself, can deal with any of these options...) My perspective as a user of pip, but not a developer, is that having the > command line executable `pip` is much preferable to `python -m pip`. Most > of the use cases that militate against the command line executable seem to > be issues that face developers and ultra-power-users (keeping track of many > versions of pip installed, etc). > This is absolutely not true -- the ultra-power-users (am I one of those? -- cool! I need to find a good use for that ultra power!) understand these complexities and can deal with them. the real losers ar newbies that have, for one reason or another, multiple pythons on their system. Both Linux and OS-Z tend to have system installed pythons, and it is very, very common that a user needs (maybe for a class, or ...) to install a different one when they are still rank beginners. Also, people who are new to python coding have very different backgrounds with the CLI, and manipulating PATH, and all that. so I think the target user is someone that is new to both python and CLI use, and also has more than one version of python in their system. As it happens, I am in the middle of a intro class that's using python3.4 or 3.5 right now -- and I am telling everyone to do: python3 -m pip install Yes, plain old "pip install" is nicer, but only a little bit, and the biggest reason we really, really want that to still work is that there are a LOT of instructions all over the web telling people to do that -- so really too bad if it doesn't work! But the reality is that it often DOESN'T work now! and when it doesn't newbies really have no idea why in the heck not! personally, I think the best approach is to deprecate plain old "pip install" -- if it's not there as an option, I expect no one will find it odd that to install something for python, you might use python to do that! 
which brings up an idea -- to make it clean, why not really integrate "ensurepip" into python: python --install some_package It will take a long time to propagate through the versions and installs, but doesn't it make sense that the officially supported package installer actually be invoked directly from python? But many casual users, I think, just have one version of python/pip > installed, and benefit from having the easy-to-call executable. > I suppose it's not so bad that in the case of one python, that pip install just works. They're also the least capable of adding new script wrappers and bash > aliases. > absolutely --- that should inly be required for us "ultra-power-users" :-) glyph wrote: > Rather than trying to figure out what the "right" way for users to invoke > `pip? to begin with is, why not just have Pip start providing more > *information* about potential problems when you invoke it? > This is a great idea! we will be stuck with users expecting "pip install" to work for a long time. If they at least get a helpful hint that something weird is going on -- that would help a lot. And this wourl require only changes to pip itself, no changes to documentation the world over. We should do this in the interim, regardless of other paths forward. I've lost track of the technical details on the option to have a self-contained pip executable -- so no comment there. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Nov 9 18:28:33 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 Nov 2015 15:28:33 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: <20151108005337.285d2f9e@fsol> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On Sat, Nov 7, 2015 at 3:53 PM, Antoine Pitrou wrote: > Well, the problem is that "python -m pip" isn't any better. If you > don't know what the current "pip" is, then chances are you don't know > what the current "python" is, either. > sure you do (well, maybe not, but all you know is that when you type "python" you get soemthing). the problem really is when someone does: pip install some_package and it all seems to work fine then they type "python" and "import some_package" and it fails. This really does happend with newbies, and it really is a problem, trust me on that. granted, it's also a problem that people type "python" and can import what they want, then they go to run their code in an IDE, and it doesn't work -- but that's not a problem pip can address ----.. (I'm not trying to deny the issue, I sometimes wonder what "pip" will > install into exactly, but removing the command in favour of a "-m" > switch wouldn't do any any good IMO, and it would make Python package > management "even more baroque" than it currently is) > only minimally more baroque, and at least one large class of confusing errors would be impossible. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
From chris.barker at noaa.gov Mon Nov 9 18:41:30 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 Nov 2015 15:41:30 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On Mon, Nov 9, 2015 at 3:27 AM, James Bennett wrote: > Python scripts directly and without having to do hackery with > supporting/requiring 'python -m' or similar is too useful and commonly > used. So faced with either (essentially) forcing a trend of every > command-line tool having to be invoked with 'python -m', > pip is a special case -- for MOST python command line tools, the user does not care which python it is running with -- if it works, it works. the failure case we are trying to address here is when "pip install" works just fine -- it finds and installs the package into the python pip is associated with -- it just doesn't do what the user wants and expects! any other script can be run with any python that will work -- if a user has ten different versions of python installed, and every different python-based tool they use uses a different one -- who cares, as long as it works. or requiring people with complex multi-Python installations to be more > careful, I choose the "be more careful" option (i.e., I would strenuously > resist changing Django's admin script to "python -m django" if this were > proposed to Django today). > Well, the exception to the above is people developing those scripts, but they should know better -- and so should a sysadmin installing a django app. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL:
From baptiste at bitsofnetworks.org Mon Nov 9 19:17:13 2015 From: baptiste at bitsofnetworks.org (Baptiste Jonglez) Date: Tue, 10 Nov 2015 01:17:13 +0100 Subject: [Distutils] Missing IPv6 support on pypi.python.org In-Reply-To: References: <20151108200441.GB28216@lud.polynome.dn42> Message-ID: <20151110001713.GD4381@tuxmachine.lan> Thanks for your quick answer! Let's hope Fastly will deploy IPv6 soon, then. Baptiste On Sun, Nov 08, 2015 at 09:13:49PM -0500, Donald Stufft wrote: > I?m pretty sure that PyPI will get IPv6 support as soon as Fastly supports it and not any sooner. I know they?re working on making it happen but I don?t think they have a public timeline for it yet. > > On November 8, 2015 at 4:34:32 PM, Baptiste Jonglez (baptiste at bitsofnetworks.org) wrote: > > Hi, > > > > pypi.python.org is currently not reachable over IPv6. > > > > I know this issue was brought up before [1,2]. This is a real issue for > > us, because our backend servers are IPv6-only (clients never need to talk > > to backend servers, they go through IPv4-enabled HTTP frontends). > > > > So, deploying packages from pypi on the IPv6-only servers is currently a > > pain. What is the roadmap to add IPv6 support? It seems that Fastly has > > already deployed IPv6 [3].
> > > > Thanks, > > Baptiste > > > > > > [1] https://mail.python.org/pipermail/distutils-sig/2014-June/024465.html > > [2] https://bitbucket.org/pypa/pypi/issues/90/missing-ipv6-connectivity > > [3] http://bgp.he.net/AS54113#_prefixes6 > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From njs at pobox.com Mon Nov 9 20:05:05 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 17:05:05 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On Mon, Nov 9, 2015 at 3:28 PM, Chris Barker wrote: > On Sat, Nov 7, 2015 at 3:53 PM, Antoine Pitrou wrote: >> >> Well, the problem is that "python -m pip" isn't any better. If you >> don't know what the current "pip" is, then chances are you don't know >> what the current "python" is, either. > > > sure you do (well, maybe not, but all you know is that when you type > "python" you get soemthing). > > the problem really is when someone does: > > pip install some_package > > and it all seems to work fine > > then they type "python" and "import some_package" and it fails. > > > This really does happend with newbies, and it really is a problem, trust me > on that. Here's an interesting situation to illustrate the kind of weird problem that newbies are so good at tripping over: $ virtualenv -p python3 test-env1 Already using interpreter /usr/bin/python3 Using base prefix '/usr' New python executable in test-env1/bin/python3 Also creating executable in test-env1/bin/python Installing setuptools, pip...done. $ source test-env1/bin/activate (test-env1)$ which python /home/njs/test-env1/bin/python (test-env1)$ which pip /home/njs/test-env1/bin/pip # So far so good... now let's create a second environment from inside the first (test-env1)$ python -m venv test-env2 (test-env1)$ source test-env2/bin/activate (test-env2) $ which python /home/njs/test-env2/bin/python (test-env2) $ which pip /usr/bin/pip (test-env2) $ ls test-env2/bin activate activate.csh activate.fish python@ python3@ (test-env2) $ So apparently if you use 'python -m venv' to create a new *venv* while inside a *virtualenv*, then it seems to complete successfully but leaves you with a venv that doesn't contain pip. At least on my machine (up-to-date Debian testing). I'm sure I should file a bug somewhere, but I'm not even sure where... (Interesting fact: I also tried this but using a conda environment for the first environment instead of virtualenv, and it failed differently -- the 'python -m venv' call spat out an inscrutable error involving ensurepip, and then I was left with a non-functional environment -- the test-env2/ directory exists, as does test-env2/bin/python, but test-env2/bin/activate is missing entirely.) This is "just a bug", but it seems fair to assume that there will continue to exist some weird corner-case bugs in Python packaging/distribution/environment-creation for a while yet... -n -- Nathaniel J. 
Smith -- http://vorpus.org From donald at stufft.io Mon Nov 9 20:06:59 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 Nov 2015 20:06:59 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On November 9, 2015 at 8:05:29 PM, Nathaniel Smith (njs at pobox.com) wrote: > > This is "just a bug", but it seems fair to assume that there will > continue to exist some weird corner-case bugs in Python > packaging/distribution/environment-creation for a while > yet? Note: I think my PoC will correctly handle all of these cases pretty easily and would do it currently with ``-p`` (but by default it uses sys.executable, not sure if that makes more sense or if ``python`` does). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From njs at pobox.com Mon Nov 9 20:12:57 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 17:12:57 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: Message-ID: On Sun, Nov 8, 2015 at 8:01 AM, Ionel Cristian M?rie? wrote: > > On Sat, Nov 7, 2015 at 12:24 AM, Donald Stufft wrote: >> >> Well, it?s not really a launcher no, but you?d do ``pip -p python2 install >> foo`` or something like that. It?s the same UI. Having just a ?launcher? I >> think is actually more confusing (and we already had that in the past with >> -E and removed it because it was confusing). Since you?ll have different >> versions of pip in different environments (Python or virtual) things break >> or act confusingly. > > > That can't be worse than the current situation. And I'm not asking to bring > `-E` back. > > The idea is that the pip bin becomes a launcher file, just like py.exe - it > would just try to discover an appropiate python and run `-mpip` with it. > This doesn't even need > > to be implemented in pip - linux distributions can do this. I think it's fair to raise an eyebrow at this, though -- basically the problem is that pip has a really unusual argument passing convention, which is that it expects to receive the path to the target python in sys.executable. So instead of fixing this weird argument passing convention, we're setting up a shim to take the UI we actually want ("discover an appropriate python"), and converting it to the weird argument passing convention. Maybe it's the best approach given the weight of history and so forth, but we can at least acknowledge that it's pretty weird! -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Mon Nov 9 20:17:47 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 17:17:47 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <85bnb7kgug.fsf@benfinney.id.au> Message-ID: On Thu, Nov 5, 2015 at 7:52 PM, Glyph Lefkowitz wrote: > > If you invoke 'pip[X.Y]' and it matches 'python -m pip' in your current > virtualenv, don't say anything; similarly if you invoke 'python -m pip' and > 'which pip' matches. But if there's a mismatch, pip can print information > in both cases. This would go a long way to alleviating the confusion that > occurs when users back themselves into one of these corners, and would alert > users to potential issues before they become a problem; right now you have > to be a dogged investigative journalist to figure out why pip is doing the > wrong thing in some cases. 
Here's a sketch of how something like this might look: https://gist.github.com/njsmith/c051a8298cc641bcfef4 -n -- Nathaniel J. Smith -- http://vorpus.org From contact at ionelmc.ro Mon Nov 9 20:20:02 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 10 Nov 2015 03:20:02 +0200 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On Tue, Nov 10, 2015 at 3:05 AM, Nathaniel Smith wrote: > So apparently if you use 'python -m venv' to create a new *venv* while > inside a *virtualenv*, then it seems to complete successfully but > leaves you with a venv that doesn't contain pip. At least on my > machine (up-to-date Debian testing). > ?How is this relevant? Are you trying to suggest that `pip -p path/to/python` would? exhibit the same class of bugs and failures `virtualenv -p path/to/python` has? ?I'm not saying it wouldn't but? ?how much of that `no pip in venv`? bug is a virtualenv issue or a problem with the design of a pip launcher or standalone pip? It seems to me that it's a bit irrelevant, but correct me if you're not using Ubuntu (were venv is broken in ludicrous ways). Plus virtualenv is broken in it's own way (bootstrapping, see this and this ) ... Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Nov 9 20:35:08 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 17:35:08 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: On Mon, Nov 9, 2015 at 5:20 PM, Ionel Cristian M?rie? wrote: > > On Tue, Nov 10, 2015 at 3:05 AM, Nathaniel Smith wrote: >> >> So apparently if you use 'python -m venv' to create a new *venv* while >> inside a *virtualenv*, then it seems to complete successfully but >> leaves you with a venv that doesn't contain pip. At least on my >> machine (up-to-date Debian testing). > > > How is this relevant? Are you trying to suggest that `pip -p path/to/python` > would exhibit the same class of bugs and failures `virtualenv -p > path/to/python` has? Uh... no, no idea what that even means. It was just intended as an example of how it's possible to end up in weird situations where 'pip' and 'python' mean different things, even though I seemingly did everything right. Evidence for Chris's assertion that "This really does happend with newbies, and it really is a problem, trust me on that.". (And maybe a small cry for help, since this really did bite me just now.) In this situation, if pip's default for finding the python environment were to search the path for 'python' instead of using sys.executable, then it wouldn't matter whether pip were installed into the venv or not, it would have automatically targeted the active environment. And 'python -m pip' would at least have failed cleanly with a sensible error message. Current pip just silently targets the wrong environment. > I'm not saying it wouldn't but how much of that `no pip in venv` bug is a > virtualenv issue or a problem with the design of a pip launcher or > standalone pip? It seems to me that it's a bit irrelevant, but correct me if > you're not using Ubuntu (were venv is broken in ludicrous ways). Plus > virtualenv is broken in it's own way (bootstrapping, see this and this) ... I'm using Debian, as noted in the text you quoted :-). 
No idea if Debian's venv is also broken in ludicrous ways (though I wouldn't be shocked if you told me it was -- I love Debian but I know their Python team<->distutils communication is not always the greatest...) -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Mon Nov 9 21:03:29 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 Nov 2015 18:03:29 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) Message-ID: On Sun, Nov 8, 2015 at 5:28 PM, Robert Collins wrote: > +The use of a command line API rather than a Python API is a little > +contentious. Fundamentally anything can be made to work, and Robert wants to > +pick something thats sufficiently lowest common denominator that > +implementation is straight forward on all sides. Picking a CLI for that makes > +sense because all build systems will need a CLI for end users to use anyway. I agree that this is not terribly important, and anything can be made to work. Having pondered it all for a few more weeks though I think that the "entrypoints-style" interface actually is unambiguously better, so let me see about making that case. What's at stake? ---------------------- Option 1, as in Robert's PEP: The build configuration file contains a string like "flit --dump-build-description" (or whatever), which names a command to run, and then a protocol for running this command to get information on the actual build system interface. Build operations are performed by executing these commands as subprocesses. Option 2, my preference: The build configuration file contains a string like "flit:build_system_api" (or whatever) which names a Python object accessed like import flit flit.build_system_api (This is the same syntax used for naming entry points.) Which would then have attributes and methods describing the actual build system interface. Build operations are performed by calling these methods. Why does it matter? ---------------------------- First, to be clear: I think that no matter which choice we make here, the final actual execution path is going to end up looking very similar. Because even if we go with the entry-point-style Python hooks, the build frontends like pip will still want to spawn a child to do the actual calls -- this is important for isolating pip from the build backend and the build backend from pip, it's important because the build backend needs to execute in a different environment than pip itself, etc. So no matter what, we're going to have some subprocess calls and some IPC. The difference is that in the subprocess approach, the IPC machinery is all written into the spec, and build frontends like pip implement one half while build backends implement the other half. In the Python API approach, the spec just specifies the Python calling conventions, and both halves of the IPC code live are implemented inside each build backend. Concretely, the way I imagine this would work is that pip would set up the build environment, and then it would run build-environment/bin/python path/to/pip-worker-script.py where pip-worker-script.py is distributed as part of pip. (In simple cases it could simply be a file inside pip's package directory; if we want to support execution from pip-inside-a-zip-file then we need a bit of code to unpack it to a tempfile before executing it. Creating a tempfile is not a huge additional burden given that by the time we call build hooks we will have already created a whole temporary python environment...) 
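To make that concrete, a worker script along these lines is roughly all that is needed. This is a minimal sketch rather than anything pip actually ships, and the backend string, hook name, and JSON-on-stdout framing are invented for illustration:

    # pip-worker-script.py (hypothetical): runs inside the build environment,
    # resolves a "module:attribute" backend specifier, calls one hook, and
    # reports the (JSON-serializable) result on stdout.
    import importlib
    import json
    import sys

    def resolve(spec):
        # "flit:build_system_api" -> the flit.build_system_api object
        module_name, _, attr_path = spec.partition(":")
        obj = importlib.import_module(module_name)
        for attr in attr_path.split(".") if attr_path else []:
            obj = getattr(obj, attr)
        return obj

    def main():
        # e.g. python pip-worker-script.py "flit:build_system_api" \
        #          build_wheel '{"build_directory": "/tmp/build"}'
        backend_spec, hook_name, kwargs_json = sys.argv[1], sys.argv[2], sys.argv[3]
        backend = resolve(backend_spec)
        hook = getattr(backend, hook_name)
        result = hook(**json.loads(kwargs_json))
        json.dump({"result": result}, sys.stdout)

    if __name__ == "__main__":
        main()

The argv-plus-JSON framing between pip and its own worker is deliberately not part of any spec here -- it stays private to the frontend, which is exactly what makes it possible to fix IPC bugs or add fallback logic without a new PEP or a flag day.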
In the subprocess approach, we have to write a ton of text describing all the intricacies of IPC. We have to specify how the command line gets split (or is it passed to the shell?), and specify a JSON-based protocol, and what happens to stdin/stdout/stderr, and etc. etc. In the Python API approach, we still have to do all the work of figuring these things out, but they would live inside pip's code, instead of in a PEP. The actual PEP text would be much smaller. It's not clear which approach leads to smaller code overall. If there are F frontends and B backends, then in the subprocess approach we collectively have to write F+B pieces of IPC code, and in the Python API approach we collectively have to write 2*F pieces of IPC code. So on this metric the Python API is a win if F < B, which would happen if e.g. everyone ends up using pip for their frontend but with lots of different backends, which seems plausible? But who knows. But now suppose that there's some bug in that complicated IPC protocol (which I would rate as about a 99.3% likelihood in our first attempt, because cross-platform compatible cross-process IPC is super annoying and fiddly). In the subprocess approach, fixing this means that we need to (a) write a PEP, and then (b) fix F+B pieces of code simultaneously on some flag day, and possibly test F*B combinations for correct interoperation. In the Python API approach, fixing this means patching whichever frontend has the bug, no PEPs or flag days necessary. In addition, the ability to evolve the two halves of the IPC channel together allows for better efficiency. For example, in Robert's current PEP there's some machinery added that hopes to let pip cache the result of the "--dump-build-description" call. This is needed because in the subprocess approach, the minimum number of subprocess calls you need to do something is two: one to ask what command to call, and a second to actually execute the command. In the python API approach, you can just go ahead and spawn a subprocess that knows what method it wants to call, and it can locate that method and then call it in a single shot, thus avoiding the need for an error-prone caching scheme. And the flexibility also helps in the face of future changes, too. Like, suppose that we start out with a do_build hook, and then later add a do_build2 hook that takes an extra argument or something, and pip wants to call do_build2 if it exists, and fall back on do_build otherwise. In the subprocess approach, you have to get the build description, check which hooks are provided, and then once you've decided which one you want to call you can spawn a second subprocess to do that. In the python API approach, pip can move this fallback logic directly into its hook-calling worker. (If it wants to.) So it still avoids the extra subprocess call. Finally, I think that it probably is nicer for pip to bite the bullet and take on more of the complexity budget here in order to make things simpler for build backends, because pip is already a highly complex project that undergoes lots of scrutiny from experts, which is almost certainly not going to be as true for all build backends. And the Python API approach is dead simple to explain and implement for the build backend side. I understand that the pip devs who are reading this might disagree, which is why I also wrote down the (IMO) much more compelling arguments above :-). But hey, still worth mentioning... -n -- Nathaniel J. 
Smith -- http://vorpus.org
From robertc at robertcollins.net Mon Nov 9 21:11:22 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 10 Nov 2015 15:11:22 +1300 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 10 November 2015 at 15:03, Nathaniel Smith wrote: > On Sun, Nov 8, 2015 at 5:28 PM, Robert Collins > wrote: >> +The use of a command line API rather than a Python API is a little >> +contentious. Fundamentally anything can be made to work, and Robert wants to >> +pick something thats sufficiently lowest common denominator that >> +implementation is straight forward on all sides. Picking a CLI for that makes >> +sense because all build systems will need a CLI for end users to use anyway. > > I agree that this is not terribly important, and anything can be made > to work. Having pondered it all for a few more weeks though I think > that the "entrypoints-style" interface actually is unambiguously > better, so let me see about making that case. > > What's at stake? > ---------------------- > > Option 1, as in Robert's PEP: > > The build configuration file contains a string like "flit > --dump-build-description" (or whatever), which names a command to run, > and then a protocol for running this command to get information on the > actual build system interface. Build operations are performed by > executing these commands as subprocesses. > > Option 2, my preference: > > The build configuration file contains a string like > "flit:build_system_api" (or whatever) which names a Python object > accessed like > > import flit > flit.build_system_api > > (This is the same syntax used for naming entry points.) Which would > then have attributes and methods describing the actual build system > interface. Build operations are performed by calling these methods. Option 3 expressed by Donald on IRC (and implied by his 'smaller step' email - hard code the CLI). A compromise position from 'setup.py the 'setup.py' step in pypa.json, but define the rest as a fixed contract, e.g. with subcommands like wheel, metadata etc. This drops the self describing tool blob and the caching machinery. I plan on using that approach in my next draft. Your point about bugs etc is interesting, but the use of stdin etc in a dedicated Python API also needs to be specified. -- Robert Collins Distinguished Technologist HP Converged Cloud
From qwcode at gmail.com Mon Nov 9 23:03:21 2015 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 9 Nov 2015 20:03:21 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: > Because even if we go with the entry-point-style Python > hooks, the build frontends like pip will still want to spawn a child > to do the actual calls -- this is important for isolating pip from the > build backend and the build backend from pip, it's important because > the build backend needs to execute in a different environment than pip > itself, etc. [...] > Concretely, the way I imagine this would work is that pip would set up > the build environment, and then it would run > > build-environment/bin/python path/to/pip-worker-script.py > fwiw, such a worker is what I was describing in an earlier thread with Robert last week https://mail.python.org/pipermail/distutils-sig/2015-October/027443.html although I wasn't arguing for it in that context, but rather just using it to be clear that a python api approach could still be used with build environment isolation -------------- next part -------------- An HTML attachment was scrubbed...
URL: From qwcode at gmail.com Mon Nov 9 23:19:57 2015 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 9 Nov 2015 20:19:57 -0800 Subject: [Distutils] PyPA Roadmap In-Reply-To: References: Message-ID: btw, I'm very aware that recent discussions may be changing the roadmap... : ) I'm holding fast for the smoke to clear... On Wed, Nov 4, 2015 at 4:42 PM, Marcus Smith wrote: > FYI, I went ahead and merged it. > > https://www.pypa.io/en/latest/roadmap/ > > Again, help appreciated from anyone to keep it accurate as things change > (and they surely will) > > --Marcus > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Mon Nov 9 23:44:04 2015 From: qwcode at gmail.com (Marcus Smith) Date: Mon, 9 Nov 2015 20:44:04 -0800 Subject: [Distutils] Proposed language for how build environments work in the new build system interface In-Reply-To: References: Message-ID: this reads ok to me... On Sun, Nov 8, 2015 at 9:20 PM, Nathaniel Smith wrote: > Hi all, > > Following the strategy of trying to break out the different > controversial parts of the new build system interface, here's some > proposed text defining the environment that a build frontend like pip > provides to a project-specific build backend. > > Robert's PEP currently disclaims all of this as out-of-scope, but I > think it's good to get something down, since in practice we'll have to > figure something out before any implementations can exist. And I think > the text below pretty much hits the right points. > > What might be controversial about this nonetheless is that I'm not > sure that pip *can* reasonably implement all the requirements as > written without adding a dependency on virtualenv (at least for older > pythons -- obviously this is no big deal for new pythons since venv is > now part of the stdlib). I think the requirements are correct, so... > Donald, what do you think? > > -n > > ---- > > The build environment > --------------------- > > One of the responsibilities of a build frontend is to set up the > environment in which the build backend will run. > > We do not require that any particular "virtual environment" mechanism > be used; a build frontend might use virtualenv, or venv, or no special > mechanism at all. But whatever mechanism is used MUST meet the > following criteria: > > - All requirements specified by the project's build-requirements must > be available for import from Python. > > - This must remain true even for new Python subprocesses spawned by > the build environment, e.g. code like:: > > import sys, subprocess > subprocess.check_call([sys.executable, ...]) > > must spawn a Python process which has access to all the project's > build-requirements. This is necessary e.g. for build backends that > want to run legacy ``setup.py`` scripts in a subprocess. > > [TBD: the exact wording here will probably need some tweaking > depending on whether we end up using an entrypoint-like mechanism for > specifying build backend hooks (in which case we can assume that hooks > automatically have access to sys.executable), or a subprocess-based > mechanism (in which case we'll need some other way to communicate the > path to the python interpreter to the build backend, e.g. a PYTHON= > envvar). But the basic requirement is pretty much the same either > way.] > > - All command-line scripts provided by the build-required packages > must be present in the build environment's PATH. 
For example, if a > project declares a build-requirement on `flit > `_, then the following must > work as a mechanism for running the flit command-line tool:: > > import subprocess > subprocess.check_call(["flit", ...]) > > A build backend MUST be prepared to function in any environment which > meets the above criteria. In particular, it MUST NOT assume that it > has access to any packages except those that are present in the > stdlib, or that are explicitly declared as build-requirements. > > > Recommendations for build frontends (non-normative) > ................................................... > > A build frontend MAY use any mechanism for setting up a build > environment that meets the above criteria. For example, simply > installing all build-requirements into the global environment would be > sufficient to build any compliant package -- but this would be > sub-optimal for a number of reasons. This section contains > non-normative advice to frontend implementors. > > A build frontend SHOULD, by default, create an isolated environment > for each build, containing only the standard library and any > explicitly requested build-dependencies. This has two benefits: > > - It allows for a single installation run to build multiple packages > that have contradictory build-requirements. E.g. if package1 > build-requires pbr==1.8.1, and package2 build-requires pbr==1.7.2, > then these cannot both be installed simultaneously into the global > environment -- which is a problem when the user requests ``pip install > package1 package2``. Or if the user already has pbr==1.8.1 installed > in their global environment, and a package build-requires pbr==1.7.2, > then downgrading the user's version would be rather rude. > > - It acts as a kind of public health measure to maximize the number of > packages that actually do declare accurate build-dependencies. We can > write all the strongly worded admonitions to package authors we want, > but if build frontends don't enforce isolation by default, then we'll > inevitably end up with lots of packages on PyPI that build fine on the > original author's machine and nowhere else, which is a headache that > no-one needs. > > However, there will also be situations where build-requirements are > problematic in various ways. For example, a package author might > accidentally leave off some crucial requirement despite our best > efforts; or, a package might declare a build-requirement on `foo >= > 1.0` which worked great when 1.0 was the latest version, but now 1.1 > is out and it has a showstopper bug; or, the user might decide to > build a package against numpy==1.7 -- overriding the package's > preferred numpy==1.8 -- to guarantee that the resulting build will be > compatible at the C ABI level with an older version of numpy (even if > this means the resulting build is unsupported upstream). Therefore, > build frontends SHOULD provide some mechanism for users to override > the above defaults. For example, a build frontend could have a > ``--build-with-system-site-packages`` option that causes the > ``--system-site-packages`` option to be passed to > virtualenv-or-equivalent when creating build environments, or a > ``--build-requirements-override=my-requirements.txt`` option that > overrides the project's normal build-requirements. > > The general principle here is that we want to enforce hygiene on > package *authors*, while still allowing *end-users* to open up the > hood and apply duct tape when necessary. > > > -- > Nathaniel J. 
Smith -- http://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Nov 10 01:14:08 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 10 Nov 2015 16:14:08 +1000 Subject: [Distutils] BDFL Delegates for distutils-sig PEPs In-Reply-To: References: Message-ID: On 30 October 2015 at 16:27, Marcus Smith wrote: >> >> ================================= >> Whenever a new PEP is put forward on distutils-sig, any PyPA core >> reviewer that believes they are suitably experienced to make the final >> decision on that PEP may offer to serve as the BDFL's delegate (or >> "PEP czar") for that PEP. If their self-nomination is accepted by the >> other PyPA core reviewer, the lead PyPI maintainer and the lead >> CPython representative on distutils-sig, then they will have the >> authority to approve (or reject) that PEP. >> ================================= > > > Nick: > just be clear, if nobody nominates themselves, then you still remain (by > default) the active delegate who's responsible for ruling on a Pypa-related > PEP? For anything PyPI related, it defaults to being Donald's call as lead maintainer, for other interoperability specs, it defaults to me. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Nov 10 01:58:13 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 10 Nov 2015 16:58:13 +1000 Subject: [Distutils] BDFL Delegates for distutils-sig PEPs In-Reply-To: References: Message-ID: On 10 November 2015 at 16:14, Nick Coghlan wrote: > On 30 October 2015 at 16:27, Marcus Smith wrote: >>> >>> ================================= >>> Whenever a new PEP is put forward on distutils-sig, any PyPA core >>> reviewer that believes they are suitably experienced to make the final >>> decision on that PEP may offer to serve as the BDFL's delegate (or >>> "PEP czar") for that PEP. If their self-nomination is accepted by the >>> other PyPA core reviewer, the lead PyPI maintainer and the lead >>> CPython representative on distutils-sig, then they will have the >>> authority to approve (or reject) that PEP. >>> ================================= >> >> >> Nick: >> just be clear, if nobody nominates themselves, then you still remain (by >> default) the active delegate who's responsible for ruling on a Pypa-related >> PEP? > > For anything PyPI related, it defaults to being Donald's call as lead > maintainer, for other interoperability specs, it defaults to me. Although I'll also note that whenever Donald wants to handle an interoperability PEP himself (as with the dependency specifier one), I'm highly unlikely to object :) Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Tue Nov 10 04:55:13 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 10 Nov 2015 09:55:13 +0000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 10 November 2015 at 04:03, Marcus Smith wrote: > although I wasn't arguing for it in that context, but rather just using it > to be clear that a python api approach could still be used with build > environment isolation Which is a good point - it's easy enough to write adapters from one convention to another (I'm inclined to think it's easier to adapt a Python API to a CLI interface than the other way around, but I may be wrong about that). Paul From donald at stufft.io Tue Nov 10 07:08:22 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 Nov 2015 07:08:22 -0500 Subject: [Distutils] BDFL Delegates for distutils-sig PEPs In-Reply-To: References: Message-ID: On November 10, 2015 at 1:58:38 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 10 November 2015 at 16:14, Nick Coghlan wrote: > > On 30 October 2015 at 16:27, Marcus Smith wrote: > >>> > >>> ================================= > >>> Whenever a new PEP is put forward on distutils-sig, any PyPA core > >>> reviewer that believes they are suitably experienced to make the final > >>> decision on that PEP may offer to serve as the BDFL's delegate (or > >>> "PEP czar") for that PEP. If their self-nomination is accepted by the > >>> other PyPA core reviewer, the lead PyPI maintainer and the lead > >>> CPython representative on distutils-sig, then they will have the > >>> authority to approve (or reject) that PEP. > >>> ================================= > >> > >> > >> Nick: > >> just be clear, if nobody nominates themselves, then you still remain (by > >> default) the active delegate who's responsible for ruling on a Pypa-related > >> PEP? > > > > For anything PyPI related, it defaults to being Donald's call as lead > > maintainer, for other interoperability specs, it defaults to me. > > Although I'll also note that whenever Donald wants to handle an > interoperability PEP himself (as with the dependency specifier one), > I'm highly unlikely to object :) > And of course, it?s likely that a lot of PyPI PEPs will end up having someone other than myself as the BDFL-Delegate since it?s likely a fair number of them will be written by me heh. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From wes.turner at gmail.com Tue Nov 10 15:54:23 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 10 Nov 2015 14:54:23 -0600 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: <563E0C8C.9070609@thomas-guettler.de> References: <563E0C8C.9070609@thomas-guettler.de> Message-ID: * It is [currently [#PEP426JSONLD)] necessary to run setup.py with each given destination platform, because parameters are expanded within the scope of setup.py. * Because of this, client side dependency resolution (with a given platform) is currently the only viable option for something like this ... 
* Build: Docker, Tox (Dox) to build package(s) * Each assembly of packages is / could be a package with a setup.py (and/or a requirements.txt) * And tests: * http://conda.pydata.org/docs/building/meta-yaml.html#test-section * Release: DevPi * http://doc.devpi.net/latest/ * conda env environment.yml YAML: http://conda.pydata.org/docs/using/envs.html * [x] conda packages * [x] pip packages * [ ] system packages (configuration management) And then, really, Is there a stored version of this instance of a named Docker image? #reproducibility #linkedreproducibility On Sat, Nov 7, 2015 at 8:37 AM, Thomas G?ttler wrote: > I wrote down a tought about Serverside Dependency Resolution and > Virtualenv Build Server > > > What do you think? > > Latest version: https://github.com/guettli/virtualenv-build-server > > virtualenv-build-server > ####################### > > Rough roadmap how a server to build virtualenvs for the python programming > language could be implemented. > > Highlevel goal > -------------- > > Make creating new virtual envionments for the Python programming language > easy and fast. > > Input: fuzzy requirements like this: django>=1.8, requests=>2.7 > > Output: virtualenv with packages installed. > > Two APIs > ------------ > > #. Resolve fuzzy requirements to a fixed set of packages with exactly > pinned versions. > #. Read fixed set of packages. Build virtualenv according to given > platform. > > > Steps > ----- > > Steps: > > #. Client sends list of fuzzy requirements to server: > > * I need: django>=1.8, requests=>2.7, ... > > > #. Server solves the fuzzy requirements to a fixed set of requirememnts: > django==1.8.2, requests==2.8.1, ... > > #. Client reads the fixed set of requirements. > > #. Optional: Client sends fixed set of requirements to the server. Telling > him the plattform > > * My platform: sys.version==2.7.6 and sys.platform=linux2 > > #. Server builds a virtualenv according to the fixed set of requirements. > > #. Server sends the environment to the client > > #. Client unpacks the data and has a usable virtualenv > > Benefits > -------- > > Speed: > > * There is only one round-trip from client to server. If the dependencies > get resolved on the client the client would need to download the available > version information. > * Caching: If the server gets input parameters (fuzzy requirements and > platform information) which he has seen before, he can return the cached > result from the previous request. > > Possible Implementations > ------------------------ > > APIs > ==== > Both APIs could be implementated by a webservice/Rest interface passing > json or yaml. > > Serverside > ========== > > > Implementation Strategie "PostgreSQL" > ..................................... > > Since the API is de-coupled from the internals the implementation could be > exchanged without the need for changes at the client side. > > I suggest using the PostgreSQL und resolving the dependcy graph using SQL > (WITH RECURSIVE). > > The package and version data gets stored in PostgreSQL via ORM (Django or > SQLAlchemy). > > The version numbers need to be normalized to ascii to allow fast > comparision. > > Related: https://www.python.org/dev/peps/pep-0440/ > > Implementation Strategie "Node.js" > .................................. > > I like python, but I am not married with it. Why not use a different tools > that is already working? Maybe the node package manager: > https://www.npmjs.com/ > > Questions > --------- > Are virtualenv relocatable? AFAIK they are not. 
> > General Thoughts > ---------------- > > * Ignore Updates. Focus on creating new virtualenvs. The server can do > caching and that's why I prefer creating virtualenvs which never get > updated. They get created and removed (immutable). > > > I won't implement it > -------------------- > > This idea is in the public domain. If you are young and brave or old and > wise: Go ahead, try to implement it. Please communicate early and often. > Ask on mailing-lists or me for feedback. Good luck :-) > > I love feedback > --------------- > > Please tell me want you like or dislike: > > * typos and spelling stuff (I am not a native speaker) > * alternative implementation strategies. > * existing software which does this (even if implemented in a different > programming language). > * ... > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Nov 10 17:44:43 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 Nov 2015 14:44:43 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Mon, Nov 9, 2015 at 6:11 PM, Robert Collins wrote: > On 10 November 2015 at 15:03, Nathaniel Smith wrote: >> On Sun, Nov 8, 2015 at 5:28 PM, Robert Collins >> wrote: >>> +The use of a command line API rather than a Python API is a little >>> +contentious. Fundamentally anything can be made to work, and Robert wants to >>> +pick something thats sufficiently lowest common denominator that >>> +implementation is straight forward on all sides. Picking a CLI for that makes >>> +sense because all build systems will need a CLI for end users to use anyway. >> >> I agree that this is not terribly important, and anything can be made >> to work. Having pondered it all for a few more weeks though I think >> that the "entrypoints-style" interface actually is unambiguously >> better, so let me see about making that case. >> >> What's at stake? >> ---------------------- >> >> Option 1, as in Robert's PEP: >> >> The build configuration file contains a string like "flit >> --dump-build-description" (or whatever), which names a command to run, >> and then a protocol for running this command to get information on the >> actual build system interface. Build operations are performed by >> executing these commands as subprocesses. >> >> Option 2, my preference: >> >> The build configuration file contains a string like >> "flit:build_system_api" (or whatever) which names a Python object >> accessed like >> >> import flit >> flit.build_system_api >> >> (This is the same syntax used for naming entry points.) Which would >> then have attributes and methods describing the actual build system >> interface. Build operations are performed by calling these methods. > > Option 3 expressed by Donald on IRC Where is this IRC channel, btw? :-) > (and implied by his 'smaller step' > email - hard code the CLI). > > A compromise position from 'setup.py the 'setup.py' step in pypa.json, but define the rest as a fixed > contract, e.g. with subcommands like wheel, metadata etc. This drops > the self describing tool blob and the caching machinery. So this would give up on having schema versioning for the API, I guess? > I plan on using that approach in my next draft. 
> > Your point about bugs etc is interesting, but the use of stdin etc in > a dedicated Python API also needs to be specified. Yes, but this specification is trivial: "Stdin is unspecified, and stdout/stderr can be used for printing status messages, errors, etc. just like you're used to from every other build system in the world." Similarly, we still have to specify how what the different operations are, what arguments they take, how they signal errors, etc. The point though is this specification will be shorter and simpler if we're specifying Python APIs than if we're specifying IPC APIs, because with a Python API we get to assume the existence of things like data structures and kwargs and exceptions and return values instead of having to build them from scratch. -n -- Nathaniel J. Smith -- http://vorpus.org From chris.barker at noaa.gov Tue Nov 10 18:14:19 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 10 Nov 2015 15:14:19 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> Message-ID: <-9100366705980829073@unknownmsgid> > In this situation, if pip's default for finding the python environment > were to search the path for 'python' instead of using sys.executable, One trick here -- PATH may not be the same everywhere. For instance, on OS-X, the environment GUI programs get is entirely independent of the shell. So, as a rule, PATH will not be "right". I don't know if anyone tries to run pip from an IDE, but it might be worth keeping in mind-- this issue comes up all the time with setting the Python used by an IDE. -CHB From ubernostrum at gmail.com Tue Nov 10 18:29:01 2015 From: ubernostrum at gmail.com (James Bennett) Date: Tue, 10 Nov 2015 15:29:01 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On Mon, Nov 9, 2015 at 3:41 PM, Chris Barker wrote: > pip is a special case -- for MOST python command line tools, the user does > not care which python it is running with -- if it works, it works. > > the failure case we are trying to address here is when "pip install" works > sjtu fine -- it finds and installs the package into the python pip is > associated with -- it just doesn't do what the user wants and expects! > I still feel like it's just kicking the problem down the line. Switching from 'pip install' to 'python -m pip install' doesn't actually solve the issue of how easy we've made it for people to create non-functional Python installations (non-functional in the sense that they will mysteriously "work but not really work"). All this switch does is hand the problem over to the *next* tool the user happens to invoke. So "pip is special" doesn't really work as a rebuttal, at least to me. The real problem to solve is management of multi-Python installations; I don't really know right now *how* to solve it, but slapping one band-aid onto the problem by changing pip won't accomplish that and so will break a lot of existing conventions and documentation for what seems like at best very minor gains. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Tue Nov 10 19:22:06 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 Nov 2015 16:22:06 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On Tue, Nov 10, 2015 at 3:29 PM, James Bennett wrote: > On Mon, Nov 9, 2015 at 3:41 PM, Chris Barker wrote: >> >> pip is a special case -- for MOST python command line tools, the user does >> not care which python it is running with -- if it works, it works. >> >> the failure case we are trying to address here is when "pip install" works >> sjtu fine -- it finds and installs the package into the python pip is >> associated with -- it just doesn't do what the user wants and expects! > > > I still feel like it's just kicking the problem down the line. Switching > from 'pip install' to 'python -m pip install' doesn't actually solve the > issue of how easy we've made it for people to create non-functional Python > installations (non-functional in the sense that they will mysteriously "work > but not really work"). All this switch does is hand the problem over to the > *next* tool the user happens to invoke. So "pip is special" doesn't really > work as a rebuttal, at least to me. There are lots of ways that Python installations can be broken. As another example, I helped someone today whose bug report turned out to boil down to: they used 'pip install' to upgrade a package, and got upgraded .py files, but somehow their old .pyc files were still around and in use, so they were still seeing bugs from the old version. The solution was to manually delete all the .pyc files [1]. No idea how they managed that, it has nothing to do with this thread, it's just an example of how infinitely weird installation/configuration problems get out at the long tail. Python's installed base is large enough that one-in-a-million cases happen every day... What's special about pip is that it totally violates DRY: for a functional python installation, each way of spawning a python interpreter needs to have a corresponding stub script to spawn pip, and the shebang line of that stub script has to point to the corresponding python interpreter. If any part of this complex assemblage gets out of sync or missing, then your installation is broken. (Very few tools have this kind of consistency requirement, because very few tools are as tightly tied to a single python environment as pip is -- who cares which virtualenv hg runs out of, it does the same thing either way. Also, if you do discover that some virtualenv is missing a script that it should have, then the way to fix that is... run pip. Kinda a problem if the missing script *is* pip.) I totally get why people dislike the ergonomics of 'python -m pip', but we can also acknowledge that it does solve a real technical problem: it strictly reduces the number of things that can go wrong, in a tool that's down at the base of the stack. -n [1] https://groups.google.com/d/msg/pystatsmodels/KcSzNqDxv-Q/CCim-Tz_BwAJ -- Nathaniel J. 
Smith -- http://vorpus.org From waynejwerner at gmail.com Wed Nov 11 00:08:25 2015 From: waynejwerner at gmail.com (Wayne Werner) Date: Wed, 11 Nov 2015 05:08:25 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: With all of the weirdness involved, it makes me wonder - could there be a better way? If we waved our hands and were able to magically make Python package management perfect, what would that look like? Would that kind of discussion even be valuable? On Tue, Nov 10, 2015, 6:22 PM Nathaniel Smith wrote: I totally get why people dislike the ergonomics of 'python -m pip', but we can also acknowledge that it does solve a real technical problem: it strictly reduces the number of things that can go wrong, in a tool that's down at the base of the stack. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Nov 11 00:53:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Nov 2015 15:53:27 +1000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 08:44, Nathaniel Smith wrote: > On Mon, Nov 9, 2015 at 6:11 PM, Robert Collins > wrote: >> On 10 November 2015 at 15:03, Nathaniel Smith wrote: > Similarly, we still have to specify how what the different operations > are, what arguments they take, how they signal errors, etc. The point > though is this specification will be shorter and simpler if we're > specifying Python APIs than if we're specifying IPC APIs, because with > a Python API we get to assume the existence of things like data > structures and kwargs and exceptions and return values instead of > having to build them from scratch. I think the potentially improved quality of error handling arising from a Python API based approach is well worth taking into account. When the backend interface is CLI based, you're limited to: 1. The return code 2. Typically unstructured stderr output This isn't like HTTP+JSON, where there's an existing rich suite of well-defined error codes to use, and an ability to readily include error details in the reply payload. The other thing is that if the core interface is Python API based, then if no hook is specified, there can be a default provider in pip that knows how to invoke the setup.py CLI (or perhaps even implements looking up the CLI to invoke from the source tree metadata). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Wed Nov 11 01:11:56 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 11 Nov 2015 19:11:56 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: I've pushed up a major edit resulting from various reviews + IRC discussion. tl;dr: - replaced the ABNF grammar with a parsley grammar, which is functionally the same but we can directly execute. sample code doing that is now also included in the PEP. - added some basic tests - dropped the new variables that depended on now deprecated platform.dist - made the fallback from PEP-440 to string comparisons better worded - reinstated the ~= and === operators with appropriate warnings about adoption within markers. They shouldn't have been removed from version comparisons in the first place, so that was a bugfix. 
- fixed a bug in the marker language that happened somewhere in PEP-426. In PEP 345 markers were strictly 'LHS OP RHS' followed by AND or OR and then another marker expression. The grammar in PEP-426 however allowed things like "(python_version) and (python_version=='2.7')" which I believe wasn't actually the intent - truthy values are not defined there. So the new grammar does not allow ("fred" and "bar") or other such things - and and or are exclusively between well defined expressions now. -Rob commit 874538b3e6a5dd6421517e519e63162c3fc30194 Author: Robert Collins Date: Fri Nov 6 16:25:57 2015 +1300 Define dependency specifications formally. diff --git a/dependency-specification.rst b/dependency-specification.rst new file mode 100644 index 0000000..8b38550 --- /dev/null +++ b/dependency-specification.rst @@ -0,0 +1,522 @@ +:PEP: XX +:Title: Dependency specification for Python Software Packages +:Version: $Revision$ +:Last-Modified: $Date$ +:Author: Robert Collins +:BDFL-Delegate: Donald Stufft +:Discussions-To: distutils-sig +:Status: Draft +:Type: Standards Track +:Content-Type: text/x-rst +:Created: 11-Nov-2015 +:Post-History: XX + + +Abstract +======== + +This PEP specifies the language used to describe dependencies for packages. +It draws a border at the edge of describing a single dependency - the +different sorts of dependencies and when they should be installed is a higher +level problem. The intent is provide a building block for higher layer +specifications. + +The job of a dependency is to enable tools like pip [#pip]_ to find the right +package to install. Sometimes this is very loose - just specifying a name, and +sometimes very specific - referring to a specific file to install. Sometimes +dependencies are only relevant in one platform, or only some versions are +acceptable, so the language permits describing all these cases. + +The language defined is a compact line based format which is already in +widespread use in pip requirements files, though we do not specify the command +line option handling that those files permit. There is one caveat - the +URL reference form, specified in PEP-440 [#pep440]_ is not actually +implemented in pip, but since PEP-440 is accepted, we use that format rather +than pip's current native format. + +Motivation +========== + +Any specification in the Python packaging ecosystem that needs to consume +lists of dependencies needs to build on an approved PEP for such, but +PEP-426 [#pep426]_ is mostly aspirational - and there are already existing +implementations of the dependency specification which we can instead adopt. +The existing implementations are battle proven and user friendly, so adopting +them is arguably much better than approving an aspirational, unconsumed, format. + +Specification +============= + +Examples +-------- + +All features of the language shown with a name based lookup:: + + requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10" + +A minimal URL based lookup:: + + pip @ https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 + +Concepts +-------- + +A dependency specification always specifies a distribution name. It may +include extras, which expand the dependencies of the named distribution to +enable optional features. The version installed can be controlled using +version limits, or giving the URL to a specific artifact to install. Finally +the dependency can be made conditional using environment markers. 
+ +Grammar +------- + +We first cover the grammar briefly and then drill into the semantics of each +section later. + +A distribution specification is written in ASCII text. We use a parsley +[#parsley]_ grammar to provide a precise grammar. It is expected that the +specification will be embedded into a larger system which offers framing such +as comments, multiple line support via continuations, or other such features. + +The full grammar including annotations to build a useful parse tree is +included at the end of the PEP. + +Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: +URI is defined in std-66 [#std66]_:: + + version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' + version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ + version_one = version_cmp version + version_many = version_one (wsp* ',' version_one)* + versionspec = ( '(' version_many ')' ) | version_many + urlspec = '@' wsp* + +Environment markers allow making a specification only take effect in some +environments:: + + marker_op = version_cmp | 'in' | 'not in' + python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | + '-' | '_' | '*') + dquote = '"' + squote = '\\'' + python_str = (squote (python_str_c | dquote)* squote | + dquote (python_str_c | squote)* dquote) + env_var = ('python_version' | 'python_full_version' | + 'os_name' | 'sys_platform' | 'platform_release' | + 'platform_system' | 'platform_version' | + 'platform_machine' | 'python_implementation' | + 'implementation_name' | 'implementation_version' | + 'extra' # ONLY when defined by a containing layer + ) + marker_var = env_var | python_str + marker_expr = ('(' wsp* marker wsp* ')' + | (marker_var wsp* marker_op wsp* marker_var)) + marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* + quoted_marker = ';' wsp* marker + +Optional components of a distribution may be specified using the extras +field:: + + identifier = ( digit | letter | '-' | '_' | '.')+ + name = identifier + extras_list = identifier (wsp* ',' wsp* identifier)* + extras = '[' wsp* extras_list? ']' + +Giving us a rule for name based requirements:: + + name_req = name wsp* extras? wsp* versionspec? wsp* quoted_marker? + +And a rule for direct reference specifications:: + + url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? + +Leading to the unified rule that can specify a dependency.:: + + specification = wsp* ( url_req | name_req ) wsp* + +Whitespace +---------- + +Non line-breaking whitespace is mostly optional with no semantic meaning. The +sole exception is detecting the end of a URL requirement. + +Names +----- + +Python distribution names are currently defined in PEP-345 [#pep345]_. Names +act as the primary identifier for distributions. They are present in all +dependency specifications, and are sufficient to be a specification on their +own. + +Extras +------ + +An extra is an optional part of a distribution. Distributions can specify as +many extras as they wish, and each extra results in the declaration of +additional dependencies of the distribution **when** the extra is used in a +dependency specification. For instance:: + + requests[security] + +Extras union in the dependencies they define with the dependencies of the +distribution they are attached to. The example above would result in requests +being installed, and requests own dependencies, and also any dependencies that +are listed in the "security" extra of requests. + +If multiple extras are listed, all the dependencies are unioned together. 
+ +Versions +-------- + +See PEP-440 [#pep440]_ for more detail on both version numbers and version +comparisons. Version specifications limit the versions of a distribution that +can be used. They only apply to distributions looked up by name, rather than +via a URL. Version comparisons are also used in the markers feature. The +optional brackets around a version are present for compatibility with PEP-345 +[#pep345]_ but should not be generated, only accepted. + +Environment Markers +------------------- + +Environment markers allow a dependency specification to provide a rule that +describes when the dependency should be used. For instance, consider a package +that needs argparse. In Python 2.7 argparse is always present. On older Python +versions it has to be installed as a dependency. This can be expressed like so:: + + argparse;python_version<"2.7" + +A marker expression evaluates to either True or False. When it evaluates to +False, the dependency specification should be ignored. + +The marker language is a subset of Python itself, chosen for the ability to +safely evaluate it without running arbitrary code that could become a security +vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This PEP +fixes some issues that were observed in the design described in PEP-426 [#pep426]_. + +Comparisons in marker expressions are typed by the comparison operator. The +``marker_op`` operators that are not in ``version_cmp`` perform the same as they +do for strings in Python. The ``version_cmp`` operators use the PEP-440 +[#pep440]_ version comparison rules when those are defined (that is when both +sides have a valid version specifier). If there is no defined PEP-440 +behaviour and the operator exists in Python, then the operator falls back to +the Python behaviour. Otherwise the result of the operator is False. + +User supplied constants are always encoded as strings with either ``'`` or +``"`` quote marks. Note that backslash escapes are not defined, but existing +implementations do support them. They are not included in this +specification because none of the variables to which constants can be compared +contain quote-requiring values. + +The variables in the marker grammar such as "os_name" resolve to values looked +up in the Python runtime. With the exception of "extra" all values are defined +on all Python versions today - it is an error in the implementation of markers +if a value is not defined. + +Unknown variables must raise an error rather than resulting in a comparison +that evaluates to True or False. + +Variables whose value cannot be calculated on a given Python implementation +should evaluate to ``0`` for versions, and an empty string for all other +variables. + +The "extra" variable is special. It is used by wheels to signal which +specifications apply to a given extra in the wheel ``METADATA`` file, but +since the ``METADATA`` file is based on a draft version of PEP-426, there is +no current specification for this. Regardless, outside of a context where this +special handling is taking place, the "extra" variable should result in a +SyntaxError like all other unknown variables. + +..
list-table:: + :header-rows: 1 + + * - Marker + - Python equivalent + - Sample values + * - ``os_name`` + - ``os.name`` + - ``posix``, ``java`` + * - ``sys_platform`` + - ``sys.platform`` + - ``linux``, ``darwin``, ``java1.8.0_51`` + * - ``platform_machine`` + - ``platform.machine()`` + - ``x86_64`` + * - ``python_implementation`` + - ``platform.python_implementation()`` + - ``CPython``, ``Jython`` + * - ``platform_release`` + - ``platform.release()`` + - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` + * - ``platform_system`` + - ``platform.system()`` + - ``Linux``, ``Windows``, ``Java`` + * - ``platform_version`` + - ``platform.version()`` + - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` + ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` + ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; root:xnu-2782.40.9~2/RELEASE_X86_64`` + * - ``python_version`` + - ``platform.python_version()[:3]`` + - ``3.4``, ``2.7`` + * - ``python_full_version`` + - ``platform.python_version()`` + - ``3.4.0``, ``3.5.0b1`` + * - ``implementation_name`` + - ``sys.implementation.name`` + - ``cpython`` + * - ``implementation_version`` + - see definition below + - ``3.4.0``, ``3.5.0b1`` + * - ``extra`` + - A SyntaxError except when defined by the context interpreting the + specification. + - ``test`` + +The ``implementation_version`` marker variable is derived from +``sys.implementation.version``:: + + def format_full_version(info): + version = '{0.major}.{0.minor}.{0.micro}'.format(info) + kind = info.releaselevel + if kind != 'final': + version += kind[0] + str(info.serial) + return version + + implementation_version = format_full_version(sys.implementation.version) + +Backwards Compatibility +======================= + +Most of this PEP is already widely deployed and thus offers no compatibiltiy +concerns. + +There are however a few points where the PEP differs from the deployed base. + +Firstly, PEP-440 direct references haven't actually been deployed in the wild, +but they were designed to be compatibly added, and there are no known +obstacles to adding them to pip or other tools that consume the existing +dependency metadata in distributions - particularly since they won't be +permitted to be present in PyPI uploaded distributions anyway. + +Secondly, PEP-426 markers which have had some reasonable deployment, +particularly in wheels and pip, will handle version comparisons with +``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is less +than "2.7.9". This backward incompatibility is deliberate. We are also +defining new operators - "~=" and "===", and new variables - +``platform_release``, ``platform_system``, ``implementation_name``, and +``implementation_version`` which are not present in older marker +implementations. The variables will error on those implementations. Users of +both features will need to make a judgement as to when support has become +sufficiently widespread in the ecosystem that using them will not cause +compatibility issues. + +Thirdly, PEP-345 required brackets around version specifiers. In order to +accept PEP-345 dependency specifications, brackets are accepted, but they +should not be generated. + +Rationale +========= + +In order to move forward with any new PEPs that depend on environment markers, +we needed a specification that included them in their modern form. This PEP +brings together all the currently unspecified components into a specified +form. 
+ +The requirement specifier EBNF is lifted from setuptools pkg_resources +documentation, since we wish to avoid depending on a defacto, vs PEP +specified, standard. + +Complete Grammar +================ + +The complete parsley grammar:: + + wsp = ' ' | '\t' + version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '==='> + version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' )+> + version_one = version_cmp:op version:v -> (op, v) + version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 + versionspec = ('(' version_many:v ')' ->v) | version_many + urlspec = '@' wsp* + marker_op = version_cmp | 'in' | 'not in' + python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | + '-' | '_' | '*') + dquote = '"' + squote = '\\'' + python_str = (squote <(python_str_c | dquote)*>:s squote | + dquote <(python_str_c | squote)*>:s dquote) -> s + env_var = ('python_version' | 'python_full_version' | + 'os_name' | 'sys_platform' | 'platform_release' | + 'platform_system' | 'platform_version' | + 'platform_machine' | 'python_implementation' | + 'implementation_name' | 'implementation_version' | + 'extra' # ONLY when defined by a containing layer + ):varname -> lookup(varname) + marker_var = env_var | python_str + marker_expr = (("(" wsp* marker:m wsp* ")" -> m) + | ((marker_var:l wsp* marker_op:o wsp* marker_var:r)) + -> (l, o, r)) + marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* + marker_expr:r -> (o, r))*:ms -> (m, ms)) + quoted_marker = ';' wsp* marker + identifier = <( digit | letter | '-' | '_' | '.')+> + name = identifier + extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + ids + extras = '[' wsp* extras_list?:e ']' -> e + name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* quoted_marker?:m + -> (n, e or [], v or [], m)) + url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ quoted_marker?:m + -> (n, e or [], v, m)) + specification = wsp* ( url_req | name_req ):s wsp* -> s + # The result is a tuple - name, list-of-extras, + # list-of-version-constraints-or-a-url, marker-ast or None + + + URI_reference = + URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? + hier_part = ('//' authority path_abempty) | path_absolute | path_rootless | path_empty + absolute_URI = scheme ':' hier_part ( '?' query )? + relative_ref = relative_part ( '?' query )? ( '#' fragment )? + relative_part = '//' authority path_abempty | path_absolute | path_noscheme | path_empty + scheme = letter ( letter | digit | '+' | '-' | '.')* + authority = ( userinfo '@' )? host ( ':' port )? + userinfo = ( unreserved | pct_encoded | sub_delims | ':')* + host = IP_literal | IPv4address | reg_name + port = digit* + IP_literal = '[' ( IPv6address | IPvFuture) ']' + IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ + IPv6address = ( + ( h16 ':'){6} ls32 + | '::' ( h16 ':'){5} ls32 + | ( h16 )? '::' ( h16 ':'){4} ls32 + | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 + | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 + | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 + | ( ( h16 ':'){0,4} h16 )? '::' ls32 + | ( ( h16 ':'){0,5} h16 )? '::' h16 + | ( ( h16 ':'){0,6} h16 )? '::' ) + h16 = hexdig{1,4} + ls32 = ( h16 ':' h16) | IPv4address + IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' 
Dec_octet + nz = ~'0' digit + dec_octet = ( + digit # 0-9 + | nz digit # 10-99 + | '1' digit{2} # 100-199 + | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 + | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# %250-255 + reg_name = ( unreserved | pct_encoded | sub_delims)* + path = ( + path_abempty # begins with '/' or is empty + | path_absolute # begins with '/' but not '//' + | path_noscheme # begins with a non-colon segment + | path_rootless # begins with a segment + | path_empty ) # zero characters + path_abempty = ( '/' segment)* + path_absolute = '/' ( segment_nz ( '/' segment)* )? + path_noscheme = segment_nz_nc ( '/' segment)* + path_rootless = segment_nz ( '/' segment)* + path_empty = pchar{0} + segment = pchar* + segment_nz = pchar+ + segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ + # non-zero-length segment without any colon ':' + pchar = unreserved | pct_encoded | sub_delims | ':' | '@' + query = ( pchar | '/' | '?')* + fragment = ( pchar | '/' | '?')* + pct_encoded = '%' hexdig + unreserved = letter | digit | '-' | '.' | '_' | '~' + reserved = gen_delims | sub_delims + gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' + sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | ',' | ';' | '=' + hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | 'D' | 'e' | 'E' | 'f' | 'F' + +A test program - if the grammar is in a string ``grammar``:: + + import os + import sys + import platform + + from parsley import makeGrammar + + grammar = """ + wsp ... + """ + tests = [ + "name [fred,bar] @ http://foo.com ; python_version=='2.7'", + "name", + "name>=3", + "name>=3,<2", + "name[quux, strange];python_version<'2.7' and platform_version=='2'", + "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", + "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", + ] + + def format_full_version(info): + version = '{0.major}.{0.minor}.{0.micro}'.format(info) + kind = info.releaselevel + if kind != 'final': + version += kind[0] + str(info.serial) + return version + + if hasattr(sys, 'implementation'): + implementation_version = format_full_version(sys.implementation.version) + implementation_name = sys.implementation.name + else: + implementation_version = '' + implementation_name = '' + bindings = { + 'implementation_name': implementation_name, + 'implementation_version': implementation_version, + 'os_name': os.name, + 'platform_machine': platform.machine(), + 'platform_release': platform.release(), + 'platform_system': platform.system(), + 'platform_version': platform.version(), + 'python_full_version': platform.python_version(), + 'python_implementation': platform.python_implementation(), + 'python_version': platform.python_version()[:3], + 'sys_platform': sys.platform, + } + + compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) + for test in tests: + parsed = compiled(test).specification() + print(parsed) + +References +========== + +.. [#pip] pip, the recommended installer for Python packages + (http://pip.readthedocs.org/en/stable/) + +.. [#pep345] PEP-345, Python distribution metadata version 1.2. + (https://www.python.org/dev/peps/pep-0345/) + +.. [#pep426] PEP-426, Python distribution metadata. + (https://www.python.org/dev/peps/pep-0426/) + +.. [#pep440] PEP-440, Python distribution metadata. + (https://www.python.org/dev/peps/pep-0440/) + +.. [#std66] The URL specification. + (https://tools.ietf.org/html/rfc3986) + +.. [#parsley] The parsley PEG library. 
+ (https://pypi.python.org/pypi/parsley/) + +Copyright +========= + +This document has been placed in the public domain. + + + +.. + Local Variables: + mode: indented-text + indent-tabs-mode: nil + sentence-end-double-space: t + fill-column: 70 + coding: utf-8 + End: On 10 November 2015 at 07:56, Robert Collins wrote: > Pushed up this edit.. > > On 9 November 2015 at 18:45, Robert Collins wrote: >> On 9 November 2015 at 17:55, Nathaniel Smith wrote: >>> The new version is looking pretty good to me! >>> >>> My main concern still is that specification of whitespace handling is >>> still kinda confusing/underspecified. The text says "all whitespace is >>> optional", but the grammar says that it's mandatory in some cases >>> (e.g. url-marker, still not sure why -- I'd understand if it were >>> mandatory before the ";" since ";" is a valid character in URLs, but >>> it says it's mandatory afterward?), and the grammar is still wrong >>> about whitespace in some cases (e.g. it says ">= 1.0" is an illegal >>> versionspec). >>> >>> I guess the two options are either to go through carefully sprinkling >>> *WSP's about at all the appropriate places, or else to tackle things >>> more systematically by adding a lexer layer... >> >> I'm happy either way. You are right though that there is one spot >> where it is not optional. Thats how "url; marker stuff here" is >> defined in pip today. We could in principle define a new rule here, >> such as putting markers before the url. But as markers aren't self >> delimiting (blame PEP-345) that is a bit fugly. We could say 'url >> 1*WSP ";" *WSP marker', which would be a bit more consistent, but >> different to pip's current handling. Of course, the @ syntax is >> already different, so it seems reasonable to do so to me. > > diff --git a/dependency-specification.rst b/dependency-specification.rst > index 9e95417..6afe288 100644 > --- a/dependency-specification.rst > +++ b/dependency-specification.rst > @@ -84,8 +84,9 @@ URI is defined in std-66 [#std66]_:: > > version-cmp = "<" / "<=" / "!=" / "==" / ">=" / ">" > version = 1*( DIGIT / ALPHA / "-" / "_" / "." / "*" ) > - version-inner = version-cmp version *(',' version-cmp version) > - versionspec = ("(" version-inner ")") / version-inner > + version-one = *WSP version-cmp *WSP version > + version-many = version-one *(*WSP "," version-one) > + versionspec = ("(" version-many ")") / version-many > urlspec = "@" URI > > Environment markers allow making a specification only take effect in some > @@ -107,7 +108,7 @@ environments:: > =/ (marker-var [*WSP marker-op *WSP marker-var]) > marker = *WSP marker-expr *( *WSP ("and" / "or") *WSP marker-expr) > name-marker = ";" *WSP marker > - url-marker = ";" 1*WSP marker > + url-marker = WSP ";" *WSP marker > > Optional components of a distribution may be specified using the extras > field:: > @@ -131,7 +132,8 @@ Leading to the unified rule that can specify a dependency:: > Whitespace > ---------- > > -Non line-breaking whitespace is optional and has no semantic meaning. > +Non line-breaking whitespace is mostly optional with no semantic meaning. The > +sole exception is detecting the end of a URL requirement. 
> > Names > ----- > > > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Nov 11 01:19:11 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 11 Nov 2015 19:19:11 +1300 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 18:53, Nick Coghlan wrote: > On 11 November 2015 at 08:44, Nathaniel Smith wrote: >> On Mon, Nov 9, 2015 at 6:11 PM, Robert Collins >> wrote: >>> On 10 November 2015 at 15:03, Nathaniel Smith wrote: >> Similarly, we still have to specify how what the different operations >> are, what arguments they take, how they signal errors, etc. The point >> though is this specification will be shorter and simpler if we're >> specifying Python APIs than if we're specifying IPC APIs, because with >> a Python API we get to assume the existence of things like data >> structures and kwargs and exceptions and return values instead of >> having to build them from scratch. > > I think the potentially improved quality of error handling arising > from a Python API based approach is well worth taking into account. > When the backend interface is CLI based, you're limited to: > > 1. The return code > 2. Typically unstructured stderr output > > This isn't like HTTP+JSON, where there's an existing rich suite of > well-defined error codes to use, and an ability to readily include > error details in the reply payload. > > The other thing is that if the core interface is Python API based, > then if no hook is specified, there can be a default provider in pip > that knows how to invoke the setup.py CLI (or perhaps even implements > looking up the CLI to invoke from the source tree metadata). Its richer, which is both a positive and a negative. I appreciate the arguments, but I'm not convinced at this point. pip is going to be invoking a CLI *no matter what*. Thats a hard requirement unless Python's very fundamental import behaviour changes. Slapping a Python API on things is lipstick on a pig here IMO: we're going to have to downgrade any richer interface; and by specifying the actual LCD as the interface it is then amenable to direct exploration by users without them having to reverse engineer an undocumented thunk within pip. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From guettliml at thomas-guettler.de Wed Nov 11 01:30:32 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 11 Nov 2015 07:30:32 +0100 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: References: <563E0C8C.9070609@thomas-guettler.de> Message-ID: <5642E088.8070009@thomas-guettler.de> Am 10.11.2015 um 21:54 schrieb Wes Turner: > * It is [currently [#PEP426JSONLD)] necessary to run setup.py with each given destination platform, because parameters are expanded within the scope of setup.py. OK > * Because of this, client side dependency resolution (with a given platform) is currently the only viable option for something like this Are you sure that this conclusion is the only solution? A server could create a new container/VM to run setup.py. Then the install_requires can be cached (for this plattform). Maybe I am missing something, but still think server side dependency resolution is possible. Please tell me what's wrong with my conclusion. > > ... 
> > * Build: Docker, Tox (Dox) to build package(s) > * Each assembly of packages is / could be a package with a setup.py (and/or a requirements.txt) > * And tests: > * http://conda.pydata.org/docs/building/meta-yaml.html#test-section > * Release: DevPi > * http://doc.devpi.net/latest/ > * conda env environment.yml YAML: http://conda.pydata.org/docs/using/envs.html > * [x] conda packages > * [x] pip packages > * [ ] system packages (configuration management) > > And then, really, Is there a stored version of this instance of a named Docker image? > #reproducibility #linkedreproducibility I don't fully understand the above. I guess you had the container/VM solution in mind, too. There is a new topic in your mail which I will reply to in a new thread. Regards, Thomas G?ttler -- http://www.thomas-guettler.de/ From ncoghlan at gmail.com Wed Nov 11 01:35:04 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Nov 2015 16:35:04 +1000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: On 10 November 2015 at 09:24, Chris Barker wrote: > As it happens, I am in the middle of a intro class that's using python3.4 or > 3.5 right now -- and I am telling everyone to do: > > python3 -m pip install > > Yes, plain old "pip install" is nicer, but only a little bit, and the > biggest reason we really, really want that to still work is that there are a > LOT of instructions all over the web telling people to do that -- so really > too bad if it doesn't work! But the reality is that it often DOESN'T work > now! and when it doesn't newbies really have no idea why in the heck not! > > personally, I think the best approach is to deprecate plain old "pip > install" -- if it's not there as an option, I expect no one will find it odd > that to install something for python, you might use python to do that! Long thread, so I'm picking a semi-random spot to chime in, rather than replying to every part individually. I think there are a few different aspects worth considering here: 1. The ergonomics of "pip install X" are really nice, it's by far the most common instruction given in project documentation, and we're still in the process of updating documentation to recommend it over "easy_install X" (or sometimes even over running "./setup.py install" directly). We *want* it to be the right answer for installing Python packages, especially in single-installation scenarios. 2. There's one particular reason I *didn't* specify it as the default recommendation in https://docs.python.org/3/installing/: until Python 3.5, the "pip" executable wasn't placed on the PATH on Windows by default, even if you'd enabled PATH modification when installing Python (the problem is that the installers for Python 3.3 and 3.4 only add the directory where CPython itself resides to PATH, but not the Scripts directory where "pip.exe" ends up, while even earlier versions didn't offer the option to modify PATH automatically at all) So that meant "python -m pip" gave me the broadest coverage - it worked for everything except system level Python 3 installation on *nix systems. 
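The same "broadest coverage" reasoning applies when a script or other tool needs to invoke pip programmatically rather than from a shell prompt: targeting the interpreter that is already running sidesteps PATH entirely. A minimal sketch (nothing here is mandated anywhere; it simply relies on the documented "-m pip" entry point):

    # Sketch: install into the environment of the currently running
    # interpreter, regardless of whether "pip", "pip3" or a Scripts
    # directory is on PATH.
    import subprocess
    import sys

    def pip_install(*packages):
        # sys.executable is the interpreter running this script, so the
        # install lands in the environment that will import the packages.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install"] + list(packages))

    pip_install("requests")  # example usage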
With the path issue being fixed in Python 3.5, we're now in the situation where "pip install X" based instructions will work in all of the following "single installation" scenarios: - any activated virtual environment (including conda ones) - Windows Python 3.5+ installations (with PATH modification selected at install time) - *nix Python 2 installations (including Mac OS X and Linux) "pip install X" still doesn't work for system Python 3 installations on *nix systems - you need "pip3 install X" or "python3 -m pip install X" there Windows Python 2 installations require manual PATH modifications regardless, but it's more common for people to know how to make "python -m pip install X" work, than it is for them to remember to also add the "Scripts" directory needed to make "pip install X" work. 3. I've started thinking of pip as a "plugin manager" for Python. This is mostly a consequence of work, since many of the challenges we face in dealing with language specific package management systems are akin to the ones we encounter with the plugin management systems in web browsers and IDEs like Eclipse. (It also helps to more clearly differentiate it from conda, which has the much broader role of being a data analysis environment manager) The problem with pip in its current form for that role is that it's installed as a standalone tool, but isn't natively multi-version aware. Accordingly, the ideas I like the most are the ones that suggest taking it down the path of the "py" launcher - make it natively multi-version aware, and have it choose a well-defined default target version if multiple versions are available. Longer term, it may even make sense to take the "python" command on *nix systems in that direction, or, at the very least, make "py" a cross-platform invocation technique: https://mail.python.org/pipermail/linux-sig/2015-October/000000.html Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Nov 11 01:41:36 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Nov 2015 16:41:36 +1000 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: On 11 November 2015 at 16:11, Robert Collins wrote: > - fixed a bug in the marker language that happened somewhere in > PEP-426. In PEP 345 markers were strictly 'LHS OP RHS' followed by AND > or OR and then another marker expression. The grammar in PEP-426 > however allowed things like "(python_version) and > (python_version=='2.7')" which I believe wasn't actually the intent - > truthy values are not defined there. So the new grammar does not allow > ("fred" and "bar") or other such things - and and or are exclusively > between well defined expressions now. Right, Vinay pointed out that the use of parentheses for grouping wasn't well defined in 345, but instead grew implicitly out of their evaluation as a Python subset. I attempted to fix that in 426, but didn't intend to allow non-comparisons as operands for AND and OR. Cheers, Nick. 
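To make that concrete, a few hedged examples; the accepted forms echo the examples and test cases in the draft above, while the rejected forms are the bare-operand constructs under discussion:

    # Illustrative only: how the revised grammar treats a few specifications.
    accepted = [
        'argparse;python_version<"2.7"',
        "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')",
        'requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10"',
    ]
    rejected = [
        "name; (python_version) and (python_version=='2.7')",  # bare operand
        'name; ("fred" and "bar")',  # truthy strings are not comparisons
    ]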
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Nov 11 01:49:39 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 Nov 2015 16:49:39 +1000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 16:19, Robert Collins wrote: > On 11 November 2015 at 18:53, Nick Coghlan wrote: >> On 11 November 2015 at 08:44, Nathaniel Smith wrote: >>> On Mon, Nov 9, 2015 at 6:11 PM, Robert Collins >>> wrote: >>>> On 10 November 2015 at 15:03, Nathaniel Smith wrote: >>> Similarly, we still have to specify how what the different operations >>> are, what arguments they take, how they signal errors, etc. The point >>> though is this specification will be shorter and simpler if we're >>> specifying Python APIs than if we're specifying IPC APIs, because with >>> a Python API we get to assume the existence of things like data >>> structures and kwargs and exceptions and return values instead of >>> having to build them from scratch. >> >> I think the potentially improved quality of error handling arising >> from a Python API based approach is well worth taking into account. >> When the backend interface is CLI based, you're limited to: >> >> 1. The return code >> 2. Typically unstructured stderr output >> >> This isn't like HTTP+JSON, where there's an existing rich suite of >> well-defined error codes to use, and an ability to readily include >> error details in the reply payload. >> >> The other thing is that if the core interface is Python API based, >> then if no hook is specified, there can be a default provider in pip >> that knows how to invoke the setup.py CLI (or perhaps even implements >> looking up the CLI to invoke from the source tree metadata). > > Its richer, which is both a positive and a negative. I appreciate the > arguments, but I'm not convinced at this point. > > pip is going to be invoking a CLI *no matter what*. Thats a hard > requirement unless Python's very fundamental import behaviour changes. > Slapping a Python API on things is lipstick on a pig here IMO: we're > going to have to downgrade any richer interface; and by specifying the > actual LCD as the interface it is then amenable to direct exploration > by users without them having to reverse engineer an undocumented thunk > within pip. I'm not opposed to documenting how pip talks to its worker CLI - I just share Nathan's concerns about locking that down in a PEP vs keeping *that* CLI within pip's boundary of responsibilities, and having a documented Python interface used for invoking build systems. However, I've now realised that we're not constrained even if we start with the CLI interface, as there's still a migration path to a Python API based model: Now: documented CLI for invoking build systems Future: documented Python API for invoking build systems, default fallback invokes the documented CLI So the CLI documented in the PEP isn't *necessarily* going to be the one used by pip to communicate into the build environment - it may be invoked locally within the build environment. Cheers, Nick. 
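A hedged sketch of what such a fallback provider might look like (the class and method names are invented for illustration; the only real commands are the existing setup.py ones):

    # Sketch only: a default backend that satisfies a hypothetical Python
    # build API by shelling out to the legacy setup.py CLI inside the
    # build environment.
    import subprocess
    import sys

    class SetupPyFallback:
        def __init__(self, source_dir):
            self.source_dir = source_dir

        def _setup_py(self, *args):
            # Use the invoking interpreter so the build sees the same environment.
            subprocess.check_call(
                [sys.executable, "setup.py"] + list(args), cwd=self.source_dir)

        def build_wheel(self, dist_dir):
            self._setup_py("bdist_wheel", "--dist-dir", dist_dir)

        def get_metadata(self, egg_base):
            self._setup_py("egg_info", "--egg-base", egg_base)

Under that model pip still ends up running a subprocess, but the subprocess boundary becomes an implementation detail of the fallback rather than the specified interface.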
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Wed Nov 11 02:27:57 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 11 Nov 2015 20:27:57 +1300 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 19:49, Nick Coghlan wrote: > On 11 November 2015 at 16:19, Robert Collins wrote: ...>> pip is going to be invoking a CLI *no matter what*. Thats a hard >> requirement unless Python's very fundamental import behaviour changes. >> Slapping a Python API on things is lipstick on a pig here IMO: we're >> going to have to downgrade any richer interface; and by specifying the >> actual LCD as the interface it is then amenable to direct exploration >> by users without them having to reverse engineer an undocumented thunk >> within pip. > > I'm not opposed to documenting how pip talks to its worker CLI - I > just share Nathan's concerns about locking that down in a PEP vs > keeping *that* CLI within pip's boundary of responsibilities, and > having a documented Python interface used for invoking build systems. I'm also very wary of something that would be an attractive nuisance. I've seen nothing suggesting that a Python API would be anything but: - it won't be usable [it requires the glue to set up an isolated context, which is buried in pip] in the general case - no matter what we do, pip can't benefit from it beyond the subprocess interface pip needs, because pip *cannot* import and use the build interface tl;dr - I think making the case that the layer we define should be a Python protocol rather than a subprocess protocol requires some really strong evidence. We're *not* dealing with the same moving parts that typical Python stuff requires. > However, I've now realised that we're not constrained even if we start > with the CLI interface, as there's still a migration path to a Python > API based model: > > Now: documented CLI for invoking build systems > Future: documented Python API for invoking build systems, default > fallback invokes the documented CLI Or we just issue an updated bootstrap schema, and there's no fallback or anything needed. > So the CLI documented in the PEP isn't *necessarily* going to be the > one used by pip to communicate into the build environment - it may be > invoked locally within the build environment. No, it totally will be. Exactly as setup.py is today. Thats deliberate: The *new* thing we're setting out to enable is abstract build systems, not reengineering pip. The future - sure, someone can write a new thing, and the necessary capability we're building in to allow future changes will allow a new PEP to slot in easily and take on that [non trivial and substantial chunk of work]. (For instance, how do you do compiler and build system specific options when you have a CLI to talk to pip with)? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Wed Nov 11 04:04:46 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 01:04:46 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Tue, Nov 10, 2015 at 11:27 PM, Robert Collins wrote: > On 11 November 2015 at 19:49, Nick Coghlan wrote: >> On 11 November 2015 at 16:19, Robert Collins wrote: > ...>> pip is going to be invoking a CLI *no matter what*. 
Thats a hard >>> requirement unless Python's very fundamental import behaviour changes. >>> Slapping a Python API on things is lipstick on a pig here IMO: we're >>> going to have to downgrade any richer interface; and by specifying the >>> actual LCD as the interface it is then amenable to direct exploration >>> by users without them having to reverse engineer an undocumented thunk >>> within pip. >> >> I'm not opposed to documenting how pip talks to its worker CLI - I >> just share Nathan's concerns about locking that down in a PEP vs >> keeping *that* CLI within pip's boundary of responsibilities, and >> having a documented Python interface used for invoking build systems. > > I'm also very wary of something that would be an attractive nuisance. > I've seen nothing suggesting that a Python API would be anything but: > - it won't be usable [it requires the glue to set up an isolated > context, which is buried in pip] in the general case This is exactly as true of a command line API -- in the general case it also requires the glue to set up an isolated context. People who go ahead and run 'flit' from their global environment instead of in the isolated build environment will experience exactly the same problems as people who go ahead and import 'flit.build_system_api' in their global environment, so I don't see how one is any more of an attractive nuisance than the other? AFAICT the main difference is that "setting up a specified Python context and then importing something and exploring its API" is literally what I do all day as a Python developer. Either way you have to set stuff up, and then once you do, in the Python API case you get stuff like tab completion, ipython introspection (? and ??), etc. for free. > - no matter what we do, pip can't benefit from it beyond the > subprocess interface pip needs, because pip *cannot* import and use > the build interface Not sure what you mean by "benefit" here. At best this is an argument that the two options have similar capabilities, in which case I would argue that we should choose the one that leads to simpler and thus more probably bug-free specification language. But even this isn't really true -- the difference between them is that either way you have a subprocess API, but with a Python API, the subprocess interface that pip uses has the option of being improved incrementally over time -- including, potentially, to take further advantage of the underlying richness of the Python semantics. Sure, maybe the first release would just take all exceptions and map them into some text printed to stderr and a non-zero return code, and that's all that pip would get. But if someone had an idea for how pip could do better than this by, I dunno, encoding some structured metadata about the particular exception that occurred and passing this back up to pip to do something intelligent with it, they absolutely could write the code and submit a PR to pip, without having to write a new PEP. > tl;dr - I think making the case that the layer we define should be a > Python protocol rather than a subprocess protocol requires some really > strong evidence. We're *not* dealing with the same moving parts that > typical Python stuff requires. I'm very confused and honestly do not understand what you find attractive about the subprocess protocol approach. Even your arguments above aren't really even trying to be arguments that it's good, just arguments that the Python API approach isn't much better. 
I'm sure there is some reason you like it, and you might even have said it but I missed it because I disagreed or something :-). But literally the only reason I can think of right now for why one would prefer the subprocess approach is that it lets one remove 50 lines of "worker process" code from pip and move them into the individual build backends instead, which I guess is a win if one is focused narrowly on pip itself. But surely there is more I'm missing? (And even this is lines-of-code argument is actually pretty dubious -- right now your draft PEP is importing-by-reference an entire existing codebase (!) for shell variable expansion in command lines, which is code that simply doesn't need to exist in the Python API approach. I'd be willing to bet that your approach requires more code in pip than mine :-).) >> However, I've now realised that we're not constrained even if we start >> with the CLI interface, as there's still a migration path to a Python >> API based model: >> >> Now: documented CLI for invoking build systems >> Future: documented Python API for invoking build systems, default >> fallback invokes the documented CLI > > Or we just issue an updated bootstrap schema, and there's no fallback > or anything needed. Oh no! But this totally gives up the most brilliant part of your original idea! :-) In my original draft, I had each hook specified separately in the bootstrap file, e.g. (super schematically): build-requirements = flit-build-requirements do-wheel-build = flit-do-wheel-build do-editable-build = flit-do-editable build and you counterproposed that instead there should just be one line like build-system = flit-build-system and this is exactly right, because it means that if some new capability is added to the spec (e.g. a new hook -- like hypothetically imagine if we ended up deferring the equivalent of egg-info or editable-build-mode to v2), then the new capability just needs to be implemented in pip and in flit, and then all the projects that use flit immediately gain superpowers without anyone having to go around and manually change all the bootstrap files in every project individually. But for this to work it's crucial that the pip<->build-system interface have some sort of versioning or negotiation beyond the bootstrap file's schema version. >> So the CLI documented in the PEP isn't *necessarily* going to be the >> one used by pip to communicate into the build environment - it may be >> invoked locally within the build environment. > > No, it totally will be. Exactly as setup.py is today. Thats > deliberate: The *new* thing we're setting out to enable is abstract > build systems, not reengineering pip. > > The future - sure, someone can write a new thing, and the necessary > capability we're building in to allow future changes will allow a new > PEP to slot in easily and take on that [non trivial and substantial > chunk of work]. (For instance, how do you do compiler and build system > specific options when you have a CLI to talk to pip with)? I dunno, that seems pretty easy? My original draft just suggested that the build hook would take a dict of string-valued keys, and then we'd add some options to pip like "--project-build-option foo=bar" that would set entries in that dict, and that's pretty much sufficient to get the job done. To enable backcompat you'd also want to map the old --install-option and --build-option switches to add entries to some well-known keys in that dict. 
But none of the details here need to be specified, because it's up to individual projects/build-systems to assign meaning to this stuff and individual build-frontends like pip to provide an interface to it -- at the build-frontent/build-backend interface layer we just need some way to pass through the blobs. I admit that this is another case where the Python API approach is making things trivial though ;-). If you want to pass arbitrary user-specified data through a command-line API, while avoiding things like potential namespace collisions between user-defined switches and standard-defined switches, then you have to do much more work than just say "there's another argument that's a dict". -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Wed Nov 11 04:17:53 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 01:17:53 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: In case it's useful to make this discussion more concrete, here's a sketch of what the pip code for dealing with a build system defined by a Python API might look like: https://gist.github.com/njsmith/75818a6debbce9d7ff48 Obviously there's room to build on this to get much fancier, but AFAICT even this minimal version is already enough to correctly handle all the important stuff -- schema version checking, error reporting, full args/kwargs/return values. (It does assume that we'll only use json-serializable data structures for argument and return values, but that seems like a good plan anyway. Pickle would probably be a bad idea because we're crossing between two different python environments that may have different or incompatible packages/classes available.) -n On Wed, Nov 11, 2015 at 1:04 AM, Nathaniel Smith wrote: > On Tue, Nov 10, 2015 at 11:27 PM, Robert Collins > wrote: >> On 11 November 2015 at 19:49, Nick Coghlan wrote: >>> On 11 November 2015 at 16:19, Robert Collins wrote: >> ...>> pip is going to be invoking a CLI *no matter what*. Thats a hard >>>> requirement unless Python's very fundamental import behaviour changes. >>>> Slapping a Python API on things is lipstick on a pig here IMO: we're >>>> going to have to downgrade any richer interface; and by specifying the >>>> actual LCD as the interface it is then amenable to direct exploration >>>> by users without them having to reverse engineer an undocumented thunk >>>> within pip. >>> >>> I'm not opposed to documenting how pip talks to its worker CLI - I >>> just share Nathan's concerns about locking that down in a PEP vs >>> keeping *that* CLI within pip's boundary of responsibilities, and >>> having a documented Python interface used for invoking build systems. >> >> I'm also very wary of something that would be an attractive nuisance. >> I've seen nothing suggesting that a Python API would be anything but: >> - it won't be usable [it requires the glue to set up an isolated >> context, which is buried in pip] in the general case > > This is exactly as true of a command line API -- in the general case > it also requires the glue to set up an isolated context. People who go > ahead and run 'flit' from their global environment instead of in the > isolated build environment will experience exactly the same problems > as people who go ahead and import 'flit.build_system_api' in their > global environment, so I don't see how one is any more of an > attractive nuisance than the other? 
> > AFAICT the main difference is that "setting up a specified Python > context and then importing something and exploring its API" is > literally what I do all day as a Python developer. Either way you have > to set stuff up, and then once you do, in the Python API case you get > stuff like tab completion, ipython introspection (? and ??), etc. for > free. > >> - no matter what we do, pip can't benefit from it beyond the >> subprocess interface pip needs, because pip *cannot* import and use >> the build interface > > Not sure what you mean by "benefit" here. At best this is an argument > that the two options have similar capabilities, in which case I would > argue that we should choose the one that leads to simpler and thus > more probably bug-free specification language. > > But even this isn't really true -- the difference between them is that > either way you have a subprocess API, but with a Python API, the > subprocess interface that pip uses has the option of being improved > incrementally over time -- including, potentially, to take further > advantage of the underlying richness of the Python semantics. Sure, > maybe the first release would just take all exceptions and map them > into some text printed to stderr and a non-zero return code, and > that's all that pip would get. But if someone had an idea for how pip > could do better than this by, I dunno, encoding some structured > metadata about the particular exception that occurred and passing this > back up to pip to do something intelligent with it, they absolutely > could write the code and submit a PR to pip, without having to write a > new PEP. > >> tl;dr - I think making the case that the layer we define should be a >> Python protocol rather than a subprocess protocol requires some really >> strong evidence. We're *not* dealing with the same moving parts that >> typical Python stuff requires. > > I'm very confused and honestly do not understand what you find > attractive about the subprocess protocol approach. Even your arguments > above aren't really even trying to be arguments that it's good, just > arguments that the Python API approach isn't much better. I'm sure > there is some reason you like it, and you might even have said it but > I missed it because I disagreed or something :-). But literally the > only reason I can think of right now for why one would prefer the > subprocess approach is that it lets one remove 50 lines of "worker > process" code from pip and move them into the individual build > backends instead, which I guess is a win if one is focused narrowly on > pip itself. But surely there is more I'm missing? > > (And even this is lines-of-code argument is actually pretty dubious -- > right now your draft PEP is importing-by-reference an entire existing > codebase (!) for shell variable expansion in command lines, which is > code that simply doesn't need to exist in the Python API approach. I'd > be willing to bet that your approach requires more code in pip than > mine :-).) > >>> However, I've now realised that we're not constrained even if we start >>> with the CLI interface, as there's still a migration path to a Python >>> API based model: >>> >>> Now: documented CLI for invoking build systems >>> Future: documented Python API for invoking build systems, default >>> fallback invokes the documented CLI >> >> Or we just issue an updated bootstrap schema, and there's no fallback >> or anything needed. > > Oh no! But this totally gives up the most brilliant part of your > original idea! 
:-) > > In my original draft, I had each hook specified separately in the > bootstrap file, e.g. (super schematically): > > build-requirements = flit-build-requirements > do-wheel-build = flit-do-wheel-build > do-editable-build = flit-do-editable build > > and you counterproposed that instead there should just be one line like > > build-system = flit-build-system > > and this is exactly right, because it means that if some new > capability is added to the spec (e.g. a new hook -- like > hypothetically imagine if we ended up deferring the equivalent of > egg-info or editable-build-mode to v2), then the new capability just > needs to be implemented in pip and in flit, and then all the projects > that use flit immediately gain superpowers without anyone having to go > around and manually change all the bootstrap files in every project > individually. > > But for this to work it's crucial that the pip<->build-system > interface have some sort of versioning or negotiation beyond the > bootstrap file's schema version. > >>> So the CLI documented in the PEP isn't *necessarily* going to be the >>> one used by pip to communicate into the build environment - it may be >>> invoked locally within the build environment. >> >> No, it totally will be. Exactly as setup.py is today. Thats >> deliberate: The *new* thing we're setting out to enable is abstract >> build systems, not reengineering pip. >> >> The future - sure, someone can write a new thing, and the necessary >> capability we're building in to allow future changes will allow a new >> PEP to slot in easily and take on that [non trivial and substantial >> chunk of work]. (For instance, how do you do compiler and build system >> specific options when you have a CLI to talk to pip with)? > > I dunno, that seems pretty easy? My original draft just suggested that > the build hook would take a dict of string-valued keys, and then we'd > add some options to pip like "--project-build-option foo=bar" that > would set entries in that dict, and that's pretty much sufficient to > get the job done. To enable backcompat you'd also want to map the old > --install-option and --build-option switches to add entries to some > well-known keys in that dict. But none of the details here need to be > specified, because it's up to individual projects/build-systems to > assign meaning to this stuff and individual build-frontends like pip > to provide an interface to it -- at the build-frontent/build-backend > interface layer we just need some way to pass through the blobs. > > I admit that this is another case where the Python API approach is > making things trivial though ;-). If you want to pass arbitrary > user-specified data through a command-line API, while avoiding things > like potential namespace collisions between user-defined switches and > standard-defined switches, then you have to do much more work than > just say "there's another argument that's a dict". > > -n > > -- > Nathaniel J. Smith -- http://vorpus.org -- Nathaniel J. 
Smith -- http://vorpus.org From p.f.moore at gmail.com Wed Nov 11 07:06:30 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 Nov 2015 12:06:30 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: On 11 November 2015 at 06:35, Nick Coghlan wrote: > Windows Python 2 installations require manual PATH modifications > regardless, but it's more common for people to know how to make > "python -m pip install X" work, than it is for them to remember to > also add the "Scripts" directory needed to make "pip install X" work. ... and "py -m pip install X" works without any PATH modification on all Windows systems with the launcher installed (I can't recall if it's included with Python 2.7 - but if not, maybe it should be backported? There's a standalone version people can get as well). Paul From donald at stufft.io Wed Nov 11 07:29:44 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 07:29:44 -0500 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On November 11, 2015 at 4:05:11 AM, Nathaniel Smith (njs at pobox.com) wrote: > > But even this isn't really true -- the difference between them > is that > either way you have a subprocess API, but with a Python API, the > subprocess interface that pip uses has the option of being improved > incrementally over time -- including, potentially, to take > further > advantage of the underlying richness of the Python semantics. > Sure, > maybe the first release would just take all exceptions and map > them > into some text printed to stderr and a non-zero return code, and > that's all that pip would get. But if someone had an idea for how > pip > could do better than this by, I dunno, encoding some structured > metadata about the particular exception that occurred and passing > this > back up to pip to do something intelligent with it, they absolutely > could write the code and submit a PR to pip, without having to write > a > new PEP. I think I prefer a CLI based approach (my suggestion was to remove the formatting/interpolation all together and just have the file include a list of things to install, and a python module to invoke via ``python -m``). The main reason I think I prefer a CLI based approach is that I worry about the impedance mismatch between the two systems. We're not actually going to be able to take advantage of Python's plethora of types in any meaningful capacity because at the end of the day the bulk of the data is either naturally a string or as we start to allow end users to pass options through pip into the build system, we have no real way of knowing what the type is supposed to be other than the fact we got it as a CLI flag. How does a user encode something like "pass an integer into this value in the build system" on the CLI in a generic way? I can't think of any way which means that any boundary code in the build system is going to need to be smart enough to handle an array of arguments that come in via the user typing something on the CLI. We have a wide variety of libraries to handle that case already for building CLI apps but we do not have a wide array of libraries handling it for a Python API. It will have to be manually encoded for each and every option that the build system supports. My other concern is that it introduces another potential area for mistake that is a bit harder to test.
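For concreteness, the kind of hand-rolled boundary code that the option-passing question implies might look something like this sketch (the ``--build-option`` flag, the ``somebackend`` name and the coercion rules are all invented for illustration):

    # Sketch: a build backend CLI that receives user options as untyped
    # "key=value" strings and has to guess or declare the intended types.
    import argparse

    def coerce_options(pairs):
        options = {}
        for pair in pairs:
            key, _, raw = pair.partition("=")
            if raw.isdigit():
                options[key] = int(raw)  # the "pass an integer" case
            elif raw.lower() in ("true", "false"):
                options[key] = raw.lower() == "true"
            else:
                options[key] = raw  # everything else stays a string
        return options

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(prog="python -m somebackend")
        parser.add_argument("--build-option", action="append", default=[],
                            help="key=value option forwarded by the installer")
        args = parser.parse_args()
        print(coerce_options(args.build_option))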
I don't believe that any sort of "worker.py" script is ever going to be able to handle arbitrary Python values coming back as a return value from a Python script. Whatever serialization we use to send data back into the main pip process (likely JSON) will simply choke and cause an error if it encounters a type it doesn't know how to serialize. However this error case will only happen when the build system is being invoked by pip, not when it is being invoked "naturally" in the build system's unit tests. By forcing build tool authors to write a CLI interface, we push the work of "how do I serialize my internal data structures?" down onto them instead of making it some implicit piece of code that pip needs to work.

The other reason I think a CLI approach is nicer is that it gives us a standard interface that we can use to have defined errors that the build system can emit. For instance if we wanted to allow the build system to indicate that it can't do a build because it's missing a mandatory C library, that would be trivial to do in a natural way for a CLI approach: we just define an error code and say that if the CLI exits with a 2 then we assume it's missing a mandatory C library and we can take additional measures in pip to handle that case. If we use a Python API the natural way to signal an error like that is using an exception... but we don't have any way to force a standard exception hierarchy on people. There is no "Missing C Library Exception" in Python, so either we'd have to encode some numerical or string based identifier that we'll inspect an exception for (like Exception().error_code) or we'll need to make a mandatory runtime library that the build systems must utilize to get their exceptions from. Alternatively we could have the calling functions return exit codes as well just like a process boundary does, however that is also not natural in Python and is more natural in a language like C.

The main downside to the CLI approach is that it's harder for the build system to send structured information back to the calling process outside of defined error codes. However I do not believe that is particularly difficult since we can have it do something like send messages on stdout that are JSON encoded messages that pip can process and understand. I don't think that it's a requirement or even useful that the same CLI that end users would use to directly invoke that build system is the same one that pip would use to invoke that build system. So we wouldn't need to worry about the fact that a bunch of JSON blobs being put on stdout isn't very user friendly, because the user isn't the target of these commands, pip is.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From oscar.j.benjamin at gmail.com Wed Nov 11 07:38:16 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Wed, 11 Nov 2015 12:38:16 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID:

On 11 November 2015 at 06:35, Nick Coghlan wrote: > > Longer term, it may even make sense to take the "python" command on > *nix systems in that direction, or, at the very least, make "py" a > cross-platform invocation technique: > https://mail.python.org/pipermail/linux-sig/2015-October/000000.html

This would also be good. The inconsistency between Windows and everything else is just annoying here.
I've never been able to recommend the use of py.exe even though most of my students are on Windows and the lab machines are Windows because many of them use OSX and a few use Linux. Also although I can reliably assume that "python" is on PATH I can't know what version it is since it is most likely 3.x on Windows and 2.x on everything else which means that every script I give them has to be 2/3 compatible. With py.exe I could recommend "py -3" or I guess "py somescript.py" would throw a helpful error if the shebang doesn't match (which would be good). -- Oscar From wes.turner at gmail.com Wed Nov 11 07:44:24 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 11 Nov 2015 06:44:24 -0600 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: <5642E088.8070009@thomas-guettler.de> References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> Message-ID: #PythonPackageBuildEnvironment ... * pip-tools: https://github.com/nvie/pip-tools * "are there updates?" * "which updates would there be?" * devpi: https://bitbucket.org/hpk42/devpi/ * "where do we push these?" * "does it build (do the included package tests pass)?" "does it build (do the included package tests pass)?" * [Makefile], setup.py, tox.ini, travis.yml, dox.yml * https://github.com/pypa/warehouse * Src: https://bitbucket.org/pypa/pypi * https://westurner.org/tools/#pypi * Docs: https://westurner.org/tools/#python-packages ... " Re: [Distutils] Where should I put tests when packaging python modules?" https://code.activestate.com/lists/python-distutils-sig/26482/ > [...] > * https://tox.readthedocs.org/en/latest/config.html * https://github.com/docker/docker-registry/blob/master/tox.ini #flake8 * dox = docker + tox | PyPI: https://pypi.python.org/pypi/dox | Src: https://git.openstack.org/cgit/stackforge/dox/tree/dox.yml * docker-compose.yml | Docs: https://docs.docker.com/compose/ | Docs: https://github.com/docker/compose/blob/master/docs/yml.md *https://github.com/kelseyhightower/kubernetes-docker-files/blob/master/docker-compose.yml *https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pods.md#alternatives-considered * https://github.com/docker/docker/issues/8781 ( pods ( containers ) ) * http://docs.buildbot.net/latest/tutorial/docker.html *http://docs.buildbot.net/current/tutorial/docker.html#building-and-running-buildbot tox.ini often is not sufficient: * [Makefile: make test/tox] * setup.py * tox.ini * docker/platform-ver/Dockerfile * [dox.yml] * [docker-compose.yml] * [CI config] * http://docs.buildbot.net/current/manual/configuration.html * jenkins-kubernetes, jenkins-mesos > /[..] On Wed, Nov 11, 2015 at 12:30 AM, Thomas G?ttler < guettliml at thomas-guettler.de> wrote: > Am 10.11.2015 um 21:54 schrieb Wes Turner: > > * It is [currently [#PEP426JSONLD)] necessary to run setup.py with each > given destination platform, because parameters are expanded within the > scope of setup.py. > > OK > > > > * Because of this, client side dependency resolution (with a given > platform) is currently the only viable option for something like this > > Are you sure that this conclusion is the only solution? > > A server could create a new container/VM to run setup.py. > > Then the install_requires can be cached (for this plattform). > > Maybe I am missing something, but still think server side dependency > resolution is possible. > > Please tell me what's wrong with my conclusion. > > > > > > > ... 
> > > > * Build: Docker, Tox (Dox) to build package(s) > > * Each assembly of packages is / could be a package with a setup.py > (and/or a requirements.txt) > > * And tests: > > * > http://conda.pydata.org/docs/building/meta-yaml.html#test-section > > * Release: DevPi > > * http://doc.devpi.net/latest/ > > * conda env environment.yml YAML: > http://conda.pydata.org/docs/using/envs.html > > * [x] conda packages > > * [x] pip packages > > * [ ] system packages (configuration management) > > > > And then, really, Is there a stored version of this instance of a named > Docker image? > > #reproducibility #linkedreproducibility > > I don't fully understand the above. > > I guess you had the container/VM solution in mind, too. > > There is a new topic in your mail which I will reply to in a new thread. > > Regards, > Thomas G?ttler > > > -- > http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Wed Nov 11 07:49:35 2015 From: steve.dower at python.org (Steve Dower) Date: Wed, 11 Nov 2015 07:49:35 -0500 Subject: [Distutils] command line versus python API for build systemabstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: As much as I dislike sniping into threads like this, my gut feeling is strongly pushing towards defining the Python interface in the PEP and keeping command line interfaces as private. I don't have any new evidence, but pickle and binary stdio (not to mention TCP/HTTP for doing things remotely) are reliable cross-platform where CLIs are not, so you're going to have a horrible time locking down something that will work across multiple OS/shell combinations. There are also limits to command lines lengths that may be triggered when passing many long paths (if that ends up in there). Might be nice to have an in-proc option for builders too, so I can handle the IPC in my own way. Maybe that's not useful, but with a Python interface it's trivial to enable. Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Nathaniel Smith" Sent: ?11/?11/?2015 4:18 To: "Robert Collins" Cc: "DistUtils mailing list" Subject: Re: [Distutils] command line versus python API for build systemabstraction (was Re: build system abstraction PEP) In case it's useful to make this discussion more concrete, here's a sketch of what the pip code for dealing with a build system defined by a Python API might look like: https://gist.github.com/njsmith/75818a6debbce9d7ff48 Obviously there's room to build on this to get much fancier, but AFAICT even this minimal version is already enough to correctly handle all the important stuff -- schema version checking, error reporting, full args/kwargs/return values. (It does assume that we'll only use json-serializable data structures for argument and return values, but that seems like a good plan anyway. Pickle would probably be a bad idea because we're crossing between two different python environments that may have different or incompatible packages/classes available.) -n On Wed, Nov 11, 2015 at 1:04 AM, Nathaniel Smith wrote: > On Tue, Nov 10, 2015 at 11:27 PM, Robert Collins > wrote: >> On 11 November 2015 at 19:49, Nick Coghlan wrote: >>> On 11 November 2015 at 16:19, Robert Collins wrote: >> ...>> pip is going to be invoking a CLI *no matter what*. 
Thats a hard >>>> requirement unless Python's very fundamental import behaviour changes. >>>> Slapping a Python API on things is lipstick on a pig here IMO: we're >>>> going to have to downgrade any richer interface; and by specifying the >>>> actual LCD as the interface it is then amenable to direct exploration >>>> by users without them having to reverse engineer an undocumented thunk >>>> within pip. >>> >>> I'm not opposed to documenting how pip talks to its worker CLI - I >>> just share Nathan's concerns about locking that down in a PEP vs >>> keeping *that* CLI within pip's boundary of responsibilities, and >>> having a documented Python interface used for invoking build systems. >> >> I'm also very wary of something that would be an attractive nuisance. >> I've seen nothing suggesting that a Python API would be anything but: >> - it won't be usable [it requires the glue to set up an isolated >> context, which is buried in pip] in the general case > > This is exactly as true of a command line API -- in the general case > it also requires the glue to set up an isolated context. People who go > ahead and run 'flit' from their global environment instead of in the > isolated build environment will experience exactly the same problems > as people who go ahead and import 'flit.build_system_api' in their > global environment, so I don't see how one is any more of an > attractive nuisance than the other? > > AFAICT the main difference is that "setting up a specified Python > context and then importing something and exploring its API" is > literally what I do all day as a Python developer. Either way you have > to set stuff up, and then once you do, in the Python API case you get > stuff like tab completion, ipython introspection (? and ??), etc. for > free. > >> - no matter what we do, pip can't benefit from it beyond the >> subprocess interface pip needs, because pip *cannot* import and use >> the build interface > > Not sure what you mean by "benefit" here. At best this is an argument > that the two options have similar capabilities, in which case I would > argue that we should choose the one that leads to simpler and thus > more probably bug-free specification language. > > But even this isn't really true -- the difference between them is that > either way you have a subprocess API, but with a Python API, the > subprocess interface that pip uses has the option of being improved > incrementally over time -- including, potentially, to take further > advantage of the underlying richness of the Python semantics. Sure, > maybe the first release would just take all exceptions and map them > into some text printed to stderr and a non-zero return code, and > that's all that pip would get. But if someone had an idea for how pip > could do better than this by, I dunno, encoding some structured > metadata about the particular exception that occurred and passing this > back up to pip to do something intelligent with it, they absolutely > could write the code and submit a PR to pip, without having to write a > new PEP. > >> tl;dr - I think making the case that the layer we define should be a >> Python protocol rather than a subprocess protocol requires some really >> strong evidence. We're *not* dealing with the same moving parts that >> typical Python stuff requires. > > I'm very confused and honestly do not understand what you find > attractive about the subprocess protocol approach. 
Even your arguments > above aren't really even trying to be arguments that it's good, just > arguments that the Python API approach isn't much better. I'm sure > there is some reason you like it, and you might even have said it but > I missed it because I disagreed or something :-). But literally the > only reason I can think of right now for why one would prefer the > subprocess approach is that it lets one remove 50 lines of "worker > process" code from pip and move them into the individual build > backends instead, which I guess is a win if one is focused narrowly on > pip itself. But surely there is more I'm missing? > > (And even this is lines-of-code argument is actually pretty dubious -- > right now your draft PEP is importing-by-reference an entire existing > codebase (!) for shell variable expansion in command lines, which is > code that simply doesn't need to exist in the Python API approach. I'd > be willing to bet that your approach requires more code in pip than > mine :-).) > >>> However, I've now realised that we're not constrained even if we start >>> with the CLI interface, as there's still a migration path to a Python >>> API based model: >>> >>> Now: documented CLI for invoking build systems >>> Future: documented Python API for invoking build systems, default >>> fallback invokes the documented CLI >> >> Or we just issue an updated bootstrap schema, and there's no fallback >> or anything needed. > > Oh no! But this totally gives up the most brilliant part of your > original idea! :-) > > In my original draft, I had each hook specified separately in the > bootstrap file, e.g. (super schematically): > > build-requirements = flit-build-requirements > do-wheel-build = flit-do-wheel-build > do-editable-build = flit-do-editable build > > and you counterproposed that instead there should just be one line like > > build-system = flit-build-system > > and this is exactly right, because it means that if some new > capability is added to the spec (e.g. a new hook -- like > hypothetically imagine if we ended up deferring the equivalent of > egg-info or editable-build-mode to v2), then the new capability just > needs to be implemented in pip and in flit, and then all the projects > that use flit immediately gain superpowers without anyone having to go > around and manually change all the bootstrap files in every project > individually. > > But for this to work it's crucial that the pip<->build-system > interface have some sort of versioning or negotiation beyond the > bootstrap file's schema version. > >>> So the CLI documented in the PEP isn't *necessarily* going to be the >>> one used by pip to communicate into the build environment - it may be >>> invoked locally within the build environment. >> >> No, it totally will be. Exactly as setup.py is today. Thats >> deliberate: The *new* thing we're setting out to enable is abstract >> build systems, not reengineering pip. >> >> The future - sure, someone can write a new thing, and the necessary >> capability we're building in to allow future changes will allow a new >> PEP to slot in easily and take on that [non trivial and substantial >> chunk of work]. (For instance, how do you do compiler and build system >> specific options when you have a CLI to talk to pip with)? > > I dunno, that seems pretty easy? 
My original draft just suggested that > the build hook would take a dict of string-valued keys, and then we'd > add some options to pip like "--project-build-option foo=bar" that > would set entries in that dict, and that's pretty much sufficient to > get the job done. To enable backcompat you'd also want to map the old > --install-option and --build-option switches to add entries to some > well-known keys in that dict. But none of the details here need to be > specified, because it's up to individual projects/build-systems to > assign meaning to this stuff and individual build-frontends like pip > to provide an interface to it -- at the build-frontent/build-backend > interface layer we just need some way to pass through the blobs. > > I admit that this is another case where the Python API approach is > making things trivial though ;-). If you want to pass arbitrary > user-specified data through a command-line API, while avoiding things > like potential namespace collisions between user-defined switches and > standard-defined switches, then you have to do much more work than > just say "there's another argument that's a dict". > > -n > > -- > Nathaniel J. Smith -- http://vorpus.org

-- Nathaniel J. Smith -- http://vorpus.org _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL:

From donald at stufft.io Wed Nov 11 07:59:58 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 07:59:58 -0500 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: <5642E088.8070009@thomas-guettler.de> References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> Message-ID:

On November 11, 2015 at 1:30:57 AM, Thomas Güttler (guettliml at thomas-guettler.de) wrote: > > Maybe I am missing something, but still think server side dependency resolution is possible. >

I don't believe it's possible nor desirable to have the server handle dependency resolution, at least not without removing some currently supported features and locking out some future features from ever happening.

Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By default this only includes PyPI but people can either remove that, or add additional repository locations. In order to support this we need a resolver that can union multiple repositories together before doing the resolving. If the repository itself was the one handling the resolution then we are locked into a single repository per invocation of pip.

Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just a simple directory, there *is* no server process running that would allow for a server side resolver to happen, and pip either *must* handle the resolution itself in this case or it must disallow this feature altogether.

Additionally, the fact that we currently treat the server as a "dumb" server means that someone can implement a PEP 503 compatible repository very trivially with pretty much any web server that supports static files and automatically generating an index for static files. Switching to server side resolution would require removing this capability and force everyone to run dedicated repository software that can handle that resolution.
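As a rough illustration of how little a "dumb" repository needs, a PEP 503-style project page can be generated from a directory of release files with a few lines and then served by any static file server (the "wheels/demo" layout below is hypothetical, and a real index also needs a root page listing every project):

    import html
    import pathlib

    project_dir = pathlib.Path("wheels/demo")   # one directory per project
    links = "\n".join(
        '<a href="{0}">{0}</a><br/>'.format(html.escape(path.name))
        for path in sorted(project_dir.iterdir())
        if path.suffix in {".whl", ".zip", ".gz"}
    )
    (project_dir / "index.html").write_text(
        "<!DOCTYPE html>\n<html><body>\n{}\n</body></html>\n".format(links)
    )

pip can then be pointed at it with --index-url/--extra-index-url, or the web server can be skipped entirely by using --find-links against the bare directory.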
Additionally, we want there to be as little variance in the requests that people make to the repository as possible. We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI, which is the primary reason we've been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to server side dependency resolution then we reduce our ability to ensure that as many requests as possible are served directly out of the cache rather than having to go back to our backend servers.

Finally, we want to move further away from trusting the actual repository where we can. In the future we'll be allowing package signing that will make it possible to survive a compromise of the repository. However there is no way to do that if the repository needs to be able to dynamically generate a list of packages that need to be installed as part of a resolution process, because by definition that needs to be done on the fly and thus must be signed by a key that the repository has access to, if it's signed at all. However, since the metadata for a package can be signed once and then it never changes, that can be signed by a human when they are uploading to PyPI and then pip can verify the signature on that metadata before feeding it into the resolver. This would allow us to treat PyPI as just an untrusted middleman instead of something that is essentially going to be allowed to force us to execute arbitrary code whenever someone does a pip install (because it'll be able to instruct us to install any package, and packages can contain arbitrary code).

Hopefully that answers your question about why it's unlikely that we'll ever move to a server side dependency resolver: even though it is possible to do so, doing it would severely regress a number of very important features.

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From donald at stufft.io Wed Nov 11 08:06:50 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 08:06:50 -0500 Subject: [Distutils] command line versus python API for build systemabstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID:

On November 11, 2015 at 7:51:05 AM, Steve Dower (steve.dower at python.org) wrote: > As much as I dislike sniping into threads like this, my gut feeling is strongly pushing > towards defining the Python interface in the PEP and keeping command line interfaces > as private. > > I don't have any new evidence, but pickle and binary stdio (not to mention TCP/HTTP for > doing things remotely) are reliable cross-platform where CLIs are not, so you're going > to have a horrible time locking down something that will work across multiple OS/shell > combinations. There are also limits to command lines lengths that may be triggered when > passing many long paths (if that ends up in there).

The flip side is we are already successfully creating a cross-platform CLI via setup.py. It's not like that is some new thing that we've not been handling for like two decades already.

Pickle makes me nervous because it's trivial for something to "leak" out of the subprocess into the main process that shouldn't. For example, if we implement isolated builds then we might end up having a build tool like "mycoolbuildthing" installed not into the same location as pip, but added to PYTHONPATH when invoking the build tool.
The build tool then returns some internally defined class as part of its interface and pickle dutifully serializes that. Then when we go to deserialize that in the main pip process, it blows up and fails because we don't have "mycoolbuildthing" installed.

I could see an in-language API if Python had a history of typed interfaces where we could write an interface that said "it is an error for this interface to ever return anything but a True/False" or some other such rule. However Python doesn't, and duck typing works against us here because build tool authors will have to be aware of how we're serializing the results across the IPC boundary without actually having that IPC being defined.
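A small, self-contained simulation of that failure mode (the module name just mirrors the hypothetical "mycoolbuildthing" above; this is an illustration, not code from pip or any real backend):

    import pickle
    import sys
    import types

    # Fake build-backend module that exists only on the "build side".
    backend = types.ModuleType("mycoolbuildthing")
    sys.modules["mycoolbuildthing"] = backend

    class BuildResult(object):
        """Stands in for an internal type of the hypothetical build tool."""

    BuildResult.__module__ = "mycoolbuildthing"
    backend.BuildResult = BuildResult

    blob = pickle.dumps(BuildResult())   # fine where the backend is importable

    del sys.modules["mycoolbuildthing"]  # the pip process never had it installed
    try:
        pickle.loads(blob)
    except ImportError as exc:           # ModuleNotFoundError on newer Pythons
        print("could not deserialize build result:", exc)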
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From p.f.moore at gmail.com Wed Nov 11 08:30:58 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 Nov 2015 13:30:58 +0000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID:

On 10 November 2015 at 22:44, Nathaniel Smith wrote: > "Stdin is unspecified, and stdout/stderr can be used for printing > status messages, errors, etc. just like you're used to from every > other build system in the world." This is over simplistic. We have real-world requirements from users of pip that they *don't* want to see all of the progress that the various build tools invoke. That is not something we can ignore. We also have some users saying they want access to all of the build tool output. And we also have a requirement for progress reporting. Taking all of those requirements into account, pip *has* to have some level of control over the output of a build tool - with setuptools at the moment, we have no such control (other than "we may or may not show the output to the user") and that means we struggle to realistically satisfy all of the conflicting requirements we have. So we do need much better defined contracts over stdin, stdout and stderr, and return codes. This is true whether or not the build system is invoked via a Python API or a CLI. Paul

From wes.turner at gmail.com Wed Nov 11 09:15:02 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 11 Nov 2015 08:15:02 -0600 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID:

On Nov 10, 2015 11:09 PM, "Wayne Werner" wrote: > > > With all of the weirdness involved, it makes me wonder - could there be a better way? If we waved our hands and were able to magically make Python package management perfect, what would that look like? > > Would that kind of discussion even be valuable? e.g. re-specifying the mission, goals, and objectives of PyPA? or e.g. creating a set of numbered user stories / specification requirements? "[Users] can [...] (in order to [...] (thus [saving\gaining] [resource xyz]))" * Users can install packages from a package index IOT: * share code: sdist * share binaries: save build time, * Users can specify (python) package dependencies * [ ] Users can specify (platform) package (build) dependencies * e.g.
libssl-dev * conda does not solve for this either * [ ] Users can link between built packages and source VCS revisions with URIs * platform-rev / rev-platform * **diff** > > On Tue, Nov 10, 2015, 6:22 PM Nathaniel Smith wrote: > > >> I totally get why people dislike the ergonomics of 'python -m pip', >> but we can also acknowledge that it does solve a real technical >> problem: it strictly reduces the number of things that can go wrong, >> in a tool that's down at the base of the stack. > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Nov 11 09:34:06 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 Nov 2015 00:34:06 +1000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On 11 November 2015 at 15:08, Wayne Werner wrote: > > With all of the weirdness involved, it makes me wonder - could there be a > better way? If we waved our hands and were able to magically make Python > package management perfect, what would that look like? > > Would that kind of discussion even be valuable? That's essentially what PEP 426 evolved into - an all-singing all-dancing wish list of what *my* dream packaging system would enable (especially once you include the "Deferred Features" section). In practice, most of that is "nice to have" rather than "absolutely essential" though, so we're in the midst of the process: 1. Figuring out incremental steps that help us to get from "here" to "there" by way of formalising what already exists 2. Figuring out which parts of "there" represent needless complexity that can just be dropped entirely Packaging systems are a uniquely difficult ship to steer (even moreso than programming language design), since interoperability is king, and you need to cope with legacy versions of both packaging tools *and* language runtimes. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Wed Nov 11 09:48:12 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 09:48:12 -0500 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On November 11, 2015 at 9:34:41 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 11 November 2015 at 15:08, Wayne Werner wrote: > > > > With all of the weirdness involved, it makes me wonder - could there be a > > better way? If we waved our hands and were able to magically make Python > > package management perfect, what would that look like? > > > > Would that kind of discussion even be valuable? > > That's essentially what PEP 426 evolved into - an all-singing > all-dancing wish list of what *my* dream packaging system would enable > (especially once you include the "Deferred Features" section). In > practice, most of that is "nice to have" rather than "absolutely > essential" though, so we're in the midst of the process: > > 1. Figuring out incremental steps that help us to get from "here" to > "there" by way of formalising what already exists > 2. 
Figuring out which parts of "there" represent needless complexity > that can just be dropped entirely > > Packaging systems are a uniquely difficult ship to steer (even moreso > than programming language design), since interoperability is king, and > you need to cope with legacy versions of both packaging tools *and* > language runtimes. >

Right. I think PEP 426 fell into the same trap that distutils2 fell into. It attempted to boil the ocean in one step and the longer it went on the more aspirational stuff got layered onto it because it was being held up as the great hope for packaging. I think the lessons we've learned is that careful [1] incremental improvements is the best way forward. It's a lot easier to reason and handle a small change than it is to handle a massive change.

The other important lesson is that one of our ecosystem's biggest strengths is also one of (and probably *the*) biggest things holding us back from a large re-envisioning. We have a massive number of available packages that have accumulated over like two decades. There are half a million individual installable package files on PyPI and who knows how many in private repositories all around the world. The most ideal system in the world isn't actually useful if it requires throwing out the entire existing ecosystem.

[1] Ones which don't back us into corners as far as what path we are forced to go down into and which don't add unneeded things because they would be "cool". The Zen of Python has a great section on this, "Now is better than never. Although never is often better than *right* now.".

----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From p.f.moore at gmail.com Wed Nov 11 10:34:43 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 Nov 2015 15:34:43 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID:

On 11 November 2015 at 14:48, Donald Stufft wrote: > Right. I think PEP 426 fell into the same trap that distutils2 fell into. It attempted to boil the ocean in one step and the longer it went on the more aspirational stuff got layered onto it because it was being held up as the great hope for packaging.

Yeah. We really should have seen that happening - when people start using "Metadata 2.0" as a shorthand for "we're working on it", but there are no actual changes being delivered, that's a bad sign. Just shows how hard it is, even when we *know* the issue from bitter experience, to avoid falling into the same old trap. People like Nathaniel popping up and saying "look, let's just solve this one piece of the puzzle right now" are a great catalyst for getting us out of that rut. (Even if "... never is often better than *right* now" :-)

> I think the lessons we've learned is that careful [1] incremental improvements is the best way forward. It's a lot easier to reason and handle a small change than it is to handle a massive change.

Agreed. We work best when we're alternating between phases of incrementally adding new functionality, re-stabilising after change, and paying off technical debt. Changes that are too big disrupt that rhythm.
Paul From holger at merlinux.eu Wed Nov 11 10:46:33 2015 From: holger at merlinux.eu (holger krekel) Date: Wed, 11 Nov 2015 15:46:33 +0000 Subject: [Distutils] devpi-server-2.4.0 and friends: speedup, fixes, profiling Message-ID: <20151111154633.GJ16107@merlinux.eu> We just pushed devpi-{server,web,client,common} release files out to pypi. Most notably, the private pypi package server allows much faster installs due to much improved simple-page serving speed. See the changelog below for a host of other changes and fixes as well as for compatibility warnings. Docs about the devpi system are to be found here: http://doc.devpi.net Many thanks to my co-maintainer Florian Schulze and particularly to Stephan Erb and Chad Wagner for their contributions. cheers, holger -- about me: http://holgerkrekel.net/about-me/ contracting: http://merlinux.eu devpi-server 2.4.0 (2015-11-11) ------------------------------- - NOTE: devpi-server-2.4 is compatible to data from devpi-server-2.3 but not the other way round. Once you run devpi-server-2.4 you can not go back. It's always a good idea to make a backup before trying a new version :) - NOTE: if you use "--logger-cfg" with .yaml files you will need to install pyyaml yourself as devpi-server-2.4 dropped it as a direct dependency as it does not install for win32/python3.5 and is not needed for devpi-server operations except for logging configuration. Specifying a *.json file always works. - add timeout to replica requests - fix issue275: improve error message when a serverdir exists but has no version - improve testing mechanics and name normalization related to storing doczips - refine keyfs to provide lazy deep readonly-views for dict/set/list/tuple types by default. This introduces safety because users (including plugins) of keyfs-values can only write/modify a value by explicitly getting it with readonly=False (thereby deep copying it) and setting it with the transaction. It also allows to avoid unnecessary copy-operations when just reading values. - fix issue283: pypi cache didn't work for replicas. - performance improvements for simple pages with lots of releases. this also changed the db layout of the caching from pypi.python.org mirrors but will seamlessly work on older data, see NOTE at top. - add "--profile-requests=NUM" option which turns on per-request profiling and will print out after NUM requests are executed and then restart profiling. - fix tests for pypy. We officially support pypy now. devpi-client-2.3.2 (2015-11-11) ------------------------------- - fix git submodules for devpi upload. ``.git`` is a file not a folder for submodules. Before this fix the repository which contains the submodule was found instead, which caused a failure, because the files aren't tracked there. - new option "devpi upload --setupdir-only" which will only vcs-export the directory containing setup.py. You can also set "setupdirs-only = 1" in the "[devpi:upload]" section of setup.cfg for the same effect. Thanks Chad Wagner for the PR. devpi-web 2.4.2 (2015-11-11) ---------------------------- - log exceptions during search index updates. 
- adapted tests/code to work with devpi-server-2.4 devpi-common 2.0.8 (2015-11-11) ------------------------------- - fix URL.joinpath to not add double slashes From robertc at robertcollins.net Wed Nov 11 13:30:09 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 Nov 2015 07:30:09 +1300 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 12 November 2015 at 02:30, Paul Moore wrote: > On 10 November 2015 at 22:44, Nathaniel Smith wrote: >> "Stdin is unspecified, and stdout/stderr can be used for printing >> status messages, errors, etc. just like you're used to from every >> other build system in the world." > > This is over simplistic. > > We have real-world requirements from users of pip that they *don't* > want to see all of the progress that the various build tools invoke. > That is not something we can ignore. We also have some users saying > they want access to all of the build tool output. And we also have a > requirement for progress reporting. > > Taking all of those requirements into account, pip *has* to have some > level of control over the output of a build tool - with setuptools at > the moment, we have no such control (other than "we may or may not > show the output to the user") and that means we struggle to > realistically satisfy all of the conflicting requirements we have. > > So we do need much better defined contracts over stdin, stdout and > stderr, and return codes. This is true whether or not the build system > is invoked via a Python API or a CLI. Aye. I'd like everyone to take a breather on this thread btw. I'm focusing on the dependency specification PEP and until thats at the point I can't move it forward, I won't be updating the draft build abstraction PEP: when thats done, with the thing Donald and I hammered out on IRC a few days back (Option 3, earlier) then we'll have something to talk about and consider. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Wed Nov 11 13:38:14 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 10:38:14 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Nov 11, 2015 5:30 AM, "Paul Moore" wrote: > > On 10 November 2015 at 22:44, Nathaniel Smith wrote: > > "Stdin is unspecified, and stdout/stderr can be used for printing > > status messages, errors, etc. just like you're used to from every > > other build system in the world." > > This is over simplistic. > > We have real-world requirements from users of pip that they *don't* > want to see all of the progress that the various build tools invoke. > That is not something we can ignore. We also have some users saying > they want access to all of the build tool output. And we also have a > requirement for progress reporting. Have you tried current dev versions of pip recently? The default now is to suppress the actual output but for progress reporting to show a spinner that rotates each time a line of text would have been printed. It's low tech but IMHO very effective. (And obviously you can also flip a switch to either see all or nothing of the output as well, or if that isn't there now if books really be added.) So I kinda feel like these are solved problems. 
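The behaviour being described amounts to something like the following sketch (not pip's actual implementation; the command and the verbosity switch are illustrative assumptions):

    import itertools
    import subprocess
    import sys

    def run_build(cmd, verbosity=0):
        """Run a build command, spinning once per line of output it produces."""
        spinner = itertools.cycle("-\\|/")
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        for line in proc.stdout:
            if verbosity >= 1:
                sys.stderr.write(line)                   # "show me everything"
            else:
                sys.stderr.write("\r" + next(spinner))   # just prove we're alive
                sys.stderr.flush()
        sys.stderr.write("\r")
        return proc.wait()

    # e.g. run_build([sys.executable, "setup.py", "bdist_wheel"])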
> Taking all of those requirements into account, pip *has* to have some > level of control over the output of a build tool - with setuptools at > the moment, we have no such control (other than "we may or may not > show the output to the user") and that means we struggle to > realistically satisfy all of the conflicting requirements we have. > > So we do need much better defined contracts over stdin, stdout and > stderr, and return codes. This is true whether or not the build system > is invoked via a Python API or a CLI.

Even if you really do want to define a generic structured system for build progress reporting (it feels pretty second-systemy to me), then in the python api approach there are better options than trying to define a specific protocol on stdout.

Guaranteeing a clean stdout/stderr is hard: it means you have to be careful to correctly capture and process the output of every child you invoke (e.g. compilers), and deal correctly with the tricky aspects of pipes (deadlocks, sigpipe, ...). And even then you can get thwarted by accidentally importing the wrong library into your main process, and discovering that it writes directly to stdout/stderr on some error condition. And it may or may not respect your resetting of sys.stdout/sys.stderr at the python level. So to be really reliable the only thing to do is to create some pipes and some threads to read the pipes and do the dup2 dance (but not everyone will actually do this, they'll just accept corrupted output on errors) and ugh, all of this is a huge hassle that massively raises the bar on implementing simple build systems.

In the subprocess approach you don't really have many options; if you want live feedback from a build process then you have to get it somehow, and you can't just say "fine, part of the protocol is that we use fd 3 for structured status updates" because that doesn't work on windows.

In the python api approach, we have better options, though. The way I'd do this is to define some sort of progress reporting abstract interface, like

    class BuildUpdater:
        # pass -1 for "unknown"
        def set_total_steps(self, n):
            pass

        # if total is unknown, call this repeatedly to say "something's happening"
        def set_current_step(self, n):
            pass

        def alert_user(self, message):
            pass

And methods like build_wheel would accept an object implementing this interface as an argument. Stdout/stderr keep the same semantics as they have today; this is a separate, additional channel.

And then a build frontend could decide how it wants to actually implement this interface. A simple frontend that didn't want to implement fancy UI stuff might just have each of those methods print something to stderr to be captured along with the rest of the chatter. A fancier frontend like pip could pick whichever ipc mechanism they like best and implement that inside their worker. (E.g., maybe on POSIX we use fd 3, and on windows we do incremental writes to a temp file, or use a named pipe. Or maybe we prefer to stick to using stdout for pip<->worker communication, and the worker would take the responsibility of robustly redirecting stdout via dup2 before invoking the actual build hook. There are lots of options; the beauty of the approach, again, is that we don't have to pick one now and write it in stone.)

-n

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From donald at stufft.io Wed Nov 11 13:42:43 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 13:42:43 -0500 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On November 11, 2015 at 1:38:38 PM, Nathaniel Smith (njs at pobox.com) wrote: > > Guaranteeing a clean stdout/stderr is hard: it means you have > to be careful to correctly capture and process the output of every > child you invoke (e.g. compilers), and deal correctly with the > tricky aspects of pipes (deadlocks, sigpipe, ...). And even > then you can get thwarted by accidentally importing the wrong > library into your main process, and discovering that it writes > directly to stdout/stderr on some error condition. And it may > or may not respect your resetting of sys.stdout/sys.stderr > at the python level. So to be really reliable the only thing to > do is to create some pipes and some threads to read the pipes and > do the dup2 dance (but not everyone will actually do this, they'll > just accept corrupted output on errors) and ugh, all of this is > a huge hassle that massively raises the bar on implementing simple > build systems. How is this not true for a worker.py process as well? If the worker process communicates via stdout then it has to make sure it captures the stdout and redirects it before calling into the Python API and then undoes that afterwords. It makes it harder to do incremental output actually because a Python function can?t return in the middle of execution so we?d need to make it some sort of akward generator protocol to make that happen too. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From wes.turner at gmail.com Wed Nov 11 13:56:59 2015 From: wes.turner at gmail.com (Wes Turner) Date: Wed, 11 Nov 2015 12:56:59 -0600 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Nov 11, 2015 12:31 PM, "Robert Collins" wrote: > > On 12 November 2015 at 02:30, Paul Moore wrote: > > On 10 November 2015 at 22:44, Nathaniel Smith wrote: > >> "Stdin is unspecified, and stdout/stderr can be used for printing > >> status messages, errors, etc. just like you're used to from every > >> other build system in the world." > > > > This is over simplistic. > > > > We have real-world requirements from users of pip that they *don't* > > want to see all of the progress that the various build tools invoke. > > That is not something we can ignore. We also have some users saying > > they want access to all of the build tool output. And we also have a > > requirement for progress reporting. > > > > Taking all of those requirements into account, pip *has* to have some > > level of control over the output of a build tool - with setuptools at > > the moment, we have no such control (other than "we may or may not > > show the output to the user") and that means we struggle to > > realistically satisfy all of the conflicting requirements we have. > > > > So we do need much better defined contracts over stdin, stdout and > > stderr, and return codes. This is true whether or not the build system > > is invoked via a Python API or a CLI. > > Aye. > > I'd like everyone to take a breather on this thread btw. 
I'm focusing > on the dependency specification PEP and until thats at the point I > can't move it forward, I won't be updating the draft build abstraction > PEP: Presumably, it would be great to list a platform parameter description as JSONLD-serializable keys and values (e.g. for a bdist/wheel build "imprint" in the JSONLD build metadata composition file) ... #PEP426JSONLD > when thats done, with the thing Donald and I hammered out on IRC > a few days back (Option 3, earlier) then we'll have something to talk > about and consider. > > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Nov 11 14:07:56 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 11:07:56 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Wed, Nov 11, 2015 at 10:42 AM, Donald Stufft wrote: > On November 11, 2015 at 1:38:38 PM, Nathaniel Smith (njs at pobox.com) wrote: >> > Guaranteeing a clean stdout/stderr is hard: it means you have >> to be careful to correctly capture and process the output of every >> child you invoke (e.g. compilers), and deal correctly with the >> tricky aspects of pipes (deadlocks, sigpipe, ...). And even >> then you can get thwarted by accidentally importing the wrong >> library into your main process, and discovering that it writes >> directly to stdout/stderr on some error condition. And it may >> or may not respect your resetting of sys.stdout/sys.stderr >> at the python level. So to be really reliable the only thing to >> do is to create some pipes and some threads to read the pipes and >> do the dup2 dance (but not everyone will actually do this, they'll >> just accept corrupted output on errors) and ugh, all of this is >> a huge hassle that massively raises the bar on implementing simple >> build systems. > > How is this not true for a worker.py process as well? If the worker process communicates via stdout then it has to make sure it captures the stdout and redirects it before calling into the Python API and then undoes that afterwords. It makes it harder to do incremental output actually because a Python function can?t return in the middle of execution so we?d need to make it some sort of akward generator protocol to make that happen too. Did you, uh, read the second half of my email? :-) My actual position is that we shouldn't even try to get structured incremental output from the build system, and should stick with the current approach of unstructured incremental output on stdout/stderr. But if we do insist on getting structured incremental output, then I described a system that's much easier for backends to implement, while leaving it up to the frontend to pick whether they want to bother doing complicated redirection tricks, and if so then which particular variety of complicated redirection trick they like best. In both approaches, yeah, any kind of incremental output is eventually come down to some Python code issuing some sort of function call that reports progress without returning, whether that's sys.stdout.write(json.dumps(...)) or progress_reporter.report_update(...). 
Between those two options, it's sys.stdout.write(json.dumps(...)) that looks more awkward to me. -n -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Wed Nov 11 14:12:49 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 Nov 2015 08:12:49 +1300 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 12 November 2015 at 08:07, Nathaniel Smith wrote: > On Wed, Nov 11, 2015 at 10:42 AM, Donald Stufft wrote: >> On November 11, 2015 at 1:38:38 PM, Nathaniel Smith (njs at pobox.com) wrote: >>> > Guaranteeing a clean stdout/stderr is hard: it means you have >>> to be careful to correctly capture and process the output of every >>> child you invoke (e.g. compilers), and deal correctly with the >>> tricky aspects of pipes (deadlocks, sigpipe, ...). And even >>> then you can get thwarted by accidentally importing the wrong >>> library into your main process, and discovering that it writes >>> directly to stdout/stderr on some error condition. And it may >>> or may not respect your resetting of sys.stdout/sys.stderr >>> at the python level. So to be really reliable the only thing to >>> do is to create some pipes and some threads to read the pipes and >>> do the dup2 dance (but not everyone will actually do this, they'll >>> just accept corrupted output on errors) and ugh, all of this is >>> a huge hassle that massively raises the bar on implementing simple >>> build systems. >> >> How is this not true for a worker.py process as well? If the worker process communicates via stdout then it has to make sure it captures the stdout and redirects it before calling into the Python API and then undoes that afterwords. It makes it harder to do incremental output actually because a Python function can?t return in the middle of execution so we?d need to make it some sort of akward generator protocol to make that happen too. > > Did you, uh, read the second half of my email? :-) My actual position > is that we shouldn't even try to get structured incremental output > from the build system, and should stick with the current approach of > unstructured incremental output on stdout/stderr. But if we do insist > on getting structured incremental output, then I described a system > that's much easier for backends to implement, while leaving it up to > the frontend to pick whether they want to bother doing complicated > redirection tricks, and if so then which particular variety of > complicated redirection trick they like best. > > In both approaches, yeah, any kind of incremental output is eventually > come down to some Python code issuing some sort of function call that > reports progress without returning, whether that's > sys.stdout.write(json.dumps(...)) or > progress_reporter.report_update(...). Between those two options, it's > sys.stdout.write(json.dumps(...)) that looks more awkward to me. I think there is some big disconnect in the conversation. AIUI Donald and Marcus and I are saying that build systems should just use print("Something happened") to provide incremental output. 
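As a toy illustration of that model, a backend needs nothing beyond ordinary printing; the hook name, its arguments, and the make invocation below are placeholders rather than a proposed interface:

    # Toy illustration of the "just print" model - no structured protocol at all.
    import subprocess
    import sys

    def build_wheel(source_dir, wheel_dir):
        print("compiling extension modules...")            # ordinary status chatter
        subprocess.check_call(["make", "-C", source_dir])   # child output mixes in freely
        print("assembling wheel in %s" % wheel_dir)
        sys.stdout.flush()                                  # be kind to line-oriented readers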
-Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Wed Nov 11 14:15:07 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 Nov 2015 14:15:07 -0500 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On November 11, 2015 at 2:08:00 PM, Nathaniel Smith (njs at pobox.com) wrote: > On Wed, Nov 11, 2015 at 10:42 AM, Donald Stufft wrote: > > On November 11, 2015 at 1:38:38 PM, Nathaniel Smith (njs at pobox.com) wrote: > >> > Guaranteeing a clean stdout/stderr is hard: it means you have > >> to be careful to correctly capture and process the output of every > >> child you invoke (e.g. compilers), and deal correctly with the > >> tricky aspects of pipes (deadlocks, sigpipe, ...). And even > >> then you can get thwarted by accidentally importing the wrong > >> library into your main process, and discovering that it writes > >> directly to stdout/stderr on some error condition. And it may > >> or may not respect your resetting of sys.stdout/sys.stderr > >> at the python level. So to be really reliable the only thing to > >> do is to create some pipes and some threads to read the pipes and > >> do the dup2 dance (but not everyone will actually do this, they'll > >> just accept corrupted output on errors) and ugh, all of this is > >> a huge hassle that massively raises the bar on implementing simple > >> build systems. > > > > How is this not true for a worker.py process as well? If the worker process communicates > via stdout then it has to make sure it captures the stdout and redirects it before calling > into the Python API and then undoes that afterwords. It makes it harder to do incremental > output actually because a Python function can?t return in the middle of execution so > we?d need to make it some sort of akward generator protocol to make that happen too. > > Did you, uh, read the second half of my email? :-) My actual position > is that we shouldn't even try to get structured incremental output > from the build system, and should stick with the current approach of > unstructured incremental output on stdout/stderr. But if we do insist > on getting structured incremental output, then I described a system > that's much easier for backends to implement, while leaving it up to > the frontend to pick whether they want to bother doing complicated > redirection tricks, and if so then which particular variety of > complicated redirection trick they like best. > > In both approaches, yeah, any kind of incremental output is eventually > come down to some Python code issuing some sort of function call that > reports progress without returning, whether that's > sys.stdout.write(json.dumps(...)) or > progress_reporter.report_update(...). Between those two options, it's > sys.stdout.write(json.dumps(...)) that looks more awkward to me. > I?m confused how the progress indicator you just implemented would work if there wasn?t something triggering a ?hey I?m still doing work? to incrementally output information. 
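To illustrate what the trigger can be in the plain-output model: each line the build emits is itself the "still doing work" signal. A rough sketch of a frontend doing this (not pip's actual implementation; the command passed in is arbitrary):

    # Rough sketch, not pip's actual code: the "trigger" for incremental progress
    # is simply each line of output the build process emits. The frontend hides
    # the text itself and just advances a spinner per line.
    import itertools
    import subprocess
    import sys

    def run_build_with_spinner(cmd):
        spinner = itertools.cycle("|/-\\")
        proc = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,   # fold stderr in so warnings also tick the spinner
            universal_newlines=True,
        )
        captured = []
        for line in proc.stdout:
            captured.append(line)       # keep the chatter in case the build fails
            sys.stderr.write("\rbuilding... " + next(spinner))
            sys.stderr.flush()
        returncode = proc.wait()
        sys.stderr.write("\n")
        if returncode != 0:
            sys.stderr.write("".join(captured))  # only show the full output on failure
        return returncode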
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From robertc at robertcollins.net Wed Nov 11 14:17:01 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 Nov 2015 08:17:01 +1300 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: On 12 November 2015 at 03:48, Donald Stufft wrote: ... > Right. I think PEP 426 fell into the same trap that distutils2 fell into. It attempted to boil the ocean in one step and the longer it went on the more aspirational stuff got layered onto it because it was being held up as the great hope for packaging. >From my blog post: https://rbtcollins.wordpress.com/2015/08/04/the-merits-of-careful-impatience/ """ So here is how I think we should deliver things instead: 1. Design the change with specific care that it fails closed and is opt-in. 2. Implement the change(s) needed, in a new minor version of the tools. 3. Tell users they can use it. """ -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Wed Nov 11 14:19:44 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 11:19:44 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Wed, Nov 11, 2015 at 4:29 AM, Donald Stufft wrote: > On November 11, 2015 at 4:05:11 AM, Nathaniel Smith (njs at pobox.com) wrote: >> > But even this isn't really true -- the difference between them >> is that >> either way you have a subprocess API, but with a Python API, the >> subprocess interface that pip uses has the option of being improved >> incrementally over time -- including, potentially, to take >> further >> advantage of the underlying richness of the Python semantics. >> Sure, >> maybe the first release would just take all exceptions and map >> them >> into some text printed to stderr and a non-zero return code, and >> that's all that pip would get. But if someone had an idea for how >> pip >> could do better than this by, I dunno, encoding some structured >> metadata about the particular exception that occurred and passing >> this >> back up to pip to do something intelligent with it, they absolutely >> could write the code and submit a PR to pip, without having to write >> a >> new PEP. > > I think I prefer a CLI based approach (my suggestion was to remove the formatting/interpolation all together and just have the file include a list of things to install, and a python module to invoke via ``python -m ``). > > The main reason I think I prefer a CLI based approach is that I worry about the impedance mismatch between the two systems. We?re not actually going to be able to take advantage of Python?s plethora of types in any meaningful capacity because at the end of the day the bulk of the data is either naturally a string or as we start to allow end users to pass options through pip into the build system, we have no real way of knowing what the type is supposed to be other than the fact we got it as a CLI flag. How does a user encode something like ?pass an integer into this value in the build system?? on the CLI in a generic way? I can?t think of any way which means that any boundary code in the build system is going to need to be smart enough to handle an array of arguments that come in via the user typing something on the CLI. 
We have a wide variety of libraries to handle that case already for building CLI apps but we do not have a wide array of libraries handling it for a Python API. It will have to be manually encoded for each and every option that the build system supports. You're overcomplicating things :-). The solution to this problem is just "pip's UI only allows passing arbitrary strings as option values, so build backends had better deal with it". That's what we'd effectively be doing anyway in the CLI approach. > My other concern is that it introduces another potential area for mistake that is a bit harder to test. I don?t believe that any sort of ?worker.py? script is ever going to be able to handle arbitrary Python values coming back as a return value from a Python script. Whatever serialization we use to send data back into the main pip process (likely JSON) will simply choke and cause an error if it encounters a type it doesn?t know how to serialize. However this error case will only happen when the build system is being invoked by pip, not when it is being invoked ?naturally? in the build system?s unit tests. By forcing build tool authors to write a CLI interface, we push the work of ?how do I serialize my internal data structures? down onto them instead of making it some implicit piece of code that pip needs to work. I think this is another issue that isn't actually a problem. Remember, we don't need to support translating arbitrary Python function calls across process boundaries; there will be a fixed, finite set of methods that we need to support, and those methods' semantics will be defined in a PEP. So e.g., if the PEP says that build backends should define a method like this: def build_requirements(self, build_options): """Calculate the dynamic portion of the build-requirements. :param build_options: The build options dictionary. :returns: A list of strings, where each string is a PEP XX requirement specifier. """ then our IPC mechanism doesn't need to be able to handle arbitrary types as return values, it needs to be able to handle a list of strings. Which that sketch I sent does handle, so we're good. And the build tool's unit tests will be checking that it returns a list of strings, because... that's what unit tests do, they validate that methods implement the interface that they're defined to implement :-). So this is a non-problem -- we just have to make sure when we define the various method interfaces in the PEP that we don't have any methods that return arbitrary complicated Python types. Which we weren't going to be tempted to do anyway. -n -- Nathaniel J. Smith -- http://vorpus.org From njs at pobox.com Wed Nov 11 14:31:40 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 11:31:40 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Wed, Nov 11, 2015 at 11:12 AM, Robert Collins wrote: > On 12 November 2015 at 08:07, Nathaniel Smith wrote: >> On Wed, Nov 11, 2015 at 10:42 AM, Donald Stufft wrote: >>> On November 11, 2015 at 1:38:38 PM, Nathaniel Smith (njs at pobox.com) wrote: >>>> > Guaranteeing a clean stdout/stderr is hard: it means you have >>>> to be careful to correctly capture and process the output of every >>>> child you invoke (e.g. compilers), and deal correctly with the >>>> tricky aspects of pipes (deadlocks, sigpipe, ...). 
And even >>>> then you can get thwarted by accidentally importing the wrong >>>> library into your main process, and discovering that it writes >>>> directly to stdout/stderr on some error condition. And it may >>>> or may not respect your resetting of sys.stdout/sys.stderr >>>> at the python level. So to be really reliable the only thing to >>>> do is to create some pipes and some threads to read the pipes and >>>> do the dup2 dance (but not everyone will actually do this, they'll >>>> just accept corrupted output on errors) and ugh, all of this is >>>> a huge hassle that massively raises the bar on implementing simple >>>> build systems. >>> >>> How is this not true for a worker.py process as well? If the worker process communicates via stdout then it has to make sure it captures the stdout and redirects it before calling into the Python API and then undoes that afterwords. It makes it harder to do incremental output actually because a Python function can?t return in the middle of execution so we?d need to make it some sort of akward generator protocol to make that happen too. >> >> Did you, uh, read the second half of my email? :-) My actual position >> is that we shouldn't even try to get structured incremental output >> from the build system, and should stick with the current approach of >> unstructured incremental output on stdout/stderr. But if we do insist >> on getting structured incremental output, then I described a system >> that's much easier for backends to implement, while leaving it up to >> the frontend to pick whether they want to bother doing complicated >> redirection tricks, and if so then which particular variety of >> complicated redirection trick they like best. >> >> In both approaches, yeah, any kind of incremental output is eventually >> come down to some Python code issuing some sort of function call that >> reports progress without returning, whether that's >> sys.stdout.write(json.dumps(...)) or >> progress_reporter.report_update(...). Between those two options, it's >> sys.stdout.write(json.dumps(...)) that looks more awkward to me. > > I think there is some big disconnect in the conversation. AIUI Donald > and Marcus and I are saying that build systems should just use > > print("Something happened") > > to provide incremental output. I agree that this is the best approach. This particular subthread is all hanging off of Paul's message [1] where he argues that we can't just print arbitrary text to stdout/stderr, we need, like, structured JSON messages on stdout that pip can parse while the build is running. (Which implies that you can *only* have structured JSON messages on stdout, because otherwise there's no way to tell which bits are supposed to be structured and which bits are just arbitrary text.) And I said well, I think that's probably overcomplicated and unnecessary, but if you insist then this is what it would look like in the different approaches. (Your current draft does create similar challenges for build backends because it also uses stdout for passing structured data. But I know you're in the middle of rewriting it anyway, so maybe this is irrelevant.) -n [1] http://thread.gmane.org/gmane.comp.python.distutils.devel/24760/focus=24792 -- Nathaniel J. 
Smith -- http://vorpus.org From p.f.moore at gmail.com Wed Nov 11 14:34:42 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 Nov 2015 19:34:42 +0000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 18:38, Nathaniel Smith wrote: > Have you tried current dev versions of pip recently? No, but I did see your work on this, and I appreciate and approve of it. > The default now is to > suppress the actual output but for progress reporting to show a spinner that > rotates each time a line of text would have been printed. It's low tech but > IMHO very effective. (And obviously you can also flip a switch to either see > all or nothing of the output as well, or if that isn't there now if books > really be added.) So I kinda feel like these are solved problems. And this relies on build tools outputting to stdout, not stderr, and not buffering their output. That's an interface spec. Not everything has to be massively complicated, and I wasn't implying it needed to be. Just that we need conventions. One constant annoyance for pip is that distutils doesn't properly separate stdout and stderr, so we can't suppress unnecessary status reports without losing important error messages. Users report this as a bug in pip, not in distutils, and I don't imagine that would change if a project was using . > >> Taking all of those requirements into account, pip *has* to have some >> level of control over the output of a build tool - with setuptools at >> the moment, we have no such control (other than "we may or may not >> show the output to the user") and that means we struggle to >> realistically satisfy all of the conflicting requirements we have. >> >> So we do need much better defined contracts over stdin, stdout and >> stderr, and return codes. This is true whether or not the build system >> is invoked via a Python API or a CLI. > > Even if you really do want to define a generic structured system for build > progress reporting (it feels pretty second-systemy to me), then in the > python api approach there are better options than trying to define a > specific protocol on stdout. No, no, no. I never said that. All I was saying was that we need a level of agreement on what pip can expect to do with stdout and stderr, *given that there are known requirements pip's users expect to be satisfied*. Paul From p.f.moore at gmail.com Wed Nov 11 14:35:32 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 Nov 2015 19:35:32 +0000 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On 11 November 2015 at 19:31, Nathaniel Smith wrote: > This particular subthread is all hanging off of Paul's message [1] > where he argues that we can't just print arbitrary text to > stdout/stderr, we need, like, structured JSON messages on stdout that > pip can parse while the build is running As I already pointed out, I never said that. Paul From njs at pobox.com Wed Nov 11 14:48:40 2015 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 Nov 2015 11:48:40 -0800 Subject: [Distutils] command line versus python API for build system abstraction (was Re: build system abstraction PEP) In-Reply-To: References: Message-ID: On Wed, Nov 11, 2015 at 11:34 AM, Paul Moore wrote: > On 11 November 2015 at 18:38, Nathaniel Smith wrote: >> Have you tried current dev versions of pip recently? 
> > No, but I did see your work on this, and I appreciate and approve of it. > >> The default now is to >> suppress the actual output but for progress reporting to show a spinner that >> rotates each time a line of text would have been printed. It's low tech but >> IMHO very effective. (And obviously you can also flip a switch to either see >> all or nothing of the output as well, or if that isn't there now if books >> really be added.) So I kinda feel like these are solved problems. > > And this relies on build tools outputting to stdout, not stderr, and > not buffering their output. FWIW the spinner patch actually looks at both stdout and stderr, and it also takes care to force the child process's sys.stdout/sys.stderr into line-buffered mode, but of course this buffering tweak only helps for output printed by python code running in the immediate child. So yeah, it wouldn't hurt to add a few non-normative words about buffering to my original one-sentence specification :-). > That's an interface spec. Not everything has to be massively > complicated, and I wasn't implying it needed to be. Just that we need > conventions. One constant annoyance for pip is that distutils doesn't > properly separate stdout and stderr, so we can't suppress unnecessary > status reports without losing important error messages. Users report > this as a bug in pip, not in distutils, and I don't imagine that would > change if a project was using . Sorry for misunderstanding! I guess the other thing we could do is to try to convince build systems to do a better job of separating stdout and stderr, but I'm dubious about how much this would help, because I think the problem is more fundamental than that. For outright errors, there isn't really a problem IMO, because when the build fails that gives you a clear signal that you should probably show the user the output :-). The case that's trickier, and could potentially benefit, is warnings that don't cause the build to fail. If gcc outputs a warning, should we show that to the user? Yes if this is the developer building their own code... but probably not if this is pip building from an automatically downloaded sdist for an end-user -- there are lots and lots of harmless warnings in the output of popular packages, and dumping those scary and inscrutable messages on end-users is going to create all the problems we were trying to avoid by hiding the output in the first place. -n -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Wed Nov 11 15:38:18 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 Nov 2015 09:38:18 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Next iteration: diff --git a/dependency-specification.rst b/dependency-specification.rst index d60df6e..72a87f1 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -117,7 +117,8 @@ environments:: Optional components of a distribution may be specified using the extras field:: - identifier = ( digit | letter | '-' | '_' | '.')+ + identifier = ( letterOrDigit | + letterOrDigit (letterOrDigit | '-' | '_' | '.')* letterOrDigit ) name = identifier extras_list = identifier (wsp* ',' wsp* identifier)* extras = '[' wsp* extras_list? ']' @@ -146,7 +147,12 @@ Names Python distribution names are currently defined in PEP-345 [#pep345]_. Names act as the primary identifier for distributions. They are present in all dependency specifications, and are sufficient to be a specification on their -own. +own. 
However, PyPI places strict restrictions on names - they must match a +case insensitive regex or they won't be accepted. Accordingly in this PEP we +limit the acceptable values for identifiers to that regex. A full redefinition +of name may take place in a future metadata PEP:: + + ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ Extras ------ @@ -199,13 +205,18 @@ do for strings in Python. The operators use the PEP-440 [#pep440]_ version comparison rules when those are defined (that is when both sides have a valid version specifier). If there is no defined PEP-440 behaviour and the operator exists in Python, then the operator falls back to -the Python behaviour. Otherwise the result of the operator is False. +the Python behaviour. Otherwise an error should be raised. e.g. the following +will result in errors:: + + "dog" ~= "fred" + python_version ~= "surprise" User supplied constants are always encoded as strings with either ``'`` or ``"`` quote marks. Note that backslash escapes are not defined, but existing implementations do support them them. They are not included in this -specification because none of the variables to which constants can be compared -contain quote-requiring values. +specification because they add complexity and there is no observable need for +them today. Similarly we do not define non-ASCII character support: all the +runtime variables we are referencing are expected to be ASCII-only. The variables in the marker grammar such as "os_name" resolve to values looked up in the Python runtime. With the exception of "extra" all values are defined @@ -223,8 +234,8 @@ The "extra" variable is special. It is used by wheels to signal which specifications apply to a given extra in the wheel ``METADATA`` file, but since the ``METADATA`` file is based on a draft version of PEP-426, there is no current specification for this. Regardless, outside of a context where this -special handling is taking place, the "extra" variable should result in a -SyntaxError like all other unknown variables. +special handling is taking place, the "extra" variable should result in an +error like all other unknown variables. .. list-table:: :header-rows: 1 @@ -237,7 +248,8 @@ SyntaxError like all other unknown variables. - ``posix``, ``java`` * - ``sys_platform`` - ``sys.platform`` - - ``linux``, ``darwin``, ``java1.8.0_51`` + - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that "linux" + is from Python3 and "linux2" from Python2) * - ``platform_machine`` - ``platform.machine()`` - ``x86_64`` @@ -268,7 +280,7 @@ SyntaxError like all other unknown variables. - see definition below - ``3.4.0``, ``3.5.0b1`` * - ``extra`` - - A SyntaxError except when defined by the context interpreting the + - An error except when defined by the context interpreting the specification. - ``test`` @@ -282,7 +294,10 @@ The ``implementation_version`` marker variable is derived from version += kind[0] + str(info.serial) return version - implementation_version = format_full_version(sys.implementation.version) + if hasattr(sys, 'implementation'): + implementation_version = format_full_version(sys.implementation.version) + else: + implementation_version = "0" Backwards Compatibility ======================= @@ -322,9 +337,9 @@ we needed a specification that included them in their modern form. This PEP brings together all the currently unspecified components into a specified form. 
-The requirement specifier EBNF is lifted from setuptools pkg_resources -documentation, since we wish to avoid depending on a defacto, vs PEP -specified, standard. +The requirement specifier was adopted from the EBNF in the setuptools +pkg_resources documentation, since we wish to avoid depending on a defacto, vs +PEP specified, standard. Complete Grammar ================ @@ -333,14 +348,14 @@ The complete parsley grammar:: wsp = ' ' | '\t' version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '==='> - version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' )+> + version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | '+' | '!' )+> version_one = version_cmp:op version:v -> (op, v) version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 versionspec = ('(' version_many:v ')' ->v) | version_many urlspec = '@' wsp* marker_op = version_cmp | 'in' | 'not in' python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | - '-' | '_' | '*') + '-' | '_' | '*' | '#') dquote = '"' squote = '\\'' python_str = (squote <(python_str_c | dquote)*>:s squote | @@ -359,7 +374,8 @@ The complete parsley grammar:: marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* marker_expr:r -> (o, r))*:ms -> (m, ms)) quoted_marker = ';' wsp* marker - identifier = <( digit | letter | '-' | '_' | '.')+> + identifier = <( letterOrDigit | + letterOrDigit (letterOrDigit | '-' | '_' | '.')* letterOrDigit )> name = identifier extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + ids extras = '[' wsp* extras_list?:e ']' -> e On 11 November 2015 at 19:41, Nick Coghlan wrote: > On 11 November 2015 at 16:11, Robert Collins wrote: >> - fixed a bug in the marker language that happened somewhere in >> PEP-426. In PEP 345 markers were strictly 'LHS OP RHS' followed by AND >> or OR and then another marker expression. The grammar in PEP-426 >> however allowed things like "(python_version) and >> (python_version=='2.7')" which I believe wasn't actually the intent - >> truthy values are not defined there. So the new grammar does not allow >> ("fred" and "bar") or other such things - and and or are exclusively >> between well defined expressions now. > > Right, Vinay pointed out that the use of parentheses for grouping > wasn't well defined in 345, but instead grew implicitly out of their > evaluation as a Python subset. I attempted to fix that in 426, but > didn't intend to allow non-comparisons as operands for AND and OR. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -- Robert Collins Distinguished Technologist HP Converged Cloud From guettliml at thomas-guettler.de Thu Nov 12 03:55:49 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 12 Nov 2015 09:55:49 +0100 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> Message-ID: <56445415.1000503@thomas-guettler.de> Am 11.11.2015 um 13:59 schrieb Donald Stufft: > On November 11, 2015 at 1:30:57 AM, Thomas G?ttler (guettliml at thomas-guettler.de) wrote: >> >> Maybe I am missing something, but still think server side dependency resolution is possible. >> > > I don?t believe it?s possible nor desirable to have the server handle dependency resolution, at least not without > removing some currently supported features and locking out some future features from ever happening. I can understand you, if you say it is not desirable. 
I like the general concept of simple clients and solving complicated stuff at the server. Now to "possible": - What features are not supported if you do resolve dependencies on the server? - What features are not possible in the future? > > Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By > default this only includes PyPI but people can either remove that, or add additional repository locations. In order > to support this we need a resolver that can union multiple repositories together before doing the resolving. If the > repository itself was the one handling the resolution than we are locked into a single repository per invocation of > pip. I am aware of that. In our company the CI system has no access to pypi.org. All packages come from our package server which contains a mirror of some pypi packages. If this can be done on the client side today, I see no problem doing this on the server-side tomorrow. > Additionally, pip can also be configured to use a simple directory full of files as a repository. Since this is just > a simple directory, there *is* no server process running that would allow for a server side resolver to happen and > pip either *must* handle the resolution itself in this case or it must disallow these feature all together. Same as above: can be done on a server, too. > > Additionally, the fact that we currently treat the server as a ?dumb? server, means that someone can implement a PEP > 503 compatible repository very trivially with pretty much any web server that supports static files and automatically > generating an index for static files. Switching to server side resolution would require removing this capability and > force everyone to run a dedicated repository software that can handle that resolution. You currently treat the server as a "dump" server. That's ok. Did I think I want to replace your server with my idea? I am very sorry if you thought this way. My solution is optional and just an idea. I never meant that pypi.or or the new wheel server should use my idea. You use the word "force". Nobody gets forced just because there is an alternative. > Additionally, we want there to be as little variance in the requests that people make to the repository as possible. > We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI which is the primary reason we?ve > been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to a > server side dependency resolution than we reduce our ability to ensure that as many requests as possible are served > directly out of the cache rather than having to be go back to our backend servers. Your thoughts were too fast. There are a lot of private package hostings servers in intranets of companies. In this context the load can be handled very well. And if you have CI-Systems asking for the same stuff over and over again, caching could improve the speed very much. You can do caching at high level: all projects going through CI in one company benefit. > Finally, we want to move further away from trusting the actual repository where we can. In the future we?ll be > allowing package signing that will make it possible to survive a compromise of the repository. 
However there is no > way to do that if the repository needs to be able to dynamically generate a list of packages that need to be > installed as part of a resolution process because by definition that needs to be done on the fly and thus must be > signed by a key that the repository has access too if it?s signed at all. However, since the metadata for a package > can be signed once and then it never changes, that can be signed by a human when they are uploading to PyPI and than > pip can verify the signature on that metadata before feeding it into the resolver. This would allow us to treat PyPI > as just an untrusted middleman instead of something that is essentially going to be allowed to force us to execute > arbitrary code whenever someone does a pip install (because it?ll be able to instruct us to install any package, and > packages can contain arbitrary code). My idea is made of two parts which don't depend on each other. The main (first) part is dep resolution on server: Input: install_requires list with fuzzy version requirements Output: version pinned package list. If the server was hacked. What could a black hat hacker have done? He could send you an evil line in the result. Instead of "Django==1.8.3" he could send you "Django-with-my-evil-hacks-included==1.8.3". It is still up to the client if he install the requirements that the server gave you. These packages can be downloaded individually and checked with the way you want pip the check packages in the future. I understand you fear for the second part: Creating one package from a list of version-pinned requirements. > Hopefully that answers your question about why it?s unlikely that we?ll ever move to a server side dependency > resolver because even though it is possible to do so, doing it would severely regress a number of very important > features. I just wanted to share my idea: https://github.com/guettli/virtualenv-build-server The idea is in the public domain. I will happily coach developers who want to implement it. I won't implement the idea myself :-) Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From leorochael at gmail.com Thu Nov 12 07:32:12 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Thu, 12 Nov 2015 10:32:12 -0200 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: <56445415.1000503@thomas-guettler.de> References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> <56445415.1000503@thomas-guettler.de> Message-ID: Hi Thomas, I think your idea could be very useful as an accelerator if installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first. After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side. By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer. My only comment about your idea would be that since packages get upgraded all the time, then the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time... Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set. 
Regards, Leo On 12 November 2015 at 06:55, Thomas G?ttler wrote: > Am 11.11.2015 um 13:59 schrieb Donald Stufft: > >> On November 11, 2015 at 1:30:57 AM, Thomas G?ttler ( >> guettliml at thomas-guettler.de) wrote: >> >>> >>> Maybe I am missing something, but still think server side dependency >>> resolution is possible. >>> >>> >> I don?t believe it?s possible nor desirable to have the server handle >> dependency resolution, at least not without >> removing some currently supported features and locking out some future >> features from ever happening. >> > > I can understand you, if you say it is not desirable. > > I like the general concept of simple clients and solving complicated stuff > at the server. > > Now to "possible": > > - What features are not supported if you do resolve dependencies on the > server? > - What features are not possible in the future? > > > >> Currently pip can be configured with multiple repository locations that >> it will use when resolving dependencies. By >> default this only includes PyPI but people can either remove that, or add >> additional repository locations. In order >> to support this we need a resolver that can union multiple repositories >> together before doing the resolving. If the >> repository itself was the one handling the resolution than we are locked >> into a single repository per invocation of >> pip. >> > > I am aware of that. In our company the CI system has no access to pypi.org. > All packages come from our package server which contains a mirror of some > pypi packages. > > If this can be done on the client side today, I see no problem doing this > on the server-side tomorrow. > > Additionally, pip can also be configured to use a simple directory full of >> files as a repository. Since this is just >> a simple directory, there *is* no server process running that would allow >> for a server side resolver to happen and >> pip either *must* handle the resolution itself in this case or it must >> disallow these feature all together. >> > > Same as above: can be done on a server, too. > > >> Additionally, the fact that we currently treat the server as a ?dumb? >> server, means that someone can implement a PEP >> 503 compatible repository very trivially with pretty much any web server >> that supports static files and automatically >> generating an index for static files. Switching to server side resolution >> would require removing this capability and >> force everyone to run a dedicated repository software that can handle >> that resolution. >> > > You currently treat the server as a "dump" server. That's ok. > > Did I think I want to replace your server with my idea? I am very sorry if > you thought this way. > > My solution is optional and just an idea. I never meant that pypi.or or > the new wheel server should use my idea. > > You use the word "force". Nobody gets forced just because there is an > alternative. > > Additionally, we want there to be as little variance in the requests that >> people make to the repository as possible. >> We utilize a caching CDN layer which handles > 80% of the total traffic >> to PyPI which is the primary reason we?ve >> been able to scale to handling 5TB and ~50 million requests a day with a >> skeleton crew of people. If we move to a >> server side dependency resolution than we reduce our ability to ensure >> that as many requests as possible are served >> directly out of the cache rather than having to be go back to our backend >> servers. >> > > Your thoughts were too fast. 
There are a lot of private package hostings > servers in intranets of companies. > > In this context the load can be handled very well. And if you have > CI-Systems asking for the same stuff > over and over again, caching could improve the speed very much. You can do > caching at high level: all > projects going through CI in one company benefit. > > Finally, we want to move further away from trusting the actual repository >> where we can. In the future we?ll be >> allowing package signing that will make it possible to survive a >> compromise of the repository. However there is no >> way to do that if the repository needs to be able to dynamically generate >> a list of packages that need to be >> installed as part of a resolution process because by definition that >> needs to be done on the fly and thus must be >> signed by a key that the repository has access too if it?s signed at all. >> However, since the metadata for a package >> can be signed once and then it never changes, that can be signed by a >> human when they are uploading to PyPI and than >> pip can verify the signature on that metadata before feeding it into the >> resolver. This would allow us to treat PyPI >> as just an untrusted middleman instead of something that is essentially >> going to be allowed to force us to execute >> arbitrary code whenever someone does a pip install (because it?ll be able >> to instruct us to install any package, and >> packages can contain arbitrary code). >> > > My idea is made of two parts which don't depend on each other. > > The main (first) part is dep resolution on server: > > Input: install_requires list with fuzzy version requirements > Output: version pinned package list. > > If the server was hacked. What could a black hat hacker have done? > He could send you an evil line in the result. Instead of "Django==1.8.3" > he could > send you "Django-with-my-evil-hacks-included==1.8.3". > > It is still up to the client if he install the requirements that the > server gave you. > These packages can be downloaded individually and checked with the way you > want > pip the check packages in the future. > > I understand you fear for the second part: Creating one package from a list > of version-pinned requirements. > > Hopefully that answers your question about why it?s unlikely that we?ll >> ever move to a server side dependency >> resolver because even though it is possible to do so, doing it would >> severely regress a number of very important >> features. >> > > I just wanted to share my idea: > https://github.com/guettli/virtualenv-build-server > > The idea is in the public domain. I will happily coach developers who > want to implement it. I won't implement the idea myself :-) > > Regards, > Thomas G?ttler > > > > -- > Thomas Guettler http://www.thomas-guettler.de/ > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wes.turner at gmail.com Thu Nov 12 07:59:34 2015 From: wes.turner at gmail.com (Wes Turner) Date: Thu, 12 Nov 2015 06:59:34 -0600 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> <56445415.1000503@thomas-guettler.de> Message-ID: On Nov 12, 2015 6:32 AM, "Leonardo Rochael Almeida" wrote: > > Hi Thomas, > > I think your idea could be very useful as an accelerator if installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first. > > After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side. > > By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer. > > My only comment about your idea would be that since packages get upgraded all the time, then the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time... > > Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set. * [(PKG, VERSTR)] * {sys.platform: platform strings} * [or] the revision of a meta-(package/module) and build options * e.g. --make-relocatable, prefix ... like a PPA build farm with a parameterized test 'grid'? > > Regards, > > Leo > > > On 12 November 2015 at 06:55, Thomas G?ttler wrote: >> >> Am 11.11.2015 um 13:59 schrieb Donald Stufft: >>> >>> On November 11, 2015 at 1:30:57 AM, Thomas G?ttler ( guettliml at thomas-guettler.de) wrote: >>>> >>>> >>>> Maybe I am missing something, but still think server side dependency resolution is possible. >>>> >>> >>> I don?t believe it?s possible nor desirable to have the server handle dependency resolution, at least not without >>> removing some currently supported features and locking out some future features from ever happening. >> >> >> I can understand you, if you say it is not desirable. >> >> I like the general concept of simple clients and solving complicated stuff at the server. >> >> Now to "possible": >> >> - What features are not supported if you do resolve dependencies on the server? >> - What features are not possible in the future? >> >> >>> >>> Currently pip can be configured with multiple repository locations that it will use when resolving dependencies. By >>> default this only includes PyPI but people can either remove that, or add additional repository locations. In order >>> to support this we need a resolver that can union multiple repositories together before doing the resolving. If the >>> repository itself was the one handling the resolution than we are locked into a single repository per invocation of >>> pip. >> >> >> I am aware of that. In our company the CI system has no access to pypi.org. All packages come from our package server which contains a mirror of some pypi packages. >> >> If this can be done on the client side today, I see no problem doing this on the server-side tomorrow. >> >>> Additionally, pip can also be configured to use a simple directory full of files as a repository. 
Since this is just >>> a simple directory, there *is* no server process running that would allow for a server side resolver to happen and >>> pip either *must* handle the resolution itself in this case or it must disallow these feature all together. >> >> >> Same as above: can be done on a server, too. >> >>> >>> Additionally, the fact that we currently treat the server as a ?dumb? server, means that someone can implement a PEP >>> 503 compatible repository very trivially with pretty much any web server that supports static files and automatically >>> generating an index for static files. Switching to server side resolution would require removing this capability and >>> force everyone to run a dedicated repository software that can handle that resolution. >> >> >> You currently treat the server as a "dump" server. That's ok. >> >> Did I think I want to replace your server with my idea? I am very sorry if you thought this way. >> >> My solution is optional and just an idea. I never meant that pypi.or or the new wheel server should use my idea. >> >> You use the word "force". Nobody gets forced just because there is an alternative. >> >>> Additionally, we want there to be as little variance in the requests that people make to the repository as possible. >>> We utilize a caching CDN layer which handles > 80% of the total traffic to PyPI which is the primary reason we?ve >>> been able to scale to handling 5TB and ~50 million requests a day with a skeleton crew of people. If we move to a >>> server side dependency resolution than we reduce our ability to ensure that as many requests as possible are served >>> directly out of the cache rather than having to be go back to our backend servers. >> >> >> Your thoughts were too fast. There are a lot of private package hostings servers in intranets of companies. >> >> In this context the load can be handled very well. And if you have CI-Systems asking for the same stuff >> over and over again, caching could improve the speed very much. You can do caching at high level: all >> projects going through CI in one company benefit. >> >>> Finally, we want to move further away from trusting the actual repository where we can. In the future we?ll be >>> allowing package signing that will make it possible to survive a compromise of the repository. However there is no >>> way to do that if the repository needs to be able to dynamically generate a list of packages that need to be >>> installed as part of a resolution process because by definition that needs to be done on the fly and thus must be >>> signed by a key that the repository has access too if it?s signed at all. However, since the metadata for a package >>> can be signed once and then it never changes, that can be signed by a human when they are uploading to PyPI and than >>> pip can verify the signature on that metadata before feeding it into the resolver. This would allow us to treat PyPI >>> as just an untrusted middleman instead of something that is essentially going to be allowed to force us to execute >>> arbitrary code whenever someone does a pip install (because it?ll be able to instruct us to install any package, and >>> packages can contain arbitrary code). >> >> >> My idea is made of two parts which don't depend on each other. >> >> The main (first) part is dep resolution on server: >> >> Input: install_requires list with fuzzy version requirements >> Output: version pinned package list. >> >> If the server was hacked. What could a black hat hacker have done? 
>> He could send you an evil line in the result. Instead of "Django==1.8.3" he could >> send you "Django-with-my-evil-hacks-included==1.8.3". >> >> It is still up to the client if he install the requirements that the server gave you. >> These packages can be downloaded individually and checked with the way you want >> pip the check packages in the future. >> >> I understand you fear for the second part: Creating one package from a list >> of version-pinned requirements. >> >>> Hopefully that answers your question about why it?s unlikely that we?ll ever move to a server side dependency >>> resolver because even though it is possible to do so, doing it would severely regress a number of very important >>> features. >> >> >> I just wanted to share my idea: https://github.com/guettli/virtualenv-build-server >> >> The idea is in the public domain. I will happily coach developers who >> want to implement it. I won't implement the idea myself :-) >> >> Regards, >> Thomas G?ttler >> >> >> >> -- >> Thomas Guettler http://www.thomas-guettler.de/ >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Nov 12 12:08:36 2015 From: brett at python.org (Brett Cannon) Date: Thu, 12 Nov 2015 17:08:36 +0000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: On Wed, 11 Nov 2015 at 04:06 Paul Moore wrote: > On 11 November 2015 at 06:35, Nick Coghlan wrote: > > Windows Python 2 installations require manual PATH modifications > > regardless, but it's more common for people to know how to make > > "python -m pip install X" work, than it is for them to remember to > > also add the "Scripts" directory needed to make "pip install X" work. > > ... and "py -m pip install X" works without any PATH modification on > all Windows systems with the launcher installed (I can't recall if > it's included with Python 2.7 - but if not, maybe it should be > backported? There's a standalone version people can get as well). > While the discussion to try and get UNIX to adopt `py` is nice, I think that decision falls under python-dev's jurisdiction. So if people here decide "we should be pushing for that" then that's great, but that means someone needs to go to python-dev and say "distutils-sig is trying to solve the issue of `pip` being ambiguous as to what Python installation it works with and we thought making `py` a thing on UNIX was the best solution forward for `py -m pip`". And if that's the case then the stop-gap is `python -m pip`. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From guettliml at thomas-guettler.de Thu Nov 12 13:36:42 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Thu, 12 Nov 2015 19:36:42 +0100 Subject: [Distutils] Serverside Dependency Resolution and Virtualenv Build Server In-Reply-To: References: <563E0C8C.9070609@thomas-guettler.de> <5642E088.8070009@thomas-guettler.de> <56445415.1000503@thomas-guettler.de> Message-ID: <5644DC3A.7000204@thomas-guettler.de> Am 12.11.2015 um 13:32 schrieb Leonardo Rochael Almeida: > Hi Thomas, > > I think your idea could be very useful as an accelerator if installation in closed environments, as you suggested in your last e-mail, but which wasn't clear in your first. > > After all, in closed environments you have control of the machine architecture of all clients, and can be reasonably sure that the wheels you build server-side are installable client-side. > > By default, when proposing ideas on this list, people tend to assume they're ideas being proposed to PyPI itself, unless there is a very clear mention that this is not the case, hence Donald's answer. > > My only comment about your idea would be that since packages get upgraded all the time, then the "fuzzy set of requirements" can't be treated as the cache key, otherwise your pre-built virtualenvs will get stale all the time... :-) thank you very much. This makes my happy, since you look at it in detail > Rather, the cache key of the pre-built virtual environments should be the "fixed set of packages with exactly pinned versions" that was resolved from the fuzzy set. Yes -- http://www.thomas-guettler.de/ From njs at pobox.com Thu Nov 12 19:28:20 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 12 Nov 2015 16:28:20 -0800 Subject: [Distutils] [Numpy-discussion] Proposal: stop supporting 'setup.py install'; start requiring 'pip install .' instead In-Reply-To: References: <561702487663958680@unknownmsgid> <-4238370622781470465@unknownmsgid> Message-ID: On Sun, Nov 8, 2015 at 12:52 PM, Paul Moore wrote: > On 8 November 2015 at 17:42, Nathaniel Smith wrote: >> I'm not sure exactly what's at stake in this terminological/ontological >> debate, but it certainly is fairly common for developers to have >> conversations like "thanks for reporting that issue, I think it's fixed in >> master but can't reproduce myself so can you try 'pip install >> https://github.com/pydata/patsy/archive/master.zip' and report back whether >> it helps?" > > Well, reviewing this scenario is probably much more useful than the > endless terminology debates that I seem to be forever starting, so > thanks for stopping me! > > It seems to me that in this situation, optimising rebuild times > probably isn't too important. The user is likely to only be building > once or twice, so reusing object files from a previous build isn't > likely to be a killer benefit. Sure. And there's no reasonable way to optimize rebuild times anyway when the input is a remote URL -- it's only when the input is an on-disk directory that worrying about incremental builds even makes sense. > However, if the user does as you asked here, they'd likely be pretty > surprised (and it'd be a nasty situation for you to debug) if pip > didn't install what the user asked. 
In all honesty, You could argue > that this implies that pip should unconditionally install files > specified on the command line, Yes, that is what I do argue :-) > but I'd suggest that you should > actually be asking the user to run 'pip install --ignore-installed > https://github.com/pydata/patsy/archive/master.zip'. That avoids any > risk that whatever the user has currently installed could mess things > up, and is explicit that it's doing so (and equally, it's explicit > that it'll overwrite the currently installed version, which the user > might not want to do in his main environment). Problem 1 is that I don't actually know what --ignore-installed does. My first guess is that it would cause pip to skip uninstalling packages before upgrading them, resulting in an inconsistent/corrupt environment. (No, this doesn't sound like particularly useful behavior to me either, but most operations/switches in pip have semantics that are somewhat skewed from what I would consider intuitive, so who knows. It's right next to --no-deps in the --help output, and --no-deps is literally a "please give me an inconsistent/corrupt environment" switch, so it's totally plausible that --ignore-installed is intended for similarly ill-conceived uses.) Or maybe it causes pip to pretend that the environment is totally empty when picking the set of (package, version) tuples to install, triggering upgrades of dependent packages? I would actually guess both of those before guessing that it means "please actually install the thing I asked you to install, but otherwise act normally", and as of right now I still actually have no idea which of these is correct (if any). AFAICT there aren't any docs -- maybe I'm just failing to search properly. Problem 2 is that even if --ignore-installed does do the appropriate thing, and even if there is some way for me to figure this out, then it will still inevitably happen that 1 in 10 times I will forget to mention it, not notice that I have forgotten to mention it, and the user will not realize that nothing has happened, and just report that "they installed the new version but they still get the same error", and then I spend hours tearing out my hair trying to figure out why not (because I "know" that they actually installed the new version). If you want to optimize your UI to frustrate people and waste their time, then a really impressively good technique is to include a special switch that usually does nothing, but every once in a while is necessary, and if you forget it then the computer and the user's mental model will get totally out of sync. Otherwise, though... :-/ > Maybe you could argue that you want --ignore-installed to be the > default (probably only when a file is specified rather than a > requirement, assuming that distinguishing between a file and a > requirement is practical). But if we did that, we'd still need a > --dont-ignore-installed flag to restore the current behaviour. For > example, because Christoph Gohlke's builds must be manually > downloaded, I find it's quite common to download a wheel from his site > and "pip install" it in a number of environments, with the meaning > "only if it'd be an upgrade to whatever is currently installed". Sure, I have no objection to a pip install --only-if-upgrade flag. -n -- Nathaniel J. 
Smith -- http://vorpus.org From chris.barker at noaa.gov Thu Nov 12 21:09:23 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Thu, 12 Nov 2015 18:09:23 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> Message-ID: <-1886531515833612636@unknownmsgid> > If we waved our hands and were able to magically make Python package > management perfect, what would that look like? well, I think the command would be: python install package_name I know there are good reasons to keep package installer development out of core, but if have ensurepip-- we could do this. CHB Would that kind of discussion even be valuable? On Tue, Nov 10, 2015, 6:22 PM Nathaniel Smith wrote: I totally get why people dislike the ergonomics of 'python -m pip', but we can also acknowledge that it does solve a real technical problem: it strictly reduces the number of things that can go wrong, in a tool that's down at the base of the stack. _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Nov 13 02:07:48 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 13 Nov 2015 17:07:48 +1000 Subject: [Distutils] Current PyPI storage requirements? Message-ID: This isn't an urgent question, but rather a "if the stats are readily available, I'm curious as to the answer" one: what are PyPi's current storage requirements? The warehouse.python.org front page indicates how many objects there are, but not the amount of space they take up. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Nov 13 02:14:16 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 13 Nov 2015 17:14:16 +1000 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <92DBF413-0CCC-45B9-B76D-E5878DF5106A@twistedmatrix.com> Message-ID: On 13 November 2015 at 03:08, Brett Cannon wrote: > > > On Wed, 11 Nov 2015 at 04:06 Paul Moore wrote: >> >> On 11 November 2015 at 06:35, Nick Coghlan wrote: >> > Windows Python 2 installations require manual PATH modifications >> > regardless, but it's more common for people to know how to make >> > "python -m pip install X" work, than it is for them to remember to >> > also add the "Scripts" directory needed to make "pip install X" work. >> >> ... and "py -m pip install X" works without any PATH modification on >> all Windows systems with the launcher installed (I can't recall if >> it's included with Python 2.7 - but if not, maybe it should be >> backported? There's a standalone version people can get as well). > > > While the discussion to try and get UNIX to adopt `py` is nice, I think > that decision falls under python-dev's jurisdiction. So if people here > decide "we should be pushing for that" then that's great, but that means > someone needs to go to python-dev and say "distutils-sig is trying to solve > the issue of `pip` being ambiguous as to what Python installation it works > with and we thought making `py` a thing on UNIX was the best solution > forward for `py -m pip`". And if that's the case then the stop-gap is > `python -m pip`. 
That particular discussion would (and did) start on the new linux-sig list :) To be honest, it didn't gain much traction, but converting /usr/bin/python to a launcher seems to be a relatively plausible way of helping to address the 2->3 transition problem (although the semantics would differ significantly from those of the Windows "py" launcher). That would leave "py" continuing to exist indefinitely as a tool primarily for handling Windows file associations by bringing shebang line parsing to Windows rather than for direct invocation of the Python interpreter. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Fri Nov 13 02:16:52 2015 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 12 Nov 2015 23:16:52 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: <-1886531515833612636@unknownmsgid> References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> <-1886531515833612636@unknownmsgid> Message-ID: On Thu, Nov 12, 2015 at 6:09 PM, Chris Barker - NOAA Federal wrote: > > If we waved our hands and were able to magically make Python package >> management perfect, what would that look like? > > well, I think the command would be: > > python install package_name > > I know there are good reasons to keep package installer development out of > core, but if have ensurepip-- we could do this. 1) What about 'pip uninstall', 'pip freeze', 'pip list', 'pip show', 'pip search', 'pip wheel'? 2) If it requires python 3.6 it's kinda a non-starter... -n -- Nathaniel J. Smith -- http://vorpus.org From gandalf at shopzeus.com Fri Nov 13 06:50:34 2015 From: gandalf at shopzeus.com (=?UTF-8?Q?Nagy_L=c3=a1szl=c3=b3_Zsolt?=) Date: Fri, 13 Nov 2015 12:50:34 +0100 Subject: [Distutils] How to exclude a directory from a module that has tests? Message-ID: <5645CE8A.7010207@shopzeus.com> Hello, My source distribution contains a single module. It is **not** a package. I have used this: setup(name='yubistorm', version=__version__, description='Asnychronous two factor authentication client for YubiCloud with Tornado', long_description="""Provides a simple module that can be used from a Tornado server to authenticate users """ + """ with YubiCloud (https://www.yubico.com/products/services-software/yubicloud/)""", author='L?szl? Zsolt Nagy', author_email='nagylzs at gmail.com', license="LGPL v3", py_modules=['yubistorm'], requires=['tornado (>=4.3)'], url="https://bitbucket.org/nagylzs/yubistorm", classifiers=[ 'Topic :: Security', 'Topic :: Internet :: WWW/HTTP', "Programming Language :: Python :: 3.5", "Programming Language :: Python :: Implementation :: CPython", ], ) The problem is that there is a "test" directory and it is added to the source distribution. I want to exclude that. I know that there is an "exclude" parameter for find_packages(). But this is not a package. This is a single module. How do I exclude a directory then? Thanks, Laszlo -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri Nov 13 07:22:24 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 13 Nov 2015 07:22:24 -0500 Subject: [Distutils] Current PyPI storage requirements? 
In-Reply-To: References: Message-ID: <92B9930E-5045-4827-B14B-C07DD78590BE@stufft.io> > On Nov 13, 2015, at 2:07 AM, Nick Coghlan wrote: > > This isn't an urgent question, but rather a "if the stats are readily > available, I'm curious as to the answer" one: what are PyPi's current > storage requirements? The warehouse.python.org front page indicates > how many objects there are, but not the amount of space they take up. According to the usage reports in Amazon S3, we?re looking at roughly 215GB for PyPI and 10GB for TestPyPI. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris at simplistix.co.uk Fri Nov 13 07:26:10 2015 From: chris at simplistix.co.uk (Chris Withers) Date: Fri, 13 Nov 2015 12:26:10 +0000 Subject: [Distutils] "option --single-version-externally-managed not recognized" when pip install mysql-connector-python-rf Message-ID: <5645D6E2.7010803@simplistix.co.uk> Hi All, My Travis nightly builds of a package started failing this morning with the following: https://travis-ci.org/Mortar/mortar_rdb/jobs/90919221 pip install mysql-connector-python-rf You are using pip version 6.0.7, however version 7.1.2 is available. You should consider upgrading via the 'pip install --upgrade pip' command. Collecting mysql-connector-python-rf Downloading mysql-connector-python-rf-2.1.3.tar.gz (271kB) 100% |################################| 274kB 2.1MB/s Installing collected packages: mysql-connector-python-rf Running setup.py install for mysql-connector-python-rf usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: -c --help [cmd1 cmd2 ...] or: -c --help-commands or: -c cmd --help error: option --single-version-externally-managed not recognized Complete output from command /home/travis/virtualenv/python3.4.2/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-_f5a57lj/mysql-connector-python-rf/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-jd_zku67-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/travis/virtualenv/python3.4.2/include/site/python3.4: usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: -c --help [cmd1 cmd2 ...] or: -c --help-commands or: -c cmd --help error: option --single-version-externally-managed not recognized mysql-connector-python-rf hasn't changed in ages, and I haven't pushed any commits to mortar_rdb in some time. Has pip or setuptools changed in the last 24 hrs? This something that might have changed on TravisCI? cheers, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Fri Nov 13 07:55:14 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 13 Nov 2015 14:55:14 +0200 Subject: [Distutils] How to exclude a directory from a module that has tests? In-Reply-To: <5645CE8A.7010207@shopzeus.com> References: <5645CE8A.7010207@shopzeus.com> Message-ID: On Fri, Nov 13, 2015 at 1:50 PM, Nagy L?szl? Zsolt wrote: > The problem is that there is a "test" directory and it is added to the > source distribution. I want to exclude that. I know that there is an > "exclude" parameter for find_packages(). But this is not a package. 
This is > a single module. How do I exclude a directory then? People usually include the tests in their sdist - you might want to do that too. If you still don't then you can control the contents of sdist via MANIFEST.in - "test" is included by default afaik so just add a "prune test" in MANIFEST.in. Still, take note that sdist is not what gets installed in site-packages, so you should have all your source files in it. IOW: don't do prune test. Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at ionelmc.ro Fri Nov 13 07:56:57 2015 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 13 Nov 2015 14:56:57 +0200 Subject: [Distutils] "option --single-version-externally-managed not recognized" when pip install mysql-connector-python-rf In-Reply-To: <5645D6E2.7010803@simplistix.co.uk> References: <5645D6E2.7010803@simplistix.co.uk> Message-ID: On Fri, Nov 13, 2015 at 2:26 PM, Chris Withers wrote: > mysql-connector-python-rf hasn't changed in ages Looks like mysql-connector-python-rf 2.1.3 was released today, strange setup.py customization inside ... Thanks, -- Ionel Cristian Mărieș, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Fri Nov 13 08:31:02 2015 From: chris at simplistix.co.uk (Chris Withers) Date: Fri, 13 Nov 2015 13:31:02 +0000 Subject: [Distutils] "option --single-version-externally-managed not recognized" when pip install mysql-connector-python-rf In-Reply-To: References: <5645D6E2.7010803@simplistix.co.uk> Message-ID: <5645E616.6080808@simplistix.co.uk> On 13/11/2015 12:56, Ionel Cristian Mărieș wrote: > > On Fri, Nov 13, 2015 at 2:26 PM, Chris Withers > wrote: > > > mysql-connector-python-rf hasn't changed in ages > > > Looks like mysql-connector-python-rf 2.1.3 was released today, > strange setup.py customization inside ... Good spot, sorry I missed that. Anyone happen to know how to contact the mysql-connector-python maintainers? All the links from pypi are pretty devoid of content... Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Nov 13 12:39:01 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 Nov 2015 09:39:01 -0800 Subject: [Distutils] The future of invoking pip In-Reply-To: References: <20151107232134.5190af14@fsol> <20151108005337.285d2f9e@fsol> <20151108012204.35939337@fsol> <85h9kwi3ev.fsf@benfinney.id.au> <-1886531515833612636@unknownmsgid> Message-ID: On Thu, Nov 12, 2015 at 11:16 PM, Nathaniel Smith wrote: > > > If we waved our hands and were able to magically make Python package > >> management perfect, what would that look like? > > > > well, I think the command would be: > > > > python install package_name > > > > I know there are good reasons to keep package installer development out > of > > core, but if have ensurepip-- we could do this. > > 1) What about 'pip uninstall', 'pip freeze', 'pip list', 'pip show', > 'pip search', 'pip wheel'? > hmm -- half of those are "advanced" features, but yes, there are a few that newbies want easy access to, so how about : python pip install python pip search ... just doesn't need the "-m" -- which is a bit of advanced python voodoo (OK, not very advanced...) or maybe: python install search python install list python install .... though that would make it tough to have a package called "search", etc...
what I'm getting at is that it makes plenty of sense for package management to be seen as a feature of the python interpreter itself -- maybe slightly more typing than "pip" (less than easy_install?) -- but no one is going to be surprised that you use python to manage your python installation. > 2) If it requires python 3.6 it's kinda a non-starter... well, this was a response to "magically make Python package management perfect" but anyway, there is always 2.7.11 :-) -- or would it even be possible to hack a change to the command line handling with a package install? somehow I doubt it. and it may not be SO bad to require the -m pip for all legacy versions -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Nov 13 12:42:57 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 Nov 2015 09:42:57 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: Message-ID: On Fri, Nov 6, 2015 at 8:06 AM, Paul Moore wrote: > That's the correct command, but you need to run it from the Windows > command prompt, not from within IDLE. > Now that we are talking about how to invoke the installer on other threads... This is NOT the least bit a rare mistake for newbies. Maybe we should have a way to install right from inside the python REPL. That would certainly clear up the "which python is this going to get installed into" problem. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Fri Nov 13 14:58:01 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Fri, 13 Nov 2015 14:58:01 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: Message-ID: <564640C9.7060801@sdamon.com> import pip pip.install(PACKAGESPEC) something like that? On 11/13/2015 12:42, Chris Barker wrote: > On Fri, Nov 6, 2015 at 8:06 AM, Paul Moore > wrote: > > That's the correct command, but you need to run it from the Windows > command prompt, not from within IDLE. > > > Now that we are talking about how to invoke the installer on other > threads... > > This is NOT the least bit a rare mistake for newbies. Maybe we should > have a way to install right from inside the python REPL. > > That would certainly clear up the "which python is this going to get > installed into" problem. > > -CHB > > > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed...
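A sketch of how such a helper could behave without assuming any in-process pip API is to delegate to the pip belonging to the interpreter that is running the REPL, in a subprocess. The name install() is hypothetical, not an existing pip function, and the module-caching concerns raised in the replies that follow still apply to anything already imported.

    # Hypothetical sketch only -- this is not an existing pip API.
    import subprocess
    import sys

    def install(spec):
        """Install `spec` into the environment of the currently running
        interpreter by delegating to that interpreter's own pip."""
        return subprocess.check_call(
            [sys.executable, "-m", "pip", "install", spec])

    # At the prompt:  install("requests")   # "requests" is just an example name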
URL: From njs at pobox.com Fri Nov 13 15:09:28 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 13 Nov 2015 12:09:28 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: <564640C9.7060801@sdamon.com> References: <564640C9.7060801@sdamon.com> Message-ID: On Nov 13, 2015 12:00 PM, "Alexander Walters" wrote: > > import pip > pip.install(PACKAGESPEC) > > something like that? This would be extremely handy if it could be made to work reliably... But I'm skeptical about whether it can be made to work reliably. Consider all the fun things that could happen once you start upgrading packages while python is running, and might e.g. have half of an upgraded package already loaded into memory. It's like the reloading problem but even more so. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Nov 13 15:27:48 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 Nov 2015 12:27:48 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> Message-ID: On Fri, Nov 13, 2015 at 12:09 PM, Nathaniel Smith wrote: > On Nov 13, 2015 12:00 PM, "Alexander Walters" > wrote: > > > > import pip > > pip.install(PACKAGESPEC) > > > > something like that? > > This would be extremely handy if it could be made to work reliably... But > I'm skeptical about whether it can be made to work reliably. Consider all > the fun things that could happen once you start upgrading packages while > python is running, and might e.g. have half of an upgraded package already > loaded into memory. It's like the reloading problem but even more so. > indeed -- does seem risky. also, if were are in fantasy land, and want to be really newbie friendly, a new built in: pip.install(PACKAGESPEC) with no import required.... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Fri Nov 13 15:57:28 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Fri, 13 Nov 2015 15:57:28 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> Message-ID: <56464EB8.1060805@sdamon.com> While I like the concept of calling pip via an api (and let me pat myself on the back for suggesting it in the first place in this thread), I honestly think that if it is something that is allowed, it should be implemented with a fair bit of guards. It would probably end up being a power-user feature - something to help manage deployments in some tricky environments - than a newbie feature. Python is an IDE-less language, and I say this knowing full well what IDLE is. We don't default to eclipse like java does, or Visual Studio like .NET languages (and C(++) on windows). We do not have the default tooling in place to avoid using the command line. Learning the command line is a vital skill for newbies. Now, while this thread may or may not be about Windows newbies specifically, I do not tend to see this brought up for *nix newbies. Is this because we assume that a *nix user will have to know the command line? or that they are inherently power users? If it is the latter, then I need to say that being a programmer also means being a power user. 
We should guide new users to power user tools (the command line, powershell, etc), instead of trying to bend python to regular users who will eventually be power users anyways. I guess I am suggesting maybe we try and find a way to shallow the learning curve into using the command line than to just implement commands in the repl itself. all that said, IDLE could be tooled to intercept the syntax 'pip install foo' and print a more helpful message. On 11/13/2015 15:27, Chris Barker wrote: > > > On Fri, Nov 13, 2015 at 12:09 PM, Nathaniel Smith > wrote: > > On Nov 13, 2015 12:00 PM, "Alexander Walters" > > wrote: > > > > import pip > > pip.install(PACKAGESPEC) > > > > something like that? > > This would be extremely handy if it could be made to work > reliably... But I'm skeptical about whether it can be made to work > reliably. Consider all the fun things that could happen once you > start upgrading packages while python is running, and might e.g. > have half of an upgraded package already loaded into memory. It's > like the reloading problem but even more so. > > indeed -- does seem risky. > > also, if were are in fantasy land, and want to be really newbie > friendly, a new built in: > > pip.install(PACKAGESPEC) > > with no import required.... > > -CHB > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Nov 13 15:17:51 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 13 Nov 2015 15:17:51 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> Message-ID: <20151113201752.42727B1408D@webabinitio.net> On Fri, 13 Nov 2015 12:09:28 -0800, Nathaniel Smith wrote: > On Nov 13, 2015 12:00 PM, "Alexander Walters" > wrote: > > > > import pip > > pip.install(PACKAGESPEC) > > > > something like that? > > This would be extremely handy if it could be made to work reliably... But > I'm skeptical about whether it can be made to work reliably. Consider all > the fun things that could happen once you start upgrading packages while > python is running, and might e.g. have half of an upgraded package already > loaded into memory. It's like the reloading problem but even more so. If I remember correctly, this is something that R supports that I thought was cool when I saw it. We could have a command analogous to the 'help' command, so you wouldn't even have to do an explicit import. But yeah, making it work may be hard. --David From njs at pobox.com Fri Nov 13 18:38:28 2015 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 13 Nov 2015 15:38:28 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: <20151113201752.42727B1408D@webabinitio.net> References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> Message-ID: On Nov 13, 2015 3:07 PM, "R. David Murray" wrote: > > On Fri, 13 Nov 2015 12:09:28 -0800, Nathaniel Smith wrote: > > On Nov 13, 2015 12:00 PM, "Alexander Walters" > > wrote: > > > > > > import pip > > > pip.install(PACKAGESPEC) > > > > > > something like that? > > > > This would be extremely handy if it could be made to work reliably... But > > I'm skeptical about whether it can be made to work reliably. 
Consider all > > the fun things that could happen once you start upgrading packages while > > python is running, and might e.g. have half of an upgraded package already > > loaded into memory. It's like the reloading problem but even more so. > > If I remember correctly, this is something that R supports that I > thought was cool when I saw it. We could have a command analogous > to the 'help' command, so you wouldn't even have to do an explicit > import. But yeah, making it work may be hard. Yeah, I've long used this in R and it really is awesome -- I wasn't kidding in the first sentence I wrote above :-). It leads to a really short frustration cycle: >>> import somepkg error >>> install("somepkg") installing...done. >>> import somepkg :-) But details of R's execution model make this easier to do. Maybe it could be supported for the special case of installing new packages with no upgrades? A good way to environment with the possibilities would be to write a %pip magic for ipython: http://ipython.readthedocs.org/en/stable/interactive/tutorial.html#magic-functions http://ipython.readthedocs.org/en/stable/config/custommagics.html -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat Nov 14 06:12:26 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 14 Nov 2015 11:12:26 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> Message-ID: On 13 November 2015 at 23:38, Nathaniel Smith wrote: > But details of R's execution model make this easier to do. Indeed. I don't know how R works, but Python's module caching behaviour would mean this would be full of surprising and confusing corner cases ("I upgraded but I'm still getting the old version" being the simplest and most obvious one). > Maybe it could be supported for the special case of installing new packages with no upgrades? Possibly. But the rules on what is allowed would likely be fairly complex and hard to understand. > A good way to environment with the possibilities would be to write a %pip > magic for ipython: Equally, if you want to see how well the model works, you can just start up a Python interpreter session and when you want to install something, do so in a separate command window. All of the issues I can think of are basically a result of not restarting Python after installing a new package, so you'd probably see most of them like that. Conversely, if IPython has a "restart the kernel" command, then I see no reason why a %pip magic wouldn't be fine, as long as you restart the kernel after each (series of) %pip commands. The same with Idle, if there's a "restart the interpreter" option, that would be safe. Of course this doesn't solve the issue of "I want to keep my work in progress" but the fact that you can't is an easier restriction to explain than "only when installing new packages where none of the package install nor any of its dependencies triggers an upgrade"... Paul From oscar.j.benjamin at gmail.com Sat Nov 14 06:37:47 2015 From: oscar.j.benjamin at gmail.com (Oscar Benjamin) Date: Sat, 14 Nov 2015 11:37:47 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> Message-ID: On 14 Nov 2015 11:12, "Paul Moore" wrote: > > On 13 November 2015 at 23:38, Nathaniel Smith wrote: > > But details of R's execution model make this easier to do. > > Indeed. 
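For concreteness, a minimal sketch of the kind of %pip magic suggested above, assuming IPython's register_line_magic decorator and meant to be run from inside an IPython session; it simply delegates to the current interpreter's pip, so already-imported packages would still need a kernel restart (or a reload) to pick up an upgrade.

    # Rough sketch of a %pip line magic; paste into an IPython session.
    import subprocess
    import sys

    from IPython.core.magic import register_line_magic

    @register_line_magic
    def pip(line):
        """Usage: %pip install somepkg"""
        subprocess.check_call([sys.executable, "-m", "pip"] + line.split())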
I don't know how R works, but Python's module caching > behaviour would mean this would be full of surprising and confusing > corner cases ("I upgraded but I'm still getting the old version" being > the simplest and most obvious one). > > > Maybe it could be supported for the special case of installing new packages with no upgrades Maybe it could prompt the user that the interpreter will need to be restarted for the changes to take effect. IDLE runs the interactive interpreter in a separate process so it could restart the subprocess without closing the GUI (after prompting the user with a restart/continue dialogue). I'm not sure if the standard interpreter would be able to relaunch itself but it could at least exit and tell the user to restart (after a yes/no question in the terminal). The command could also be limited to the when the interpreter is in interactive mode. How it works in the terminal is less important to me than how it works in IDLE though; being able to teach how to use Python through IDLE (deferring discussion of terminals etc) is useful for introductory programming classes. -- Oscar -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Sat Nov 14 13:48:51 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sat, 14 Nov 2015 13:48:51 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> Message-ID: <56478213.40408@sdamon.com> I perhaps can support added dialogs to IDLE to manage packages (having it shell out to pip, if no api is forthcoming), but I don't think I can support having the repl inside of IDLE intercept pip's command line syntax and do anything OTHER than giving a better error message. On 11/14/2015 06:37, Oscar Benjamin wrote: > > > On 14 Nov 2015 11:12, "Paul Moore" > wrote: > > > > On 13 November 2015 at 23:38, Nathaniel Smith > wrote: > > > But details of R's execution model make this easier to do. > > > > Indeed. I don't know how R works, but Python's module caching > > behaviour would mean this would be full of surprising and confusing > > corner cases ("I upgraded but I'm still getting the old version" being > > the simplest and most obvious one). > > > > > Maybe it could be supported for the special case of installing new > packages with no upgrades > > Maybe it could prompt the user that the interpreter will need to be > restarted for the changes to take effect. IDLE runs the interactive > interpreter in a separate process so it could restart the subprocess > without closing the GUI (after prompting the user with a > restart/continue dialogue). > > I'm not sure if the standard interpreter would be able to relaunch > itself but it could at least exit and tell the user to restart (after > a yes/no question in the terminal). The command could also be limited > to the when the interpreter is in interactive mode. > > How it works in the terminal is less important to me than how it works > in IDLE though; being able to teach how to use Python through IDLE > (deferring discussion of terminals etc) is useful for introductory > programming classes. > > -- > Oscar > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rdmurray at bitdance.com Sat Nov 14 15:50:03 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 14 Nov 2015 15:50:03 -0500 Subject: [Distutils] Installing packages using pip In-Reply-To: <56478213.40408@sdamon.com> References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <56478213.40408@sdamon.com> Message-ID: <20151114205003.C24B6B1408F@webabinitio.net> On Sat, 14 Nov 2015 13:48:51 -0500, Alexander Walters wrote: > I perhaps can support added dialogs to IDLE to manage packages (having > it shell out to pip, if no api is forthcoming), but I don't think I can > support having the repl inside of IDLE intercept pip's command line > syntax and do anything OTHER than giving a better error message. http://bugs.python.org/issue23551 From qwcode at gmail.com Sun Nov 15 12:49:31 2015 From: qwcode at gmail.com (Marcus Smith) Date: Sun, 15 Nov 2015 09:49:31 -0800 Subject: [Distutils] New PyPUG Tutorials Message-ID: > > > To have the most success, the writers will certainly need feedback from > subject matter experts, so the process will include 2 stages where we > specifically ask for feedback from PyPA-Dev and Distutils-Sig: 1) To > validate the initial proposal that covers the scope of the changes, and 2) > to review the actual PRs to PyPUG for accuracy, when it's time for merging. > I'll post again with more details as those stages occur. > So, I'm back to post the initial proposal as mentioned above. The proposal was put together by Daniel Beck (http://www.danieldbeck.com), one of the writers Nicole (from the Warehouse team) was able to rally with her call for volunteers [1]. It includes a *tentative* new outline, but keep in mind that however it turns out, we will maintain redirects for the current critical links, like the tool recommendations page, and the two current tutorials. As work proceeds, it will take place as PRs to the develop branch of the PyPUG (https://github.com/pypa/python-packaging-user-guide/), so people can follow along and help in the technical review process. When the final product is deemed worthy and ready, there will be a final call for review before the merge of develop back into master. Both Daniel and Nicole are now subscribed to distutils-sig, and can respond to any questions, concerns, or feedback. Here's the proposal, as quoted from an email from Daniel: ----------------------------------------- Scope --------- The guide is meant to answer this question: "How do Python, packages, and packaging tools fit together?" The guide's not to meant to replace tools' docs; rather, it's to show when and how these tools and other artifacts work as an ecosystem. Audience ------------- I get the impression that these are the major audience groups we want to help with the guide: - Python users who are at an 'advanced beginner' level (e.g., know how to import from the standard library, but want to learn to install a third-party package for the first time - Python users with intermediate experience who want to make and distribute a package for the first time - Python users with more experience who need a refresher or want to understand what the "one way to do it" is right now - Python users with lots of experience who want to take on some advanced or specialized packaging task Outline ---------- I sketched out a gross outline for the guide. 
This is not complete, sometimes speculative, and certainly subject to change, but I wanted to give you an overall picture of how I imagine the guide could look when we've had some time to work on this. Again, because the existing content is so good, I don't expect much, if anything, will be deleted?so just because it's not listed here doesn't mean anything. Rather, I expect that stuff will be easier to find. :-) 1. About this guide 2. Getting help 3. Quick references (i.e., five minute tutorials) a. Installing a package b. Making a package c. Publish a package on PyPI 4. Intro to Packaging* a. Who is this for? b. Prereqs (making sure Python, etc. are working) c. What's a package? d. Where do packages come from? e. Installing a package for the first time f. Using a virtualenv g. Making a package for the first time h. Publishing your first package 5. The Packaging Ecosystem (concept docs) a. Overview of tools b. Developing packages c. Distributing packages d. Indices and caches 6. Practices and patterns (task docs) a. Making packages b. Publishing packages c. Using packages 7. History and legacy of packaging a. that whole distutils->setuptools->distribute->setuptools saga b. history of PyPA c. eggs vs wheels d. easy_install vs pip e. cheeseshop vs warehouse 8. Contributing to this guide a. Style guide * This is the only part that I think functions as a continuous unit meant to be read from start to end. I'm imagining something that would work as, for example, a lesson plan for a PyLadies workshop. For the rest of the guide, I want to work with the expectation that every page is a landing page. Timeline ------------ Here's my current perspective on how quickly we can move on these things: - In the next few days: contacting volunteers to let them know that things are happening soon (Nicole) - In the 7 or 8 days: decide on workflow questions (Nicole and Marcus), amend project roles (Nicole and Marcus), and ask for distutils-sig approval (Marcus) - In the next two weeks: start opening issues and inviting volunteers to start writing - By the end of November: most, if not all, existing content could be reorganized into a new outline - By mid-December: have publishable versions of the new tutorials - By the end of January: potentially finished? [1] https://github.com/pypa/warehouse/issues/729 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Sun Nov 15 15:25:04 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sun, 15 Nov 2015 12:25:04 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> Message-ID: <-4014175157681291829@unknownmsgid> How it works in the terminal is less important to me than how it works in IDLE though; being able to teach how to use Python through IDLE (deferring discussion of terminals etc) is useful for introductory programming classes. Personally, I don't use IDLE for teaching, but do use iPython. But if we have a way to call pip from a Python REPL, it really should work in the standard REPL. Though still a good idea to have IDLE specific and iPython specific ways to install packages within those environments. I like the %pip Idea for iPython -- and I'm pretty sure the kernel can be restarted. Certainly in a notebook. As for the plain REPL, maybe a warning that you need to restart after an upgrade would be enough. 
Though I suspect that Window's aggressive file locking will put the kibosh on in-place upgrades :) CHB -- Oscar _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Nov 15 15:31:10 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 15 Nov 2015 20:31:10 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: <-4014175157681291829@unknownmsgid> References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On 15 November 2015 at 20:25, Chris Barker - NOAA Federal wrote: > Though I suspect that Window's aggressive file locking will put the kibosh > on in-place upgrades :) Generally, no. Python loads pyc files with a single read, it doesn't leave the files open. The only locking issues are when you try to upgrade a wrapper exe while it's in use. That might affect you if you try to upgrade IPython from within IPython, but otherwise it's probably fine. Paul From rmcgibbo at gmail.com Sun Nov 15 17:20:28 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Sun, 15 Nov 2015 14:20:28 -0800 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: But I think dll/pyd files from extension modules present more of a challenge, since they're left open. I recall some issues around this with conda (e.g. https://github.com/conda/conda-build/pull/520) -Robert On Sun, Nov 15, 2015 at 12:31 PM, Paul Moore wrote: > On 15 November 2015 at 20:25, Chris Barker - NOAA Federal > wrote: > > Though I suspect that Window's aggressive file locking will put the > kibosh > > on in-place upgrades :) > > Generally, no. Python loads pyc files with a single read, it doesn't > leave the files open. The only locking issues are when you try to > upgrade a wrapper exe while it's in use. That might affect you if you > try to upgrade IPython from within IPython, but otherwise it's > probably fine. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Nov 15 18:10:52 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 15 Nov 2015 23:10:52 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On 15 November 2015 at 22:20, Robert McGibbon wrote: > But I think dll/pyd files from extension modules present more of a > challenge, since they're left open. Good point, I'd forgotten about those. Yes, they would cause an upgrade to fail. Sorry. Paul From ncoghlan at gmail.com Sun Nov 15 19:16:42 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 16 Nov 2015 10:16:42 +1000 Subject: [Distutils] Current PyPI storage requirements? 
In-Reply-To: <92B9930E-5045-4827-B14B-C07DD78590BE@stufft.io> References: <92B9930E-5045-4827-B14B-C07DD78590BE@stufft.io> Message-ID: On 13 November 2015 at 22:22, Donald Stufft wrote: > >> On Nov 13, 2015, at 2:07 AM, Nick Coghlan wrote: >> >> This isn't an urgent question, but rather a "if the stats are readily >> available, I'm curious as to the answer" one: what are PyPi's current >> storage requirements? The warehouse.python.org front page indicates >> how many objects there are, but not the amount of space they take up. > > According to the usage reports in Amazon S3, we?re looking at roughly 215GB for PyPI and 10GB for TestPyPI. Thanks! Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Sun Nov 15 23:07:22 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 16 Nov 2015 17:07:22 +1300 Subject: [Distutils] New PEP : dependency specification In-Reply-To: References: Message-ID: Final tweaks I hope... tl;dr: fixed a non-PEG safe grammar construct in identifier; added some missing whitespace from review. Added some corner case tests. diff --git a/dependency-specification.rst b/dependency-specification.rst index 72a87f1..a9953ed 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -86,7 +86,7 @@ URI is defined in std-66 [#std66]_:: version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ - version_one = version_cmp version + version_one = version_cmp version wsp* version_many = version_one (wsp* ',' version_one)* versionspec = ( '(' version_many ')' ) | version_many urlspec = '@' wsp* @@ -94,7 +94,7 @@ URI is defined in std-66 [#std66]_:: Environment markers allow making a specification only take effect in some environments:: - marker_op = version_cmp | 'in' | 'not in' + marker_op = version_cmp | 'in' | 'not' wsp+ 'in' python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | '-' | '_' | '*') dquote = '"' @@ -117,11 +117,12 @@ environments:: Optional components of a distribution may be specified using the extras field:: - identifier = ( letterOrDigit | - letterOrDigit (letterOrDigit | '-' | '_' | '.')* letterOrDigit ) + identifier = letterOrDigit ( + letterOrDigit | + (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) )* name = identifier extras_list = identifier (wsp* ',' wsp* identifier)* - extras = '[' wsp* extras_list? ']' + extras = '[' wsp* extras_list? wsp* ']' Giving us a rule for name based requirements:: @@ -197,10 +198,11 @@ False, the dependency specification should be ignored. The marker language is a subset of Python itself, chosen for the ability to safely evaluate it without running arbitrary code that could become a security vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This PEP -fixes some issues that were observed in the described in PEP-426 [#pep426]_. +fixes some issues that were observed in the design described in PEP-426 +[#pep426]_. Comparisons in marker expressions are typed by the comparison operator. The - operators that are not in perform the same as they + operators that are not in perform the same as they do for strings in Python. The operators use the PEP-440 [#pep440]_ version comparison rules when those are defined (that is when both sides have a valid version specifier). If there is no defined PEP-440 @@ -213,7 +215,7 @@ will result in errors:: User supplied constants are always encoded as strings with either ``'`` or ``"`` quote marks. 
Note that backslash escapes are not defined, but existing -implementations do support them them. They are not included in this +implementations do support them. They are not included in this specification because they add complexity and there is no observable need for them today. Similarly we do not define non-ASCII character support: all the runtime variables we are referencing are expected to be ASCII-only. @@ -349,11 +351,11 @@ The complete parsley grammar:: wsp = ' ' | '\t' version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '==='> version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | '+' | '!' )+> - version_one = version_cmp:op version:v -> (op, v) + version_one = version_cmp:op version:v wsp* -> (op, v) version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 versionspec = ('(' version_many:v ')' ->v) | version_many urlspec = '@' wsp* - marker_op = version_cmp | 'in' | 'not in' + marker_op = version_cmp | 'in' | 'not' wsp+ 'in' python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | '-' | '_' | '*' | '#') dquote = '"' @@ -374,11 +376,12 @@ The complete parsley grammar:: marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* marker_expr:r -> (o, r))*:ms -> (m, ms)) quoted_marker = ';' wsp* marker - identifier = <( letterOrDigit | - letterOrDigit (letterOrDigit | '-' | '_' | '.')* letterOrDigit )> + identifier = name = identifier extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + ids - extras = '[' wsp* extras_list?:e ']' -> e + extras = '[' wsp* extras_list?:e wsp* ']' -> e name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* quoted_marker?:m -> (n, e or [], v or [], m)) url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ quoted_marker?:m @@ -459,10 +462,12 @@ A test program - if the grammar is in a string ``grammar``:: wsp ... """ tests = [ - "name [fred,bar] @ http://foo.com ; python_version=='2.7'", + "A", + "aa", "name", "name>=3", "name>=3,<2", + "name [fred,bar] @ http://foo.com ; python_version=='2.7'", "name[quux, strange];python_version<'2.7' and platform_version=='2'", "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", From ncoghlan at gmail.com Mon Nov 16 00:32:02 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 16 Nov 2015 15:32:02 +1000 Subject: [Distutils] New PyPUG Tutorials In-Reply-To: References: Message-ID: On 16 November 2015 at 03:49, Marcus Smith wrote: >> >> To have the most success, the writers will certainly need feedback from >> subject matter experts, so the process will include 2 stages where we >> specifically ask for feedback from PyPA-Dev and Distutils-Sig: 1) To >> validate the initial proposal that covers the scope of the changes, and 2) >> to review the actual PRs to PyPUG for accuracy, when it's time for merging. >> I'll post again with more details as those stages occur. > > So, I'm back to post the initial proposal as mentioned above. That generally looks good to me, but I think we're going to need to keep the "Advanced topics" section in one form or another. Longer term, it might be possible to split them out into themed subsections (as the outline does for pip/easy_install and wheel/egg by moving them into the history section), but I don't think reorganising them is at all urgent, so that can be tackled after this initial rearrangement is done. Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From waynejwerner at gmail.com Mon Nov 16 08:12:11 2015 From: waynejwerner at gmail.com (Wayne Werner) Date: Mon, 16 Nov 2015 07:12:11 -0600 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On Nov 15, 2015 5:11 PM, "Paul Moore" wrote: > > On 15 November 2015 at 22:20, Robert McGibbon wrote: > > But I think dll/pyd files from extension modules present more of a > > challenge, since they're left open. > > Good point, I'd forgotten about those. Yes, they would cause an > upgrade to fail. Sorry. Windows file locking is the worst. But it *is* possible to get around - see the program called BareTail. I was actually looking around at one point because I wanted to make a cross platform version of it (it really is the best tail application I've ever seen), and I think I came across a way to do it on StackOverflow, but I think it required doing some low level open magic and maybe required the win32api? Anyway, I kind of gave up on it, but that's the sort of thing that I would love to see in core python. But I don't know that it would get there without a motivated individual. -W -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Nov 16 08:38:23 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 16 Nov 2015 13:38:23 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On 16 November 2015 at 13:12, Wayne Werner wrote: > Windows file locking is the worst. But it *is* possible to get around - see > the program called BareTail. Windows file locking is just complex, and the defaults are "safe" (i.e. people can't change a file you have open). As you say, you can use different locking options with the Win32 API, but because core Python's file API is basically derived from Unix, it doesn't expose these options. It wouldn't help here anyway, as it's Windows that locks a DLL when you load it. Personally, I don't see why people think that's a bad thing - who wants someone to modify the code they are using behind their back? (I don't know what Unix does, I suspect it retains an old copy of the shared library for the process until the process exists, in which case you'd see a different issue, that you do an upgrade, but your process still uses the old code till you restart). Long story short, modifying code that a process is using is a bad thing to do. Therefore, the fact that pip can't easily do it is probably a good thing in reality... Paul From waynejwerner at gmail.com Mon Nov 16 10:04:50 2015 From: waynejwerner at gmail.com (Wayne Werner) Date: Mon, 16 Nov 2015 09:04:50 -0600 (CST) Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On Mon, 16 Nov 2015, Paul Moore wrote: > On 16 November 2015 at 13:12, Wayne Werner wrote: >> Windows file locking is the worst. But it *is* possible to get around - see >> the program called BareTail. > > Windows file locking is just complex, and the defaults are "safe" > (i.e. people can't change a file you have open). 
As you say, you can > use different locking options with the Win32 API, but because core > Python's file API is basically derived from Unix, it doesn't expose > these options. > > It wouldn't help here anyway, as it's Windows that locks a DLL when > you load it. Personally, I don't see why people think that's a bad > thing - who wants someone to modify the code they are using behind > their back? (I don't know what Unix does, I suspect it retains an old > copy of the shared library for the process until the process exists, > in which case you'd see a different issue, that you do an upgrade, but > your process still uses the old code till you restart). This is the case for all files. To check, simply open two terminals and in one: echo "Going away" >> ~/test.txt tail ~/test.txt And in the other: echo "You will not see this" > ~/test.txt But tail has an option `-f` that will follow the file by name, instead of I presume it't the inode. Of course if you >> append to the file instead, tail *will* actually pick that up. > Long story short, modifying code that a process is using is a bad > thing to do. Therefore, the fact that pip can't easily do it is > probably a good thing in reality... I suspect it makes life simple (which is better than complex). My personal assumption about DLL loading would be that it would follow the same pattern as Python importing modules - it's loaded once from disk at the first time it's imported, and it never goes back "to disk" for the orignal DLL. Though I can also understand the idea behind locking ones files, I kind of put that in the same basket as a language enforcing "private" variables. It just increases the burden of making that particular choice, regardless of how appropriate it may (or may not) be. I suppose that's mostly academic anyway - I'm not sure that invoking pip from within the repl is *really* the best solution to getting packages installed into the correct Python anyway. -W From p.f.moore at gmail.com Mon Nov 16 10:39:35 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 16 Nov 2015 15:39:35 +0000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On 16 November 2015 at 15:04, Wayne Werner wrote: > I suspect it makes life simple (which is better than complex). My personal > assumption about DLL loading would be that it would follow the same pattern > as Python importing modules - it's loaded once from disk at the first time > it's imported, and it never goes back "to disk" for the orignal DLL. On Windows, DLL loads map the DLL into the code space of the process. Which is why you don't want to change it. (Without some sort of copy on write, which has its own consequences). Basically, it's a trade-off that's handled differently between the two operating systems. We can argue forever over which is "best" without reaching any useful conclusion. 
And we're way off topic anyway, so let's leave it there :-) Paul From jamiel.almeida at gmail.com Sat Nov 14 22:46:22 2015 From: jamiel.almeida at gmail.com (Jamiel Almeida) Date: Sat, 14 Nov 2015 19:46:22 -0800 Subject: [Distutils] Version Spec compatible with PEP 0440 as well as SemVer Message-ID: <20151115034622.GA50796@gmail.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 # Version Spec compatible with PEP 0440 as well as SemVer ## Short Version/TL;DR I want to know if the following versions would abide by SemVer AND PEP 0440 as I would like both to be valid for my projects (order matters). The idea is to create a formal specification based on the Semantic Versioning Specification that is fully compatible with both Semantic Versioning and PEP 0440. As such, I'd like to know what you think. ### Examples: 0.0.0 # Before the first public release, local iterations and commits to the repository when a public API is still not present. 0.1.0 # First public release, since major = 0, public API is expected to be unstable 0.1.1 # First patch applied to the first public release 0.2.0 1.0.0-a0 # Alpha pre-release of version 1 (first stable release) 1.0.0-a1 1.0.0-b0 1.0.0-rc0 # First release candidate 1.0.0-rc1 1.0.0 1.0.1 1.0.2 ## Long version ### Intention I got very interested in versioning and format of public version identifiers a couple of days ago, and was studying different formats used by projects. I really liked the two "big" propositions I found, which are Semantic Versioning from semver.org (or "SemVer" from now on) and PEP 0440 (or "the PEP" from now on). I'm aware of the existence of forks to SemVer that "fix" it to be compatible with the PEP, e.g. OpenStack Foundation PBR's "Linux/Python Compatible Semantic Versioning 3.0.0" [1]. I'm also aware of discussions in mailing lists involving the "fixing" the PEP, e.g. "module version number support for semver.org" in python-ideas [2]. I have no intention to fix either, rather have a version specification for the projects I lead or my personal projects that is compatible with both. And by compatible with both I hope to have a simple spec that still provides the determinism in downloads/upgrades introduced by the PEP, even if that means forgoing "versatility" of some suffixes and identifiers, for simplicity. As well as providing a meaning and rationale for each element of a version-number/release (the "sem" in SemVer) Keywords are "determinism", "simplicity"; and hopefully not "bikeshedding", nor "trolling", nor "flame-baiting". ### Compatible bits I quote the section of the PEP that talks about compatibility with SemVer [3]: > The "Major.Minor.Patch" (described in this PEP as "major.minor.micro") > aspects of semantic versioning (clauses 1-9 in the 2.0.0-rc-1 > specification) are fully compatible with the version scheme defined in > this PEP, and abiding by these aspects is encouraged. > > Semantic versions containing a hyphen (pre-releases - clause 10) or a > plus sign (builds - clause 11) are not compatible with this PEP and > are not permitted in the public version field. > > One possible mechanism to translate such semantic versioning based > source labels to compatible public versions is to use the .devN suffix > to specify the appropriate version order. So I know the first four versions in my example above are compatible with both, as well as the last. 
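As a quick sanity check of that claim - leaning on the third-party `packaging` library (which implements PEP 0440; neither spec requires it, it is just convenient for illustration) - the whole example list parses and already sorts in the intended order:

    from packaging.version import Version

    examples = [
        "0.0.0", "0.1.0", "0.1.1", "0.2.0",
        "1.0.0-a0", "1.0.0-a1", "1.0.0-b0", "1.0.0-rc0", "1.0.0-rc1",
        "1.0.0", "1.0.1", "1.0.2",
    ]

    parsed = [Version(v) for v in examples]   # every entry is valid PEP 0440
    assert parsed == sorted(parsed)           # PEP 0440 ordering matches the list order
    print([str(v) for v in parsed])           # normalized forms, e.g. 1.0.0a0, 1.0.0rc0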
The question arises when reading the above quote after having studied the prescriptions of the PEP in sections "Pre-releases" [4] and "Pre-release separators" [5] (under "Normalization") ### Pre-releases For the PEP, pre-releases are allowed to be separated by a dash from the release component described as "major.minor.patch" in Semantic Versioning, and as "major.minor.micro" in the PEP. So that part of clause 10 of version 2.0.0-rc-1 [6] (quoted below, as this was the one used when writing the PEP) on the Semantic Versioning format isn't the problem, where the problem lies is that SemVer allows that pre-release identifier to be multiple "sections" and to allow many more combinations of characters. > 10. A pre-release version MAY be denoted by appending a dash and a > series of dot separated identifiers immediately following the > patch version. Identifiers MUST be comprised of only ASCII > alphanumerics and dash [0-9A-Za-z-]. Pre-release versions satisfy > but have a lower precedence than the associated normal version. > Examples: 1.0.0-alpha, 1.0.0-alpha.1, 1.0.0-0.3.7, 1.0.0-x.7.z.92. Question #1: : If restricted to being one of "a", "b", or "rc" followed by a numeric component from 0-9, and also restricted to only one section, as shown on the examples I presented, would this be still incompatible with the PEP after being normalized? Question #2: : In the above-quoted version section of the PEP, pre-release versions aren't permitted to be part of the "public version field", what does this mean? I see that there are versions of software distributed using PyPI that have pre-release identifiers, e.g. Sphinx==1.3b3 [7] Semantically, a version "1.3b3" is the same as "1.3.0-b3" (an identifier compliant with SemVer, and the PEP after being normalized to "1.3.0.b3") But when saying "public version field", does the PEP prescribes that I "SHOULD NOT" use this identifier when publishing or what? Or was the intention that it not be part of the "final release" identifier (since it is after all a pre-release identifier). When I wrote the above examples, the spirit was to abide by SemVer (all clauses) while still being compatible with PEP 0440. The limitation of the numeric component to be [0-9] is to avoid sorting problems caused by a subtle difference in how these are sorted in SemVer vs The PEP. Since my examples use letters and numbers for this section identifier, SemVer prescribes lexical ordering by ASCII sort order, while the PEP compares same pre-release stage versions in numerical order. I.E. in SemVer "a1" < "a11" < "a2" as identifier components are expected to be separated by dots if they have different meaning. ### Aside. There was an update in this clause of SemVer from version 2.0.0-rc-1 as used in the PEP to version 2.0.0 (current), this update specified that the sections separated by dots must not be empty, must not include leading zeroes, and a small blurb on the meaning of this identifier. There was also a renumbering due to the deleting of a section before. These changes are irrelevant for this discussion. See [8] It's also important to note that I consider changes to documentation between minor releases a patch (SemVer) or (micro) in the PEP, and this number is updated when the code is "published" (using the definition in PEP 0426 [9]). 
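To make the normalization and ordering subtlety above concrete (again using the third-party `packaging` library for the PEP 0440 side, purely as an illustration):

    from packaging.version import Version

    # PEP 0440 normalization: the dash separator is accepted and dropped.
    print(Version("1.3.0-b3"))    # -> 1.3.0b3

    # PEP 0440 compares the numeric part of a pre-release numerically...
    print(Version("1.0.0a2") < Version("1.0.0a11"))   # True, because 2 < 11

    # ...while a plain ASCII sort of the raw identifiers (what SemVer does for
    # a single alphanumeric identifier) puts "a11" before "a2":
    print(sorted(["a2", "a11"]))   # ['a11', 'a2']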
This mechanism could fit into PEP 0440's concept of "post-release" but I'm choosing to forgo this concept to be able to stick to SemVer and not alter sorting order by using '-post#', '-r#', or '-rev#', which would force me to use '-c#' instead of '-rc#'. But it is possible and I might consider it in the future. Code pushed to the repository or merged from a pull-request doesn't fit into this definition of publication (the one used in PEP 0426). I consider part of this definition, however, the tagging a release in the version control repository, uploading to index servers, and in doing that providing the software to what PEP 0426 defines as "software integrators". As such, if after releasing a sample version "3.1.2", I were to push any number of commits and/or merge pull requests, this code or otherwise file changes are not considered to be published under the definition above, nor is it part of 3.1.2. Code is considered published and part of a version, when, as part of the development cycle; the version number on the repository is updated. This update would entail updating the version number in the code, and updating the changelog in the repository to the new version number and with all changes that have been added since last version was published". The change in version number and update to changelog should happen in the same commit, and this commit should not contain any other changes. This commit is then tagged with the version id, prepended with the character 'v'; and this new version is then to be uploaded to package indexes and sent to integrators. As such, and with the limitation of pre-release staged versions to only 10 (0-9) it is advised that the documentation be complete before pre-releasing in alpha, and only bugfixes be submitted (these may also be in the documentation) during a pre-release stage. The code and documentation is to be considered "frozen" for those pre-releases, and as such, changes should not be taken lightly and version bumps not be done trivially. End aside. ### Local versions / Build metadata In relation to what I call "+identifier"s, both the PEP and SemVer allow for "Local version identifier" or "build metadata" as they respectively refer to the section after the "public version identifier" in the PEP, and "normal version" in SemVer. This section is, on both documents, allowed to be included after a separator, specifically, the character "+" (plus sign). There are two differences, however. The first difference is syntactical, SemVer allows for letter, digit, and '-' (hyphen) characters in subsections separated by '.' (dot) in clause 10 of version 2.0.0 of the specification [8] (please don't confuse the clause with the previous #10, that was for the rc1 of the specification); while the PEP doesn't allow '-' (hyphen) on the segments as it calls them. The second difference is a semantic one, with an added "kicker". In SemVer, this section has no restrictions and is considered part of the public identifier. In contrast, it is prescribed in the PEP that this section of the identifier SHOULD NOT be used when publishing, and MAY be used to denote local versions built from source, and SHOULD be used by downstream projects when releasing a version of the upstream project (see section of the PEP about local version identifiers [10]). Also, please note that the PEP has specific meanings for "MAY" "SHOULD", and "SHOULD NOT". 
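A small illustration of the precedence point picked up just below (again assuming only the third-party `packaging` library): local version segments parse fine and, unlike SemVer build metadata, they do take part in PEP 0440 ordering:

    from packaging.version import Version

    print(Version("1.0.0+downstream.1"))                                   # parses and normalizes
    print(Version("1.0.0+downstream.1") > Version("1.0.0"))                # True under PEP 0440
    print(Version("1.0.0+downstream.2") > Version("1.0.0+downstream.1"))   # also True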
The "kicker" I mention, is that in SemVer two "build metadata" identifiers are not considered when determining precedence, while in the PEP they do. In more human terms, according to the PEP, my project should not use the "+identifier" for it's own releases, downstream projects can when they release a new version of my project that is compatible as defined in both SemVer and the PEP. And as long as those downstream projects don't use '-' (hyphen) these versions released by them would still abide by both SemVer and the PEP. [1]: http://docs.openstack.org/developer/pbr/semver.html [2]: http://grokbase.com/t/python/python-ideas/144e5x67tq/module-version-number-support-for-semver-org [3]: https://www.python.org/dev/peps/pep-0440/#semantic-versioning [4]: https://www.python.org/dev/peps/pep-0440/#pre-releases [5]: https://www.python.org/dev/peps/pep-0440/#pre-release-separators [6]: http://semver.org/spec/v2.0.0-rc.1.html [7]: https://pypi.python.org/pypi/Sphinx/1.3b3 [8]: http://semver.org/spec/v2.0.0.html [9]: https://www.python.org/dev/peps/pep-0426/ [10]: https://www.python.org/dev/peps/pep-0440/#local-version-identifiers - -- Jamiel Almeida -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.29 (Darwin) iQIcBAEBCgAGBQJWSAAOAAoJEHY8IXipLI+0uUwP/RLnCk62QAjBpmMsC/O5lk3c FqL5fD5ZaDjX5paxy3aoKU83UY5D5KTWiMeBbpZ2ae6nf00UP5hKEjoi3eioQn6Q I8ELE1vHXP2X9s+iJRzzbO6zMVtYQ/OHWtJq/YaGeOQdtN8mfl4e6esyMK6Tldl6 TstKLEPMQXL5zxgXPzcudvYW3wj7MXa1Pn9xcY06LgO5ERtLM4JCk2PJMzbZ8R8r gS8+B5UbVLuu9aE5n9ZjDbP1A07jKSSmwFdoxatvG176Vp7ha4xl80Kc0liYl9ke L1Zn3whpCBn7C5amO2RzmklniOgs56MCURo4wyqe+4V1egl9yio298YvrAvDRT4d TEftliEsErsCQjDpPutkYYgynARktU/xQHRZUw7cXNeqNfTuzA7ZaDuFgO3X6fDf A7Bd3BbMq9h3FsMTGq7Tb2F6cF8W4wVASTZTb3endRWN9X+khVr6Bkvhl9qr5cWh XG3xXBhdAku9vUaqxEXVyUlKoFytt3UwFQpJgvgB/eKffVCcXa+e3RSsNaN5p5dO yxCuqblHHjF9JHfEEybG3IWZzL3eyBgMPEGLC9LtHCrGwKXh0m+oQU8UOYrSDld7 RfcdoYzbrZ0k+rdiKPXiOx59vB3FWKKfZAeGempE6efhYlYAKsVpQ31S1sxQZH9w 6VBZmyFa+fGGYVfgToQs =VeL1 -----END PGP SIGNATURE----- From marius at gedmin.as Mon Nov 16 11:25:03 2015 From: marius at gedmin.as (Marius Gedminas) Date: Mon, 16 Nov 2015 18:25:03 +0200 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: <20151116162503.GA22281@platonas> On Mon, Nov 16, 2015 at 01:38:23PM +0000, Paul Moore wrote: > I don't know what Unix does, I suspect it retains an old > copy of the shared library for the process until the process exists, > in which case you'd see a different issue, that you do an upgrade, but > your process still uses the old code till you restart. Basically. Technically, both Linux and Windows won't let you write to a shared library you have mapped into a process's address space for execution. (You get an -ETEXT error on Linux, which one can observer if one tries to re-create virtualenv while its bin/python is currently running.) What you can do Linux that you cannot do on Windows is delete a shared library file while it's mapped into a process's address space. Then Linux lets you create a new file with the same name, while the old file stays around, nameless, until it's no longer used, at which point the disk space gets garbage-collected. (If we can call reference counting "garbage collection".) The result is as you said: existing processes keep running the old code until you restart them. There are tools (based on lsof, AFAIU) that check for this situation and remind you to restart daemons. 
Marius Gedminas -- We like stress testing, because we know the future will be stressful. -- Maritza Mendez -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: Digital signature URL: From robertc at robertcollins.net Mon Nov 16 15:46:21 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 09:46:21 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP Message-ID: :PEP: XX :Title: Dependency specification for Python Software Packages :Version: $Revision$ :Last-Modified: $Date$ :Author: Robert Collins :BDFL-Delegate: Donald Stufft :Discussions-To: distutils-sig :Status: Draft :Type: Standards Track :Content-Type: text/x-rst :Created: 11-Nov-2015 :Post-History: XX Abstract ======== This PEP specifies the language used to describe dependencies for packages. It draws a border at the edge of describing a single dependency - the different sorts of dependencies and when they should be installed is a higher level problem. The intent is provide a building block for higher layer specifications. The job of a dependency is to enable tools like pip [#pip]_ to find the right package to install. Sometimes this is very loose - just specifying a name, and sometimes very specific - referring to a specific file to install. Sometimes dependencies are only relevant in one platform, or only some versions are acceptable, so the language permits describing all these cases. The language defined is a compact line based format which is already in widespread use in pip requirements files, though we do not specify the command line option handling that those files permit. There is one caveat - the URL reference form, specified in PEP-440 [#pep440]_ is not actually implemented in pip, but since PEP-440 is accepted, we use that format rather than pip's current native format. Motivation ========== Any specification in the Python packaging ecosystem that needs to consume lists of dependencies needs to build on an approved PEP for such, but PEP-426 [#pep426]_ is mostly aspirational - and there are already existing implementations of the dependency specification which we can instead adopt. The existing implementations are battle proven and user friendly, so adopting them is arguably much better than approving an aspirational, unconsumed, format. Specification ============= Examples -------- All features of the language shown with a name based lookup:: requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10" A minimal URL based lookup:: pip @ https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 Concepts -------- A dependency specification always specifies a distribution name. It may include extras, which expand the dependencies of the named distribution to enable optional features. The version installed can be controlled using version limits, or giving the URL to a specific artifact to install. Finally the dependency can be made conditional using environment markers. Grammar ------- We first cover the grammar briefly and then drill into the semantics of each section later. A distribution specification is written in ASCII text. We use a parsley [#parsley]_ grammar to provide a precise grammar. It is expected that the specification will be embedded into a larger system which offers framing such as comments, multiple line support via continuations, or other such features. 
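For illustration only (such framing is explicitly out of scope for this PEP), a consuming layer might strip comments and join continuation lines along these lines before handing each result to the specifier parser::

    def iter_specifications(lines):
        pending = ""
        for raw in lines:
            line = pending + raw.rstrip("\r\n")
            if line.endswith("\\"):       # explicit line continuation
                pending = line[:-1]
                continue
            pending = ""
            stripped = line.strip()
            # Only whole-line comments are handled here; inline '#' handling
            # is left out because '#' also appears in URL fragments.
            if not stripped or stripped.startswith("#"):
                continue
            yield stripped

    sample = [
        "# a comment\n",
        "requests [security] >= 2.8.1, == 2.8.*\n",
        "pip @ https://github.com/pypa/pip/archive/1.3.1.zip \\\n",
        "    ; python_version < '2.7.10'\n",
    ]
    print(list(iter_specifications(sample)))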
The full grammar including annotations to build a useful parse tree is included at the end of the PEP. Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: URI is defined in std-66 [#std66]_:: version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ version_one = version_cmp version wsp* version_many = version_one (wsp* ',' version_one)* versionspec = ( '(' version_many ')' ) | version_many urlspec = '@' wsp* Environment markers allow making a specification only take effect in some environments:: marker_op = version_cmp | 'in' | 'not' wsp+ 'in' python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | '-' | '_' | '*') dquote = '"' squote = '\\'' python_str = (squote (python_str_c | dquote)* squote | dquote (python_str_c | squote)* dquote) env_var = ('python_version' | 'python_full_version' | 'os_name' | 'sys_platform' | 'platform_release' | 'platform_system' | 'platform_version' | 'platform_machine' | 'python_implementation' | 'implementation_name' | 'implementation_version' | 'extra' # ONLY when defined by a containing layer ) marker_var = env_var | python_str marker_expr = ('(' wsp* marker wsp* ')' | (marker_var wsp* marker_op wsp* marker_var)) marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* quoted_marker = ';' wsp* marker Optional components of a distribution may be specified using the extras field:: identifier = letterOrDigit ( letterOrDigit | (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) )* name = identifier extras_list = identifier (wsp* ',' wsp* identifier)* extras = '[' wsp* extras_list? wsp* ']' Giving us a rule for name based requirements:: name_req = name wsp* extras? wsp* versionspec? wsp* quoted_marker? And a rule for direct reference specifications:: url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? Leading to the unified rule that can specify a dependency.:: specification = wsp* ( url_req | name_req ) wsp* Whitespace ---------- Non line-breaking whitespace is mostly optional with no semantic meaning. The sole exception is detecting the end of a URL requirement. Names ----- Python distribution names are currently defined in PEP-345 [#pep345]_. Names act as the primary identifier for distributions. They are present in all dependency specifications, and are sufficient to be a specification on their own. However, PyPI places strict restrictions on names - they must match a case insensitive regex or they won't be accepted. Accordingly in this PEP we limit the acceptable values for identifiers to that regex. A full redefinition of name may take place in a future metadata PEP:: ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ Extras ------ An extra is an optional part of a distribution. Distributions can specify as many extras as they wish, and each extra results in the declaration of additional dependencies of the distribution **when** the extra is used in a dependency specification. For instance:: requests[security] Extras union in the dependencies they define with the dependencies of the distribution they are attached to. The example above would result in requests being installed, and requests own dependencies, and also any dependencies that are listed in the "security" extra of requests. If multiple extras are listed, all the dependencies are unioned together. Versions -------- See PEP-440 [#pep440]_ for more detail on both version numbers and version comparisons. Version specifications limit the versions of a distribution that can be used. 
They only apply to distributions looked up by name, rather than via a URL. Version comparisons are also used in the markers feature. The optional brackets around a version are present for compatibility with PEP-345 [#pep345]_ but should not be generated, only accepted. Environment Markers ------------------- Environment markers allow a dependency specification to provide a rule that describes when the dependency should be used. For instance, consider a package that needs argparse. In Python 2.7 argparse is always present. On older Python versions it has to be installed as a dependency. This can be expressed like so:: argparse;python_version<"2.7" A marker expression evaluates to either True or False. When it evaluates to False, the dependency specification should be ignored. The marker language is a subset of Python itself, chosen for the ability to safely evaluate it without running arbitrary code that could become a security vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This PEP fixes some issues that were observed in the design described in PEP-426 [#pep426]_. Comparisons in marker expressions are typed by the comparison operator. The ``marker_op`` operators that are not in ``version_cmp`` perform the same as they do for strings in Python. The operators use the PEP-440 [#pep440]_ version comparison rules when those are defined (that is when both sides have a valid version specifier). If there is no defined PEP-440 behaviour and the operator exists in Python, then the operator falls back to the Python behaviour. Otherwise an error should be raised. e.g. the following will result in errors:: "dog" ~= "fred" python_version ~= "surprise" User supplied constants are always encoded as strings with either ``'`` or ``"`` quote marks. Note that backslash escapes are not defined, but existing implementations do support them. They are not included in this specification because they add complexity and there is no observable need for them today. Similarly we do not define non-ASCII character support: all the runtime variables we are referencing are expected to be ASCII-only. The variables in the marker grammar such as "os_name" resolve to values looked up in the Python runtime. With the exception of "extra" all values are defined on all Python versions today - it is an error in the implementation of markers if a value is not defined. Unknown variables must raise an error rather than resulting in a comparison that evaluates to True or False. Variables whose value cannot be calculated on a given Python implementation should evaluate to ``0`` for versions, and an empty string for all other variables. The "extra" variable is special. It is used by wheels to signal which specifications apply to a given extra in the wheel ``METADATA`` file, but since the ``METADATA`` file is based on a draft version of PEP-426, there is no current specification for this. Regardless, outside of a context where this special handling is taking place, the "extra" variable should result in an error like all other unknown variables. ..
list-table:: :header-rows: 1 * - Marker - Python equivalent - Sample values * - ``os_name`` - ``os.name`` - ``posix``, ``java`` * - ``sys_platform`` - ``sys.platform`` - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that "linux" is from Python3 and "linux2" from Python2) * - ``platform_machine`` - ``platform.machine()`` - ``x86_64`` * - ``python_implementation`` - ``platform.python_implementation()`` - ``CPython``, ``Jython`` * - ``platform_release`` - ``platform.release()`` - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` * - ``platform_system`` - ``platform.system()`` - ``Linux``, ``Windows``, ``Java`` * - ``platform_version`` - ``platform.version()`` - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; root:xnu-2782.40.9~2/RELEASE_X86_64`` * - ``python_version`` - ``platform.python_version()[:3]`` - ``3.4``, ``2.7`` * - ``python_full_version`` - ``platform.python_version()`` - ``3.4.0``, ``3.5.0b1`` * - ``implementation_name`` - ``sys.implementation.name`` - ``cpython`` * - ``implementation_version`` - see definition below - ``3.4.0``, ``3.5.0b1`` * - ``extra`` - An error except when defined by the context interpreting the specification. - ``test`` The ``implementation_version`` marker variable is derived from ``sys.implementation.version``:: def format_full_version(info): version = '{0.major}.{0.minor}.{0.micro}'.format(info) kind = info.releaselevel if kind != 'final': version += kind[0] + str(info.serial) return version if hasattr(sys, 'implementation'): implementation_version = format_full_version(sys.implementation.version) else: implementation_version = "0" Backwards Compatibility ======================= Most of this PEP is already widely deployed and thus offers no compatibiltiy concerns. There are however a few points where the PEP differs from the deployed base. Firstly, PEP-440 direct references haven't actually been deployed in the wild, but they were designed to be compatibly added, and there are no known obstacles to adding them to pip or other tools that consume the existing dependency metadata in distributions - particularly since they won't be permitted to be present in PyPI uploaded distributions anyway. Secondly, PEP-426 markers which have had some reasonable deployment, particularly in wheels and pip, will handle version comparisons with ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is less than "2.7.9". This backward incompatibility is deliberate. We are also defining new operators - "~=" and "===", and new variables - ``platform_release``, ``platform_system``, ``implementation_name``, and ``implementation_version`` which are not present in older marker implementations. The variables will error on those implementations. Users of both features will need to make a judgement as to when support has become sufficiently widespread in the ecosystem that using them will not cause compatibility issues. Thirdly, PEP-345 required brackets around version specifiers. In order to accept PEP-345 dependency specifications, brackets are accepted, but they should not be generated. Rationale ========= In order to move forward with any new PEPs that depend on environment markers, we needed a specification that included them in their modern form. This PEP brings together all the currently unspecified components into a specified form. 
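As a concrete illustration of the ``python_version`` comparison change noted under Backwards Compatibility (using the third-party ``packaging`` library, which already implements PEP-440 ordering, purely for the example)::

    from packaging.version import Version

    # Plain string comparison - the old PEP-426 marker behaviour - gets this
    # wrong; PEP-440 version comparison gets it right.
    print("2.7.10" < "2.7.9")                     # True  - lexicographic
    print(Version("2.7.10") < Version("2.7.9"))   # False - PEP-440 ordering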
The requirement specifier was adopted from the EBNF in the setuptools pkg_resources documentation, since we wish to avoid depending on a defacto, vs PEP specified, standard. Complete Grammar ================ The complete parsley grammar:: wsp = ' ' | '\t' version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '==='> version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | '+' | '!' )+> version_one = version_cmp:op version:v wsp* -> (op, v) version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 versionspec = ('(' version_many:v ')' ->v) | version_many urlspec = '@' wsp* marker_op = version_cmp | 'in' | 'not' wsp+ 'in' python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | '-' | '_' | '*' | '#') dquote = '"' squote = '\\'' python_str = (squote <(python_str_c | dquote)*>:s squote | dquote <(python_str_c | squote)*>:s dquote) -> s env_var = ('python_version' | 'python_full_version' | 'os_name' | 'sys_platform' | 'platform_release' | 'platform_system' | 'platform_version' | 'platform_machine' | 'python_implementation' | 'implementation_name' | 'implementation_version' | 'extra' # ONLY when defined by a containing layer ):varname -> lookup(varname) marker_var = env_var | python_str marker_expr = (("(" wsp* marker:m wsp* ")" -> m) | ((marker_var:l wsp* marker_op:o wsp* marker_var:r)) -> (l, o, r)) marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* marker_expr:r -> (o, r))*:ms -> (m, ms)) quoted_marker = ';' wsp* marker identifier = name = identifier extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + ids extras = '[' wsp* extras_list?:e wsp* ']' -> e name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* quoted_marker?:m -> (n, e or [], v or [], m)) url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ quoted_marker?:m -> (n, e or [], v, m)) specification = wsp* ( url_req | name_req ):s wsp* -> s # The result is a tuple - name, list-of-extras, # list-of-version-constraints-or-a-url, marker-ast or None URI_reference = URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? hier_part = ('//' authority path_abempty) | path_absolute | path_rootless | path_empty absolute_URI = scheme ':' hier_part ( '?' query )? relative_ref = relative_part ( '?' query )? ( '#' fragment )? relative_part = '//' authority path_abempty | path_absolute | path_noscheme | path_empty scheme = letter ( letter | digit | '+' | '-' | '.')* authority = ( userinfo '@' )? host ( ':' port )? userinfo = ( unreserved | pct_encoded | sub_delims | ':')* host = IP_literal | IPv4address | reg_name port = digit* IP_literal = '[' ( IPv6address | IPvFuture) ']' IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ IPv6address = ( ( h16 ':'){6} ls32 | '::' ( h16 ':'){5} ls32 | ( h16 )? '::' ( h16 ':'){4} ls32 | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 | ( ( h16 ':'){0,4} h16 )? '::' ls32 | ( ( h16 ':'){0,5} h16 )? '::' h16 | ( ( h16 ':'){0,6} h16 )? '::' ) h16 = hexdig{1,4} ls32 = ( h16 ':' h16) | IPv4address IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' 
Dec_octet nz = ~'0' digit dec_octet = ( digit # 0-9 | nz digit # 10-99 | '1' digit{2} # 100-199 | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# %250-255 reg_name = ( unreserved | pct_encoded | sub_delims)* path = ( path_abempty # begins with '/' or is empty | path_absolute # begins with '/' but not '//' | path_noscheme # begins with a non-colon segment | path_rootless # begins with a segment | path_empty ) # zero characters path_abempty = ( '/' segment)* path_absolute = '/' ( segment_nz ( '/' segment)* )? path_noscheme = segment_nz_nc ( '/' segment)* path_rootless = segment_nz ( '/' segment)* path_empty = pchar{0} segment = pchar* segment_nz = pchar+ segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ # non-zero-length segment without any colon ':' pchar = unreserved | pct_encoded | sub_delims | ':' | '@' query = ( pchar | '/' | '?')* fragment = ( pchar | '/' | '?')* pct_encoded = '%' hexdig unreserved = letter | digit | '-' | '.' | '_' | '~' reserved = gen_delims | sub_delims gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | ',' | ';' | '=' hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | 'D' | 'e' | 'E' | 'f' | 'F' A test program - if the grammar is in a string ``grammar``:: import os import sys import platform from parsley import makeGrammar grammar = """ wsp ... """ tests = [ "A", "aa", "name", "name>=3", "name>=3,<2", "name [fred,bar] @ http://foo.com ; python_version=='2.7'", "name[quux, strange];python_version<'2.7' and platform_version=='2'", "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", ] def format_full_version(info): version = '{0.major}.{0.minor}.{0.micro}'.format(info) kind = info.releaselevel if kind != 'final': version += kind[0] + str(info.serial) return version if hasattr(sys, 'implementation'): implementation_version = format_full_version(sys.implementation.version) implementation_name = sys.implementation.name else: implementation_version = '0' implementation_name = '' bindings = { 'implementation_name': implementation_name, 'implementation_version': implementation_version, 'os_name': os.name, 'platform_machine': platform.machine(), 'platform_release': platform.release(), 'platform_system': platform.system(), 'platform_version': platform.version(), 'python_full_version': platform.python_version(), 'python_implementation': platform.python_implementation(), 'python_version': platform.python_version()[:3], 'sys_platform': sys.platform, } compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) for test in tests: parsed = compiled(test).specification() print(parsed) References ========== .. [#pip] pip, the recommended installer for Python packages (http://pip.readthedocs.org/en/stable/) .. [#pep345] PEP-345, Python distribution metadata version 1.2. (https://www.python.org/dev/peps/pep-0345/) .. [#pep426] PEP-426, Python distribution metadata. (https://www.python.org/dev/peps/pep-0426/) .. [#pep440] PEP-440, Python distribution metadata. (https://www.python.org/dev/peps/pep-0440/) .. [#std66] The URL specification. (https://tools.ietf.org/html/rfc3986) .. [#parsley] The parsley PEG library. (https://pypi.python.org/pypi/parsley/) Copyright ========= This document has been placed in the public domain. .. 
Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Mon Nov 16 20:30:17 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 16 Nov 2015 17:30:17 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: On Nov 16, 2015 12:46 PM, "Robert Collins" wrote: > [...] > marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* I guess technically the spec doesn't say either way what the precedence of "and" versus "or" should be; it just parses unparenthesised sequences to a flat ast and in principle the (unspecified) ast evaluator could apply different precedences. The way the grammar's written though seems to suggest that they're equal precedence with some unspecified associativity, which is different from python where "and" is more tightly binding than "or" (and associativity doesn't matter because each operator is associative in isolation). Maybe this is implied here too via the language about how this is intended to be a python subset, but it would be good to clarify what the evaluation semantics should be. Other comments: The whitespace handling looks correct to me now :-) I didn't check that the two copies of the grammar are identical (and I only looked at the top version, not the bottom version). Hopefully someone did? It'll be a headache if we discover later that there's skew between them, and no guidance on which is normative. Otherwise looks good to me. -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Mon Nov 16 20:46:53 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 14:46:53 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: On 17 November 2015 at 14:30, Nathaniel Smith wrote: > On Nov 16, 2015 12:46 PM, "Robert Collins" > wrote: >> > [...] >> marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* >> marker_expr)* > > I guess technically the spec doesn't say either way what the precedence of > "and" versus "or" should be; it just parses unparenthesised sequences to a > flat ast and in principle the (unspecified) ast evaluator could apply > different precedences. The way the grammar's written though seems to suggest > that they're equal precedence with some unspecified associativity, which is > different from python where "and" is more tightly binding than "or" (and > associativity doesn't matter because each operator is associative in > isolation). Maybe this is implied here too via the language about how this > is intended to be a python subset, but it would be good to clarify what the > evaluation semantics should be. e.g. "1" < "2" and "3" < "4" or "5" < "6" Right now we haven't used prose to define AND or OR as having differing precedences - and PEP 345 didn't either. I think simple left associative would be fine - right now the grammar we give parses: "name; os_name=='dud' and os_name=='odd' or os_name=='fred'" as ('name', [], [], (('posix', '==', 'dud'), [('and', ('posix', '==', 'odd')), ('or', ('posix', '==', 'fred'))])) which can just be rolled up left to right. > Other comments: > > The whitespace handling looks correct to me now :-) > > I didn't check that the two copies of the grammar are identical (and I only > looked at the top version, not the bottom version). Hopefully someone did? 
> It'll be a headache if we discover later that there's skew between them, and > no guidance on which is normative. I've been pretty careful; if there's a difference we'll discuss and figure it out :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Mon Nov 16 20:49:00 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 Nov 2015 20:49:00 -0500 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> > On Nov 16, 2015, at 8:46 PM, Robert Collins wrote: > > On 17 November 2015 at 14:30, Nathaniel Smith wrote: >> On Nov 16, 2015 12:46 PM, "Robert Collins" >> wrote: >>> >> [...] >>> marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* >>> marker_expr)* >> >> I guess technically the spec doesn't say either way what the precedence of >> "and" versus "or" should be; it just parses unparenthesised sequences to a >> flat ast and in principle the (unspecified) ast evaluator could apply >> different precedences. The way the grammar's written though seems to suggest >> that they're equal precedence with some unspecified associativity, which is >> different from python where "and" is more tightly binding than "or" (and >> associativity doesn't matter because each operator is associative in >> isolation). Maybe this is implied here too via the language about how this >> is intended to be a python subset, but it would be good to clarify what the >> evaluation semantics should be. > > e.g. > "1" < "2" and "3" < "4" or "5" < "6" > > Right now we haven't used prose to define AND or OR as having > differing precedences - and PEP 345 didn't either. > > I think simple left associative would be fine - right now the grammar > we give parses: > > "name; os_name=='dud' and os_name=='odd' or os_name=='fred'" > > as > > ('name', [], [], (('posix', '==', 'dud'), [('and', ('posix', '==', > 'odd')), ('or', ('posix', '==', 'fred'))])) > > which can just be rolled up left to right. Should we use the same rules as Python in order to maintain compatibility or is there a compelling reason to break compatibility here? > > > > >> Other comments: >> >> The whitespace handling looks correct to me now :-) >> >> I didn't check that the two copies of the grammar are identical (and I only >> looked at the top version, not the bottom version). Hopefully someone did? >> It'll be a headache if we discover later that there's skew between them, and >> no guidance on which is normative. > > I've been pretty careful; if there's a difference we'll discuss and > figure it out :) > -Rob > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Mon Nov 16 21:11:36 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 15:11:36 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: On 17 November 2015 at 14:49, Donald Stufft wrote: ... >> ('name', [], [], (('posix', '==', 'dud'), [('and', ('posix', '==', >> 'odd')), ('or', ('posix', '==', 'fred'))])) >> >> which can just be rolled up left to right. > > Should we use the same rules as Python in order to maintain compatibility or is there a compelling reason to break compatibility here? So for clarity: True and -> right hand side evaluated stand alone False and -> False True or -> True False or -> right hand side evaluated stand alone We can roll that up using the parse tree: (('posix', '==', 'dud'), [('and', ('posix', '==', 'odd')), ('or', ('posix', '==', 'fred'))])) evaluate the start to get a 'result' pop an expression from the beginning of the list giving opcode, next. lookup (result, opcode) in the above truth table, giving one of True, False, None on None, result = next and loop. on True or False, return that. So - no reason to break compat, though the grammar could perhaps be tweaked if we want to write an inline evaluator. The same expressions would be accepted either way. I think its worth copying the Python rules into the PEP though for clarity. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Mon Nov 16 21:24:04 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 15:24:04 +1300 Subject: [Distutils] build system abstraction PEP, take #2 Message-ID: This is the update I promised...full prose, since enough has changed that a diff would be noise. :PEP: XX :Title: Build system abstraction for pip/conda etc :Version: $Revision$ :Last-Modified: $Date$ :Author: Robert Collins , Nathaniel Smith :BDFL-Delegate: Donald Stufft :Discussions-To: distutils-sig :Status: Draft :Type: Standards Track :Content-Type: text/x-rst :Created: 26-Oct-2015 :Post-History: XX :Requires: The new dependency-specification PEP. Abstract ======== This PEP specifies a programmatic interface for pip [#pip]_ and other distribution or installation tools to use when working with Python source trees (both the developer tree - e.g. the git tree - and source distributions). The programmatic interface allows decoupling of pip from its current hard dependency on setuptools [#setuptools]_ able for two key reasons: 1. It enables new build systems that may be much easier to use without requiring them to even appear to be setuptools. 2. It facilitates setuptools itself changing its user interface without breaking pip, giving looser coupling. The interface needed to permit pip to install build systems also enables pip to install build time requirements for packages which is an important step in getting pip to full feature parity with the installation components of easy-install. As PEP-426 [#pep426]_ is draft, we cannot utilise the metadata format it defined. However PEP-427 wheels are in wide use and fairly well specified, so we have adopted the METADATA format from that for specifying distribution dependencies. 
However something was needed for communicating bootstrap requirements and build requirements - but a thin JSON schema is sufficient when overlaid over the new dependency specification PEP. Motivation ========== There is significant pent-up frustration in the Python packaging ecosystem around the current lock-in between build system and pip. Breaking that lock-in is better for pip, for setuptools, and for other build systems like flit [#flit]_. Specification ============= Overview -------- Build tools will be located by reading a file ``pypa.json`` from the root directory of the source tree. That file describes how to get the build tool and the name of the command to run to invoke the tool. All tools will be expected to conform to a single command line interface modelled on pip's existing use of the setuptools setup.py interface. pypa.json --------- The file ``pypa.json`` acts as a neutral configuration file for pip and other tools that want to build source trees to consult for configuration. The absence of a ``pypa.json`` file in a Python source tree implies a setuptools or setuptools compatible build system. The JSON has the following schema. Extra keys are ignored. schema The version of the schema. This PEP defines version "1". Defaults to "1" when absent. All tools reading the file must error on an unrecognised schema version. bootstrap_requires Optional list of dependency specifications [#dependencyspec]_ that must be installed before running the build tool. For instance, if using flit, then the requirements might be:: bootstrap_requires: ["flit"] build_command A mandatory Python format string [#strformat]_ describing the command to run. For instance, if using flit then the build command might be:: build_command: ["flit"] If using a command which is a runnable module ``fred``:: {PYTHON} -m fred Process interface ----------------- The command to run is defined by a simple Python format string [#strformat]_. This permits build systems with dedicated scripts and those that are invoked using "python -m somemodule". Processes will be run with the current working directory set to the root of the source tree. When run, processes should not read from stdin - while pip currently runs build systems with stdin connected to its own stdin, stdout and stderr are redirected and no communication with the user is possible. As usual with processes, a non-zero exit status indicates an error. Available variables ------------------- PYTHON The Python interpreter in use. This is important to enable calling things which are just Python entry points. ${PYTHON} -m foo Subcommands ----------- There are a number of separate subcommands that build systems must support. The examples below use a build_command of ``flit`` for illustrative purposes. build_requires Query build requirements. Build requirements are returned as a JSON document with one key ``build_requires`` consisting of a list of dependency specifications [#dependencyspec]_. Additional keys must be ignored. The build_requires command is the only command run without setting up a build environment. Example command:: flit build_requires metadata Query project metadata. The metadata and only the metadata should be output on stdout. pip would run metadata just once to determine what other packages need to be downloaded and installed. The metadata is output as a wheel METADATA file per PEP-427 [#pep427]_. Note that the metadata generated by the metadata command, and the metadata present in a generated wheel must be identical.
Example command:: flit metadata wheel -d OUTPUT_DIR Command to run to build a wheel of the project. OUTPUT_DIR will point to an existing directory where the wheel should be output. Stdout and stderr have no semantic meaning. Only one file should be output - if more are output then pip would pick an arbitrary one to consume. Example command:: flit wheel -d /tmp/pip-build_1234 develop Command to do an in-place 'development' installation of the project. Stdout and stderr have no semantic meaning. Not all build systems will be able to perform develop installs. If a build system cannot do develop installs, then it should error when run. Note that doing so will cause use operations like ``pip install -e foo`` to fail. The build environment --------------------- Except for the build_requires command, all commands are run within a build environment. No specific implementation is required, but a build environment must achieve the following requirements. 1. All dependencies specified by the project's build_requires must be available for import from within ``$PYTHON``. 1. All command-line scripts provided by the build-required packages must be present in ``$PATH``. A corollary of this is that build systems cannot assume access to any Python package that is not declared as a build_requires or in the Python standard library. Hermetic builds --------------- This specification does not prescribe whether builds should be hermetic or not. Existing build tools like setuptools will use installed versions of build time requirements (e.g. setuptools_scm) and only install other versions on version conflicts or missing dependencies. However its likely that better consistency can be created by always isolation builds and using only the specified dependencies. However there are nuanced problems there - such as how can users force the avoidance of a bad version of a build requirement which meets some packages dependencies. Future PEPs may tackle this problem, but it is not currently in scope - it does not affect the metadata required to coordinate between build systems and things that need to do builds, and thus is not PEP material. Upgrades -------- 'pypa.json' is versioned to permit future changes without requiring compatibility. The sequence for upgrading either of schemas in a new PEP will be: 1. Issue new PEP defining an updated schema. If the schema is not entirely backward compatible then a new version number must be defined. 2. Consumers (e.g. pip) implement support for the new schema version. 3. Package authors opt into the new schema when they are happy to introduce a dependency on the version of 'pip' (and potentially other consumers) that introduced support for the new schema version. The *same* process will take place for the initial deployment of this PEP:- the propogation of the capability to use this PEP without a `setuptools shim`_ will be largely gated by the adoption rate of the first version of pip that supports it. Static metadata in sdists ------------------------- This PEP does not tackle the current inability to trust static metadata in sdists. That is a separate problem to identifying and consuming the build system that is in use in a source tree, whether it came from an sdist or not. Handling of compiler options ---------------------------- Handling of different compiler options is out of scope for this specification. pip currently handles compiler options by appending user supplied strings to the command line it runs when running setuptools. 
This approach is sufficient to work with the build system interface defined in this PEP, with the exception that globally specified options will stop working globally as different build systems evolve. That problem can be solved in pip (or conda or other installers) without affecting interoperability. In the long term, wheels should be able to express the difference between wheels built with one compiler or options vs another, and that is PEP material. Examples ======== An example 'pypa.json' for using flit:: {"bootstrap_requires": ["flit"], "build_command": ["flit"]} When 'pip' reads this it would prepare an environment with flit in it before trying to use flit. Because flit doesn't have setup-requires support today, `flit build_requires` would just output a constant string:: {"build_requires": []} `flit metadata` would interrogate `flit.ini` and marshal the metadata into a wheel METADATA file and output that on stdout. `flit wheel` would need to accept a `-d` parameter that tells it where to output the wheel (pip needs this). Backwards Compatibility ======================= Older pips will remain unable to handle alternative build systems. This is no worse than the status quo - and individual build system projects can decide whether to include a shim ``setup.py`` or not. All existing build systems that can product wheels and do develop installs should be able to run under this abstraction and will only need a specific adapter for them constructed and published on PyPI. In the absence of a ``pypa.json`` file, tools like pip should assume a setuptools build system and use setuptools commands directly. Network effects --------------- Projects that adopt build systems that are not setuptools compatible - that is that they have no setup.py, or the setup.py doesn't accept commands that existing tools try to use - will not be installable by those existing tools. Where those projects are used by other projects, this effect will cascade. In particular, because pip does not handle setup-requires today, any project (A) that adopts a setuptools-incompatible build system and is consumed as a setup-requirement by a second project (B) which has not itself transitioned to having a pypa.json will make B uninstallable by any version of pip. This is because setup.py in B will trigger easy-install when 'setup.py egg_info' is run by pip, and that will try and fail to install A. As such we recommend that tools which are currently used as setup-requires either ensure that they keep a `setuptools shim`_ or find their consumers and get them all to upgrade to the use of a `pypa.json` in advance of moving themselves. Pragmatically that is impossible, so the advice is to keep a setuptools shim indefinitely - both for projects like pbr, setuptools_scm and also projects like numpy. setuptools shim --------------- It would be possible to write a generic setuptools shim that looks like ``setup.py`` and under the hood uses ``pypa.json`` to drive the builds. This is not needed for pip to use the system, but would allow package authors to use the new features while still retaining compatibility with older pip versions. Rationale ========= This PEP started with a long mailing list thread on distutils-sig [#thread]_. Subsequent to that a online meeting was held to debug all the positions folk had. Minutes from that were posted to the list [#minutes]_. This specification is a translation of the consensus reached there into PEP form, along with some arbitrary choices on the minor remaining questions. 
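Purely as an illustrative sketch (nothing here is mandated by this PEP, and the helper below is hypothetical - it treats ``build_command`` as either a list of format strings or a single whitespace-separated one, and omits build environment setup and the setuptools fallback)::

    import json
    import os
    import subprocess
    import sys

    def run_build_command(source_tree, subcommand, extra_args=()):
        # Hypothetical helper: read pypa.json and invoke one subcommand.
        config_path = os.path.join(source_tree, "pypa.json")
        if not os.path.exists(config_path):
            raise NotImplementedError(
                "no pypa.json: fall back to the existing setuptools code path")
        with open(config_path) as f:
            config = json.load(f)
        if str(config.get("schema", "1")) != "1":
            raise ValueError("unrecognised pypa.json schema version")
        raw = config["build_command"]
        if isinstance(raw, str):
            raw = raw.split()   # crude, but enough for '{PYTHON} -m fred'
        command = [part.format(PYTHON=sys.executable) for part in raw]
        subprocess.check_call(command + [subcommand] + list(extra_args),
                              cwd=source_tree)

    # e.g. run_build_command(".", "wheel", ["-d", "/tmp/pip-build_1234"])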
The basic heuristic for the design has been to focus on introducing an
abstraction without requiring development not strictly tied to the
abstraction. Where the gap to an improvement is small, or the cost of
using the existing interface is very high, we've taken on having the
improvement as a dependency, but otherwise deferred such work to future
iterations.

We chose wheel METADATA files rather than defining a new specification,
because pip can already handle wheel .dist-info directories which encode
all the necessary data in a METADATA file. PEP-426 can't be used as it's
still a draft, and defining a new metadata format, while we should do
that, is a separate problem. Using a directory on disk would not add any
value to the interface (pip has to do that today due to limitations in
the setuptools CLI).

The use of 'develop' as a command is because there is no PEP specifying
the interoperability of things that do what 'setuptools develop' does -
so we'll need to define that before pip can take on the responsibility
for doing the 'develop' step. Once that's done we can issue a successor
PEP to this one.

The use of a command line API rather than a Python API is a little
contentious. Fundamentally anything can be made to work, and the pip
maintainers have spoken strongly in favour of retaining a process based
interface - something that is mature and robust in pip today.

The choice of JSON as a file format is a compromise between several
constraints. Firstly, there is no stdlib YAML interpreter, nor one for
any of the other low-friction structured file formats. Secondly, the INI
format is a poor fit for a number of reasons, primarily that it has very
minimal structure - and pip's maintainers are not fond of it. JSON is in
the stdlib and has sufficient structure to permit embedding anything we
want in future without requiring embedded DSLs.

Donald suggested using ``setup.cfg`` and the existing setuptools command
line rather than inventing something new. While that would permit
interoperability with less visible changes, it requires nearly as much
engineering on the pip side - looking for the new key in setup.cfg,
implementing the non-installed environments to run the build in. And the
desire from other build system authors not to confuse their users by
delivering something that looks like, but behaves quite differently to,
setuptools seems like a bigger issue than pip learning how to invoke a
custom build tool.

The metadata and wheel commands are required to have consistent metadata
to avoid a race condition that could otherwise happen where pip reads the
metadata, acts on it, and then the resulting wheel has incompatible
requirements. That race is exploited today by packages using PEP-426
environment markers, to work with older pip versions that do not support
environment markers. That exploit is not needed with this PEP, because
either the setuptools shim is in use (with older pip versions), or an
environment-marker-ready pip is in use. The setuptools shim can take care
of exploiting the difference that older pip versions require.

References
==========

.. [#pip] pip, the recommended installer for Python packages
   (http://pip.readthedocs.org/en/stable/)

.. [#setuptools] setuptools, the de facto Python package build system
   (https://pythonhosted.org/setuptools/)

.. [#flit] flit, a simple way to put packages in PyPI
   (http://flit.readthedocs.org/en/latest/)

.. [#pypi] PyPI, the Python Package Index
   (https://pypi.python.org/)

.. [#shellvars] Shellvars, an implementation of shell variable rules for
   Python.
   (https://github.com/testing-cabal/shellvars)

.. [#pep426] PEP-426, Python distribution metadata.
   (https://www.python.org/dev/peps/pep-0426/)

.. [#pep427] PEP-427, The Wheel Binary Package Format.
   (https://www.python.org/dev/peps/pep-0427/)

.. [#thread] The kick-off thread.
   (https://mail.python.org/pipermail/distutils-sig/2015-October/026925.html)

.. [#minutes] The minutes.
   (https://mail.python.org/pipermail/distutils-sig/2015-October/027214.html)

.. [#strformat] The Python string formatting syntax.
   (https://docs.python.org/3.1/library/string.html#format-string-syntax)

.. [#dependencyspec] Dependency specification language PEP.
   (https://github.com/pypa/interoperability-peps/pull/56)

Copyright
=========

This document has been placed in the public domain.

..
   Local Variables:
   mode: indented-text
   indent-tabs-mode: nil
   sentence-end-double-space: t
   fill-column: 70
   coding: utf-8
   End:

-- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Mon Nov 16 21:42:51 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 16 Nov 2015 18:42:51 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: On Mon, Nov 16, 2015 at 6:11 PM, Robert Collins wrote: > On 17 November 2015 at 14:49, Donald Stufft wrote: > ... >>> ('name', [], [], (('posix', '==', 'dud'), [('and', ('posix', '==', >>> 'odd')), ('or', ('posix', '==', 'fred'))])) >>> >>> which can just be rolled up left to right. >> >> Should we use the same rules as Python in order to maintain compatibility or is there a compelling reason to break compatibility here? > > So for clarity: > True and -> right hand side evaluated stand alone > False and -> False > True or -> True > False or -> right hand side evaluated stand alone > > We can roll that up using the parse tree: > (('posix', '==', 'dud'), [('and', ('posix', '==', 'odd')), ('or', > ('posix', '==', 'fred'))])) > evaluate the start to get a 'result' > pop an expression from the beginning of the list giving opcode, next. > lookup (result, opcode) in the above truth table, giving one of True, > False, None > on None, result = next and loop. > on True or False, return that. Not sure if we're communicating or not? 
The case I'm concerned about is > > a or b and c > > which Python parses as (a or (b and c)) because 'and' has higher > precedence than 'or'. So for example in Python, > > True or True and False > > returns True, but if you use a left-to-right evaluation rule than it > returns False. True or True and False -> lookup = { (True, 'and'): None, (False, 'and'): False, (True, 'or'): True, (False, 'or'): None} def evaluate(expr): result = expr[0] remainder = expr[1] while remainder: opcode, next = remainder.pop(0) branch = lookup[(result, opcode)] if branch is not None: return branch result = next return result expr = (True, [('or', True), ('and', False)]) evaluate(expr) -> True So, I think what I describe does what you want it to. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From njs at pobox.com Mon Nov 16 22:37:58 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 16 Nov 2015 19:37:58 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: On Mon, Nov 16, 2015 at 7:27 PM, Robert Collins wrote: > On 17 November 2015 at 15:42, Nathaniel Smith wrote: >>> So for clarity: >>> True and -> right hand side evaluated stand alone >>> False and -> False >>> True or -> True >>> False or -> right hand side evaluated stand alone >>> >>> We can roll that up using the parse tree: >>> (('posix', '==', 'dud'), [('and', ('posix', '==', 'odd')), ('or', >>> ('posix', '==', 'fred'))])) >>> evaluate the start to get a 'result' >>> pop an expression from the beginning of the list giving opcode, next. >>> lookup (result, opcode) in the above truth table, giving one of True, >>> False, None >>> on None, result = next and loop. >>> on True or False, return that. >> >> Not sure if we're communicating or not? The case I'm concerned about is >> >> a or b and c >> >> which Python parses as (a or (b and c)) because 'and' has higher >> precedence than 'or'. So for example in Python, >> >> True or True and False >> >> returns True, but if you use a left-to-right evaluation rule than it >> returns False. > > > True or True and False > -> > lookup = { > (True, 'and'): None, > (False, 'and'): False, > (True, 'or'): True, > (False, 'or'): None} > > def evaluate(expr): > result = expr[0] > remainder = expr[1] > while remainder: > opcode, next = remainder.pop(0) > branch = lookup[(result, opcode)] > if branch is not None: > return branch > result = next > return result > > expr = (True, [('or', True), ('and', False)]) > evaluate(expr) > > -> True > > So, I think what I describe does what you want it to. In [7]: evaluate((False, [('and', False), ('or', True)])) Out[7]: False In [8]: False and False or True Out[8]: True -n -- Nathaniel J. Smith -- http://vorpus.org From robertc at robertcollins.net Mon Nov 16 22:59:52 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 16:59:52 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: On 17 November 2015 at 16:37, Nathaniel Smith wrote: Blah. Shall address. -Rob > > -- > Nathaniel J. 
Smith -- http://vorpus.org -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Tue Nov 17 02:51:30 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Nov 2015 17:51:30 +1000 Subject: [Distutils] Installing packages using pip In-Reply-To: References: <564640C9.7060801@sdamon.com> <20151113201752.42727B1408D@webabinitio.net> <-4014175157681291829@unknownmsgid> Message-ID: On 16 November 2015 at 23:38, Paul Moore wrote: > (I don't know what Unix does, I suspect it retains an old > copy of the shared library for the process until the process exits, > in which case you'd see a different issue, that you do an upgrade, but > your process still uses the old code till you restart). Marius explained the lower level technical details, but the relevant API at the Python level is the "fileno()" method on file-like objects: once you have a file descriptor, you can access the kernel object representing the open file directly, and the kernel doesn't care if the original filesystem path has been remapped to refer to something else. The persistent identifier at the filesystem level is the inode number, rather than the filesystem path. After opening a file, the inode numbers match:

>>> f = open("example", "w")
>>> os.stat("example").st_ino
244985
>>> os.stat(f.fileno()).st_ino
244985

The filesystem's reference to the inode can be dropped, without losing the kernel's reference:

>>> os.remove("example")
>>> os.stat("example").st_ino
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: 'example'
>>> os.stat(f.fileno()).st_ino
244985

The original filesystem path can then be mapped to a new inode:

>>> f2 = open("example", "w")
>>> os.stat("example").st_ino
242960
>>> os.stat(f.fileno()).st_ino
244985
>>> os.stat(f2.fileno()).st_ino
242960

As Wayne noted, the fact that shared libraries can be overwritten while processes are using them is then just an artifact of this general property of *nix style filesystem access. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guettliml at thomas-guettler.de Tue Nov 17 02:57:18 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Tue, 17 Nov 2015 08:57:18 +0100 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: <564ADDDE.7010503@thomas-guettler.de> > The job of a dependency is to enable tools like pip [#pip]_ to find the right > package to install. My worries: AFAIK pip is not a library. I don't want to re-implement code to handle this PEP. I would like to re-use. But AFAIK pip is not a library. I am stupid and don't know how to proceed. Please tell me what to do. 
Regards, Thomas Güttler -- http://www.thomas-guettler.de/ From opensource at ronnypfannschmidt.de Tue Nov 17 02:53:28 2015 From: opensource at ronnypfannschmidt.de (Ronny Pfannschmidt) Date: Tue, 17 Nov 2015 08:53:28 +0100 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: Message-ID: <5997BB8E-5FDA-4635-B5BD-C57E5C6118F7@ronnypfannschmidt.de> I forgot to reply to the list, so I re-added it. To further explain: in my setuptools replacement experiment I implement the develop command by generating a wheel using a local version tag `+develop`. Instead of the actual code, for each top-level it contains a shim python file that will load the real code from the editable location. However, it will use pip to install that wheel and create scripts/exe files. In particular, Windows support for a full develop command is orders of magnitude easier if one lets pip make the scripts (exe files). I think it's very helpful for tool authors if pip has ways for them to avoid platform pitfalls like exe files. On 17 November 2015 08:15:41 CET, Robert Collins wrote: >On 17 November 2015 at 19:16, Ronny Pfannschmidt > wrote: >> The develop command should be able to generate a wheel with a local >version >> that pip can install >> That way tool authors can completely avoid writing the target folders >and >> its possible to always run completely unprivileged builds > >That's an interesting idea, but wholly different to what 'develop' does >and is used for. My goal here is to only commit to new things where >the new thing is all of: > - a small amount of work > - meets existing use cases > - able to be put into pip/setuptools/wheel/etc now (rather than after >a bunch of other things are done) > >I completely agree with the idea that develop as it stands has a >number of limitations, but there is no generic implementation >available for pip to reuse; there is no feature in pip that would do >what you want either; nor even a proof of concept. I think as such it's >premature to PEPify it. > >What could be done is to take the abstract build system work (which >*is* within reach for pip) and create an implementation with a new >schema version and whatever semantics you want around this new >developish thing, and write an incremental PEP (so you don't need any >of the PEP I've put up, just inherit from it and define your new thing) >- and then we can kick the tires on it and see how it's going to work >etc. > >Note that what you describe 'generate a wheel with a local version' is >actually exactly what the 'wheel' command does as far as I can tell, >but I'm sure that's not what you have in mind - so please do expand on >the differences, in >paranoid-folk-make-sure-folk-cannot-misunderstand-style ;) > >-Rob > > >-- >Robert Collins >Distinguished Technologist >HP Converged Cloud -- This message was sent from my Android mobile phone with K-9 Mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Nov 17 03:02:13 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 17 Nov 2015 21:02:13 +1300 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <564ADDDE.7010503@thomas-guettler.de> References: <564ADDDE.7010503@thomas-guettler.de> Message-ID: On 17 November 2015 at 20:57, Thomas Güttler wrote: >> The job of a dependency is to enable tools like pip [#pip]_ to find the right >> package to install. > > My worries: AFAIK pip is not a library. 
> > I don't want to re-implement code to handle this pep. > > I would like to re-use. > > But AFAIK pip is not a library. > > I am stupid and don't know how to proceed. > > Please tell me what to do. What are you trying to accomplish? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From guettliml at thomas-guettler.de Tue Nov 17 06:26:37 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Tue, 17 Nov 2015 12:26:37 +0100 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <564ADDDE.7010503@thomas-guettler.de> Message-ID: <564B0EED.3050404@thomas-guettler.de> Am 17.11.2015 um 09:02 schrieb Robert Collins: > On 17 November 2015 at 20:57, Thomas G?ttler > wrote: >>> The job of a dependency is to enable tools like pip [#pip]_ to find the right >>> package to install. >> >> My worries: AFAIK pip is not a library. >> >> I don't want to re-implement code to handle this pep. >> >> I would like to re-use. >> >> But AFAIK pip is not a library. >> >> I am stupid and don't know how to proceed. >> >> Please tell me what to do. > > What are you trying to accomplish? I want to parse a single dependency of a python package. -- Thomas Guettler http://www.thomas-guettler.de/ From njs at pobox.com Tue Nov 17 06:33:09 2015 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 17 Nov 2015 03:33:09 -0800 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <564ADDDE.7010503@thomas-guettler.de> References: <564ADDDE.7010503@thomas-guettler.de> Message-ID: On Nov 16, 2015 11:57 PM, "Thomas G?ttler" wrote: > > > The job of a dependency is to enable tools like pip [#pip]_ to find the right > > package to install. > > My worries: AFAIK pip is not a library. > > I don't want to re-implement code to handle this pep. > > I would like to re-use. > > But AFAIK pip is not a library. > > I am stupid and don't know how to proceed. > > Please tell me what to do. Presumably there will be a dependency parser added to the 'packaging' library, which already exists as a standard place to stick stuff like this, so you'll just use that. (E.g. it's what pip uses for PEP 440 version parsing today.) https://pypi.python.org/pypi/packaging -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue Nov 17 07:48:49 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Nov 2015 07:48:49 -0500 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <564ADDDE.7010503@thomas-guettler.de> Message-ID: <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> > On Nov 17, 2015, at 6:33 AM, Nathaniel Smith wrote: > > On Nov 16, 2015 11:57 PM, "Thomas G?ttler" > wrote: > > > > > The job of a dependency is to enable tools like pip [#pip]_ to find the right > > > package to install. > > > > My worries: AFAIK pip is not a library. > > > > I don't want to re-implement code to handle this pep. > > > > I would like to re-use. > > > > But AFAIK pip is not a library. > > > > I am stupid and don't know how to proceed. > > > > Please tell me what to do. > > Presumably there will be a dependency parser added to the 'packaging' library, which already exists as a standard place to stick stuff like this, so you'll just use that. (E.g. it's what pip uses for PEP 440 version parsing today.) 
> > https://pypi.python.org/pypi/packaging > https://github.com/pypa/packaging/pull/45 ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From guettliml at thomas-guettler.de Tue Nov 17 08:43:27 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Tue, 17 Nov 2015 14:43:27 +0100 Subject: [Distutils] Link to lib missing in PEP: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> References: <564ADDDE.7010503@thomas-guettler.de> <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> Message-ID: <564B2EFF.80309@thomas-guettler.de> Am 17.11.2015 um 13:48 schrieb Donald Stufft: > >> On Nov 17, 2015, at 6:33 AM, Nathaniel Smith > wrote: >> >> On Nov 16, 2015 11:57 PM, "Thomas G?ttler" > wrote: >> > >> > > The job of a dependency is to enable tools like pip [#pip]_ to find the right >> > > package to install. >> > >> > My worries: AFAIK pip is not a library. >> > >> > I don't want to re-implement code to handle this pep. >> > >> > I would like to re-use. >> > >> > But AFAIK pip is not a library. >> > >> > I am stupid and don't know how to proceed. >> > >> > Please tell me what to do. >> >> Presumably there will be a dependency parser added to the 'packaging' library, which already exists as a standard >> place to stick stuff like this, so you'll just use that. (E.g. it's what pip uses for PEP 440 version parsing today.) >> >> https://pypi.python.org/pypi/packaging Nice, a re-usable library :-) Since I don't see a reason for two implementations, I think the PEP should provide a link to the implementation. Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From ncoghlan at gmail.com Tue Nov 17 08:57:22 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 17 Nov 2015 23:57:22 +1000 Subject: [Distutils] Link to lib missing in PEP: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <564B2EFF.80309@thomas-guettler.de> References: <564ADDDE.7010503@thomas-guettler.de> <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> <564B2EFF.80309@thomas-guettler.de> Message-ID: On 17 November 2015 at 23:43, Thomas G?ttler wrote: > > > Am 17.11.2015 um 13:48 schrieb Donald Stufft: >> >> >>> On Nov 17, 2015, at 6:33 AM, Nathaniel Smith >> > wrote: >>> >>> On Nov 16, 2015 11:57 PM, "Thomas G?ttler" >> > wrote: >>> > >>> > > The job of a dependency is to enable tools like pip [#pip]_ to find >>> > > the right >>> > > package to install. >>> > >>> > My worries: AFAIK pip is not a library. >>> > >>> > I don't want to re-implement code to handle this pep. >>> > >>> > I would like to re-use. >>> > >>> > But AFAIK pip is not a library. >>> > >>> > I am stupid and don't know how to proceed. >>> > >>> > Please tell me what to do. >>> >>> Presumably there will be a dependency parser added to the 'packaging' >>> library, which already exists as a standard >>> place to stick stuff like this, so you'll just use that. (E.g. it's what >>> pip uses for PEP 440 version parsing today.) >>> >>> https://pypi.python.org/pypi/packaging > > Nice, a re-usable library :-) > > Since I don't see a reason for two implementations, I think the PEP should > provide a link to the implementation. 
No, it shouldn't, as the whole point of these PEPs is to get away from implementation defined behaviours. If somebody can't implement their own library from scratch using just the material in the PEP, then the PEP is incomplete. However, it may be worth pointing to "packaging" from https://packaging.python.org/en/latest/current/ for the sake of folks implementing their own tooling (or adapting existing tools). At the moment, I believe the only reference is from the full project listing at https://packaging.python.org/en/latest/projects/#packaging Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Tue Nov 17 08:59:49 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 14:59:49 +0100 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP References: Message-ID: <20151117145949.038695e3@fsol> On Tue, 17 Nov 2015 09:46:21 +1300 Robert Collins wrote: > > URI_reference = > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? > hier_part = ('//' authority path_abempty) | path_absolute | > path_rootless | path_empty > absolute_URI = scheme ':' hier_part ( '?' query )? > relative_ref = relative_part ( '?' query )? ( '#' fragment )? > relative_part = '//' authority path_abempty | path_absolute | > path_noscheme | path_empty > scheme = letter ( letter | digit | '+' | '-' | '.')* > authority = ( userinfo '@' )? host ( ':' port )? > userinfo = ( unreserved | pct_encoded | sub_delims | ':')* > host = IP_literal | IPv4address | reg_name > port = digit* > IP_literal = '[' ( IPv6address | IPvFuture) ']' > IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ > IPv6address = ( > ( h16 ':'){6} ls32 > | '::' ( h16 ':'){5} ls32 > | ( h16 )? '::' ( h16 ':'){4} ls32 > | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 > | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 > | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 > | ( ( h16 ':'){0,4} h16 )? '::' ls32 > | ( ( h16 ':'){0,5} h16 )? '::' h16 > | ( ( h16 ':'){0,6} h16 )? '::' ) It seems weird that the PEP tries to include an entire subgrammar for URIs, including even the parsing various kinds of IP addresses. Why not be lenient in their detection and leave actual definition of valid URIs to the IETF? It doesn't seem there is any point to embed/duplicate such knowledge in Python packaging tools. Regards Antoine. From solipsis at pitrou.net Tue Nov 17 09:04:00 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 15:04:00 +0100 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP References: <564ADDDE.7010503@thomas-guettler.de> Message-ID: <20151117150400.2280d043@fsol> On Tue, 17 Nov 2015 03:33:09 -0800 Nathaniel Smith wrote: > > Presumably there will be a dependency parser added to the 'packaging' > library, which already exists as a standard place to stick stuff like this, > so you'll just use that. (E.g. it's what pip uses for PEP 440 version > parsing today.) > > https://pypi.python.org/pypi/packaging Ah... Is this a different thing than distlib? Does one depend on the other? (this may come to mind: https://www.jwz.org/doc/cadt.html :-)) Regards Antoine. 
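Concretely, the reuse being discussed in this sub-thread looks roughly like the following sketch. The Version and SpecifierSet classes already ship in the packaging library; the Requirement parser is, at this point in the thread, only the pull request linked earlier (pypa/packaging#45), so the exact interface shown for it below is an assumption rather than a released API.

    # Existing packaging functionality: PEP 440 versions and specifier sets.
    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    spec = SpecifierSet(">=1.0,<2.0")
    print(Version("1.4.2") in spec)        # True

    # Proposed addition (pypa/packaging#45): parsing a full dependency
    # specifier; the attribute names here are an assumption.
    from packaging.requirements import Requirement

    req = Requirement('requests[security]>=2.8.1; python_version < "3.5"')
    print(req.name, sorted(req.extras))    # requests ['security']
    print(str(req.specifier))              # >=2.8.1
    print(str(req.marker))                 # python_version < "3.5"

The point for the thread is that none of this requires importing pip itself; a PEP-conformant parser can live in a small reusable library.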
From guettliml at thomas-guettler.de Tue Nov 17 09:09:47 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Tue, 17 Nov 2015 15:09:47 +0100 Subject: [Distutils] Link to lib missing in PEP: FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <564ADDDE.7010503@thomas-guettler.de> <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> <564B2EFF.80309@thomas-guettler.de> Message-ID: <564B352B.9060701@thomas-guettler.de> Am 17.11.2015 um 14:57 schrieb Nick Coghlan: > On 17 November 2015 at 23:43, Thomas G?ttler > wrote: >> >> >> Am 17.11.2015 um 13:48 schrieb Donald Stufft: >>> >>> >>>> On Nov 17, 2015, at 6:33 AM, Nathaniel Smith >>> > wrote: >>>> >>>> On Nov 16, 2015 11:57 PM, "Thomas G?ttler" >>> > wrote: >>>>> >>>>>> The job of a dependency is to enable tools like pip [#pip]_ to find >>>>>> the right >>>>>> package to install. >>>>> >>>>> My worries: AFAIK pip is not a library. >>>>> >>>>> I don't want to re-implement code to handle this pep. >>>>> >>>>> I would like to re-use. >>>>> >>>>> But AFAIK pip is not a library. >>>>> >>>>> I am stupid and don't know how to proceed. >>>>> >>>>> Please tell me what to do. >>>> >>>> Presumably there will be a dependency parser added to the 'packaging' >>>> library, which already exists as a standard >>>> place to stick stuff like this, so you'll just use that. (E.g. it's what >>>> pip uses for PEP 440 version parsing today.) >>>> >>>> https://pypi.python.org/pypi/packaging >> >> Nice, a re-usable library :-) >> >> Since I don't see a reason for two implementations, I think the PEP should >> provide a link to the implementation. > > No, it shouldn't, as the whole point of these PEPs is to get away from > implementation defined behaviours. If somebody can't implement their > own library from scratch using just the material in the PEP, then the > PEP is incomplete. I think you are flying to high. Have you understood what I want? I just want to add **one** sentence to the PEP: "This PEP is implemented in ...." You always have "implementation defined behaviours" since the PEP is not executable. Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From donald at stufft.io Tue Nov 17 09:19:49 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Nov 2015 09:19:49 -0500 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <20151117150400.2280d043@fsol> References: <564ADDDE.7010503@thomas-guettler.de> <20151117150400.2280d043@fsol> Message-ID: <81DEFB3A-DA04-4B23-A4A9-A32FC49F3DE0@stufft.io> > On Nov 17, 2015, at 9:04 AM, Antoine Pitrou wrote: > > On Tue, 17 Nov 2015 03:33:09 -0800 > Nathaniel Smith wrote: >> >> Presumably there will be a dependency parser added to the 'packaging' >> library, which already exists as a standard place to stick stuff like this, >> so you'll just use that. (E.g. it's what pip uses for PEP 440 version >> parsing today.) >> >> https://pypi.python.org/pypi/packaging > > Ah... Is this a different thing than distlib? Does one depend on the > other? > > (this may come to mind: https://www.jwz.org/doc/cadt.html :-)) > It?s different yes. distlib took a direction that I wasn?t happy with, it added a lot of experimental APIs that were not backed by PEPs. I didn?t think that was appropriate for a reference implementation so I created that library which will only contain items backed by PEPs (and any additional items to make it possible to use that PEP in reality, like LegacyVersion). 
All of the new PEP features that pip uses are typically implemented by packaging and then pip uses it. We also use distlib for a few things, but I plan to remove that dependency once we have everything we need in packaging. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Nov 17 09:27:12 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 15:27:12 +0100 Subject: [Distutils] build system abstraction PEP, take #2 References: Message-ID: <20151117152712.2c162dcb@fsol> On Tue, 17 Nov 2015 15:24:04 +1300 Robert Collins wrote: > > The programmatic interface allows decoupling of pip from its current > hard dependency on setuptools [#setuptools]_ able for two > key reasons: > > 1. It enables new build systems that may be much easier to use without > requiring them to even appear to be setuptools. And yet: > There are a number of separate subcommands that build systems must support. I wonder how desirable and viable this all is. Desirable, because you are still asking the build system to appear as setuptools *in some way*. Viable, because pip may some day need to ask more from setuptools and then third-party build tools will have to adapt and implement said command-line options, defeating the "abstraction". In other words, this PEP seems to be only solving a fraction of the problem. > The file ``pypa.json`` acts as neutron configuration file for pip and other > tools that want to build source trees to consult for configuration. What is a "neutron configuration file"? Why is it called "pypa.json" and not the more descriptive "pybuild.json" (or, if you prefer, "pip.json")? "pypa", as far as I know, is the name of an informal group, not a well-known utility, operation or command. > bootstrap_requires > Optional list of dependency specifications [#dependencyspec] that must be > installed before running the build tool. How are the requirements gathered and installed? Using setuptools? Regards Antoine. From donald at stufft.io Tue Nov 17 09:33:56 2015 From: donald at stufft.io (Donald Stufft) Date: Tue, 17 Nov 2015 09:33:56 -0500 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: <20151117152712.2c162dcb@fsol> References: <20151117152712.2c162dcb@fsol> Message-ID: <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> > On Nov 17, 2015, at 9:27 AM, Antoine Pitrou wrote: > >> >> There are a number of separate subcommands that build systems must support. > > I wonder how desirable and viable this all is. Desirable, because you > are still asking the build system to appear as setuptools *in some way*. > Viable, because pip may some day need to ask more from setuptools and > then third-party build tools will have to adapt and implement said > command-line options, defeating the "abstraction". > > In other words, this PEP seems to be only solving a fraction of the > problem. Can you explain this? I don?t see how it?s true. We need some way for pip to invoke the build system no matter what the build system is. Either that API is a Python API or that build system is a CLI based API but either way there needs to be some way for that to happen. This PEP chooses (at my request) a defined CLI API because it makes the delineation between build system and pip cleaner. 
The whole point of this PEP is that once we have it, we can?t just randomly require more from the build tool than what is in the interface defined in this PEP. If we need more than we have to write a new PEP that extends the old interface with a new feature, but at all times it is built on an interface that is standardized via a PEP. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue Nov 17 09:45:47 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 15:45:47 +0100 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> Message-ID: <20151117154547.7e73e88b@fsol> On Tue, 17 Nov 2015 09:33:56 -0500 Donald Stufft wrote: > > > On Nov 17, 2015, at 9:27 AM, Antoine Pitrou wrote: > > > >> > >> There are a number of separate subcommands that build systems must support. > > > > I wonder how desirable and viable this all is. Desirable, because you > > are still asking the build system to appear as setuptools *in some way*. > > Viable, because pip may some day need to ask more from setuptools and > > then third-party build tools will have to adapt and implement said > > command-line options, defeating the "abstraction". > > > > In other words, this PEP seems to be only solving a fraction of the > > problem. > > Can you explain this? I don?t see how it?s true. We need some way for pip > to invoke the build system no matter what the build system is. Either that > API is a Python API or that build system is a CLI based API but either way > there needs to be some way for that to happen. This PEP chooses (at my > request) a defined CLI API because it makes the delineation between build > system and pip cleaner. I may have misunderstood, it seemed to me that "wheel -d" and "develop" are simply setuptools commands christened by the PEP. I tend to think Python APIs are better than CLI APIs, but that probably doesn't make a lot of difference. This assumes of course that potential problems are taken care of (such end-of-line conventions and character encodings on stdin / stdout :-)). The one of thing where a CLI API is clearly inferior is error report, though... > The whole point of this PEP is that once we have it, we can?t just randomly > require more from the build tool than what is in the interface defined in > this PEP. If we need more than we have to write a new PEP that extends the > old interface with a new feature, but at all times it is built on an > interface that is standardized via a PEP. That clears things up, thank you. Regards Antoine. From dholth at gmail.com Tue Nov 17 10:20:22 2015 From: dholth at gmail.com (Daniel Holth) Date: Tue, 17 Nov 2015 15:20:22 +0000 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: <20151117154547.7e73e88b@fsol> References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: LGTM Q: Why is build_command a list? Q: Why isn't the file name venezuelanbeavercheese.json instead of pypa.json? 
On Tue, Nov 17, 2015 at 10:06 AM Antoine Pitrou wrote: > On Tue, 17 Nov 2015 09:33:56 -0500 > Donald Stufft wrote: > > > > > On Nov 17, 2015, at 9:27 AM, Antoine Pitrou > wrote: > > > > > >> > > >> There are a number of separate subcommands that build systems must > support. > > > > > > I wonder how desirable and viable this all is. Desirable, because you > > > are still asking the build system to appear as setuptools *in some > way*. > > > Viable, because pip may some day need to ask more from setuptools and > > > then third-party build tools will have to adapt and implement said > > > command-line options, defeating the "abstraction". > > > > > > In other words, this PEP seems to be only solving a fraction of the > > > problem. > > > > Can you explain this? I don?t see how it?s true. We need some way for pip > > to invoke the build system no matter what the build system is. Either > that > > API is a Python API or that build system is a CLI based API but either > way > > there needs to be some way for that to happen. This PEP chooses (at my > > request) a defined CLI API because it makes the delineation between build > > system and pip cleaner. > > I may have misunderstood, it seemed to me that "wheel -d" and "develop" > are simply setuptools commands christened by the PEP. > > I tend to think Python APIs are better than CLI APIs, but that probably > doesn't make a lot of difference. This assumes of course that > potential problems are taken care of (such end-of-line conventions and > character encodings on stdin / stdout :-)). The one of thing where a > CLI API is clearly inferior is error report, though... > > > The whole point of this PEP is that once we have it, we can?t just > randomly > > require more from the build tool than what is in the interface defined in > > this PEP. If we need more than we have to write a new PEP that extends > the > > old interface with a new feature, but at all times it is built on an > > interface that is standardized via a PEP. > > That clears things up, thank you. > > Regards > > Antoine. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qwcode at gmail.com Tue Nov 17 11:15:30 2015 From: qwcode at gmail.com (Marcus Smith) Date: Tue, 17 Nov 2015 08:15:30 -0800 Subject: [Distutils] New PyPUG Tutorials In-Reply-To: References: Message-ID: yea, I was thinking the same. but we'll see how the initial reorg goes. On Sun, Nov 15, 2015 at 9:32 PM, Nick Coghlan wrote: > On 16 November 2015 at 03:49, Marcus Smith wrote: > >> > >> To have the most success, the writers will certainly need feedback from > >> subject matter experts, so the process will include 2 stages where we > >> specifically ask for feedback from PyPA-Dev and Distutils-Sig: 1) To > >> validate the initial proposal that covers the scope of the changes, and > 2) > >> to review the actual PRs to PyPUG for accuracy, when it's time for > merging. > >> I'll post again with more details as those stages occur. > > > > So, I'm back to post the initial proposal as mentioned above. > > That generally looks good to me, but I think we're going to need to > keep the "Advanced topics" section in one form or another. 
Longer > term, it might be possible to split them out into themed subsections > (as the outline does for pip/easy_install and wheel/egg by moving them > into the history section), but I don't think reorganising them is at > all urgent, so that can be tackled after this initial rearrangement is > done. > > Regards, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From leorochael at gmail.com Tue Nov 17 11:28:59 2015 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Tue, 17 Nov 2015 14:28:59 -0200 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 17 November 2015 at 13:20, Daniel Holth wrote: > LGTM > > Q: Why is build_command a list? > Q: Why isn't the file name venezuelanbeavercheese.json instead of > pypa.json? > Or why not just use a specific key in setup.cfg instead of a pypa.json file? ISTM that this PEP expects to find in pypa.json some keys that are supposed to be entered manually by humans, even though json is a format more easily written by machines than by humans... Regards, Leo On Tue, Nov 17, 2015 at 10:06 AM Antoine Pitrou wrote: > >> On Tue, 17 Nov 2015 09:33:56 -0500 >> Donald Stufft wrote: >> > >> > > On Nov 17, 2015, at 9:27 AM, Antoine Pitrou >> wrote: >> > > >> > >> >> > >> There are a number of separate subcommands that build systems must >> support. >> > > >> > > I wonder how desirable and viable this all is. Desirable, because you >> > > are still asking the build system to appear as setuptools *in some >> way*. >> > > Viable, because pip may some day need to ask more from setuptools and >> > > then third-party build tools will have to adapt and implement said >> > > command-line options, defeating the "abstraction". >> > > >> > > In other words, this PEP seems to be only solving a fraction of the >> > > problem. >> > >> > Can you explain this? I don?t see how it?s true. We need some way for >> pip >> > to invoke the build system no matter what the build system is. Either >> that >> > API is a Python API or that build system is a CLI based API but either >> way >> > there needs to be some way for that to happen. This PEP chooses (at my >> > request) a defined CLI API because it makes the delineation between >> build >> > system and pip cleaner. >> >> I may have misunderstood, it seemed to me that "wheel -d" and "develop" >> are simply setuptools commands christened by the PEP. >> >> I tend to think Python APIs are better than CLI APIs, but that probably >> doesn't make a lot of difference. This assumes of course that >> potential problems are taken care of (such end-of-line conventions and >> character encodings on stdin / stdout :-)). The one of thing where a >> CLI API is clearly inferior is error report, though... >> >> > The whole point of this PEP is that once we have it, we can?t just >> randomly >> > require more from the build tool than what is in the interface defined >> in >> > this PEP. If we need more than we have to write a new PEP that extends >> the >> > old interface with a new feature, but at all times it is built on an >> > interface that is standardized via a PEP. >> >> That clears things up, thank you. >> >> Regards >> >> Antoine. 
>> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue Nov 17 11:40:33 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 05:40:33 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <20151117145949.038695e3@fsol> References: <20151117145949.038695e3@fsol> Message-ID: On 18 November 2015 at 02:59, Antoine Pitrou wrote: > On Tue, 17 Nov 2015 09:46:21 +1300 > Robert Collins wrote: >> >> URI_reference = >> URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? >> hier_part = ('//' authority path_abempty) | path_absolute | >> path_rootless | path_empty >> absolute_URI = scheme ':' hier_part ( '?' query )? >> relative_ref = relative_part ( '?' query )? ( '#' fragment )? >> relative_part = '//' authority path_abempty | path_absolute | >> path_noscheme | path_empty >> scheme = letter ( letter | digit | '+' | '-' | '.')* >> authority = ( userinfo '@' )? host ( ':' port )? >> userinfo = ( unreserved | pct_encoded | sub_delims | ':')* >> host = IP_literal | IPv4address | reg_name >> port = digit* >> IP_literal = '[' ( IPv6address | IPvFuture) ']' >> IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ >> IPv6address = ( >> ( h16 ':'){6} ls32 >> | '::' ( h16 ':'){5} ls32 >> | ( h16 )? '::' ( h16 ':'){4} ls32 >> | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 >> | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 >> | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 >> | ( ( h16 ':'){0,4} h16 )? '::' ls32 >> | ( ( h16 ':'){0,5} h16 )? '::' h16 >> | ( ( h16 ':'){0,6} h16 )? '::' ) > > It seems weird that the PEP tries to include an entire subgrammar for > URIs, including even the parsing various kinds of IP addresses. Why not > be lenient in their detection and leave actual definition of valid URIs > to the IETF? > > It doesn't seem there is any point to embed/duplicate such knowledge in > Python packaging tools. Its included in the complete grammar, otherwise it can't be tested. Note that that the PEP body refers to the IETF document for the definition of URIs. e.g. exactly what you suggest. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From solipsis at pitrou.net Tue Nov 17 11:51:44 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 17:51:44 +0100 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: <20151117175144.020cb1aa@fsol> On Wed, 18 Nov 2015 05:40:33 +1300 Robert Collins wrote: > > Its included in the complete grammar, otherwise it can't be tested. > Note that that the PEP body refers to the IETF document for the > definition of URIs. e.g. exactly what you suggest. What I suggest is that the grammar doesn't try to define URIs at all, and instead includes a simple rule that is a superset of URI matching. It doesn't make sense for Python packaging tools to detect what is a valid URI or not. It's not their job, and the work will probably be replicated by whatever URI-loading library they use anyway (since they will pass it the URI by string, not by individual components). 
The only place where URIs are used seem to be the "urlspec" rule, and probably you can accept any opaque string there. Regards Antoine. From robertc at robertcollins.net Tue Nov 17 12:32:35 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 06:32:35 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <20151117175144.020cb1aa@fsol> References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 18 November 2015 at 05:51, Antoine Pitrou wrote: > On Wed, 18 Nov 2015 05:40:33 +1300 > Robert Collins wrote: >> >> Its included in the complete grammar, otherwise it can't be tested. >> Note that that the PEP body refers to the IETF document for the >> definition of URIs. e.g. exactly what you suggest. > > What I suggest is that the grammar doesn't try to define URIs at all, We don't. We consume the definition the IETF give. > and instead includes a simple rule that is a superset of URI matching. > It doesn't make sense for Python packaging tools to detect what is a > valid URI or not. It's not their job, and the work will probably be > replicated by whatever URI-loading library they use anyway (since they > will pass it the URI by string, not by individual components). > > The only place where URIs are used seem to be the "urlspec" rule, and > probably you can accept any opaque string there. Uhm, why are you making this suggestion? What problem will we solve by using a proxy rule? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Nov 17 12:34:51 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 06:34:51 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: <20151117152712.2c162dcb@fsol> References: <20151117152712.2c162dcb@fsol> Message-ID: On 18 November 2015 at 03:27, Antoine Pitrou wrote: > On Tue, 17 Nov 2015 15:24:04 +1300 > Robert Collins wrote: >> >> The programmatic interface allows decoupling of pip from its current >> hard dependency on setuptools [#setuptools]_ able for two >> key reasons: >> >> 1. It enables new build systems that may be much easier to use without >> requiring them to even appear to be setuptools. > > And yet: > >> There are a number of separate subcommands that build systems must support. > > I wonder how desirable and viable this all is. Desirable, because you > are still asking the build system to appear as setuptools *in some way*. > Viable, because pip may some day need to ask more from setuptools and > then third-party build tools will have to adapt and implement said > command-line options, defeating the "abstraction". > > In other words, this PEP seems to be only solving a fraction of the > problem. > > >> The file ``pypa.json`` acts as neutron configuration file for pip and other >> tools that want to build source trees to consult for configuration. > > What is a "neutron configuration file"? Typo - neutral. > Why is it called "pypa.json" and not the more descriptive > "pybuild.json" (or, if you prefer, "pip.json")? > "pypa", as far as I know, is the name of an informal group, not a > well-known utility, operation or command. I don't care about the name. As I don't want to be the arbiter of a bikeshedding thing, the BDFL delegate can choose: until they do, I'm going to stay with the current name to avoid edit churn. >> bootstrap_requires >> Optional list of dependency specifications [#dependencyspec] that must be >> installed before running the build tool. 
> > How are the requirements gathered and installed? Using setuptools? By whatever tool is consuming the build system. E.g. pip, or perhaps conda. Or some new tool. How they do the install is not an interoperability issue, and thus out of scope for the PEP. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From solipsis at pitrou.net Tue Nov 17 12:35:52 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 18:35:52 +0100 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: <20151117183552.47b6a0b8@fsol> On Wed, 18 Nov 2015 06:32:35 +1300 Robert Collins wrote: > > > > The only place where URIs are used seem to be the "urlspec" rule, and > > probably you can accept any opaque string there. > > Uhm, why are you making this suggestion? What problem will we solve by > using a proxy rule? Making the PEP simpler, making implementations simpler, avoiding bugs in implementations which may otherwise try to implement full URI matching, avoiding having to issue a PEP update whenever the IETF updates the definition. These are the four I can think about, anyway :-) Regards Antoine. From robertc at robertcollins.net Tue Nov 17 12:36:51 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 06:36:51 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 18 November 2015 at 04:20, Daniel Holth wrote: > LGTM > > Q: Why is build_command a list? Because the dependency spec framing we established doesn't describe multiple dependencies in one string (and we chose to write one that can be embedded in different higher layer things specifically to allow wider reuse) - we could define a wrapper here, or, since we have a structured config file, use that structure. It read a bit nicer in YAML, but see JSON under rationale. > Q: Why isn't the file name venezuelanbeavercheese.json instead of pypa.json? :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Nov 17 12:37:32 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 06:37:32 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 18 November 2015 at 05:28, Leonardo Rochael Almeida wrote: > On 17 November 2015 at 13:20, Daniel Holth wrote: >> >> LGTM >> >> Q: Why is build_command a list? >> Q: Why isn't the file name venezuelanbeavercheese.json instead of >> pypa.json? > > > Or why not just use a specific key in setup.cfg instead of a pypa.json file? > ISTM that this PEP expects to find in pypa.json some keys that are supposed > to be entered manually by humans, even though json is a format more easily > written by machines than by humans... I believe this is fully described in the rationale section - if not, let me know which bit didn't make sense and I'll edit it. 
-Rob From robertc at robertcollins.net Tue Nov 17 12:47:53 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 06:47:53 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <20151117183552.47b6a0b8@fsol> References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> <20151117183552.47b6a0b8@fsol> Message-ID: On 18 November 2015 at 06:35, Antoine Pitrou wrote: > On Wed, 18 Nov 2015 06:32:35 +1300 > Robert Collins wrote: >> > >> > The only place where URIs are used seem to be the "urlspec" rule, and >> > probably you can accept any opaque string there. >> >> Uhm, why are you making this suggestion? What problem will we solve by >> using a proxy rule? > > Making the PEP simpler, making implementations simpler, avoiding bugs > in implementations which may otherwise try to implement full URI > matching, avoiding having to issue a PEP update whenever the IETF > updates the definition. > > These are the four I can think about, anyway :-) Taking them in reverse order: We reference the URI standard, so when that changes, our definition changes - we can update the combined grammar, but we don't need to update anything we've defined (unless the URI change is something that invalidates our symbol delimitation, in which case we'd have to anyway). The grammar is a reference, meant to act as a test in the event of disagreement should two implementations differ from each other. The packaging implementation for instance uses pyparsing and thus by necessity a different grammar. Implementations don't have to use any of it - I'm not sure, unless we require that implementations use OMeta, how we're causing (or preventing) bugs by changing the combined grammar. Note that previous PEPs have either had no grammar (an interop issue) or partially defined grammars (and logical issues - see PEP-426 for example). I think it's very important we be able to test well what we're saying should happen. Implementations don't have to use the grammar, they just have to accept the same inputs and interpret them in the same ways. packaging's implementation doesn't use the same grammar, for instance. (It uses pyparsing and a mix of regexes and objects). Since the bit you're complaining about is basically an appendix, I can see that it makes the PEP shorter, but not how it makes it simpler: we still depend on the definition of URI, because we consume URIs - and that's a PEP-440 choice, so changing that is something I'd seek a very strong reason for, since it's accepted. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From solipsis at pitrou.net Tue Nov 17 13:09:50 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 17 Nov 2015 19:09:50 +0100 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> <20151117183552.47b6a0b8@fsol> Message-ID: <20151117190950.73c2784b@fsol> On Wed, 18 Nov 2015 06:47:53 +1300 Robert Collins wrote: > Note that previous PEPs have either had no grammar (an > interop issue) or partially defined grammars (and logical issues - > see PEP-426 for example). I think it's very important we be able to > test well what we're saying should happen. Of course. But only for the things the PEP is useful for: i.e., packaging-related information. Being able to separate valid URIs and invalid URIs is not a useful feature for an implementation of this PEP. 
Yet (correct me if I'm wrong) you seem to claim it's an integral part of being a conformant implementation. > Implementations don't have to use the grammar, they just have to > accept the same inputs It means they have to invoke some logic to reject invalid URIs upfront, even though it's none of their business (since they will later treat the URIs as strings, anyway). > packaging's > implementation doesn't use the same grammar, for instance. (It uses > pyparsing and a mix of regexes and objects). And the URI validation bit is implemented as...? > Since the bit you're complaining about is basically an appendix, I can > see that it makes the PEP shorter, but not how it makes it simpler: we > still depend on the definition of URI, because we consume URI's "We" don't depend on it. Whatever does the URI parsing and loading does - another library certainly, not a packaging-specific library. From a packaging point of view, URIs are plain strings, not something you parse to extract meaningful information. This is called abstraction :-) Regards Antoine. From p.f.moore at gmail.com Tue Nov 17 13:10:26 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Nov 2015 18:10:26 +0000 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 17 November 2015 at 17:32, Robert Collins wrote: >> The only place where URIs are used seem to be the "urlspec" rule, and >> probably you can accept any opaque string there. > > Uhm, why are you making this suggestion? What problem will we solve by > using a proxy rule? I think the point here is syntax vs semantics. It is simpler to parse if we make the *syntax* state that an opaque string is allowed here. The *semantics* can then say that the string is to be handled as a URL, meaning that any string that isn't a valid URL will fail when we try to pass it to urllib or whatever. The only advantage of *parsing* it as a URL is that we get to reject foo::::/bar:baz as a syntax error. But we'd still reject foo:/bar as an invalid URL (unknown protocol) later in the processing, so why bother trying to trap the specific error of "doesn't look like a URL" early? By including the URL syntax, we're mandating that conforming implementations *have* to trap malformed URLs early, and can't defer that validation to the URL library being used to process the URL. Paul From guettliml at thomas-guettler.de Tue Nov 17 13:39:01 2015 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Tue, 17 Nov 2015 19:39:01 +0100 Subject: [Distutils] Link to lib missing in PEP: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <120BF8AB-68E9-4E82-99AE-440433E18F29@ronnypfannschmidt.de> References: <564ADDDE.7010503@thomas-guettler.de> <7EAF8724-43BD-4FE5-8E58-9C797FCDC71D@stufft.io> <564B2EFF.80309@thomas-guettler.de> <564B352B.9060701@thomas-guettler.de> <120BF8AB-68E9-4E82-99AE-440433E18F29@ronnypfannschmidt.de> Message-ID: <564B7445.8040408@thomas-guettler.de> On 17.11.2015 at 15:13, Ronny Pfannschmidt wrote: > That is no acceptable way to answer in particular since the original mail continued with places where to put the link I was asking for a link from the PEP to the implementation. I received a "no". I replied with my opinion. What is not acceptable?
Regards, Thomas Güttler -- http://www.thomas-guettler.de/ From robertc at robertcollins.net Tue Nov 17 13:43:41 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 07:43:41 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 18 November 2015 at 07:10, Paul Moore wrote: > On 17 November 2015 at 17:32, Robert Collins wrote: >>> The only place where URIs are used seem to be the "urlspec" rule, and >>> probably you can accept any opaque string there. >> >> Uhm, why are you making this suggestion? What problem will we solve by >> using a proxy rule? > > I think the point here is syntax vs semantics. It is simpler to parse > if we make the *syntax* state that an opaque string is allowed here. > The *semantics* can then say that the string is to be handled as a > URL, meaning that any string that isn't a valid URL will fail when we > try to pass it to urllib or whatever. > > The only advantage of *parsing* it as a URL is that we get to reject > foo::::/bar:baz as a syntax error. But we'd still reject foo:/bar as > an invalid URL (unknown protocol) later in the processing, so why > bother trying to trap the specific error of "doesn't look like a URL" > early? > > By including the URL syntax, we're mandating that conforming > implementations *have* to trap malformed URLs early, and can't defer > that validation to the URL library being used to process the URL. I don't understand how we're mandating that. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Nov 17 13:53:12 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 07:53:12 +1300 Subject: [Distutils] Pip is not a library was: FINAL DRAFT: Dependency specifier PEP In-Reply-To: <564B0EED.3050404@thomas-guettler.de> References: <564ADDDE.7010503@thomas-guettler.de> <564B0EED.3050404@thomas-guettler.de> Message-ID: On 18 November 2015 at 00:26, Thomas Güttler wrote: .. > I want to parse a single dependency of a python > package. Presumably from Python; in which case you can use the packaging library once the PEP is pronounced on and the code has been added. If you want to do it from another language, you can use the grammar from the PEP to write an implementation. Please do remember though, the *goal* of the PEP process here is to define the interoperation of multiple competing implementations. So it's actually the intent that you be able to write a library function from this specification and have good confidence that it will interoperate with the one in pip - *whether or not that is in / from a library*. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Tue Nov 17 14:53:52 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Nov 2015 19:53:52 +0000 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 17 November 2015 at 18:43, Robert Collins wrote: >> By including the URL syntax, we're mandating that conforming >> implementations *have* to trap malformed URLs early, and can't defer >> that validation to the URL library being used to process the URL. > > I don't understand how we're mandating that. urlspec = '@' wsp* combined with URI_reference = URI = scheme ':' hier_part ('?' query )? ( '#' fragment)?
(etc) implies that conforming parsers have to validate that what follows '@' must conform to the URI definition. So they have to reject @::::: because ::::: is not a valid URI. But why bother? It's extra work, and given that all an implementation will ever do with the URI_reference is pass it to a function that treats it as a URI, and that function will do all the validation you need. I'd argue that the spec can simply say URI_reference = The discussion of how a urlspec is used can point out that the string will be assumed to be a URI. A library that parsed any non-whitespace string as a URI_reference would be just as useful for all practical purposes, and much easier to write (and test!) But it would technically be non-conformant to this PEP. Personally, I don't actually care all that much, as I probably won't ever write a library that implements this spec. The packaging library will be fine for me. But given that the point of writing the interoperability PEPs is to ensure people *can* write alternative implementations, I'm against adding complexity and implementation burden that has no practical benefit. Paul From robertc at robertcollins.net Tue Nov 17 15:02:31 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 09:02:31 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 18 November 2015 at 08:53, Paul Moore wrote: > On 17 November 2015 at 18:43, Robert Collins wrote: >>> By including the URL syntax, we're mandating that conforming >>> implementations *have* to trap malformed URLs early, and can't defer >>> that validation to the URL library being used to process the URL. >> >> I don't understand how we're mandating that. > > urlspec = '@' wsp* > > combined with > > URI_reference = > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? > (etc) > > implies that conforming parsers have to validate that what follows '@' > must conform to the URI definition. So they have to reject @::::: > because ::::: is not a valid URI. But why bother? It's extra work, and > given that all an implementation will ever do with the URI_reference > is pass it to a function that treats it as a URI, and that function > will do all the validation you need. > > I'd argue that the spec can simply say > > URI_reference = > > The discussion of how a urlspec is used can point out that the string > will be assumed to be a URI. > > A library that parsed any non-whitespace string as a URI_reference > would be just as useful for all practical purposes, and much easier to > write (and test!) But it would technically be non-conformant to this > PEP. > > Personally, I don't actually care all that much, as I probably won't > ever write a library that implements this spec. The packaging library > will be fine for me. But given that the point of writing the > interoperability PEPs is to ensure people *can* write alternative > implementations, I'm against adding complexity and implementation > burden that has no practical benefit. I'm still struggling to understand. I see two angles; the first is on what is accepted or not by an implementation: The reference here is not the implementation - its a *reference*. An implementation whose URI handling can't handle std-66 URI's that another ones can would lead to interop issues : and thats what we're trying to avoid. 
An alternative implementation whose URI handling has some extension that means it handles things that other implementations don't would accept everything the PEP mandates but also accept more - leading to interop issues. Some interop issues (e.g. pip handles git+https:// urls, setuptools doesn't) are not covered yet, but thats a pep-440 issue (at least, the way things are split up today) - so I don't want to dive into that. The second is on whether the implementation achieves that acceptance up front in its parsing, or on the backend in its URI library. And I could care way less which way around it does it. We're not defining implementation, but we are defining the language. As I understand it, you and Antoine are saying that the current PEP *does* define implementation because folk can't trust their URI library to error appropriately - and thats the bit I don't understand. Just parse however you want as an author, and cross check against the full grammar here in case of doubt. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From p.f.moore at gmail.com Tue Nov 17 15:50:20 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 17 Nov 2015 20:50:20 +0000 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: On 17 November 2015 at 20:02, Robert Collins wrote: > As I understand it, you and Antoine are saying that the current PEP > *does* define implementation because folk can't trust their URI > library to error appropriately - and thats the bit I don't understand. > Just parse however you want as an author, and cross check against the > full grammar here in case of doubt. What I'm saying is that for syntax, a conforming implementation has to stand alone in its conformance to the spec. If the spec says X is (in)valid, then the implementation should agree. I might want to use the implementation to *check* that some file conforms to the spec, so I may not even be using a URL library. On the other hand, *semantics* are how you use the data - it's perfectly OK to say that the "urlspec" chunk of data is intended to be passed to a URL library. In that case, @::: is valid (it conforms to the syntax) but meaningless. That's a looser spec (and so easier to implement) but just as useful in any practical sense. One last point - AIUI, the implementation being added to the packaging library does full URI parsing. And I doubt anyone claiming to be implementing the spec would feel comfortable not doing so. So I don't follow your idea that people implementing the spec can defer to a URL library for that bit - it's not what's actually happening. Anyway, as I said it's not going to affect me in practice, so I'll leave it here. Ultimately it'll be for the BDFL-delegate for this PEP to decide whether it's worth asking for a change in this area. Paul From robertc at robertcollins.net Tue Nov 17 15:51:31 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 09:51:31 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: Addressed along with some minor quirks and github feedback. diff --git a/dependency-specification.rst b/dependency-specification.rst index a9953ed..8a632ba 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -18,7 +18,7 @@ Abstract This PEP specifies the language used to describe dependencies for packages. 
It draws a border at the edge of describing a single dependency - the different sorts of dependencies and when they should be installed is a higher -level problem. The intent is provide a building block for higher layer +level problem. The intent is to provide a building block for higher layer specifications. The job of a dependency is to enable tools like pip [#pip]_ to find the right @@ -85,7 +85,7 @@ Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: URI is defined in std-66 [#std66]_:: version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' - version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ + version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' | '+' | '!' )+ version_one = version_cmp version wsp* version_many = version_one (wsp* ',' version_one)* versionspec = ( '(' version_many ')' ) | version_many @@ -94,7 +94,7 @@ URI is defined in std-66 [#std66]_:: Environment markers allow making a specification only take effect in some environments:: - marker_op = version_cmp | 'in' | 'not' wsp+ 'in' + marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | '-' | '_' | '*') dquote = '"' @@ -108,10 +108,14 @@ environments:: 'implementation_name' | 'implementation_version' | 'extra' # ONLY when defined by a containing layer ) - marker_var = env_var | python_str - marker_expr = ('(' wsp* marker wsp* ')' - | (marker_var wsp* marker_op wsp* marker_var)) - marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* + marker_var = wsp* (env_var | python_str) + marker_expr = marker_var marker_op marker_var + | wsp* '(' marker wsp* ')' + marker_and = marker_expr wsp* 'and' marker_expr + | marker_expr + marker_or = marker_and wsp* 'or' marker_and + | marker_and + marker = marker_or quoted_marker = ';' wsp* marker Optional components of a distribution may be specified using the extras @@ -304,7 +308,7 @@ The ``implementation_version`` marker variable is derived from Backwards Compatibility ======================= -Most of this PEP is already widely deployed and thus offers no compatibiltiy +Most of this PEP is already widely deployed and thus offers no compatibility concerns. There are however a few points where the PEP differs from the deployed base. @@ -355,7 +359,7 @@ The complete parsley grammar:: version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 versionspec = ('(' version_many:v ')' ->v) | version_many urlspec = '@' wsp* - marker_op = version_cmp | 'in' | 'not' wsp+ 'in' + marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') python_str_c = (wsp | letter | digit | '(' | ')' | '.' 
| '{' | '}' | '-' | '_' | '*' | '#') dquote = '"' @@ -369,12 +373,14 @@ The complete parsley grammar:: 'implementation_name' | 'implementation_version' | 'extra' # ONLY when defined by a containing layer ):varname -> lookup(varname) - marker_var = env_var | python_str - marker_expr = (("(" wsp* marker:m wsp* ")" -> m) - | ((marker_var:l wsp* marker_op:o wsp* marker_var:r)) - -> (l, o, r)) - marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* - marker_expr:r -> (o, r))*:ms -> (m, ms)) + marker_var = wsp* (env_var | python_str) + marker_expr = marker_var:l marker_op:o marker_var:r -> (o, l, r) + | wsp* '(' marker:m wsp* ')' -> m + marker_and = marker_expr:l wsp* 'and' marker_expr:r -> ('and', l, r) + | marker_expr:m -> m + marker_or = marker_and:l wsp* 'or' marker_and:r -> ('or', l, r) + | marker_and:m -> m + marker = marker_or quoted_marker = ';' wsp* marker identifier = =3,<2", "name [fred,bar] @ http://foo.com ; python_version=='2.7'", "name[quux, strange];python_version<'2.7' and platform_version=='2'", - "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", - "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", + "name; os_name=='a' or os_name=='b'", + # Should parse as (a and b) or c + "name; os_name=='a' and os_name=='b' or os_name=='c'", + # Overriding precedence -> a and (b or c) + "name; os_name=='a' and (os_name=='b' or os_name=='c')", + # should parse as a or (b and c) + "name; os_name=='a' or os_name=='b' and os_name=='c'", + # Overriding precedence -> (a or b) and c + "name; (os_name=='a' or os_name=='b') and os_name=='c'", ] def format_full_version(info): @@ -502,8 +515,8 @@ A test program - if the grammar is in a string ``grammar``:: compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) for test in tests: - parsed = compiled(test).specification() - print(parsed) + parsed = compiled(test).specification() + print("%s -> %s" % (test, parsed)) References ========== On 17 November 2015 at 16:59, Robert Collins wrote: > On 17 November 2015 at 16:37, Nathaniel Smith wrote: > > Blah. Shall address. > > -Rob > >> >> -- >> Nathaniel J. Smith -- http://vorpus.org > > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Tue Nov 17 16:01:01 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 10:01:01 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: On 18 November 2015 at 04:20, Daniel Holth wrote: > LGTM > > Q: Why is build_command a list? I misread the question - fixed. diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst index 8eb0681..d36b7d5 100644 --- a/build-system-abstraction.rst +++ b/build-system-abstraction.rst @@ -91,7 +91,7 @@ build_command A mandatory Python format string [#strformat]_ describing the command to run. For instance, if using flit then the build command might be:: - build_command: ["flit"] + build_command: "flit" If using a command which is a runnable module fred:: @@ -254,7 +254,7 @@ Examples An example 'pypa.json' for using flit:: {"bootstrap_requires": ["flit"], - "build_command": ["flit"]} + "build_command": "flit"} When 'pip' reads this it would prepare an environment with flit in it before trying to use flit. 
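As a rough sketch of that flow (this is not the PEP's interface: the helper name is invented, the isolation step is skipped, and whatever subcommand or arguments a front end passes to the build tool are left out), a consumer of pypa.json might do something like::

    import json
    import shlex
    import subprocess
    import sys

    def build_with_declared_build_system(source_dir):
        # Read the build system description shipped in the source tree.
        with open(source_dir + "/pypa.json") as f:
            config = json.load(f)

        # Each bootstrap requirement is a single dependency specifier, so it
        # can be handed straight to an installer. A real front end would
        # install these into an isolated build environment rather than into
        # the running one.
        for requirement in config.get("bootstrap_requires", []):
            subprocess.check_call(
                [sys.executable, "-m", "pip", "install", requirement])

        # build_command names the tool to run; the subcommands and arguments
        # the front end appends are whatever the abstraction PEP specifies,
        # so none are shown here.
        subprocess.check_call(shlex.split(config["build_command"]),
                              cwd=source_dir)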
-- Robert Collins Distinguished Technologist HP Converged Cloud From rdmurray at bitdance.com Tue Nov 17 16:22:03 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 17 Nov 2015 16:22:03 -0500 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> Message-ID: <20151117212205.1DB76B14089@webabinitio.net> On Wed, 18 Nov 2015 09:02:31 +1300, Robert Collins wrote: > On 18 November 2015 at 08:53, Paul Moore wrote: > > On 17 November 2015 at 18:43, Robert Collins wrote: > >>> By including the URL syntax, we're mandating that conforming > >>> implementations *have* to trap malformed URLs early, and can't defer > >>> that validation to the URL library being used to process the URL. > >> > >> I don't understand how we're mandating that. > > > > urlspec = '@' wsp* > > > > combined with > > > > URI_reference = > > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? > > (etc) > > > > implies that conforming parsers have to validate that what follows '@' > > must conform to the URI definition. So they have to reject @::::: > > because ::::: is not a valid URI. But why bother? It's extra work, and > > given that all an implementation will ever do with the URI_reference > > is pass it to a function that treats it as a URI, and that function > > will do all the validation you need. > > > > I'd argue that the spec can simply say > > > > URI_reference = > > > > The discussion of how a urlspec is used can point out that the string > > will be assumed to be a URI. > > > > A library that parsed any non-whitespace string as a URI_reference > > would be just as useful for all practical purposes, and much easier to > > write (and test!) But it would technically be non-conformant to this > > PEP. > > > > Personally, I don't actually care all that much, as I probably won't > > ever write a library that implements this spec. The packaging library > > will be fine for me. But given that the point of writing the > > interoperability PEPs is to ensure people *can* write alternative > > implementations, I'm against adding complexity and implementation > > burden that has no practical benefit. > > > I'm still struggling to understand. > > I see two angles; the first is on what is accepted or not by an implementation: > The reference here is not the implementation - its a *reference*. An > implementation whose URI handling can't handle std-66 URI's that > another ones can would lead to interop issues : and thats what we're > trying to avoid. An alternative implementation whose URI handling has > some extension that means it handles things that other implementations > don't would accept everything the PEP mandates but also accept more - > leading to interop issues. Some interop issues (e.g. pip handles > git+https:// urls, setuptools doesn't) are not covered yet, but thats > a pep-440 issue (at least, the way things are split up today) - so I > don't want to dive into that. > > The second is on whether the implementation achieves that acceptance > up front in its parsing, or on the backend in its URI library. And I > could care way less which way around it does it. We're not defining > implementation, but we are defining the language. > > As I understand it, you and Antoine are saying that the current PEP > *does* define implementation because folk can't trust their URI > library to error appropriately - and thats the bit I don't understand. 
> Just parse however you want as an author, and cross check against the > full grammar here in case of doubt. OK, so it *is* the case that the PEP is mandating that a conforming implementation has to accept valid and reject invalid URLs according to the grammar in the PEP, but not *how* or *when* it does that (the implementation). So "trap malformed URLs early" is false, but "trap malformed URLs" is true, if you want to be a conformant implementation. --David From robertc at robertcollins.net Tue Nov 17 16:29:25 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 10:29:25 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <20151117212205.1DB76B14089@webabinitio.net> References: <20151117145949.038695e3@fsol> <20151117175144.020cb1aa@fsol> <20151117212205.1DB76B14089@webabinitio.net> Message-ID: On 18 November 2015 at 10:22, R. David Murray wrote: ... >> As I understand it, you and Antoine are saying that the current PEP >> *does* define implementation because folk can't trust their URI >> library to error appropriately - and thats the bit I don't understand. >> Just parse however you want as an author, and cross check against the >> full grammar here in case of doubt. > > OK, so it *is* the case that the PEP is mandating that a conforming > implementation has to accept valid and reject invalid URLs according > to the grammar in the PEP, but not *how* or *when* it does that (the > implementation). So "trap malformed URLs early" is false, but "trap > malformed URLs" is true, if you want to be a conformant implementation. Yes. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From erik.m.bray at gmail.com Tue Nov 17 18:13:41 2015 From: erik.m.bray at gmail.com (Erik Bray) Date: Tue, 17 Nov 2015 18:13:41 -0500 Subject: [Distutils] Packaging shared objects with setuptools In-Reply-To: References: Message-ID: On Fri, Oct 30, 2015 at 12:47 PM, Mario Pezzoni wrote: > Hello, > > I am wrapping a c++ library with cython. I compile the pyxs and the c++ code > with cmake using the cython-cmake-example. > If I import the package from the build directory it works flawlessly, if I > install through "python setup.py install" into a virtual environment it > breaks because the shared objects are not installed (but all the pure .py > modules are installed). > > How do I tell setuptools to install the shared objects? For starters, what does your setup.py look like? You should make sure all extension modules are registered with setup() as described here: https://docs.python.org/2/distutils/examples.html#single-extension-module I'm not sure what cython-cmake-example is though, or how whatever it does would interact with distutils. Erik From robertc at robertcollins.net Tue Nov 17 18:21:03 2015 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 18 Nov 2015 12:21:03 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <6B4D7AAE-7B53-42BC-AB1B-0C1B6B259569@stufft.io> Message-ID: Another hopefully last tweak - adding in more punctuation to the strings in markers. I'm still holding off of defining \ escapes for now. -Rob diff --git a/dependency-specification.rst b/dependency-specification.rst index 8a632ba..b115b66 100644 --- a/dependency-specification.rst +++ b/dependency-specification.rst @@ -96,7 +96,9 @@ environments:: marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') python_str_c = (wsp | letter | digit | '(' | ')' | '.' 
| '{' | '}' | - '-' | '_' | '*') + '-' | '_' | '*' | '#' | ':' | ';' | ',' | '/' | '?' | + '[' | ']' | '!' | '~' | '`' | '@' | '$' | '%' | '^' | + '&' | '=' | '+' | '|' | '<' | '>' ) dquote = '"' squote = '\\'' python_str = (squote (python_str_c | dquote)* squote | @@ -361,7 +363,9 @@ The complete parsley grammar:: urlspec = '@' wsp* marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | - '-' | '_' | '*' | '#') + '-' | '_' | '*' | '#' | ':' | ';' | ',' | '/' | '?' | + '[' | ']' | '!' | '~' | '`' | '@' | '$' | '%' | '^' | + '&' | '=' | '+' | '|' | '<' | '>' ) dquote = '"' squote = '\\'' python_str = (squote <(python_str_c | dquote)*>:s squote | On 18 November 2015 at 09:51, Robert Collins wrote: > Addressed along with some minor quirks and github feedback. > > diff --git a/dependency-specification.rst b/dependency-specification.rst > index a9953ed..8a632ba 100644 > --- a/dependency-specification.rst > +++ b/dependency-specification.rst > @@ -18,7 +18,7 @@ Abstract > This PEP specifies the language used to describe dependencies for packages. > It draws a border at the edge of describing a single dependency - the > different sorts of dependencies and when they should be installed is a higher > -level problem. The intent is provide a building block for higher layer > +level problem. The intent is to provide a building block for higher layer > specifications. > > The job of a dependency is to enable tools like pip [#pip]_ to find the right > @@ -85,7 +85,7 @@ Versions may be specified according to the PEP-440 > [#pep440]_ rules. (Note: > URI is defined in std-66 [#std66]_:: > > version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' > - version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ > + version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' | '+' | '!' )+ > version_one = version_cmp version wsp* > version_many = version_one (wsp* ',' version_one)* > versionspec = ( '(' version_many ')' ) | version_many > @@ -94,7 +94,7 @@ URI is defined in std-66 [#std66]_:: > Environment markers allow making a specification only take effect in some > environments:: > > - marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > + marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*') > dquote = '"' > @@ -108,10 +108,14 @@ environments:: > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ) > - marker_var = env_var | python_str > - marker_expr = ('(' wsp* marker wsp* ')' > - | (marker_var wsp* marker_op wsp* marker_var)) > - marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* > + marker_var = wsp* (env_var | python_str) > + marker_expr = marker_var marker_op marker_var > + | wsp* '(' marker wsp* ')' > + marker_and = marker_expr wsp* 'and' marker_expr > + | marker_expr > + marker_or = marker_and wsp* 'or' marker_and > + | marker_and > + marker = marker_or > quoted_marker = ';' wsp* marker > > Optional components of a distribution may be specified using the extras > @@ -304,7 +308,7 @@ The ``implementation_version`` marker variable is > derived from > Backwards Compatibility > ======================= > > -Most of this PEP is already widely deployed and thus offers no compatibiltiy > +Most of this PEP is already widely deployed and thus offers no compatibility > concerns. > > There are however a few points where the PEP differs from the deployed base. 
> @@ -355,7 +359,7 @@ The complete parsley grammar:: > version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 > versionspec = ('(' version_many:v ')' ->v) | version_many > urlspec = '@' wsp* > - marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > + marker_op = version_cmp | (wsp* 'in') | (wsp* 'not' wsp+ 'in') > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*' | '#') > dquote = '"' > @@ -369,12 +373,14 @@ The complete parsley grammar:: > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ):varname -> lookup(varname) > - marker_var = env_var | python_str > - marker_expr = (("(" wsp* marker:m wsp* ")" -> m) > - | ((marker_var:l wsp* marker_op:o wsp* marker_var:r)) > - -> (l, o, r)) > - marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* > - marker_expr:r -> (o, r))*:ms -> (m, ms)) > + marker_var = wsp* (env_var | python_str) > + marker_expr = marker_var:l marker_op:o marker_var:r -> (o, l, r) > + | wsp* '(' marker:m wsp* ')' -> m > + marker_and = marker_expr:l wsp* 'and' marker_expr:r -> ('and', l, r) > + | marker_expr:m -> m > + marker_or = marker_and:l wsp* 'or' marker_and:r -> ('or', l, r) > + | marker_and:m -> m > + marker = marker_or > quoted_marker = ';' wsp* marker > identifier = letterOrDigit | > @@ -469,8 +475,15 @@ A test program - if the grammar is in a string > ``grammar``:: > "name>=3,<2", > "name [fred,bar] @ http://foo.com ; python_version=='2.7'", > "name[quux, strange];python_version<'2.7' and platform_version=='2'", > - "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", > - "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", > + "name; os_name=='a' or os_name=='b'", > + # Should parse as (a and b) or c > + "name; os_name=='a' and os_name=='b' or os_name=='c'", > + # Overriding precedence -> a and (b or c) > + "name; os_name=='a' and (os_name=='b' or os_name=='c')", > + # should parse as a or (b and c) > + "name; os_name=='a' or os_name=='b' and os_name=='c'", > + # Overriding precedence -> (a or b) and c > + "name; (os_name=='a' or os_name=='b') and os_name=='c'", > ] > > def format_full_version(info): > @@ -502,8 +515,8 @@ A test program - if the grammar is in a string ``grammar``:: > > compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) > for test in tests: > - parsed = compiled(test).specification() > - print(parsed) > + parsed = compiled(test).specification() > + print("%s -> %s" % (test, parsed)) > > References > ========== > > On 17 November 2015 at 16:59, Robert Collins wrote: >> On 17 November 2015 at 16:37, Nathaniel Smith wrote: >> >> Blah. Shall address. >> >> -Rob >> >>> >>> -- >>> Nathaniel J. Smith -- http://vorpus.org >> >> >> >> -- >> Robert Collins >> Distinguished Technologist >> HP Converged Cloud > > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud -- Robert Collins Distinguished Technologist HP Converged Cloud From robertc at robertcollins.net Wed Nov 18 13:24:26 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 19 Nov 2015 07:24:26 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: I didn't realise PEP 440 refered to 426, though the reference is weak, its enumerating one valid sort of content to refer to (urls that are valid as source urls). -Rob On 19 November 2015 at 05:44, Marcus Smith wrote: > as it is, this PEP defers the concept of a "Direct Reference URL" to PEP440. 
> > but then PEP440 partially defers to PEP426's "source_url" concept, when it > says "a direct URL reference may be a valid source_url entry" > > do we expect PEP440 to be updated to fully own what a "Direct Reference URL" > can be?, since referring to PEP426 is now a dead-end path (and partially > replaced by this PEP) > > On Mon, Nov 16, 2015 at 12:46 PM, Robert Collins > wrote: >> >> :PEP: XX >> :Title: Dependency specification for Python Software Packages >> :Version: $Revision$ >> :Last-Modified: $Date$ >> :Author: Robert Collins >> :BDFL-Delegate: Donald Stufft >> :Discussions-To: distutils-sig >> :Status: Draft >> :Type: Standards Track >> :Content-Type: text/x-rst >> :Created: 11-Nov-2015 >> :Post-History: XX >> >> >> Abstract >> ======== >> >> This PEP specifies the language used to describe dependencies for >> packages. >> It draws a border at the edge of describing a single dependency - the >> different sorts of dependencies and when they should be installed is a >> higher >> level problem. The intent is provide a building block for higher layer >> specifications. >> >> The job of a dependency is to enable tools like pip [#pip]_ to find the >> right >> package to install. Sometimes this is very loose - just specifying a name, >> and >> sometimes very specific - referring to a specific file to install. >> Sometimes >> dependencies are only relevant in one platform, or only some versions are >> acceptable, so the language permits describing all these cases. >> >> The language defined is a compact line based format which is already in >> widespread use in pip requirements files, though we do not specify the >> command >> line option handling that those files permit. There is one caveat - the >> URL reference form, specified in PEP-440 [#pep440]_ is not actually >> implemented in pip, but since PEP-440 is accepted, we use that format >> rather >> than pip's current native format. >> >> Motivation >> ========== >> >> Any specification in the Python packaging ecosystem that needs to consume >> lists of dependencies needs to build on an approved PEP for such, but >> PEP-426 [#pep426]_ is mostly aspirational - and there are already existing >> implementations of the dependency specification which we can instead >> adopt. >> The existing implementations are battle proven and user friendly, so >> adopting >> them is arguably much better than approving an aspirational, unconsumed, >> format. >> >> Specification >> ============= >> >> Examples >> -------- >> >> All features of the language shown with a name based lookup:: >> >> requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < >> "2.7.10" >> >> A minimal URL based lookup:: >> >> pip @ >> https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 >> >> Concepts >> -------- >> >> A dependency specification always specifies a distribution name. It may >> include extras, which expand the dependencies of the named distribution to >> enable optional features. The version installed can be controlled using >> version limits, or giving the URL to a specific artifact to install. >> Finally >> the dependency can be made conditional using environment markers. >> >> Grammar >> ------- >> >> We first cover the grammar briefly and then drill into the semantics of >> each >> section later. >> >> A distribution specification is written in ASCII text. We use a parsley >> [#parsley]_ grammar to provide a precise grammar. 
It is expected that the >> specification will be embedded into a larger system which offers framing >> such >> as comments, multiple line support via continuations, or other such >> features. >> >> The full grammar including annotations to build a useful parse tree is >> included at the end of the PEP. >> >> Versions may be specified according to the PEP-440 [#pep440]_ rules. >> (Note: >> URI is defined in std-66 [#std66]_:: >> >> version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | >> '===' >> version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ >> version_one = version_cmp version wsp* >> version_many = version_one (wsp* ',' version_one)* >> versionspec = ( '(' version_many ')' ) | version_many >> urlspec = '@' wsp* >> >> Environment markers allow making a specification only take effect in some >> environments:: >> >> marker_op = version_cmp | 'in' | 'not' wsp+ 'in' >> python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | >> '-' | '_' | '*') >> dquote = '"' >> squote = '\\'' >> python_str = (squote (python_str_c | dquote)* squote | >> dquote (python_str_c | squote)* dquote) >> env_var = ('python_version' | 'python_full_version' | >> 'os_name' | 'sys_platform' | 'platform_release' | >> 'platform_system' | 'platform_version' | >> 'platform_machine' | 'python_implementation' | >> 'implementation_name' | 'implementation_version' | >> 'extra' # ONLY when defined by a containing layer >> ) >> marker_var = env_var | python_str >> marker_expr = ('(' wsp* marker wsp* ')' >> | (marker_var wsp* marker_op wsp* marker_var)) >> marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* >> marker_expr)* >> quoted_marker = ';' wsp* marker >> >> Optional components of a distribution may be specified using the extras >> field:: >> >> identifier = letterOrDigit ( >> letterOrDigit | >> (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) >> )* >> name = identifier >> extras_list = identifier (wsp* ',' wsp* identifier)* >> extras = '[' wsp* extras_list? wsp* ']' >> >> Giving us a rule for name based requirements:: >> >> name_req = name wsp* extras? wsp* versionspec? wsp* >> quoted_marker? >> >> And a rule for direct reference specifications:: >> >> url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? >> >> Leading to the unified rule that can specify a dependency.:: >> >> specification = wsp* ( url_req | name_req ) wsp* >> >> Whitespace >> ---------- >> >> Non line-breaking whitespace is mostly optional with no semantic meaning. >> The >> sole exception is detecting the end of a URL requirement. >> >> Names >> ----- >> >> Python distribution names are currently defined in PEP-345 [#pep345]_. >> Names >> act as the primary identifier for distributions. They are present in all >> dependency specifications, and are sufficient to be a specification on >> their >> own. However, PyPI places strict restrictions on names - they must match a >> case insensitive regex or they won't be accepted. Accordingly in this PEP >> we >> limit the acceptable values for identifiers to that regex. A full >> redefinition >> of name may take place in a future metadata PEP:: >> >> ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ >> >> Extras >> ------ >> >> An extra is an optional part of a distribution. Distributions can specify >> as >> many extras as they wish, and each extra results in the declaration of >> additional dependencies of the distribution **when** the extra is used in >> a >> dependency specification. 
For instance:: >> >> requests[security] >> >> Extras union in the dependencies they define with the dependencies of the >> distribution they are attached to. The example above would result in >> requests >> being installed, and requests own dependencies, and also any dependencies >> that >> are listed in the "security" extra of requests. >> >> If multiple extras are listed, all the dependencies are unioned together. >> >> Versions >> -------- >> >> See PEP-440 [#pep440]_ for more detail on both version numbers and version >> comparisons. Version specifications limit the versions of a distribution >> that >> can be used. They only apply to distributions looked up by name, rather >> than >> via a URL. Version comparison are also used in the markers feature. The >> optional brackets around a version are present for compatibility with >> PEP-345 >> [#pep345]_ but should not be generated, only accepted. >> >> Environment Markers >> ------------------- >> >> Environment markers allow a dependency specification to provide a rule >> that >> describes when the dependency should be used. For instance, consider a >> package >> that needs argparse. In Python 2.7 argparse is always present. On older >> Python >> versions it has to be installed as a dependency. This can be expressed as >> so:: >> >> argparse;python_version<"2.7" >> >> A marker expression evalutes to either True or False. When it evaluates to >> False, the dependency specification should be ignored. >> >> The marker language is a subset of Python itself, chosen for the ability >> to >> safely evaluate it without running arbitrary code that could become a >> security >> vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This >> PEP >> fixes some issues that were observed in the design described in PEP-426 >> [#pep426]_. >> >> Comparisons in marker expressions are typed by the comparison operator. >> The >> operators that are not in perform the same as >> they >> do for strings in Python. The operators use the PEP-440 >> [#pep440]_ version comparison rules when those are defined (that is when >> both >> sides have a valid version specifier). If there is no defined PEP-440 >> behaviour and the operator exists in Python, then the operator falls back >> to >> the Python behaviour. Otherwise an error should be raised. e.g. the >> following >> will result in errors:: >> >> "dog" ~= "fred" >> python_version ~= "surprise" >> >> User supplied constants are always encoded as strings with either ``'`` or >> ``"`` quote marks. Note that backslash escapes are not defined, but >> existing >> implementations do support them. They are not included in this >> specification because they add complexity and there is no observable need >> for >> them today. Similarly we do not define non-ASCII character support: all >> the >> runtime variables we are referencing are expected to be ASCII-only. >> >> The variables in the marker grammar such as "os_name" resolve to values >> looked >> up in the Python runtime. With the exception of "extra" all values are >> defined >> on all Python versions today - it is an error in the implementation of >> markers >> if a value is not defined. >> >> Unknown variables must raise an error rather than resulting in a >> comparison >> that evaluates to True or False. >> >> Variables whose value cannot be calculated on a given Python >> implementation >> should evaluate to ``0`` for versions, and an empty string for all other >> variables. >> >> The "extra" variable is special. 
It is used by wheels to signal which >> specifications apply to a given extra in the wheel ``METADATA`` file, but >> since the ``METADATA`` file is based on a draft version of PEP-426, there >> is >> no current specification for this. Regardless, outside of a context where >> this >> special handling is taking place, the "extra" variable should result in an >> error like all other unknown variables. >> >> .. list-table:: >> :header-rows: 1 >> >> * - Marker >> - Python equivalent >> - Sample values >> * - ``os_name`` >> - ``os.name`` >> - ``posix``, ``java`` >> * - ``sys_platform`` >> - ``sys.platform`` >> - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that >> "linux" >> is from Python3 and "linux2" from Python2) >> * - ``platform_machine`` >> - ``platform.machine()`` >> - ``x86_64`` >> * - ``python_implementation`` >> - ``platform.python_implementation()`` >> - ``CPython``, ``Jython`` >> * - ``platform_release`` >> - ``platform.release()`` >> - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` >> * - ``platform_system`` >> - ``platform.system()`` >> - ``Linux``, ``Windows``, ``Java`` >> * - ``platform_version`` >> - ``platform.version()`` >> - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` >> ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle >> Corporation`` >> ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; >> root:xnu-2782.40.9~2/RELEASE_X86_64`` >> * - ``python_version`` >> - ``platform.python_version()[:3]`` >> - ``3.4``, ``2.7`` >> * - ``python_full_version`` >> - ``platform.python_version()`` >> - ``3.4.0``, ``3.5.0b1`` >> * - ``implementation_name`` >> - ``sys.implementation.name`` >> - ``cpython`` >> * - ``implementation_version`` >> - see definition below >> - ``3.4.0``, ``3.5.0b1`` >> * - ``extra`` >> - An error except when defined by the context interpreting the >> specification. >> - ``test`` >> >> The ``implementation_version`` marker variable is derived from >> ``sys.implementation.version``:: >> >> def format_full_version(info): >> version = '{0.major}.{0.minor}.{0.micro}'.format(info) >> kind = info.releaselevel >> if kind != 'final': >> version += kind[0] + str(info.serial) >> return version >> >> if hasattr(sys, 'implementation'): >> implementation_version = >> format_full_version(sys.implementation.version) >> else: >> implementation_version = "0" >> >> Backwards Compatibility >> ======================= >> >> Most of this PEP is already widely deployed and thus offers no >> compatibiltiy >> concerns. >> >> There are however a few points where the PEP differs from the deployed >> base. >> >> Firstly, PEP-440 direct references haven't actually been deployed in the >> wild, >> but they were designed to be compatibly added, and there are no known >> obstacles to adding them to pip or other tools that consume the existing >> dependency metadata in distributions - particularly since they won't be >> permitted to be present in PyPI uploaded distributions anyway. >> >> Secondly, PEP-426 markers which have had some reasonable deployment, >> particularly in wheels and pip, will handle version comparisons with >> ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is >> less >> than "2.7.9". This backward incompatibility is deliberate. We are also >> defining new operators - "~=" and "===", and new variables - >> ``platform_release``, ``platform_system``, ``implementation_name``, and >> ``implementation_version`` which are not present in older marker >> implementations. The variables will error on those implementations. 
Users >> of >> both features will need to make a judgement as to when support has become >> sufficiently widespread in the ecosystem that using them will not cause >> compatibility issues. >> >> Thirdly, PEP-345 required brackets around version specifiers. In order to >> accept PEP-345 dependency specifications, brackets are accepted, but they >> should not be generated. >> >> Rationale >> ========= >> >> In order to move forward with any new PEPs that depend on environment >> markers, >> we needed a specification that included them in their modern form. This >> PEP >> brings together all the currently unspecified components into a specified >> form. >> >> The requirement specifier was adopted from the EBNF in the setuptools >> pkg_resources documentation, since we wish to avoid depending on a >> defacto, vs >> PEP specified, standard. >> >> Complete Grammar >> ================ >> >> The complete parsley grammar:: >> >> wsp = ' ' | '\t' >> version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | >> '==='> >> version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | >> '+' | '!' )+> >> version_one = version_cmp:op version:v wsp* -> (op, v) >> version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 >> versionspec = ('(' version_many:v ')' ->v) | version_many >> urlspec = '@' wsp* >> marker_op = version_cmp | 'in' | 'not' wsp+ 'in' >> python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | >> '-' | '_' | '*' | '#') >> dquote = '"' >> squote = '\\'' >> python_str = (squote <(python_str_c | dquote)*>:s squote | >> dquote <(python_str_c | squote)*>:s dquote) -> s >> env_var = ('python_version' | 'python_full_version' | >> 'os_name' | 'sys_platform' | 'platform_release' | >> 'platform_system' | 'platform_version' | >> 'platform_machine' | 'python_implementation' | >> 'implementation_name' | 'implementation_version' | >> 'extra' # ONLY when defined by a containing layer >> ):varname -> lookup(varname) >> marker_var = env_var | python_str >> marker_expr = (("(" wsp* marker:m wsp* ")" -> m) >> | ((marker_var:l wsp* marker_op:o wsp* >> marker_var:r)) >> -> (l, o, r)) >> marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* >> marker_expr:r -> (o, r))*:ms -> (m, ms)) >> quoted_marker = ';' wsp* marker >> identifier = > letterOrDigit | >> (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) >> )*> >> name = identifier >> extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + >> ids >> extras = '[' wsp* extras_list?:e wsp* ']' -> e >> name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* >> quoted_marker?:m >> -> (n, e or [], v or [], m)) >> url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ >> quoted_marker?:m >> -> (n, e or [], v, m)) >> specification = wsp* ( url_req | name_req ):s wsp* -> s >> # The result is a tuple - name, list-of-extras, >> # list-of-version-constraints-or-a-url, marker-ast or None >> >> >> URI_reference = >> URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? >> hier_part = ('//' authority path_abempty) | path_absolute | >> path_rootless | path_empty >> absolute_URI = scheme ':' hier_part ( '?' query )? >> relative_ref = relative_part ( '?' query )? ( '#' fragment )? >> relative_part = '//' authority path_abempty | path_absolute | >> path_noscheme | path_empty >> scheme = letter ( letter | digit | '+' | '-' | '.')* >> authority = ( userinfo '@' )? host ( ':' port )? 
>> userinfo = ( unreserved | pct_encoded | sub_delims | ':')* >> host = IP_literal | IPv4address | reg_name >> port = digit* >> IP_literal = '[' ( IPv6address | IPvFuture) ']' >> IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ >> IPv6address = ( >> ( h16 ':'){6} ls32 >> | '::' ( h16 ':'){5} ls32 >> | ( h16 )? '::' ( h16 ':'){4} ls32 >> | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 >> | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 >> | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 >> | ( ( h16 ':'){0,4} h16 )? '::' ls32 >> | ( ( h16 ':'){0,5} h16 )? '::' h16 >> | ( ( h16 ':'){0,6} h16 )? '::' ) >> h16 = hexdig{1,4} >> ls32 = ( h16 ':' h16) | IPv4address >> IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' Dec_octet >> nz = ~'0' digit >> dec_octet = ( >> digit # 0-9 >> | nz digit # 10-99 >> | '1' digit{2} # 100-199 >> | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 >> | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# >> %250-255 >> reg_name = ( unreserved | pct_encoded | sub_delims)* >> path = ( >> path_abempty # begins with '/' or is empty >> | path_absolute # begins with '/' but not '//' >> | path_noscheme # begins with a non-colon segment >> | path_rootless # begins with a segment >> | path_empty ) # zero characters >> path_abempty = ( '/' segment)* >> path_absolute = '/' ( segment_nz ( '/' segment)* )? >> path_noscheme = segment_nz_nc ( '/' segment)* >> path_rootless = segment_nz ( '/' segment)* >> path_empty = pchar{0} >> segment = pchar* >> segment_nz = pchar+ >> segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ >> # non-zero-length segment without any colon ':' >> pchar = unreserved | pct_encoded | sub_delims | ':' | '@' >> query = ( pchar | '/' | '?')* >> fragment = ( pchar | '/' | '?')* >> pct_encoded = '%' hexdig >> unreserved = letter | digit | '-' | '.' | '_' | '~' >> reserved = gen_delims | sub_delims >> gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' >> sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | >> ',' | ';' | '=' >> hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | >> 'D' | 'e' | 'E' | 'f' | 'F' >> >> A test program - if the grammar is in a string ``grammar``:: >> >> import os >> import sys >> import platform >> >> from parsley import makeGrammar >> >> grammar = """ >> wsp ... 
>> """ >> tests = [ >> "A", >> "aa", >> "name", >> "name>=3", >> "name>=3,<2", >> "name [fred,bar] @ http://foo.com ; python_version=='2.7'", >> "name[quux, strange];python_version<'2.7' and >> platform_version=='2'", >> "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", >> "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", >> ] >> >> def format_full_version(info): >> version = '{0.major}.{0.minor}.{0.micro}'.format(info) >> kind = info.releaselevel >> if kind != 'final': >> version += kind[0] + str(info.serial) >> return version >> >> if hasattr(sys, 'implementation'): >> implementation_version = >> format_full_version(sys.implementation.version) >> implementation_name = sys.implementation.name >> else: >> implementation_version = '0' >> implementation_name = '' >> bindings = { >> 'implementation_name': implementation_name, >> 'implementation_version': implementation_version, >> 'os_name': os.name, >> 'platform_machine': platform.machine(), >> 'platform_release': platform.release(), >> 'platform_system': platform.system(), >> 'platform_version': platform.version(), >> 'python_full_version': platform.python_version(), >> 'python_implementation': platform.python_implementation(), >> 'python_version': platform.python_version()[:3], >> 'sys_platform': sys.platform, >> } >> >> compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) >> for test in tests: >> parsed = compiled(test).specification() >> print(parsed) >> >> References >> ========== >> >> .. [#pip] pip, the recommended installer for Python packages >> (http://pip.readthedocs.org/en/stable/) >> >> .. [#pep345] PEP-345, Python distribution metadata version 1.2. >> (https://www.python.org/dev/peps/pep-0345/) >> >> .. [#pep426] PEP-426, Python distribution metadata. >> (https://www.python.org/dev/peps/pep-0426/) >> >> .. [#pep440] PEP-440, Python distribution metadata. >> (https://www.python.org/dev/peps/pep-0440/) >> >> .. [#std66] The URL specification. >> (https://tools.ietf.org/html/rfc3986) >> >> .. [#parsley] The parsley PEG library. >> (https://pypi.python.org/pypi/parsley/) >> >> Copyright >> ========= >> >> This document has been placed in the public domain. >> >> >> >> .. >> Local Variables: >> mode: indented-text >> indent-tabs-mode: nil >> sentence-end-double-space: t >> fill-column: 70 >> coding: utf-8 >> End: >> >> >> -- >> Robert Collins >> Distinguished Technologist >> HP Converged Cloud >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > -- Robert Collins Distinguished Technologist HP Converged Cloud From qwcode at gmail.com Wed Nov 18 13:30:17 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 18 Nov 2015 10:30:17 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: > > > Its included in the complete grammar, otherwise it can't be tested. > Note that that the PEP body refers to the IETF document for the > definition of URIs. e.g. exactly what you suggest. > doesn't this imply any possible URI can theoretically be a PEP440 direct reference URI ? Is that true? It's unclear to me what PEP440's definition really is with words like "The exact URLs and targets supported will be tool dependent" Will "direct references" ever be well-defined? or open to whatever any tool decides can be an artifact reference? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From qwcode at gmail.com Wed Nov 18 11:44:35 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 18 Nov 2015 08:44:35 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: as it is, this PEP defers the concept of a "Direct Reference URL" to PEP440. but then PEP440 partially defers to PEP426's "source_url" concept, when it says "a direct URL reference may be a valid source_url entry" do we expect PEP440 to be updated to fully own what a "Direct Reference URL" can be?, since referring to PEP426 is now a dead-end path (and partially replaced by this PEP) On Mon, Nov 16, 2015 at 12:46 PM, Robert Collins wrote: > :PEP: XX > :Title: Dependency specification for Python Software Packages > :Version: $Revision$ > :Last-Modified: $Date$ > :Author: Robert Collins > :BDFL-Delegate: Donald Stufft > :Discussions-To: distutils-sig > :Status: Draft > :Type: Standards Track > :Content-Type: text/x-rst > :Created: 11-Nov-2015 > :Post-History: XX > > > Abstract > ======== > > This PEP specifies the language used to describe dependencies for packages. > It draws a border at the edge of describing a single dependency - the > different sorts of dependencies and when they should be installed is a > higher > level problem. The intent is provide a building block for higher layer > specifications. > > The job of a dependency is to enable tools like pip [#pip]_ to find the > right > package to install. Sometimes this is very loose - just specifying a name, > and > sometimes very specific - referring to a specific file to install. > Sometimes > dependencies are only relevant in one platform, or only some versions are > acceptable, so the language permits describing all these cases. > > The language defined is a compact line based format which is already in > widespread use in pip requirements files, though we do not specify the > command > line option handling that those files permit. There is one caveat - the > URL reference form, specified in PEP-440 [#pep440]_ is not actually > implemented in pip, but since PEP-440 is accepted, we use that format > rather > than pip's current native format. > > Motivation > ========== > > Any specification in the Python packaging ecosystem that needs to consume > lists of dependencies needs to build on an approved PEP for such, but > PEP-426 [#pep426]_ is mostly aspirational - and there are already existing > implementations of the dependency specification which we can instead adopt. > The existing implementations are battle proven and user friendly, so > adopting > them is arguably much better than approving an aspirational, unconsumed, > format. > > Specification > ============= > > Examples > -------- > > All features of the language shown with a name based lookup:: > > requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < > "2.7.10" > > A minimal URL based lookup:: > > pip @ > https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 > > Concepts > -------- > > A dependency specification always specifies a distribution name. It may > include extras, which expand the dependencies of the named distribution to > enable optional features. The version installed can be controlled using > version limits, or giving the URL to a specific artifact to install. > Finally > the dependency can be made conditional using environment markers. > > Grammar > ------- > > We first cover the grammar briefly and then drill into the semantics of > each > section later. 
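As a rough illustration of how the components of the name-based example above fit together, the specifier can be pulled apart with a hand-written regular expression. This is only a simplified sketch for readability, not the parsley grammar the PEP actually specifies, and it ignores the URL form entirely::

    # Simplified sketch only: split a name-based dependency specification
    # into name, extras, version constraints and marker.
    import re

    spec = 'requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10"'

    requirement, _, marker = (part.strip() for part in spec.partition(';'))
    match = re.match(
        r'(?P<name>[A-Za-z0-9][A-Za-z0-9._-]*)\s*'   # name
        r'(?:\[(?P<extras>[^\]]*)\])?\s*'            # optional [extras]
        r'(?P<versionspec>.*)$',                     # optional version constraints
        requirement)

    name = match.group('name')
    extras = [e.strip() for e in (match.group('extras') or '').split(',') if e.strip()]
    versions = [v.strip() for v in match.group('versionspec').split(',') if v.strip()]

    print(name)      # requests
    print(extras)    # ['security', 'tests']
    print(versions)  # ['>= 2.8.1', '== 2.8.*']
    print(marker)    # python_version < "2.7.10"

A real consumer should use the full grammar (or a library implementing it), since this shortcut mishandles quoting inside markers, the direct reference form, and anything resembling error reporting.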
> > A distribution specification is written in ASCII text. We use a parsley > [#parsley]_ grammar to provide a precise grammar. It is expected that the > specification will be embedded into a larger system which offers framing > such > as comments, multiple line support via continuations, or other such > features. > > The full grammar including annotations to build a useful parse tree is > included at the end of the PEP. > > Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: > URI is defined in std-66 [#std66]_:: > > version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | > '===' > version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ > version_one = version_cmp version wsp* > version_many = version_one (wsp* ',' version_one)* > versionspec = ( '(' version_many ')' ) | version_many > urlspec = '@' wsp* > > Environment markers allow making a specification only take effect in some > environments:: > > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*') > dquote = '"' > squote = '\\'' > python_str = (squote (python_str_c | dquote)* squote | > dquote (python_str_c | squote)* dquote) > env_var = ('python_version' | 'python_full_version' | > 'os_name' | 'sys_platform' | 'platform_release' | > 'platform_system' | 'platform_version' | > 'platform_machine' | 'python_implementation' | > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ) > marker_var = env_var | python_str > marker_expr = ('(' wsp* marker wsp* ')' > | (marker_var wsp* marker_op wsp* marker_var)) > marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* > marker_expr)* > quoted_marker = ';' wsp* marker > > Optional components of a distribution may be specified using the extras > field:: > > identifier = letterOrDigit ( > letterOrDigit | > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) )* > name = identifier > extras_list = identifier (wsp* ',' wsp* identifier)* > extras = '[' wsp* extras_list? wsp* ']' > > Giving us a rule for name based requirements:: > > name_req = name wsp* extras? wsp* versionspec? wsp* quoted_marker? > > And a rule for direct reference specifications:: > > url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? > > Leading to the unified rule that can specify a dependency.:: > > specification = wsp* ( url_req | name_req ) wsp* > > Whitespace > ---------- > > Non line-breaking whitespace is mostly optional with no semantic meaning. > The > sole exception is detecting the end of a URL requirement. > > Names > ----- > > Python distribution names are currently defined in PEP-345 [#pep345]_. > Names > act as the primary identifier for distributions. They are present in all > dependency specifications, and are sufficient to be a specification on > their > own. However, PyPI places strict restrictions on names - they must match a > case insensitive regex or they won't be accepted. Accordingly in this PEP > we > limit the acceptable values for identifiers to that regex. A full > redefinition > of name may take place in a future metadata PEP:: > > ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ > > Extras > ------ > > An extra is an optional part of a distribution. Distributions can specify > as > many extras as they wish, and each extra results in the declaration of > additional dependencies of the distribution **when** the extra is used in a > dependency specification. 
For instance:: > > requests[security] > > Extras union in the dependencies they define with the dependencies of the > distribution they are attached to. The example above would result in > requests > being installed, and requests own dependencies, and also any dependencies > that > are listed in the "security" extra of requests. > > If multiple extras are listed, all the dependencies are unioned together. > > Versions > -------- > > See PEP-440 [#pep440]_ for more detail on both version numbers and version > comparisons. Version specifications limit the versions of a distribution > that > can be used. They only apply to distributions looked up by name, rather > than > via a URL. Version comparison are also used in the markers feature. The > optional brackets around a version are present for compatibility with > PEP-345 > [#pep345]_ but should not be generated, only accepted. > > Environment Markers > ------------------- > > Environment markers allow a dependency specification to provide a rule that > describes when the dependency should be used. For instance, consider a > package > that needs argparse. In Python 2.7 argparse is always present. On older > Python > versions it has to be installed as a dependency. This can be expressed as > so:: > > argparse;python_version<"2.7" > > A marker expression evalutes to either True or False. When it evaluates to > False, the dependency specification should be ignored. > > The marker language is a subset of Python itself, chosen for the ability to > safely evaluate it without running arbitrary code that could become a > security > vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This > PEP > fixes some issues that were observed in the design described in PEP-426 > [#pep426]_. > > Comparisons in marker expressions are typed by the comparison operator. > The > operators that are not in perform the same as > they > do for strings in Python. The operators use the PEP-440 > [#pep440]_ version comparison rules when those are defined (that is when > both > sides have a valid version specifier). If there is no defined PEP-440 > behaviour and the operator exists in Python, then the operator falls back > to > the Python behaviour. Otherwise an error should be raised. e.g. the > following > will result in errors:: > > "dog" ~= "fred" > python_version ~= "surprise" > > User supplied constants are always encoded as strings with either ``'`` or > ``"`` quote marks. Note that backslash escapes are not defined, but > existing > implementations do support them. They are not included in this > specification because they add complexity and there is no observable need > for > them today. Similarly we do not define non-ASCII character support: all the > runtime variables we are referencing are expected to be ASCII-only. > > The variables in the marker grammar such as "os_name" resolve to values > looked > up in the Python runtime. With the exception of "extra" all values are > defined > on all Python versions today - it is an error in the implementation of > markers > if a value is not defined. > > Unknown variables must raise an error rather than resulting in a comparison > that evaluates to True or False. > > Variables whose value cannot be calculated on a given Python implementation > should evaluate to ``0`` for versions, and an empty string for all other > variables. > > The "extra" variable is special. 
It is used by wheels to signal which > specifications apply to a given extra in the wheel ``METADATA`` file, but > since the ``METADATA`` file is based on a draft version of PEP-426, there > is > no current specification for this. Regardless, outside of a context where > this > special handling is taking place, the "extra" variable should result in an > error like all other unknown variables. > > .. list-table:: > :header-rows: 1 > > * - Marker > - Python equivalent > - Sample values > * - ``os_name`` > - ``os.name`` > - ``posix``, ``java`` > * - ``sys_platform`` > - ``sys.platform`` > - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that > "linux" > is from Python3 and "linux2" from Python2) > * - ``platform_machine`` > - ``platform.machine()`` > - ``x86_64`` > * - ``python_implementation`` > - ``platform.python_implementation()`` > - ``CPython``, ``Jython`` > * - ``platform_release`` > - ``platform.release()`` > - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` > * - ``platform_system`` > - ``platform.system()`` > - ``Linux``, ``Windows``, ``Java`` > * - ``platform_version`` > - ``platform.version()`` > - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` > ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` > ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; > root:xnu-2782.40.9~2/RELEASE_X86_64`` > * - ``python_version`` > - ``platform.python_version()[:3]`` > - ``3.4``, ``2.7`` > * - ``python_full_version`` > - ``platform.python_version()`` > - ``3.4.0``, ``3.5.0b1`` > * - ``implementation_name`` > - ``sys.implementation.name`` > - ``cpython`` > * - ``implementation_version`` > - see definition below > - ``3.4.0``, ``3.5.0b1`` > * - ``extra`` > - An error except when defined by the context interpreting the > specification. > - ``test`` > > The ``implementation_version`` marker variable is derived from > ``sys.implementation.version``:: > > def format_full_version(info): > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > kind = info.releaselevel > if kind != 'final': > version += kind[0] + str(info.serial) > return version > > if hasattr(sys, 'implementation'): > implementation_version = > format_full_version(sys.implementation.version) > else: > implementation_version = "0" > > Backwards Compatibility > ======================= > > Most of this PEP is already widely deployed and thus offers no > compatibiltiy > concerns. > > There are however a few points where the PEP differs from the deployed > base. > > Firstly, PEP-440 direct references haven't actually been deployed in the > wild, > but they were designed to be compatibly added, and there are no known > obstacles to adding them to pip or other tools that consume the existing > dependency metadata in distributions - particularly since they won't be > permitted to be present in PyPI uploaded distributions anyway. > > Secondly, PEP-426 markers which have had some reasonable deployment, > particularly in wheels and pip, will handle version comparisons with > ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is > less > than "2.7.9". This backward incompatibility is deliberate. We are also > defining new operators - "~=" and "===", and new variables - > ``platform_release``, ``platform_system``, ``implementation_name``, and > ``implementation_version`` which are not present in older marker > implementations. The variables will error on those implementations. 
Users > of > both features will need to make a judgement as to when support has become > sufficiently widespread in the ecosystem that using them will not cause > compatibility issues. > > Thirdly, PEP-345 required brackets around version specifiers. In order to > accept PEP-345 dependency specifications, brackets are accepted, but they > should not be generated. > > Rationale > ========= > > In order to move forward with any new PEPs that depend on environment > markers, > we needed a specification that included them in their modern form. This PEP > brings together all the currently unspecified components into a specified > form. > > The requirement specifier was adopted from the EBNF in the setuptools > pkg_resources documentation, since we wish to avoid depending on a > defacto, vs > PEP specified, standard. > > Complete Grammar > ================ > > The complete parsley grammar:: > > wsp = ' ' | '\t' > version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | > '==='> > version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | > '+' | '!' )+> > version_one = version_cmp:op version:v wsp* -> (op, v) > version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 > versionspec = ('(' version_many:v ')' ->v) | version_many > urlspec = '@' wsp* > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*' | '#') > dquote = '"' > squote = '\\'' > python_str = (squote <(python_str_c | dquote)*>:s squote | > dquote <(python_str_c | squote)*>:s dquote) -> s > env_var = ('python_version' | 'python_full_version' | > 'os_name' | 'sys_platform' | 'platform_release' | > 'platform_system' | 'platform_version' | > 'platform_machine' | 'python_implementation' | > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ):varname -> lookup(varname) > marker_var = env_var | python_str > marker_expr = (("(" wsp* marker:m wsp* ")" -> m) > | ((marker_var:l wsp* marker_op:o wsp* > marker_var:r)) > -> (l, o, r)) > marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* > marker_expr:r -> (o, r))*:ms -> (m, ms)) > quoted_marker = ';' wsp* marker > identifier = letterOrDigit | > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) > )*> > name = identifier > extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + > ids > extras = '[' wsp* extras_list?:e wsp* ']' -> e > name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* > quoted_marker?:m > -> (n, e or [], v or [], m)) > url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ > quoted_marker?:m > -> (n, e or [], v, m)) > specification = wsp* ( url_req | name_req ):s wsp* -> s > # The result is a tuple - name, list-of-extras, > # list-of-version-constraints-or-a-url, marker-ast or None > > > URI_reference = > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? > hier_part = ('//' authority path_abempty) | path_absolute | > path_rootless | path_empty > absolute_URI = scheme ':' hier_part ( '?' query )? > relative_ref = relative_part ( '?' query )? ( '#' fragment )? > relative_part = '//' authority path_abempty | path_absolute | > path_noscheme | path_empty > scheme = letter ( letter | digit | '+' | '-' | '.')* > authority = ( userinfo '@' )? host ( ':' port )? > userinfo = ( unreserved | pct_encoded | sub_delims | ':')* > host = IP_literal | IPv4address | reg_name > port = digit* > IP_literal = '[' ( IPv6address | IPvFuture) ']' > IPvFuture = 'v' hexdig+ '.' 
( unreserved | sub_delims | ':')+ > IPv6address = ( > ( h16 ':'){6} ls32 > | '::' ( h16 ':'){5} ls32 > | ( h16 )? '::' ( h16 ':'){4} ls32 > | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 > | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 > | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 > | ( ( h16 ':'){0,4} h16 )? '::' ls32 > | ( ( h16 ':'){0,5} h16 )? '::' h16 > | ( ( h16 ':'){0,6} h16 )? '::' ) > h16 = hexdig{1,4} > ls32 = ( h16 ':' h16) | IPv4address > IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' Dec_octet > nz = ~'0' digit > dec_octet = ( > digit # 0-9 > | nz digit # 10-99 > | '1' digit{2} # 100-199 > | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 > | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# > %250-255 > reg_name = ( unreserved | pct_encoded | sub_delims)* > path = ( > path_abempty # begins with '/' or is empty > | path_absolute # begins with '/' but not '//' > | path_noscheme # begins with a non-colon segment > | path_rootless # begins with a segment > | path_empty ) # zero characters > path_abempty = ( '/' segment)* > path_absolute = '/' ( segment_nz ( '/' segment)* )? > path_noscheme = segment_nz_nc ( '/' segment)* > path_rootless = segment_nz ( '/' segment)* > path_empty = pchar{0} > segment = pchar* > segment_nz = pchar+ > segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ > # non-zero-length segment without any colon ':' > pchar = unreserved | pct_encoded | sub_delims | ':' | '@' > query = ( pchar | '/' | '?')* > fragment = ( pchar | '/' | '?')* > pct_encoded = '%' hexdig > unreserved = letter | digit | '-' | '.' | '_' | '~' > reserved = gen_delims | sub_delims > gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' > sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | > ',' | ';' | '=' > hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | > 'D' | 'e' | 'E' | 'f' | 'F' > > A test program - if the grammar is in a string ``grammar``:: > > import os > import sys > import platform > > from parsley import makeGrammar > > grammar = """ > wsp ... > """ > tests = [ > "A", > "aa", > "name", > "name>=3", > "name>=3,<2", > "name [fred,bar] @ http://foo.com ; python_version=='2.7'", > "name[quux, strange];python_version<'2.7' and > platform_version=='2'", > "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", > "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", > ] > > def format_full_version(info): > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > kind = info.releaselevel > if kind != 'final': > version += kind[0] + str(info.serial) > return version > > if hasattr(sys, 'implementation'): > implementation_version = > format_full_version(sys.implementation.version) > implementation_name = sys.implementation.name > else: > implementation_version = '0' > implementation_name = '' > bindings = { > 'implementation_name': implementation_name, > 'implementation_version': implementation_version, > 'os_name': os.name, > 'platform_machine': platform.machine(), > 'platform_release': platform.release(), > 'platform_system': platform.system(), > 'platform_version': platform.version(), > 'python_full_version': platform.python_version(), > 'python_implementation': platform.python_implementation(), > 'python_version': platform.python_version()[:3], > 'sys_platform': sys.platform, > } > > compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) > for test in tests: > parsed = compiled(test).specification() > print(parsed) > > References > ========== > > .. 
[#pip] pip, the recommended installer for Python packages > (http://pip.readthedocs.org/en/stable/) > > .. [#pep345] PEP-345, Python distribution metadata version 1.2. > (https://www.python.org/dev/peps/pep-0345/) > > .. [#pep426] PEP-426, Python distribution metadata. > (https://www.python.org/dev/peps/pep-0426/) > > .. [#pep440] PEP-440, Python distribution metadata. > (https://www.python.org/dev/peps/pep-0440/) > > .. [#std66] The URL specification. > (https://tools.ietf.org/html/rfc3986) > > .. [#parsley] The parsley PEG library. > (https://pypi.python.org/pypi/parsley/) > > Copyright > ========= > > This document has been placed in the public domain. > > > > .. > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Wed Nov 18 14:12:34 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 19 Nov 2015 08:12:34 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: On 19 November 2015 at 07:30, Marcus Smith wrote: >> >> Its included in the complete grammar, otherwise it can't be tested. >> Note that that the PEP body refers to the IETF document for the >> definition of URIs. e.g. exactly what you suggest. > > > doesn't this imply any possible URI can theoretically be a PEP440 direct > reference URI ? > > Is that true? > > It's unclear to me what PEP440's definition really is with words like "The > exact URLs and targets supported will be tool dependent" > > Will "direct references" ever be well-defined? or open to whatever any tool > decides can be an artifact reference? We can define the syntax without capturing all the tool support, which is what PEP-440 and thus this PEP does. What I mean here is that e.g. pip may not support a given VCS today, but it can be added without changing the definition of a URI. We can't demand that all tools support all VCS's reasonably, but we can demand that all tools be able to recognise that its a direct reference and act accordingly (e.g. try to clone it, error because direct references aren't allowed in that context, etc). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From qwcode at gmail.com Wed Nov 18 14:40:54 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 18 Nov 2015 11:40:54 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: > > > > Will "direct references" ever be well-defined? or open to whatever any > tool > > decides can be an artifact reference? > > We can define the syntax without capturing all the tool support, which > is what PEP-440 and thus this PEP does. > so, to be clear, what syntax for the URI portion does it define or require? (beyond it just being a valid URI) it sounds like you're saying nothing? i.e. although PEP440 says things like it "may" be a sdist or a wheel target or a "source_url", its wide open to whatever a tool may decide is a unique artifact reference? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robertc at robertcollins.net Wed Nov 18 14:42:12 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 19 Nov 2015 08:42:12 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: On 19 November 2015 at 08:40, Marcus Smith wrote: >> >> > Will "direct references" ever be well-defined? or open to whatever any >> > tool >> > decides can be an artifact reference? >> >> We can define the syntax without capturing all the tool support, which >> is what PEP-440 and thus this PEP does. > > > so, to be clear, what syntax for the URI portion does it define or require? > (beyond it just being a valid URI) > > it sounds like you're saying nothing? i.e. although PEP440 says things like > it "may" be a sdist or a wheel target or a "source_url", its wide open to > whatever a tool may decide is a unique artifact reference? Thats my understanding from PEP-440 today. And given that an arbitrary url can contain wheel content, I'd be cautious about trying to place semantic constraints on the syntax (vs the content). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Wed Nov 18 14:42:59 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 18 Nov 2015 14:42:59 -0500 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: > On Nov 18, 2015, at 2:40 PM, Marcus Smith wrote: > > > > Will "direct references" ever be well-defined? or open to whatever any tool > > decides can be an artifact reference? > > We can define the syntax without capturing all the tool support, which > is what PEP-440 and thus this PEP does. > > so, to be clear, what syntax for the URI portion does it define or require? (beyond it just being a valid URI) > > it sounds like you're saying nothing? i.e. although PEP440 says things like it "may" be a sdist or a wheel target or a "source_url", its wide open to whatever a tool may decide is a unique artifact reference? > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Only half way thinking about this right this moment, but I think so yes. It?s largely designed for private use cases which is why it?s not allowed on PyPI. It?s essentially a replacement for dependency_links. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From qwcode at gmail.com Wed Nov 18 15:14:33 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 18 Nov 2015 12:14:33 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: On Wed, Nov 18, 2015 at 11:42 AM, Donald Stufft wrote: > > On Nov 18, 2015, at 2:40 PM, Marcus Smith wrote: > > >> > Will "direct references" ever be well-defined? or open to whatever any >> tool >> > decides can be an artifact reference? >> >> We can define the syntax without capturing all the tool support, which >> is what PEP-440 and thus this PEP does. >> > > so, to be clear, what syntax for the URI portion does it define or > require? 
(beyond it just being a valid URI) > > it sounds like you're saying nothing? i.e. although PEP440 says things > like it "may" be a sdist or a wheel target or a "source_url", its wide open > to whatever a tool may decide is a unique artifact reference? > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > > > Only half way thinking about this right this moment, but I think so yes. > It?s largely designed for private use cases which is why it?s not allowed > on PyPI. It?s essentially a replacement for dependency_links. > practically speaking, isn't it also a future replacement for "#egg=name" syntax in pip vcs urls?... i.e. using "name@" instead? -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Wed Nov 18 15:36:56 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 19 Nov 2015 09:36:56 +1300 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: I think we should start supporting that, yes. On 19 November 2015 at 09:14, Marcus Smith wrote: > > > On Wed, Nov 18, 2015 at 11:42 AM, Donald Stufft wrote: >> >> >> On Nov 18, 2015, at 2:40 PM, Marcus Smith wrote: >> >>> >>> > Will "direct references" ever be well-defined? or open to whatever any >>> > tool >>> > decides can be an artifact reference? >>> >>> We can define the syntax without capturing all the tool support, which >>> is what PEP-440 and thus this PEP does. >> >> >> so, to be clear, what syntax for the URI portion does it define or >> require? (beyond it just being a valid URI) >> >> it sounds like you're saying nothing? i.e. although PEP440 says things >> like it "may" be a sdist or a wheel target or a "source_url", its wide open >> to whatever a tool may decide is a unique artifact reference? >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> Only half way thinking about this right this moment, but I think so yes. >> It?s largely designed for private use cases which is why it?s not allowed on >> PyPI. It?s essentially a replacement for dependency_links. > > > practically speaking, isn't it also a future replacement for > "#egg=name" syntax in pip vcs urls?... i.e. using "name@" > instead? -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Wed Nov 18 21:37:37 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 19 Nov 2015 12:37:37 +1000 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: On 19 November 2015 at 06:14, Marcus Smith wrote: > On Wed, Nov 18, 2015 at 11:42 AM, Donald Stufft wrote: >> Only half way thinking about this right this moment, but I think so yes. >> It?s largely designed for private use cases which is why it?s not allowed on >> PyPI. It?s essentially a replacement for dependency_links. > > > practically speaking, isn't it also a future replacement for > "#egg=name" syntax in pip vcs urls?... i.e. using "name@" > instead? 
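For concreteness, the two spellings being compared look roughly as follows. The exact VCS forms remain tool dependent, and the direct reference line is an assumed illustration rather than something this thread standardises::

    # pip's current VCS requirement form, naming the project in the fragment
    git+https://github.com/pypa/pip.git@1.3.1#egg=pip

    # the PEP-440 style direct reference being discussed (assumed spelling)
    pip @ git+https://github.com/pypa/pip.git@1.3.1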
Yep, pip's VCS URLs were one of the main motivators for that feature: http://pip.readthedocs.org/en/stable/reference/pip_install/#vcs-support The reason the support is defined as tool dependent is because we have no idea how version control is going to evolve, and different tools will support different version control systems (e.g. pip itself supports bzr, but I'd be surprised if any new tools did, and it's entirely possible now for tools to become popular while only supporting git). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at vorpus.org Thu Nov 19 01:58:33 2015 From: njs at vorpus.org (Nathaniel Smith) Date: Wed, 18 Nov 2015 22:58:33 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <20151117145949.038695e3@fsol> Message-ID: On Nov 18, 2015 6:37 PM, "Nick Coghlan" wrote: > > On 19 November 2015 at 06:14, Marcus Smith wrote: > > On Wed, Nov 18, 2015 at 11:42 AM, Donald Stufft wrote: > >> Only half way thinking about this right this moment, but I think so yes. > >> It?s largely designed for private use cases which is why it?s not allowed on > >> PyPI. It?s essentially a replacement for dependency_links. > > > > > > practically speaking, isn't it also a future replacement for > > "#egg=name" syntax in pip vcs urls?... i.e. using "name@" > > instead? > > Yep, pip's VCS URLs were one of the main motivators for that feature: > http://pip.readthedocs.org/en/stable/reference/pip_install/#vcs-support > > The reason the support is defined as tool dependent is because we have > no idea how version control is going to evolve, and different tools > will support different version control systems (e.g. pip itself > supports bzr, but I'd be surprised if any new tools did, and it's > entirely possible now for tools to become popular while only > supporting git). Another protocol that new tools might reasonably disagree about supporting is good ol' ftp. (Not sure if even pip supports it or not.) Besides which, we haven't yet standardized what should be found at the end of that URL (unless it happens to be a prebuilt wheel, but that's probably not the most common usage). -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Thu Nov 19 10:40:36 2015 From: holger at merlinux.eu (holger krekel) Date: Thu, 19 Nov 2015 15:40:36 +0000 Subject: [Distutils] devpi-{server,web}-2.5.0 perf/bug fix releases Message-ID: <20151119154036.GQ16107@merlinux.eu> We just released devpi-{server,web}-2.5.0 to pypi, see changelogs below for more details. While it's not required to do an export/import cycle for this release it's recommended especially if you are running with replicas. Docs for the private pypi packaging server at: http://doc.devpi.net Thanks to Florian Schulze, Jason R. Coombs and all issue reporters. For your information, we are now starting work for devpi-server-3.0 which will introduce further speedups, internal code simplifications and new features (like mirroring from arbitrary pypi-servers). cheers, holger krekel server-2.5.0 (2015-11-19) ------------------------- - fix a regression of 2.3.0 which would cause many write-transactions for mirrored simple-page entries that didn't change. Previous to the fix, accesses to mirrored simple pages will result in a new write-transaction every 30 minutes if the page is accessed which is likely on a somewhat busy site. 
If you running with replicas it is recommended to do an an export/import cycle to remove all the unneccessary writes that were produced since devpi-server-2.3.0. They delay the setup of new replicas considerably. - add info about pypi_whitelist on simple page when root/pypi is blocked for a project. - replica simple-page serving will not unneccessarily wait for new simple-page entries to arrive at the replication side if the master does not return any changes in the initial simple-page request. Previously a replica would wait for the replication-thread to catch up even if no links changed. - fix setup.py to work on py34 and with LANG="C" environments. Thanks Jason R. Coombs. - fix issue284: allow users who are listed in acl_upload to delete packages web-2.5.0 (2015-11-19) ---------------------- - fix issue288: classifiers rendering wrong with read only data views - index.pt, project.pt, version.pt: added info about pypi_whitelist. This requires devpi-server > 2.4.0 to work. - fix issue286: indexing of most data failed due to new read only views From donald at stufft.io Fri Nov 20 11:22:03 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 20 Nov 2015 11:22:03 -0500 Subject: [Distutils] New Design Landed in Warehouse Message-ID: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> As many of you may know, I?ve been working on Warehouse which is designed to replace the PyPI code base with something modern and maintainable. That is progressing but we?ve hit a particular milestone that I?m really excited by. With the help of the awesome nlhkabu [1][2] we?ve gotten a (still WIP) design that has now landed in the repository and has been deployed to warehouse.python.org and warehouse-staging.python.org. Check it out and see what you think. If you find any issues feel free to open them up on https://github.com/pypa/warehouse. [1] https://twitter.com/nlhkabu/ [2] https://github.com/nlhkabu/ ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Fri Nov 20 11:40:25 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 20 Nov 2015 16:40:25 +0000 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: On 20 November 2015 at 16:22, Donald Stufft wrote: > As many of you may know, I?ve been working on Warehouse which is designed to replace the PyPI code base with something modern and maintainable. That is progressing but we?ve hit a particular milestone that I?m really excited by. With the help of the awesome nlhkabu [1][2] we?ve gotten a (still WIP) design that has now landed in the repository and has been deployed to warehouse.python.org and warehouse-staging.python.org. Check it out and see what you think. If you find any issues feel free to open them up on https://github.com/pypa/warehouse. Nice! 
Paul From qwcode at gmail.com Fri Nov 20 11:56:30 2015 From: qwcode at gmail.com (Marcus Smith) Date: Fri, 20 Nov 2015 08:56:30 -0800 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: what's the sharp thing hanging from the "P" a device that measures the packaging cubes? loads them? : ) On Fri, Nov 20, 2015 at 8:22 AM, Donald Stufft wrote: > As many of you may know, I?ve been working on Warehouse which is designed > to replace the PyPI code base with something modern and maintainable. That > is progressing but we?ve hit a particular milestone that I?m really excited > by. With the help of the awesome nlhkabu [1][2] we?ve gotten a (still WIP) > design that has now landed in the repository and has been deployed to > warehouse.python.org and warehouse-staging.python.org. Check it out and > see what you think. If you find any issues feel free to open them up on > https://github.com/pypa/warehouse. > > > [1] https://twitter.com/nlhkabu/ > [2] https://github.com/nlhkabu/ > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p at 2015.forums.dobrogost.net Fri Nov 20 12:01:58 2015 From: p at 2015.forums.dobrogost.net (Piotr Dobrogost) Date: Fri, 20 Nov 2015 18:01:58 +0100 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: On Fri, Nov 20, 2015 at 5:56 PM, Marcus Smith wrote: > what's the sharp thing hanging from the "P" > a device that measures the packaging cubes? loads them? I think this is simply "pip" :) Regards, Piotr From wes.turner at gmail.com Fri Nov 20 16:34:09 2015 From: wes.turner at gmail.com (Wes Turner) Date: Fri, 20 Nov 2015 15:34:09 -0600 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: Looks great, thanks! https://warehouse.python.org/ > "Top Projects" / "New Releases" > "Show more" > https://warehouse.python.org/TODO > 404 - [ ] ENH: schema.org/SoftwareApplication ./Code JSONLD, RDFa linked data On Nov 20, 2015 10:22 AM, "Donald Stufft" wrote: > As many of you may know, I?ve been working on Warehouse which is designed > to replace the PyPI code base with something modern and maintainable. That > is progressing but we?ve hit a particular milestone that I?m really excited > by. With the help of the awesome nlhkabu [1][2] we?ve gotten a (still WIP) > design that has now landed in the repository and has been deployed to > warehouse.python.org and warehouse-staging.python.org. Check it out and > see what you think. If you find any issues feel free to open them up on > https://github.com/pypa/warehouse. > > > [1] https://twitter.com/nlhkabu/ > [2] https://github.com/nlhkabu/ > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sat Nov 21 00:12:58 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 21 Nov 2015 15:12:58 +1000 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: On 21 November 2015 at 07:34, Wes Turner wrote: > Looks great, thanks! > > https://warehouse.python.org/ > "Top Projects" / "New Releases" > "Show > more" > https://warehouse.python.org/TODO > 404 > > - [ ] ENH: schema.org/SoftwareApplication ./Code JSONLD, RDFa linked data > Wes, the JSON-LD spec isn't going anywhere, so we can adopt it whenever we decide it's useful to do so. However, that also means that bringing it up in every single discussion in the meantime is annoying, not helpful. You've been heard, and JSON-LD is indeed interesting, it just doesn't solve any high priority problems for the Python packaging ecosystem. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Nov 21 02:46:45 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 21 Nov 2015 08:46:45 +0100 Subject: [Distutils] New Design Landed in Warehouse References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> Message-ID: <20151121084645.20aaa7c1@fsol> On Fri, 20 Nov 2015 11:40:23 -0500 Randy Syring wrote: > I'm glad to see progress being made. Thanks for the time and effort > that is being put into this. > > After taking a look, the one thing that really stuck out to me in a > negative way was how much screen space the header is using up. I've > created an issue for discussion here: Agreed. The look of the average project page is a bit depressing: https://warehouse.python.org/project/six/ Useful content starts only 2/3 down the first page. The large "pip install six" snippet probably doesn't deserve being that proeminent (or being there at all), and is ironically redundant with the "how do I install this?" link just below. I would also suggest a bit more care on the typography :-) The main body text here: https://warehouse.python.org/project/requests/ looks much less nice than here: https://pypi.python.org/pypi/requests/ (actually, in both those pages, it would be nice trying justifying the text, as well) Regards Antoine. From noah at coderanger.net Sat Nov 21 02:51:31 2015 From: noah at coderanger.net (Noah Kantrowitz) Date: Fri, 20 Nov 2015 23:51:31 -0800 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121084645.20aaa7c1@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> Message-ID: <2309BBBD-3678-467C-8C65-70FB73D8CCBF@coderanger.net> > On Nov 20, 2015, at 11:46 PM, Antoine Pitrou wrote: > > On Fri, 20 Nov 2015 11:40:23 -0500 > Randy Syring > wrote: > >> I'm glad to see progress being made. Thanks for the time and effort >> that is being put into this. >> >> After taking a look, the one thing that really stuck out to me in a >> negative way was how much screen space the header is using up. I've >> created an issue for discussion here: > > Agreed. The look of the average project page is a bit depressing: > https://warehouse.python.org/project/six/ > > Useful content starts only 2/3 down the first page. 
The large "pip > install six" snippet probably doesn't deserve being that proeminent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. I think you have a highly specialized view of what is "useful content" compared to the average user. --Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sat Nov 21 05:46:49 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 21 Nov 2015 10:46:49 +0000 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121084645.20aaa7c1@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> Message-ID: On 21 November 2015 at 07:46, Antoine Pitrou wrote: > Useful content starts only 2/3 down the first page. The large "pip > install six" snippet probably doesn't deserve being that proeminent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. One possibly useful bit of experience, I just scrolled down on one of the new warehouse pages to read through the description and then scrolled back up to the top quickly. I instinctively stopped with about half of the header showing. My interpretation of this is that resizing the header down to half its current height would probably feel more "natural" (at least to me). Paul From solipsis at pitrou.net Sat Nov 21 05:59:18 2015 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 21 Nov 2015 11:59:18 +0100 Subject: [Distutils] New Design Landed in Warehouse References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> Message-ID: <20151121115918.1b20dedd@fsol> On Sat, 21 Nov 2015 08:46:45 +0100 Antoine Pitrou wrote: > > Agreed. The look of the average project page is a bit depressing: > https://warehouse.python.org/project/six/ > > Useful content starts only 2/3 down the first page. The large "pip > install six" snippet probably doesn't deserve being that proeminent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. > > I would also suggest a bit more care on the typography :-) The main > body text here: > https://warehouse.python.org/project/requests/ > looks much less nice than here: > https://pypi.python.org/pypi/requests/ To elaborate a bit and avoid misunderstandings, here are screenshots of the aforementioned pages on my Web browser: http://pitrou.net/warehouse.png http://pitrou.net/pypi.png Note how you get much more content on the current PyPI version, without the page being too cluttered (some of the clutter actually comes from the generic python.org navigation bar). Of course I'm not saying the current PyPI layout is great and it shouldn't be replaced or overhauled, but it does have that virtue of leaving most of the page area to the project description. (btw, I've shrank the warehouse font size a bit to make the comparison fair, otherwise it would have been worse) Regards Antoine. 
From donald at stufft.io Sat Nov 21 08:29:43 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 Nov 2015 08:29:43 -0500 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121091128.502856f3@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> <2309BBBD-3678-467C-8C65-70FB73D8CCBF@coderanger.net> <20151121091128.502856f3@fsol> Message-ID: > On Nov 21, 2015, at 3:11 AM, Antoine Pitrou wrote: > > On Fri, 20 Nov 2015 23:51:31 -0800 > Noah Kantrowitz wrote: >>> >>> Useful content starts only 2/3 down the first page. The large "pip >>> install six" snippet probably doesn't deserve being that proeminent >>> (or being there at all), and is ironically redundant with the "how do >>> I install this?" link just below. >> >> I think you have a highly specialized view of what is "useful content" compared to the average user. > > "Useful content" is the page body, i.e. the project's description, as > opposed to navigation and other layout elements. > > As for "highly specialized", we won't know without a user study. What > does the average user look for on a PyPI project page? My hypothesis is > that they mainly want to learn about the package, i.e. read its > description (perhaps also the list of supported platforms and the > changelog). It makes sense to prioritize the content the user is > looking for. > We have an issue for reducing the size of the header at https://github.com/pypa/warehouse/issues/793. We also have a number of users (of varying skill levels I believe) lined up to do user testing of the new design with. Nicole is handling that but I believe she is planning to start that the beginning of December. If you?re interested in participating with that contacting Nicole to see if she needs any more volunteers for it would be a good idea. I?d also like to stress that prior to, well Yesterday really, this design had not been paired with live data at all and had just lived in a static site. So part of this all is getting it integrated with the site and seeing how it feels with it and what works and what doesn?t. Feedback is useful, though the best form of it would be to open or comment on issues on the repository so we can track when we?ve either fixed it, or can at least acknowledge it and say that we believe the current system is better for more users. I want to stress that the design is not ?finished?, it?s very much a WIP still. Consider this sort of like an alpha or something. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Nov 21 08:33:30 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 Nov 2015 08:33:30 -0500 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121084645.20aaa7c1@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> Message-ID: <7498FF5C-334B-4FAC-BD1F-38C49CDB5F15@stufft.io> > On Nov 21, 2015, at 2:46 AM, Antoine Pitrou wrote: > > On Fri, 20 Nov 2015 11:40:23 -0500 > Randy Syring > wrote: > >> I'm glad to see progress being made. Thanks for the time and effort >> that is being put into this. 
>> >> After taking a look, the one thing that really stuck out to me in a >> negative way was how much screen space the header is using up. I've >> created an issue for discussion here: > > Agreed. The look of the average project page is a bit depressing: > https://warehouse.python.org/project/six/ > > Useful content starts only 2/3 down the first page. The large "pip > install six" snippet probably doesn't deserve being that proeminent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. The ?How do I Install this? is actually designed to take someone to a beginners guide on how to use pip to install packages while the ?pip install six? snippet is designed for someone who already knows the basics to just have something they can easily copy/paste (including a little button you can click to get it copied to your clipboard). We have a number of technical writers who have volunteered to help make packaging.python.org better and a beginners guide is one of the items they?re going to be working on. > > I would also suggest a bit more care on the typography :-) The main > body text here: > https://warehouse.python.org/project/requests/ > looks much less nice than here: > https://pypi.python.org/pypi/requests/ > > (actually, in both those pages, it would be nice trying justifying the > text, as well) The main body text on the detail pages hasn?t been styled at all yet (tracked by https://github.com/pypa/warehouse/issues/801). > > Regards > > Antoine. > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Sat Nov 21 08:38:48 2015 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 Nov 2015 08:38:48 -0500 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121143243.64b61663@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> <2309BBBD-3678-467C-8C65-70FB73D8CCBF@coderanger.net> <20151121091128.502856f3@fsol> <20151121143243.64b61663@fsol> Message-ID: <5925E584-85CE-41D6-8710-C0A6A33ECD9A@stufft.io> > On Nov 21, 2015, at 8:32 AM, Antoine Pitrou wrote: > > On Sat, 21 Nov 2015 08:29:43 -0500 > Donald Stufft wrote: >> >> We have an issue for reducing the size of the header at https://github.com/pypa/warehouse/issues/793. >> >> We also have a number of users (of varying skill levels I believe) lined up to do user testing of the new design with. Nicole is handling that but I believe she is planning to start that the beginning of December. If you?re interested in participating with that contacting Nicole to see if she needs any more volunteers for it would be a good idea. > > Great to hear about! I'm ok with participating but I'm not sure where > to contact Nicole (also, how much time is expected from participants?). Ah oops, here?s the Call to Action issue we had https://github.com/pypa/warehouse/issues/717. 
> >> Feedback is useful, though the best form of it would be to open or comment on issues on the repository so we can track when we?ve either fixed it, or can at least acknowledge it and say that we believe the current system is better for more users. I want to stress that the design is not ?finished?, it?s very much a WIP still. Consider this sort of like an alpha or something. > > Thank you. I assumed that some kind of feedback was expected. Apologies > if that was too early. Oh not at all. I just wanted to make sure it was clear that it was still a construction zone of sorts. I think we?ve had some folks (not you or anything) think the design was ?done? (in so much that anything with software is ever ?done?) and just wanted to call it out explicitly. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From randy.syring at level12.io Fri Nov 20 11:40:23 2015 From: randy.syring at level12.io (Randy Syring) Date: Fri, 20 Nov 2015 11:40:23 -0500 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> Message-ID: <564F4CF7.9060506@level12.io> I'm glad to see progress being made. Thanks for the time and effort that is being put into this. After taking a look, the one thing that really stuck out to me in a negative way was how much screen space the header is using up. I've created an issue for discussion here: https://github.com/pypa/warehouse/issues/793 *Randy Syring* Chief Executive Developer Direct: 502.276.0459 Office: 812.285.8766 Level 12 On 11/20/2015 11:22 AM, Donald Stufft wrote: > As many of you may know, I?ve been working on Warehouse which is designed to replace the PyPI code base with something modern and maintainable. That is progressing but we?ve hit a particular milestone that I?m really excited by. With the help of the awesome nlhkabu [1][2] we?ve gotten a (still WIP) design that has now landed in the repository and has been deployed to warehouse.python.org and warehouse-staging.python.org. Check it out and see what you think. If you find any issues feel free to open them up on https://github.com/pypa/warehouse. > > > [1] https://twitter.com/nlhkabu/ > [2] https://github.com/nlhkabu/ > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: email-sigs-logo.png Type: image/png Size: 2857 bytes Desc: not available URL: From wolfgang.maier at biologie.uni-freiburg.de Sun Nov 22 17:58:13 2015 From: wolfgang.maier at biologie.uni-freiburg.de (Wolfgang Maier) Date: Sun, 22 Nov 2015 23:58:13 +0100 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <20151121084645.20aaa7c1@fsol> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> Message-ID: <56524885.9030900@biologie.uni-freiburg.de> On 21.11.2015 08:46, Antoine Pitrou wrote: > > Useful content starts only 2/3 down the first page. 
The large "pip > install six" snippet probably doesn't deserve being that prominent > (or being there at all), and is ironically redundant with the "how do > I install this?" link just below. > My favourite is "pip install pip". Seriously, is this installation one-liner going to be configurable by package authors eventually or will it just always say pip install no-matter-what? From donald at stufft.io Sun Nov 22 19:45:50 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 22 Nov 2015 19:45:50 -0500 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: Message-ID: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> Okay. I've read over this, implemented enough of it, and I think it's gone through enough nit picking. I'm going to go ahead and accept this PEP. It's largely just standardizing what we are already doing, so it's pretty low impact other than fixing up a few issues and giving implementations something they can point at for what the standard behavior is. So congratulations to everyone working on this :) I'll get this into the PEPs repo and get it pushed. > On Nov 16, 2015, at 3:46 PM, Robert Collins wrote: > > :PEP: XX > :Title: Dependency specification for Python Software Packages > :Version: $Revision$ > :Last-Modified: $Date$ > :Author: Robert Collins > :BDFL-Delegate: Donald Stufft > :Discussions-To: distutils-sig > :Status: Draft > :Type: Standards Track > :Content-Type: text/x-rst > :Created: 11-Nov-2015 > :Post-History: XX > > > Abstract > ======== > > This PEP specifies the language used to describe dependencies for packages. > It draws a border at the edge of describing a single dependency - the > different sorts of dependencies and when they should be installed is a higher > level problem. The intent is to provide a building block for higher layer > specifications. > > The job of a dependency is to enable tools like pip [#pip]_ to find the right > package to install. Sometimes this is very loose - just specifying a name, and > sometimes very specific - referring to a specific file to install. Sometimes > dependencies are only relevant in one platform, or only some versions are > acceptable, so the language permits describing all these cases. > > The language defined is a compact line based format which is already in > widespread use in pip requirements files, though we do not specify the command > line option handling that those files permit. There is one caveat - the > URL reference form, specified in PEP-440 [#pep440]_, is not actually > implemented in pip, but since PEP-440 is accepted, we use that format rather > than pip's current native format. > > Motivation > ========== > > Any specification in the Python packaging ecosystem that needs to consume > lists of dependencies needs to build on an approved PEP for such, but > PEP-426 [#pep426]_ is mostly aspirational - and there are already existing > implementations of the dependency specification which we can instead adopt. > The existing implementations are battle proven and user friendly, so adopting > them is arguably much better than approving an aspirational, unconsumed, format. 
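(As an illustrative aside, not part of the quoted draft: the "existing implementations" referred to above can already parse the compact name-based form today. A minimal sketch, assuming setuptools 8 or later is installed so that its vendored PEP-440 machinery is available; environment markers and URL references are outside what this older parser handles.)

    # Sketch only -- parsing a name-based specifier with setuptools' pkg_resources,
    # whose EBNF the draft later credits as the starting point for this PEP.
    from pkg_resources import Requirement

    req = Requirement.parse("requests[security,tests]>=2.8.1")
    print(req.project_name)   # e.g. requests
    print(req.extras)         # e.g. ('security', 'tests')
    print(req.specs)          # e.g. [('>=', '2.8.1')]
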
> > Specification > ============= > > Examples > -------- > > All features of the language shown with a name based lookup:: > > requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < "2.7.10" > > A minimal URL based lookup:: > > pip @ https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 > > Concepts > -------- > > A dependency specification always specifies a distribution name. It may > include extras, which expand the dependencies of the named distribution to > enable optional features. The version installed can be controlled using > version limits, or giving the URL to a specific artifact to install. Finally > the dependency can be made conditional using environment markers. > > Grammar > ------- > > We first cover the grammar briefly and then drill into the semantics of each > section later. > > A distribution specification is written in ASCII text. We use a parsley > [#parsley]_ grammar to provide a precise grammar. It is expected that the > specification will be embedded into a larger system which offers framing such > as comments, multiple line support via continuations, or other such features. > > The full grammar including annotations to build a useful parse tree is > included at the end of the PEP. > > Versions may be specified according to the PEP-440 [#pep440]_ rules. (Note: > URI is defined in std-66 [#std66]_:: > > version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '===' > version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ > version_one = version_cmp version wsp* > version_many = version_one (wsp* ',' version_one)* > versionspec = ( '(' version_many ')' ) | version_many > urlspec = '@' wsp* > > Environment markers allow making a specification only take effect in some > environments:: > > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*') > dquote = '"' > squote = '\\'' > python_str = (squote (python_str_c | dquote)* squote | > dquote (python_str_c | squote)* dquote) > env_var = ('python_version' | 'python_full_version' | > 'os_name' | 'sys_platform' | 'platform_release' | > 'platform_system' | 'platform_version' | > 'platform_machine' | 'python_implementation' | > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ) > marker_var = env_var | python_str > marker_expr = ('(' wsp* marker wsp* ')' > | (marker_var wsp* marker_op wsp* marker_var)) > marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* marker_expr)* > quoted_marker = ';' wsp* marker > > Optional components of a distribution may be specified using the extras > field:: > > identifier = letterOrDigit ( > letterOrDigit | > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) )* > name = identifier > extras_list = identifier (wsp* ',' wsp* identifier)* > extras = '[' wsp* extras_list? wsp* ']' > > Giving us a rule for name based requirements:: > > name_req = name wsp* extras? wsp* versionspec? wsp* quoted_marker? > > And a rule for direct reference specifications:: > > url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? > > Leading to the unified rule that can specify a dependency.:: > > specification = wsp* ( url_req | name_req ) wsp* > > Whitespace > ---------- > > Non line-breaking whitespace is mostly optional with no semantic meaning. The > sole exception is detecting the end of a URL requirement. > > Names > ----- > > Python distribution names are currently defined in PEP-345 [#pep345]_. 
Names > act as the primary identifier for distributions. They are present in all > dependency specifications, and are sufficient to be a specification on their > own. However, PyPI places strict restrictions on names - they must match a > case insensitive regex or they won't be accepted. Accordingly in this PEP we > limit the acceptable values for identifiers to that regex. A full redefinition > of name may take place in a future metadata PEP:: > > ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ > > Extras > ------ > > An extra is an optional part of a distribution. Distributions can specify as > many extras as they wish, and each extra results in the declaration of > additional dependencies of the distribution **when** the extra is used in a > dependency specification. For instance:: > > requests[security] > > Extras union in the dependencies they define with the dependencies of the > distribution they are attached to. The example above would result in requests > being installed, and requests' own dependencies, and also any dependencies that > are listed in the "security" extra of requests. > > If multiple extras are listed, all the dependencies are unioned together. > > Versions > -------- > > See PEP-440 [#pep440]_ for more detail on both version numbers and version > comparisons. Version specifications limit the versions of a distribution that > can be used. They only apply to distributions looked up by name, rather than > via a URL. Version comparisons are also used in the markers feature. The > optional brackets around a version are present for compatibility with PEP-345 > [#pep345]_ but should not be generated, only accepted. > > Environment Markers > ------------------- > > Environment markers allow a dependency specification to provide a rule that > describes when the dependency should be used. For instance, consider a package > that needs argparse. In Python 2.7 argparse is always present. On older Python > versions it has to be installed as a dependency. This can be expressed as so:: > > argparse;python_version<"2.7" > > A marker expression evaluates to either True or False. When it evaluates to > False, the dependency specification should be ignored. > > The marker language is a subset of Python itself, chosen for the ability to > safely evaluate it without running arbitrary code that could become a security > vulnerability. Markers were first standardised in PEP-345 [#pep345]_. This PEP > fixes some issues that were observed in the design described in PEP-426 > [#pep426]_. > > Comparisons in marker expressions are typed by the comparison operator. The > <marker_op> operators that are not in <version_cmp> perform the same as they > do for strings in Python. The <version_cmp> operators use the PEP-440 > [#pep440]_ version comparison rules when those are defined (that is when both > sides have a valid version specifier). If there is no defined PEP-440 > behaviour and the operator exists in Python, then the operator falls back to > the Python behaviour. Otherwise an error should be raised. e.g. the following > will result in errors:: > > "dog" ~= "fred" > python_version ~= "surprise" > > User supplied constants are always encoded as strings with either ``'`` or > ``"`` quote marks. Note that backslash escapes are not defined, but existing > implementations do support them. They are not included in this > specification because they add complexity and there is no observable need for > them today. Similarly we do not define non-ASCII character support: all the > runtime variables we are referencing are expected to be ASCII-only. 
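(As an illustrative aside, not part of the quoted draft: the typed comparison rules above are what make a marker like python_version < "2.7.10" behave sensibly. A minimal sketch of the difference, assuming setuptools 8 or later so that pkg_resources.parse_version follows PEP-440 ordering.)

    # Sketch only -- lexicographic string ordering inverts "2.7.10" vs "2.7.9",
    # while PEP-440 ordering treats them as release numbers.
    from pkg_resources import parse_version

    print("2.7.10" < "2.7.9")                                # True  (string rules)
    print(parse_version("2.7.10") < parse_version("2.7.9"))  # False (PEP-440 rules)
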
> > The variables in the marker grammar such as "os_name" resolve to values looked > up in the Python runtime. With the exception of "extra" all values are defined > on all Python versions today - it is an error in the implementation of markers > if a value is not defined. > > Unknown variables must raise an error rather than resulting in a comparison > that evaluates to True or False. > > Variables whose value cannot be calculated on a given Python implementation > should evaluate to ``0`` for versions, and an empty string for all other > variables. > > The "extra" variable is special. It is used by wheels to signal which > specifications apply to a given extra in the wheel ``METADATA`` file, but > since the ``METADATA`` file is based on a draft version of PEP-426, there is > no current specification for this. Regardless, outside of a context where this > special handling is taking place, the "extra" variable should result in an > error like all other unknown variables. > > .. list-table:: > :header-rows: 1 > > * - Marker > - Python equivalent > - Sample values > * - ``os_name`` > - ``os.name`` > - ``posix``, ``java`` > * - ``sys_platform`` > - ``sys.platform`` > - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that "linux" > is from Python3 and "linux2" from Python2) > * - ``platform_machine`` > - ``platform.machine()`` > - ``x86_64`` > * - ``python_implementation`` > - ``platform.python_implementation()`` > - ``CPython``, ``Jython`` > * - ``platform_release`` > - ``platform.release()`` > - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` > * - ``platform_system`` > - ``platform.system()`` > - ``Linux``, ``Windows``, ``Java`` > * - ``platform_version`` > - ``platform.version()`` > - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` > ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle Corporation`` > ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; > root:xnu-2782.40.9~2/RELEASE_X86_64`` > * - ``python_version`` > - ``platform.python_version()[:3]`` > - ``3.4``, ``2.7`` > * - ``python_full_version`` > - ``platform.python_version()`` > - ``3.4.0``, ``3.5.0b1`` > * - ``implementation_name`` > - ``sys.implementation.name`` > - ``cpython`` > * - ``implementation_version`` > - see definition below > - ``3.4.0``, ``3.5.0b1`` > * - ``extra`` > - An error except when defined by the context interpreting the > specification. > - ``test`` > > The ``implementation_version`` marker variable is derived from > ``sys.implementation.version``:: > > def format_full_version(info): > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > kind = info.releaselevel > if kind != 'final': > version += kind[0] + str(info.serial) > return version > > if hasattr(sys, 'implementation'): > implementation_version = format_full_version(sys.implementation.version) > else: > implementation_version = "0" > > Backwards Compatibility > ======================= > > Most of this PEP is already widely deployed and thus offers no compatibiltiy > concerns. > > There are however a few points where the PEP differs from the deployed base. > > Firstly, PEP-440 direct references haven't actually been deployed in the wild, > but they were designed to be compatibly added, and there are no known > obstacles to adding them to pip or other tools that consume the existing > dependency metadata in distributions - particularly since they won't be > permitted to be present in PyPI uploaded distributions anyway. 
> > Secondly, PEP-426 markers which have had some reasonable deployment, > particularly in wheels and pip, will handle version comparisons with > ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is less > than "2.7.9". This backward incompatibility is deliberate. We are also > defining new operators - "~=" and "===", and new variables - > ``platform_release``, ``platform_system``, ``implementation_name``, and > ``implementation_version`` which are not present in older marker > implementations. The variables will error on those implementations. Users of > both features will need to make a judgement as to when support has become > sufficiently widespread in the ecosystem that using them will not cause > compatibility issues. > > Thirdly, PEP-345 required brackets around version specifiers. In order to > accept PEP-345 dependency specifications, brackets are accepted, but they > should not be generated. > > Rationale > ========= > > In order to move forward with any new PEPs that depend on environment markers, > we needed a specification that included them in their modern form. This PEP > brings together all the currently unspecified components into a specified > form. > > The requirement specifier was adopted from the EBNF in the setuptools > pkg_resources documentation, since we wish to avoid depending on a defacto, vs > PEP specified, standard. > > Complete Grammar > ================ > > The complete parsley grammar:: > > wsp = ' ' | '\t' > version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | '==='> > version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | > '+' | '!' )+> > version_one = version_cmp:op version:v wsp* -> (op, v) > version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 > versionspec = ('(' version_many:v ')' ->v) | version_many > urlspec = '@' wsp* > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > '-' | '_' | '*' | '#') > dquote = '"' > squote = '\\'' > python_str = (squote <(python_str_c | dquote)*>:s squote | > dquote <(python_str_c | squote)*>:s dquote) -> s > env_var = ('python_version' | 'python_full_version' | > 'os_name' | 'sys_platform' | 'platform_release' | > 'platform_system' | 'platform_version' | > 'platform_machine' | 'python_implementation' | > 'implementation_name' | 'implementation_version' | > 'extra' # ONLY when defined by a containing layer > ):varname -> lookup(varname) > marker_var = env_var | python_str > marker_expr = (("(" wsp* marker:m wsp* ")" -> m) > | ((marker_var:l wsp* marker_op:o wsp* marker_var:r)) > -> (l, o, r)) > marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* > marker_expr:r -> (o, r))*:ms -> (m, ms)) > quoted_marker = ';' wsp* marker > identifier = letterOrDigit | > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) )*> > name = identifier > extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + ids > extras = '[' wsp* extras_list?:e wsp* ']' -> e > name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* > quoted_marker?:m > -> (n, e or [], v or [], m)) > url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ quoted_marker?:m > -> (n, e or [], v, m)) > specification = wsp* ( url_req | name_req ):s wsp* -> s > # The result is a tuple - name, list-of-extras, > # list-of-version-constraints-or-a-url, marker-ast or None > > > URI_reference = > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? 
> hier_part = ('//' authority path_abempty) | path_absolute | > path_rootless | path_empty > absolute_URI = scheme ':' hier_part ( '?' query )? > relative_ref = relative_part ( '?' query )? ( '#' fragment )? > relative_part = '//' authority path_abempty | path_absolute | > path_noscheme | path_empty > scheme = letter ( letter | digit | '+' | '-' | '.')* > authority = ( userinfo '@' )? host ( ':' port )? > userinfo = ( unreserved | pct_encoded | sub_delims | ':')* > host = IP_literal | IPv4address | reg_name > port = digit* > IP_literal = '[' ( IPv6address | IPvFuture) ']' > IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ > IPv6address = ( > ( h16 ':'){6} ls32 > | '::' ( h16 ':'){5} ls32 > | ( h16 )? '::' ( h16 ':'){4} ls32 > | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 > | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 > | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 > | ( ( h16 ':'){0,4} h16 )? '::' ls32 > | ( ( h16 ':'){0,5} h16 )? '::' h16 > | ( ( h16 ':'){0,6} h16 )? '::' ) > h16 = hexdig{1,4} > ls32 = ( h16 ':' h16) | IPv4address > IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' Dec_octet > nz = ~'0' digit > dec_octet = ( > digit # 0-9 > | nz digit # 10-99 > | '1' digit{2} # 100-199 > | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 > | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# %250-255 > reg_name = ( unreserved | pct_encoded | sub_delims)* > path = ( > path_abempty # begins with '/' or is empty > | path_absolute # begins with '/' but not '//' > | path_noscheme # begins with a non-colon segment > | path_rootless # begins with a segment > | path_empty ) # zero characters > path_abempty = ( '/' segment)* > path_absolute = '/' ( segment_nz ( '/' segment)* )? > path_noscheme = segment_nz_nc ( '/' segment)* > path_rootless = segment_nz ( '/' segment)* > path_empty = pchar{0} > segment = pchar* > segment_nz = pchar+ > segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ > # non-zero-length segment without any colon ':' > pchar = unreserved | pct_encoded | sub_delims | ':' | '@' > query = ( pchar | '/' | '?')* > fragment = ( pchar | '/' | '?')* > pct_encoded = '%' hexdig > unreserved = letter | digit | '-' | '.' | '_' | '~' > reserved = gen_delims | sub_delims > gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' > sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | > ',' | ';' | '=' > hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | > 'D' | 'e' | 'E' | 'f' | 'F' > > A test program - if the grammar is in a string ``grammar``:: > > import os > import sys > import platform > > from parsley import makeGrammar > > grammar = """ > wsp ... 
> """ > tests = [ > "A", > "aa", > "name", > "name>=3", > "name>=3,<2", > "name [fred,bar] @ http://foo.com ; python_version=='2.7'", > "name[quux, strange];python_version<'2.7' and platform_version=='2'", > "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", > "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", > ] > > def format_full_version(info): > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > kind = info.releaselevel > if kind != 'final': > version += kind[0] + str(info.serial) > return version > > if hasattr(sys, 'implementation'): > implementation_version = format_full_version(sys.implementation.version) > implementation_name = sys.implementation.name > else: > implementation_version = '0' > implementation_name = '' > bindings = { > 'implementation_name': implementation_name, > 'implementation_version': implementation_version, > 'os_name': os.name, > 'platform_machine': platform.machine(), > 'platform_release': platform.release(), > 'platform_system': platform.system(), > 'platform_version': platform.version(), > 'python_full_version': platform.python_version(), > 'python_implementation': platform.python_implementation(), > 'python_version': platform.python_version()[:3], > 'sys_platform': sys.platform, > } > > compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) > for test in tests: > parsed = compiled(test).specification() > print(parsed) > > References > ========== > > .. [#pip] pip, the recommended installer for Python packages > (http://pip.readthedocs.org/en/stable/) > > .. [#pep345] PEP-345, Python distribution metadata version 1.2. > (https://www.python.org/dev/peps/pep-0345/) > > .. [#pep426] PEP-426, Python distribution metadata. > (https://www.python.org/dev/peps/pep-0426/) > > .. [#pep440] PEP-440, Python distribution metadata. > (https://www.python.org/dev/peps/pep-0440/) > > .. [#std66] The URL specification. > (https://tools.ietf.org/html/rfc3986) > > .. [#parsley] The parsley PEG library. > (https://pypi.python.org/pypi/parsley/) > > Copyright > ========= > > This document has been placed in the public domain. > > > > .. > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Sun Nov 22 21:29:09 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 23 Nov 2015 12:29:09 +1000 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> References: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> Message-ID: On 23 November 2015 at 10:45, Donald Stufft wrote: > Okay. I?ve read over this, implemented enough of it, and I think it?s gone through enough nit picking. I?m going to go ahead and accept this PEP. It?s largely just standardizing what we are already doing so it?s pretty low impact other than fixing up a few issues and giving implementations something they can point at for what the standard behavior is. 
> > So congratulations to everyone working on this :) Huzzah! Thanks for working through these details, folks :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From cwilson at cdwilson.us Sun Nov 22 12:49:48 2015 From: cwilson at cdwilson.us (Christopher Wilson) Date: Sun, 22 Nov 2015 09:49:48 -0800 Subject: [Distutils] Does PEP 0440 allow omitting the separator for development releases? Message-ID: (I posted this to http://stackoverflow.com/questions/33849399/does-pep-0440-allow-omitting-the-separator-for-development-releases and was recommended to send an inquiry to this mailer) PEP 0440 includes the following two statements which on first glance seem to be contradictory: Development releases allow a . , - , or a _ separator as well as omitting the separator all together. The normal form of this is with the . separator. This allows versions such as 1.2-dev2 or 1.2dev2 which normalize to 1.2.dev2 . and Note that devN and postN MUST always be preceded by a dot, even when used immediately following a numeric version (e.g. 1.0.dev456 , 1.0.post1 ). Is the 2nd statement true for developers of Python distributions? Or is it stating that the version (once normalized) must always be preceded by a dot? Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon Nov 23 08:22:32 2015 From: donald at stufft.io (Donald Stufft) Date: Mon, 23 Nov 2015 08:22:32 -0500 Subject: [Distutils] Does PEP 0440 allow omitting the separator for development releases? In-Reply-To: References: Message-ID: > On Nov 22, 2015, at 12:49 PM, Christopher Wilson wrote: > > (I posted this to http://stackoverflow.com/questions/33849399/does-pep-0440-allow-omitting-the-separator-for-development-releases and was recommended to send an inquiry to this mailer) > > PEP 0440 includes the following two statements which on first glance seem to be contradictory: > > Development releases allow a . , - , or a _ separator as well as omitting the separator all together. The normal form of this is with the . separator. This allows versions such as 1.2-dev2 or 1.2dev2 which normalize to 1.2.dev2 . > > and > > Note that devN and postN MUST always be preceded by a dot, even when used immediately following a numeric version (e.g. 1.0.dev456 , 1.0.post1 ). > > Is the 2nd statement true for developers of Python distributions? Or is it stating that the version (once normalized) must always be preceded by a dot? > > Thanks, > Chris > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig Once normalized it must have a dot. The normalization is applied first. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Mon Nov 23 11:45:37 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 23 Nov 2015 08:45:37 -0800 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: <56524885.9030900@biologie.uni-freiburg.de> References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> <56524885.9030900@biologie.uni-freiburg.de> Message-ID: On Sun, Nov 22, 2015 at 2:58 PM, Wolfgang Maier < wolfgang.maier at biologie.uni-freiburg.de> wrote: > Useful content starts only 2/3 down the first page. The large "pip >> install six" snippet probably doesn't deserve being that proeminent >> (or being there at all), and is ironically redundant with the "how do >> I install this?" link just below. >> >> > My favorite is "pip install pip". Seriously, is this installation > one-liner going to be configurable by package authors eventually or will it > just always say pip install no-matter-what? This is a good point -- is there a need for generic install instruction on every package page? If so -- it should be small :-) Also -- maybe not the thread to bring this up in -- but maybe it should be:: python -m pip install package_name -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Mon Nov 23 13:33:38 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 23 Nov 2015 12:33:38 -0600 Subject: [Distutils] New Design Landed in Warehouse In-Reply-To: References: <63C4F751-A862-49D6-BF1E-2D1E32C07FD8@stufft.io> <564F4CF7.9060506@level12.io> <20151121084645.20aaa7c1@fsol> <56524885.9030900@biologie.uni-freiburg.de> Message-ID: On Nov 23, 2015 10:46 AM, "Chris Barker" wrote: > > On Sun, Nov 22, 2015 at 2:58 PM, Wolfgang Maier < wolfgang.maier at biologie.uni-freiburg.de> wrote: > >>> >>> Useful content starts only 2/3 down the first page. The large "pip >>> install six" snippet probably doesn't deserve being that proeminent >>> (or being there at all), and is ironically redundant with the "how do >>> I install this?" link just below. >>> >> >> My favorite is "pip install pip". Seriously, is this installation one-liner going to be configurable by package authors eventually or will it just always say pip install no-matter-what? > > > This is a good point -- is there a need for generic install instruction on every package page? > > If so -- it should be small :-) > > Also -- maybe not the thread to bring this up in -- but maybe it should be:: > > python -m pip install package_name > I find this form preferable, especially for [shell] scripting, because: * $PATH: ./bin/python[3.5] and ./bin/pip may not be in the same path (e.g. system, virtualenv, condaenv, --prefix) type python type pip which python which pip python -m site python -m site --user-base python -m site --user-site * declare -rx PYTHONBIN="${VIRTUAL_ENV}/bin/python" "${PYTHONBIN}" -m pip install -U pip "${PYTHONBIN}" -m site https://docs.python.org/2/library/site.html > -CHB > > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Tue Nov 24 17:36:51 2015 From: chris at simplistix.co.uk (Chris Withers) Date: Tue, 24 Nov 2015 22:36:51 +0000 Subject: [Distutils] night build just started failing with TypeError: must be type, not classobj on Python 2.7 Message-ID: <5654E683.9010707@simplistix.co.uk> Hi All, Nightly builds of one of my packages (https://travis-ci.org/Mortar/mortar_rdb) just began failing with: TypeError: must be type, not classobj ...but only on Python 2. The full traceback is here: https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 I haven't pushed any changed to that project in a few weeks/days. Does anyone know what might have changed to cause this problem? I couldn't see any recent releases of setuptools or pip, so wondering what I'm missing... cheers, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at simplistix.co.uk Tue Nov 24 17:37:17 2015 From: chris at simplistix.co.uk (Chris Withers) Date: Tue, 24 Nov 2015 22:37:17 +0000 Subject: [Distutils] nightly build just started failing with TypeError: must be type, not classobj on Python 2.7 In-Reply-To: <5654D5DC.3070602@withers.org> References: <5654D5DC.3070602@withers.org> Message-ID: <5654E69D.5040705@simplistix.co.uk> Looks like it's not isolated to that package: https://travis-ci.org/Simplistix/mush/jobs/93042214 On 24/11/2015 21:25, Chris Withers wrote: > Hi All, > > Nightly builds of one of my packages > (https://travis-ci.org/Mortar/mortar_rdb) just began failing with: > > TypeError: must be type, not classobj > > ...but only on Python 2. The full traceback is here: > > https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 > > I haven't pushed any changed to that project in a few weeks/days. > > Does anyone know what might have changed to cause this problem? I > couldn't see any recent releases of setuptools or pip, so wondering > what I'm missing... > > cheers, > > Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmcgibbo at gmail.com Tue Nov 24 17:53:52 2015 From: rmcgibbo at gmail.com (Robert McGibbon) Date: Tue, 24 Nov 2015 14:53:52 -0800 Subject: [Distutils] nightly build just started failing with TypeError: must be type, not classobj on Python 2.7 In-Reply-To: <5654E69D.5040705@simplistix.co.uk> References: <5654D5DC.3070602@withers.org> <5654E69D.5040705@simplistix.co.uk> Message-ID: It looks like this is a setuptools bug that has been reported upstream [1]. [1] https://bitbucket.org/pypa/setuptools/issues/464/typeerror-in-install_wrapper_scripts -Robert On Tue, Nov 24, 2015 at 2:37 PM, Chris Withers wrote: > Looks like it's not isolated to that package: > > https://travis-ci.org/Simplistix/mush/jobs/93042214 > > On 24/11/2015 21:25, Chris Withers wrote: > > Hi All, > > Nightly builds of one of my packages ( > https://travis-ci.org/Mortar/mortar_rdb) just began failing with: > > TypeError: must be type, not classobj > > ...but only on Python 2. 
The full traceback is here: > > https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 > > I haven't pushed any changed to that project in a few weeks/days. > > Does anyone know what might have changed to cause this problem? I couldn't > see any recent releases of setuptools or pip, so wondering what I'm > missing... > > cheers, > > Chris > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at withers.org Tue Nov 24 16:25:48 2015 From: chris at withers.org (Chris Withers) Date: Tue, 24 Nov 2015 21:25:48 +0000 Subject: [Distutils] night build just started failing with TypeError: must be type, not classobj on Python 2.7 Message-ID: <5654D5DC.3070602@withers.org> Hi All, Nightly builds of one of my packages (https://travis-ci.org/Mortar/mortar_rdb) just began failing with: TypeError: must be type, not classobj ...but only on Python 2. The full traceback is here: https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 I haven't pushed any changed to that project in a few weeks/days. Does anyone know what might have changed to cause this problem? I couldn't see any recent releases of setuptools or pip, so wondering what I'm missing... cheers, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at withers.org Tue Nov 24 17:21:43 2015 From: chris at withers.org (Chris Withers) Date: Tue, 24 Nov 2015 22:21:43 +0000 Subject: [Distutils] night build just started failing with TypeError: must be type, not classobj on Python 2.7 In-Reply-To: <5654D5DC.3070602@withers.org> References: <5654D5DC.3070602@withers.org> Message-ID: <5654E2F7.3040707@withers.org> Looks like it's not isolated to that package: https://travis-ci.org/Simplistix/mush/jobs/93042214 On 24/11/2015 21:25, Chris Withers wrote: > Hi All, > > Nightly builds of one of my packages > (https://travis-ci.org/Mortar/mortar_rdb) just began failing with: > > TypeError: must be type, not classobj > > ...but only on Python 2. The full traceback is here: > > https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 > > I haven't pushed any changed to that project in a few weeks/days. > > Does anyone know what might have changed to cause this problem? I > couldn't see any recent releases of setuptools or pip, so wondering > what I'm missing... > > cheers, > > Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at withers.org Tue Nov 24 17:37:08 2015 From: chris at withers.org (Chris Withers) Date: Tue, 24 Nov 2015 22:37:08 +0000 Subject: [Distutils] nightly build just started failing with TypeError: must be type, not classobj on Python 2.7 In-Reply-To: <5654D5DC.3070602@withers.org> References: <5654D5DC.3070602@withers.org> Message-ID: <5654E694.1010203@withers.org> Looks like it's not isolated to that package: https://travis-ci.org/Simplistix/mush/jobs/93042214 On 24/11/2015 21:25, Chris Withers wrote: > Hi All, > > Nightly builds of one of my packages > (https://travis-ci.org/Mortar/mortar_rdb) just began failing with: > > TypeError: must be type, not classobj > > ...but only on Python 2. The full traceback is here: > > https://travis-ci.org/Mortar/mortar_rdb/jobs/93030519#L256 > > I haven't pushed any changed to that project in a few weeks/days. > > Does anyone know what might have changed to cause this problem? 
I > couldn't see any recent releases of setuptools or pip, so wondering > what I'm missing... > > cheers, > > Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu Nov 26 13:48:03 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 27 Nov 2015 07:48:03 +1300 Subject: [Distutils] build system abstraction PEP, take #2 In-Reply-To: References: <20151117152712.2c162dcb@fsol> <681D877A-C3FF-485F-B6A2-34701E336037@stufft.io> <20151117154547.7e73e88b@fsol> Message-ID: Updated with: - the pep 0508 reference - reviews from github - document --prefix and --root to develop. -Rob diff --git a/build-system-abstraction.rst b/build-system-abstraction.rst index d36b7d5..762cd88 100644 --- a/build-system-abstraction.rst +++ b/build-system-abstraction.rst @@ -40,9 +40,9 @@ easy-install. As PEP-426 [#pep426]_ is draft, we cannot utilise the metadata format it defined. However PEP-427 wheels are in wide use and fairly well specified, so we have adopted the METADATA format from that for specifying distribution -dependencies. However something was needed for communicating bootstrap -requirements and build requirements - but a thin JSON schema is sufficient -when overlaid over the new dependency specification PEP. +dependencies and general project metadata. PEP-0508 [#pep508] provides a self +contained language for describing a dependency, which we encapsulate in a thin +JSON schema to describe bootstrap dependencies. Motivation ========== @@ -123,6 +123,8 @@ PYTHON ${PYTHON} -m foo +PYTHONPATH + Used to control sys.path per the normal Python mechanisms. Subcommands ----------- @@ -164,7 +166,7 @@ wheel -d OUTPUT_DIR flit wheel -d /tmp/pip-build_1234 -develop +develop [--prefix PREFIX] [--root ROOT] Command to do an in-place 'development' installation of the project. Stdout and stderr have no semantic meaning. @@ -173,6 +175,20 @@ develop that doing so will cause use operations like ``pip install -e foo`` to fail. + The prefix option is used for defining an alternative prefix within the + installation root. + + The root option is used to define an alternative root within which the + command should operate. + + For instance:: + + flit develop --root /tmp/ --prefix /usr/local + + Should install scripts within `/tmp/usr/local/bin`, even if the Python + environment in use reports that the sys.prefix is `/usr/` which would lead + to using `/tmp/usr/bin/`. Similar logic applies for package files etc. + The build environment --------------------- @@ -326,7 +342,7 @@ had. Minutes from that were posted to the list [#minutes]_. This specification is a translation of the consensus reached there into PEP form, along with some arbitrary choices on the minor remaining questions. -The basic heuristic for the design has to been to focus on introducing an +The basic heuristic for the design has been to focus on introducing an abstraction without requiring development not strictly tied to the abstraction. Where the gap is small to improvements, or the cost of using the existing interface is very high, then we've taken on having the improvement as @@ -343,7 +359,7 @@ CLI). The use of 'develop' as a command is because there is no PEP specifying the interoperability of things that do what 'setuptools develop' does - so we'll need to define that before pip can take on the responsibility for doing the -'develop' step. Once thats done we can issue a successor PEP to this one. +'develop' step. Once that's done we can issue a successor PEP to this one. 
The use of a command line API rather than a Python API is a little contentious. Fundamentally anything can be made to work, and the pip @@ -410,8 +426,8 @@ References .. [#strformat] The Python string formatting syntax. (https://docs.python.org/3.1/library/string.html#format-string-syntax) -.. [#dependencyspec] Dependency specification language PEP. - (https://github.com/pypa/interoperability-peps/pull/56) +.. [#pep508] Dependency specification language PEP. + (https://www.python.org/dev/peps/pep-0508/) Copyright ========= -- Robert Collins Distinguished Technologist HP Converged Cloud From qwcode at gmail.com Wed Nov 25 15:05:49 2015 From: qwcode at gmail.com (Marcus Smith) Date: Wed, 25 Nov 2015 12:05:49 -0800 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> References: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> Message-ID: PEP number yet? On Sun, Nov 22, 2015 at 4:45 PM, Donald Stufft wrote: > Okay. I?ve read over this, implemented enough of it, and I think it?s gone > through enough nit picking. I?m going to go ahead and accept this PEP. It?s > largely just standardizing what we are already doing so it?s pretty low > impact other than fixing up a few issues and giving implementations > something they can point at for what the standard behavior is. > > So congratulations to everyone working on this :) > > I?ll get this into the PEPs repo and get it pushed. > > > > On Nov 16, 2015, at 3:46 PM, Robert Collins > wrote: > > > > :PEP: XX > > :Title: Dependency specification for Python Software Packages > > :Version: $Revision$ > > :Last-Modified: $Date$ > > :Author: Robert Collins > > :BDFL-Delegate: Donald Stufft > > :Discussions-To: distutils-sig > > :Status: Draft > > :Type: Standards Track > > :Content-Type: text/x-rst > > :Created: 11-Nov-2015 > > :Post-History: XX > > > > > > Abstract > > ======== > > > > This PEP specifies the language used to describe dependencies for > packages. > > It draws a border at the edge of describing a single dependency - the > > different sorts of dependencies and when they should be installed is a > higher > > level problem. The intent is provide a building block for higher layer > > specifications. > > > > The job of a dependency is to enable tools like pip [#pip]_ to find the > right > > package to install. Sometimes this is very loose - just specifying a > name, and > > sometimes very specific - referring to a specific file to install. > Sometimes > > dependencies are only relevant in one platform, or only some versions are > > acceptable, so the language permits describing all these cases. > > > > The language defined is a compact line based format which is already in > > widespread use in pip requirements files, though we do not specify the > command > > line option handling that those files permit. There is one caveat - the > > URL reference form, specified in PEP-440 [#pep440]_ is not actually > > implemented in pip, but since PEP-440 is accepted, we use that format > rather > > than pip's current native format. > > > > Motivation > > ========== > > > > Any specification in the Python packaging ecosystem that needs to consume > > lists of dependencies needs to build on an approved PEP for such, but > > PEP-426 [#pep426]_ is mostly aspirational - and there are already > existing > > implementations of the dependency specification which we can instead > adopt. 
> > The existing implementations are battle proven and user friendly, so > adopting > > them is arguably much better than approving an aspirational, unconsumed, > format. > > > > Specification > > ============= > > > > Examples > > -------- > > > > All features of the language shown with a name based lookup:: > > > > requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < > "2.7.10" > > > > A minimal URL based lookup:: > > > > pip @ > https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 > > > > Concepts > > -------- > > > > A dependency specification always specifies a distribution name. It may > > include extras, which expand the dependencies of the named distribution > to > > enable optional features. The version installed can be controlled using > > version limits, or giving the URL to a specific artifact to install. > Finally > > the dependency can be made conditional using environment markers. > > > > Grammar > > ------- > > > > We first cover the grammar briefly and then drill into the semantics of > each > > section later. > > > > A distribution specification is written in ASCII text. We use a parsley > > [#parsley]_ grammar to provide a precise grammar. It is expected that the > > specification will be embedded into a larger system which offers framing > such > > as comments, multiple line support via continuations, or other such > features. > > > > The full grammar including annotations to build a useful parse tree is > > included at the end of the PEP. > > > > Versions may be specified according to the PEP-440 [#pep440]_ rules. > (Note: > > URI is defined in std-66 [#std66]_:: > > > > version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | > '===' > > version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ > > version_one = version_cmp version wsp* > > version_many = version_one (wsp* ',' version_one)* > > versionspec = ( '(' version_many ')' ) | version_many > > urlspec = '@' wsp* > > > > Environment markers allow making a specification only take effect in some > > environments:: > > > > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | > > '-' | '_' | '*') > > dquote = '"' > > squote = '\\'' > > python_str = (squote (python_str_c | dquote)* squote | > > dquote (python_str_c | squote)* dquote) > > env_var = ('python_version' | 'python_full_version' | > > 'os_name' | 'sys_platform' | 'platform_release' | > > 'platform_system' | 'platform_version' | > > 'platform_machine' | 'python_implementation' | > > 'implementation_name' | 'implementation_version' | > > 'extra' # ONLY when defined by a containing layer > > ) > > marker_var = env_var | python_str > > marker_expr = ('(' wsp* marker wsp* ')' > > | (marker_var wsp* marker_op wsp* marker_var)) > > marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* > marker_expr)* > > quoted_marker = ';' wsp* marker > > > > Optional components of a distribution may be specified using the extras > > field:: > > > > identifier = letterOrDigit ( > > letterOrDigit | > > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) > )* > > name = identifier > > extras_list = identifier (wsp* ',' wsp* identifier)* > > extras = '[' wsp* extras_list? wsp* ']' > > > > Giving us a rule for name based requirements:: > > > > name_req = name wsp* extras? wsp* versionspec? wsp* > quoted_marker? > > > > And a rule for direct reference specifications:: > > > > url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? 
> > > > Leading to the unified rule that can specify a dependency.:: > > > > specification = wsp* ( url_req | name_req ) wsp* > > > > Whitespace > > ---------- > > > > Non line-breaking whitespace is mostly optional with no semantic > meaning. The > > sole exception is detecting the end of a URL requirement. > > > > Names > > ----- > > > > Python distribution names are currently defined in PEP-345 [#pep345]_. > Names > > act as the primary identifier for distributions. They are present in all > > dependency specifications, and are sufficient to be a specification on > their > > own. However, PyPI places strict restrictions on names - they must match > a > > case insensitive regex or they won't be accepted. Accordingly in this > PEP we > > limit the acceptable values for identifiers to that regex. A full > redefinition > > of name may take place in a future metadata PEP:: > > > > ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ > > > > Extras > > ------ > > > > An extra is an optional part of a distribution. Distributions can > specify as > > many extras as they wish, and each extra results in the declaration of > > additional dependencies of the distribution **when** the extra is used > in a > > dependency specification. For instance:: > > > > requests[security] > > > > Extras union in the dependencies they define with the dependencies of the > > distribution they are attached to. The example above would result in > requests > > being installed, and requests own dependencies, and also any > dependencies that > > are listed in the "security" extra of requests. > > > > If multiple extras are listed, all the dependencies are unioned together. > > > > Versions > > -------- > > > > See PEP-440 [#pep440]_ for more detail on both version numbers and > version > > comparisons. Version specifications limit the versions of a distribution > that > > can be used. They only apply to distributions looked up by name, rather > than > > via a URL. Version comparison are also used in the markers feature. The > > optional brackets around a version are present for compatibility with > PEP-345 > > [#pep345]_ but should not be generated, only accepted. > > > > Environment Markers > > ------------------- > > > > Environment markers allow a dependency specification to provide a rule > that > > describes when the dependency should be used. For instance, consider a > package > > that needs argparse. In Python 2.7 argparse is always present. On older > Python > > versions it has to be installed as a dependency. This can be expressed > as so:: > > > > argparse;python_version<"2.7" > > > > A marker expression evalutes to either True or False. When it evaluates > to > > False, the dependency specification should be ignored. > > > > The marker language is a subset of Python itself, chosen for the ability > to > > safely evaluate it without running arbitrary code that could become a > security > > vulnerability. Markers were first standardised in PEP-345 [#pep345]_. > This PEP > > fixes some issues that were observed in the design described in PEP-426 > > [#pep426]_. > > > > Comparisons in marker expressions are typed by the comparison operator. > The > > operators that are not in perform the same as > they > > do for strings in Python. The operators use the PEP-440 > > [#pep440]_ version comparison rules when those are defined (that is when > both > > sides have a valid version specifier). If there is no defined PEP-440 > > behaviour and the operator exists in Python, then the operator falls > back to > > the Python behaviour. 
Otherwise an error should be raised. e.g. the > following > > will result in errors:: > > > > "dog" ~= "fred" > > python_version ~= "surprise" > > > > User supplied constants are always encoded as strings with either ``'`` > or > > ``"`` quote marks. Note that backslash escapes are not defined, but > existing > > implementations do support them. They are not included in this > > specification because they add complexity and there is no observable > need for > > them today. Similarly we do not define non-ASCII character support: all > the > > runtime variables we are referencing are expected to be ASCII-only. > > > > The variables in the marker grammar such as "os_name" resolve to values > looked > > up in the Python runtime. With the exception of "extra" all values are > defined > > on all Python versions today - it is an error in the implementation of > markers > > if a value is not defined. > > > > Unknown variables must raise an error rather than resulting in a > comparison > > that evaluates to True or False. > > > > Variables whose value cannot be calculated on a given Python > implementation > > should evaluate to ``0`` for versions, and an empty string for all other > > variables. > > > > The "extra" variable is special. It is used by wheels to signal which > > specifications apply to a given extra in the wheel ``METADATA`` file, but > > since the ``METADATA`` file is based on a draft version of PEP-426, > there is > > no current specification for this. Regardless, outside of a context > where this > > special handling is taking place, the "extra" variable should result in > an > > error like all other unknown variables. > > > > .. list-table:: > > :header-rows: 1 > > > > * - Marker > > - Python equivalent > > - Sample values > > * - ``os_name`` > > - ``os.name`` > > - ``posix``, ``java`` > > * - ``sys_platform`` > > - ``sys.platform`` > > - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that > "linux" > > is from Python3 and "linux2" from Python2) > > * - ``platform_machine`` > > - ``platform.machine()`` > > - ``x86_64`` > > * - ``python_implementation`` > > - ``platform.python_implementation()`` > > - ``CPython``, ``Jython`` > > * - ``platform_release`` > > - ``platform.release()`` > > - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` > > * - ``platform_system`` > > - ``platform.system()`` > > - ``Linux``, ``Windows``, ``Java`` > > * - ``platform_version`` > > - ``platform.version()`` > > - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` > > ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle > Corporation`` > > ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; > > root:xnu-2782.40.9~2/RELEASE_X86_64`` > > * - ``python_version`` > > - ``platform.python_version()[:3]`` > > - ``3.4``, ``2.7`` > > * - ``python_full_version`` > > - ``platform.python_version()`` > > - ``3.4.0``, ``3.5.0b1`` > > * - ``implementation_name`` > > - ``sys.implementation.name`` > > - ``cpython`` > > * - ``implementation_version`` > > - see definition below > > - ``3.4.0``, ``3.5.0b1`` > > * - ``extra`` > > - An error except when defined by the context interpreting the > > specification. 
> > - ``test`` > > > > The ``implementation_version`` marker variable is derived from > > ``sys.implementation.version``:: > > > > def format_full_version(info): > > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > > kind = info.releaselevel > > if kind != 'final': > > version += kind[0] + str(info.serial) > > return version > > > > if hasattr(sys, 'implementation'): > > implementation_version = > format_full_version(sys.implementation.version) > > else: > > implementation_version = "0" > > > > Backwards Compatibility > > ======================= > > > > Most of this PEP is already widely deployed and thus offers no > compatibiltiy > > concerns. > > > > There are however a few points where the PEP differs from the deployed > base. > > > > Firstly, PEP-440 direct references haven't actually been deployed in the > wild, > > but they were designed to be compatibly added, and there are no known > > obstacles to adding them to pip or other tools that consume the existing > > dependency metadata in distributions - particularly since they won't be > > permitted to be present in PyPI uploaded distributions anyway. > > > > Secondly, PEP-426 markers which have had some reasonable deployment, > > particularly in wheels and pip, will handle version comparisons with > > ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" is > less > > than "2.7.9". This backward incompatibility is deliberate. We are also > > defining new operators - "~=" and "===", and new variables - > > ``platform_release``, ``platform_system``, ``implementation_name``, and > > ``implementation_version`` which are not present in older marker > > implementations. The variables will error on those implementations. > Users of > > both features will need to make a judgement as to when support has become > > sufficiently widespread in the ecosystem that using them will not cause > > compatibility issues. > > > > Thirdly, PEP-345 required brackets around version specifiers. In order to > > accept PEP-345 dependency specifications, brackets are accepted, but they > > should not be generated. > > > > Rationale > > ========= > > > > In order to move forward with any new PEPs that depend on environment > markers, > > we needed a specification that included them in their modern form. This > PEP > > brings together all the currently unspecified components into a specified > > form. > > > > The requirement specifier was adopted from the EBNF in the setuptools > > pkg_resources documentation, since we wish to avoid depending on a > defacto, vs > > PEP specified, standard. > > > > Complete Grammar > > ================ > > > > The complete parsley grammar:: > > > > wsp = ' ' | '\t' > > version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | > '==='> > > version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | > > '+' | '!' )+> > > version_one = version_cmp:op version:v wsp* -> (op, v) > > version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + v2 > > versionspec = ('(' version_many:v ')' ->v) | version_many > > urlspec = '@' wsp* > > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' > > python_str_c = (wsp | letter | digit | '(' | ')' | '.' 
| '{' | '}' | > > '-' | '_' | '*' | '#') > > dquote = '"' > > squote = '\\'' > > python_str = (squote <(python_str_c | dquote)*>:s squote | > > dquote <(python_str_c | squote)*>:s dquote) -> s > > env_var = ('python_version' | 'python_full_version' | > > 'os_name' | 'sys_platform' | 'platform_release' | > > 'platform_system' | 'platform_version' | > > 'platform_machine' | 'python_implementation' | > > 'implementation_name' | 'implementation_version' | > > 'extra' # ONLY when defined by a containing layer > > ):varname -> lookup(varname) > > marker_var = env_var | python_str > > marker_expr = (("(" wsp* marker:m wsp* ")" -> m) > > | ((marker_var:l wsp* marker_op:o wsp* > marker_var:r)) > > -> (l, o, r)) > > marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* > > marker_expr:r -> (o, r))*:ms -> (m, ms)) > > quoted_marker = ';' wsp* marker > > identifier = > letterOrDigit | > > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) > )*> > > name = identifier > > extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] + > ids > > extras = '[' wsp* extras_list?:e wsp* ']' -> e > > name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* > > quoted_marker?:m > > -> (n, e or [], v or [], m)) > > url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ > quoted_marker?:m > > -> (n, e or [], v, m)) > > specification = wsp* ( url_req | name_req ):s wsp* -> s > > # The result is a tuple - name, list-of-extras, > > # list-of-version-constraints-or-a-url, marker-ast or None > > > > > > URI_reference = > > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? > > hier_part = ('//' authority path_abempty) | path_absolute | > > path_rootless | path_empty > > absolute_URI = scheme ':' hier_part ( '?' query )? > > relative_ref = relative_part ( '?' query )? ( '#' fragment )? > > relative_part = '//' authority path_abempty | path_absolute | > > path_noscheme | path_empty > > scheme = letter ( letter | digit | '+' | '-' | '.')* > > authority = ( userinfo '@' )? host ( ':' port )? > > userinfo = ( unreserved | pct_encoded | sub_delims | ':')* > > host = IP_literal | IPv4address | reg_name > > port = digit* > > IP_literal = '[' ( IPv6address | IPvFuture) ']' > > IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ > > IPv6address = ( > > ( h16 ':'){6} ls32 > > | '::' ( h16 ':'){5} ls32 > > | ( h16 )? '::' ( h16 ':'){4} ls32 > > | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 > > | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 > > | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 > > | ( ( h16 ':'){0,4} h16 )? '::' ls32 > > | ( ( h16 ':'){0,5} h16 )? '::' h16 > > | ( ( h16 ':'){0,6} h16 )? '::' ) > > h16 = hexdig{1,4} > > ls32 = ( h16 ':' h16) | IPv4address > > IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' Dec_octet > > nz = ~'0' digit > > dec_octet = ( > > digit # 0-9 > > | nz digit # 10-99 > > | '1' digit{2} # 100-199 > > | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 > > | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# > %250-255 > > reg_name = ( unreserved | pct_encoded | sub_delims)* > > path = ( > > path_abempty # begins with '/' or is empty > > | path_absolute # begins with '/' but not '//' > > | path_noscheme # begins with a non-colon segment > > | path_rootless # begins with a segment > > | path_empty ) # zero characters > > path_abempty = ( '/' segment)* > > path_absolute = '/' ( segment_nz ( '/' segment)* )? 
> > path_noscheme = segment_nz_nc ( '/' segment)* > > path_rootless = segment_nz ( '/' segment)* > > path_empty = pchar{0} > > segment = pchar* > > segment_nz = pchar+ > > segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ > > # non-zero-length segment without any colon ':' > > pchar = unreserved | pct_encoded | sub_delims | ':' | '@' > > query = ( pchar | '/' | '?')* > > fragment = ( pchar | '/' | '?')* > > pct_encoded = '%' hexdig > > unreserved = letter | digit | '-' | '.' | '_' | '~' > > reserved = gen_delims | sub_delims > > gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' > > sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | > > ',' | ';' | '=' > > hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | > > 'D' | 'e' | 'E' | 'f' | 'F' > > > > A test program - if the grammar is in a string ``grammar``:: > > > > import os > > import sys > > import platform > > > > from parsley import makeGrammar > > > > grammar = """ > > wsp ... > > """ > > tests = [ > > "A", > > "aa", > > "name", > > "name>=3", > > "name>=3,<2", > > "name [fred,bar] @ http://foo.com ; python_version=='2.7'", > > "name[quux, strange];python_version<'2.7' and > platform_version=='2'", > > "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", > > "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", > > ] > > > > def format_full_version(info): > > version = '{0.major}.{0.minor}.{0.micro}'.format(info) > > kind = info.releaselevel > > if kind != 'final': > > version += kind[0] + str(info.serial) > > return version > > > > if hasattr(sys, 'implementation'): > > implementation_version = > format_full_version(sys.implementation.version) > > implementation_name = sys.implementation.name > > else: > > implementation_version = '0' > > implementation_name = '' > > bindings = { > > 'implementation_name': implementation_name, > > 'implementation_version': implementation_version, > > 'os_name': os.name, > > 'platform_machine': platform.machine(), > > 'platform_release': platform.release(), > > 'platform_system': platform.system(), > > 'platform_version': platform.version(), > > 'python_full_version': platform.python_version(), > > 'python_implementation': platform.python_implementation(), > > 'python_version': platform.python_version()[:3], > > 'sys_platform': sys.platform, > > } > > > > compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) > > for test in tests: > > parsed = compiled(test).specification() > > print(parsed) > > > > References > > ========== > > > > .. [#pip] pip, the recommended installer for Python packages > > (http://pip.readthedocs.org/en/stable/) > > > > .. [#pep345] PEP-345, Python distribution metadata version 1.2. > > (https://www.python.org/dev/peps/pep-0345/) > > > > .. [#pep426] PEP-426, Python distribution metadata. > > (https://www.python.org/dev/peps/pep-0426/) > > > > .. [#pep440] PEP-440, Python distribution metadata. > > (https://www.python.org/dev/peps/pep-0440/) > > > > .. [#std66] The URL specification. > > (https://tools.ietf.org/html/rfc3986) > > > > .. [#parsley] The parsley PEG library. > > (https://pypi.python.org/pypi/parsley/) > > > > Copyright > > ========= > > > > This document has been placed in the public domain. > > > > > > > > .. 
> > Local Variables: > > mode: indented-text > > indent-tabs-mode: nil > > sentence-end-double-space: t > > fill-column: 70 > > coding: utf-8 > > End: > > > > > > -- > > Robert Collins > > Distinguished Technologist > > HP Converged Cloud > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xav.fernandez at gmail.com Fri Nov 27 08:28:58 2015 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Fri, 27 Nov 2015 14:28:58 +0100 Subject: [Distutils] FINAL DRAFT: Dependency specifier PEP In-Reply-To: References: <05AC1735-1744-4F8B-B1EE-FC9D70312148@stufft.io> Message-ID: https://www.python.org/dev/peps/pep-0508/ On Wed, Nov 25, 2015 at 9:05 PM, Marcus Smith wrote: > PEP number yet? > > On Sun, Nov 22, 2015 at 4:45 PM, Donald Stufft wrote: > >> Okay. I?ve read over this, implemented enough of it, and I think it?s >> gone through enough nit picking. I?m going to go ahead and accept this PEP. >> It?s largely just standardizing what we are already doing so it?s pretty >> low impact other than fixing up a few issues and giving implementations >> something they can point at for what the standard behavior is. >> >> So congratulations to everyone working on this :) >> >> I?ll get this into the PEPs repo and get it pushed. >> >> >> > On Nov 16, 2015, at 3:46 PM, Robert Collins >> wrote: >> > >> > :PEP: XX >> > :Title: Dependency specification for Python Software Packages >> > :Version: $Revision$ >> > :Last-Modified: $Date$ >> > :Author: Robert Collins >> > :BDFL-Delegate: Donald Stufft >> > :Discussions-To: distutils-sig >> > :Status: Draft >> > :Type: Standards Track >> > :Content-Type: text/x-rst >> > :Created: 11-Nov-2015 >> > :Post-History: XX >> > >> > >> > Abstract >> > ======== >> > >> > This PEP specifies the language used to describe dependencies for >> packages. >> > It draws a border at the edge of describing a single dependency - the >> > different sorts of dependencies and when they should be installed is a >> higher >> > level problem. The intent is provide a building block for higher layer >> > specifications. >> > >> > The job of a dependency is to enable tools like pip [#pip]_ to find the >> right >> > package to install. Sometimes this is very loose - just specifying a >> name, and >> > sometimes very specific - referring to a specific file to install. >> Sometimes >> > dependencies are only relevant in one platform, or only some versions >> are >> > acceptable, so the language permits describing all these cases. >> > >> > The language defined is a compact line based format which is already in >> > widespread use in pip requirements files, though we do not specify the >> command >> > line option handling that those files permit. There is one caveat - the >> > URL reference form, specified in PEP-440 [#pep440]_ is not actually >> > implemented in pip, but since PEP-440 is accepted, we use that format >> rather >> > than pip's current native format. 
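As a quick illustration of the specifier language the draft above describes, the sketch below parses a name-based and a URL-based specifier. It assumes the `packaging` project's Requirement class as the parser (an assumption, not part of the PEP); a conforming parser for this grammar would expose similar pieces: name, extras, version constraints, URL and marker.

    from packaging.requirements import Requirement

    # Name-based lookup: extras, version limits and an environment marker.
    req = Requirement(
        "requests[security,tests]>=2.8.1,==2.8.*; python_version < '2.7.10'")
    print(req.name, sorted(req.extras), str(req.specifier))

    # Markers evaluate against the running interpreter.
    if req.marker is not None:
        print(req.marker.evaluate())

    # URL-based (direct reference) lookup.
    url_req = Requirement("pip @ https://github.com/pypa/pip/archive/1.3.1.zip")
    print(url_req.name, url_req.url)
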
>> > >> > Motivation >> > ========== >> > >> > Any specification in the Python packaging ecosystem that needs to >> consume >> > lists of dependencies needs to build on an approved PEP for such, but >> > PEP-426 [#pep426]_ is mostly aspirational - and there are already >> existing >> > implementations of the dependency specification which we can instead >> adopt. >> > The existing implementations are battle proven and user friendly, so >> adopting >> > them is arguably much better than approving an aspirational, >> unconsumed, format. >> > >> > Specification >> > ============= >> > >> > Examples >> > -------- >> > >> > All features of the language shown with a name based lookup:: >> > >> > requests [security,tests] >= 2.8.1, == 2.8.* ; python_version < >> "2.7.10" >> > >> > A minimal URL based lookup:: >> > >> > pip @ >> https://github.com/pypa/pip/archive/1.3.1.zip#sha1=da9234ee9982d4bbb3c72346a6de940a148ea686 >> > >> > Concepts >> > -------- >> > >> > A dependency specification always specifies a distribution name. It may >> > include extras, which expand the dependencies of the named distribution >> to >> > enable optional features. The version installed can be controlled using >> > version limits, or giving the URL to a specific artifact to install. >> Finally >> > the dependency can be made conditional using environment markers. >> > >> > Grammar >> > ------- >> > >> > We first cover the grammar briefly and then drill into the semantics of >> each >> > section later. >> > >> > A distribution specification is written in ASCII text. We use a parsley >> > [#parsley]_ grammar to provide a precise grammar. It is expected that >> the >> > specification will be embedded into a larger system which offers >> framing such >> > as comments, multiple line support via continuations, or other such >> features. >> > >> > The full grammar including annotations to build a useful parse tree is >> > included at the end of the PEP. >> > >> > Versions may be specified according to the PEP-440 [#pep440]_ rules. >> (Note: >> > URI is defined in std-66 [#std66]_:: >> > >> > version_cmp = wsp* '<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | >> '===' >> > version = wsp* ( letterOrDigit | '-' | '_' | '.' | '*' )+ >> > version_one = version_cmp version wsp* >> > version_many = version_one (wsp* ',' version_one)* >> > versionspec = ( '(' version_many ')' ) | version_many >> > urlspec = '@' wsp* >> > >> > Environment markers allow making a specification only take effect in >> some >> > environments:: >> > >> > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' >> > python_str_c = (wsp | letter | digit | '(' | ')' | '.' 
| '{' | '}' | >> > '-' | '_' | '*') >> > dquote = '"' >> > squote = '\\'' >> > python_str = (squote (python_str_c | dquote)* squote | >> > dquote (python_str_c | squote)* dquote) >> > env_var = ('python_version' | 'python_full_version' | >> > 'os_name' | 'sys_platform' | 'platform_release' | >> > 'platform_system' | 'platform_version' | >> > 'platform_machine' | 'python_implementation' | >> > 'implementation_name' | 'implementation_version' | >> > 'extra' # ONLY when defined by a containing layer >> > ) >> > marker_var = env_var | python_str >> > marker_expr = ('(' wsp* marker wsp* ')' >> > | (marker_var wsp* marker_op wsp* marker_var)) >> > marker = wsp* marker_expr ( wsp* ('and' | 'or') wsp* >> marker_expr)* >> > quoted_marker = ';' wsp* marker >> > >> > Optional components of a distribution may be specified using the extras >> > field:: >> > >> > identifier = letterOrDigit ( >> > letterOrDigit | >> > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) >> )* >> > name = identifier >> > extras_list = identifier (wsp* ',' wsp* identifier)* >> > extras = '[' wsp* extras_list? wsp* ']' >> > >> > Giving us a rule for name based requirements:: >> > >> > name_req = name wsp* extras? wsp* versionspec? wsp* >> quoted_marker? >> > >> > And a rule for direct reference specifications:: >> > >> > url_req = name wsp* extras? wsp* urlspec wsp+ quoted_marker? >> > >> > Leading to the unified rule that can specify a dependency.:: >> > >> > specification = wsp* ( url_req | name_req ) wsp* >> > >> > Whitespace >> > ---------- >> > >> > Non line-breaking whitespace is mostly optional with no semantic >> meaning. The >> > sole exception is detecting the end of a URL requirement. >> > >> > Names >> > ----- >> > >> > Python distribution names are currently defined in PEP-345 [#pep345]_. >> Names >> > act as the primary identifier for distributions. They are present in all >> > dependency specifications, and are sufficient to be a specification on >> their >> > own. However, PyPI places strict restrictions on names - they must >> match a >> > case insensitive regex or they won't be accepted. Accordingly in this >> PEP we >> > limit the acceptable values for identifiers to that regex. A full >> redefinition >> > of name may take place in a future metadata PEP:: >> > >> > ^([A-Z0-9]|[A-Z0-9][A-Z0-9._-]*[A-Z0-9])$ >> > >> > Extras >> > ------ >> > >> > An extra is an optional part of a distribution. Distributions can >> specify as >> > many extras as they wish, and each extra results in the declaration of >> > additional dependencies of the distribution **when** the extra is used >> in a >> > dependency specification. For instance:: >> > >> > requests[security] >> > >> > Extras union in the dependencies they define with the dependencies of >> the >> > distribution they are attached to. The example above would result in >> requests >> > being installed, and requests own dependencies, and also any >> dependencies that >> > are listed in the "security" extra of requests. >> > >> > If multiple extras are listed, all the dependencies are unioned >> together. >> > >> > Versions >> > -------- >> > >> > See PEP-440 [#pep440]_ for more detail on both version numbers and >> version >> > comparisons. Version specifications limit the versions of a >> distribution that >> > can be used. They only apply to distributions looked up by name, rather >> than >> > via a URL. Version comparison are also used in the markers feature. 
The >> > optional brackets around a version are present for compatibility with >> PEP-345 >> > [#pep345]_ but should not be generated, only accepted. >> > >> > Environment Markers >> > ------------------- >> > >> > Environment markers allow a dependency specification to provide a rule >> that >> > describes when the dependency should be used. For instance, consider a >> package >> > that needs argparse. In Python 2.7 argparse is always present. On older >> Python >> > versions it has to be installed as a dependency. This can be expressed >> as so:: >> > >> > argparse;python_version<"2.7" >> > >> > A marker expression evalutes to either True or False. When it evaluates >> to >> > False, the dependency specification should be ignored. >> > >> > The marker language is a subset of Python itself, chosen for the >> ability to >> > safely evaluate it without running arbitrary code that could become a >> security >> > vulnerability. Markers were first standardised in PEP-345 [#pep345]_. >> This PEP >> > fixes some issues that were observed in the design described in PEP-426 >> > [#pep426]_. >> > >> > Comparisons in marker expressions are typed by the comparison >> operator. The >> > operators that are not in perform the same as >> they >> > do for strings in Python. The operators use the PEP-440 >> > [#pep440]_ version comparison rules when those are defined (that is >> when both >> > sides have a valid version specifier). If there is no defined PEP-440 >> > behaviour and the operator exists in Python, then the operator falls >> back to >> > the Python behaviour. Otherwise an error should be raised. e.g. the >> following >> > will result in errors:: >> > >> > "dog" ~= "fred" >> > python_version ~= "surprise" >> > >> > User supplied constants are always encoded as strings with either ``'`` >> or >> > ``"`` quote marks. Note that backslash escapes are not defined, but >> existing >> > implementations do support them. They are not included in this >> > specification because they add complexity and there is no observable >> need for >> > them today. Similarly we do not define non-ASCII character support: all >> the >> > runtime variables we are referencing are expected to be ASCII-only. >> > >> > The variables in the marker grammar such as "os_name" resolve to values >> looked >> > up in the Python runtime. With the exception of "extra" all values are >> defined >> > on all Python versions today - it is an error in the implementation of >> markers >> > if a value is not defined. >> > >> > Unknown variables must raise an error rather than resulting in a >> comparison >> > that evaluates to True or False. >> > >> > Variables whose value cannot be calculated on a given Python >> implementation >> > should evaluate to ``0`` for versions, and an empty string for all other >> > variables. >> > >> > The "extra" variable is special. It is used by wheels to signal which >> > specifications apply to a given extra in the wheel ``METADATA`` file, >> but >> > since the ``METADATA`` file is based on a draft version of PEP-426, >> there is >> > no current specification for this. Regardless, outside of a context >> where this >> > special handling is taking place, the "extra" variable should result in >> an >> > error like all other unknown variables. >> > >> > .. 
list-table:: >> > :header-rows: 1 >> > >> > * - Marker >> > - Python equivalent >> > - Sample values >> > * - ``os_name`` >> > - ``os.name`` >> > - ``posix``, ``java`` >> > * - ``sys_platform`` >> > - ``sys.platform`` >> > - ``linux``, ``linux2``, ``darwin``, ``java1.8.0_51`` (note that >> "linux" >> > is from Python3 and "linux2" from Python2) >> > * - ``platform_machine`` >> > - ``platform.machine()`` >> > - ``x86_64`` >> > * - ``python_implementation`` >> > - ``platform.python_implementation()`` >> > - ``CPython``, ``Jython`` >> > * - ``platform_release`` >> > - ``platform.release()`` >> > - ``3.14.1-x86_64-linode39``, ``14.5.0``, ``1.8.0_51`` >> > * - ``platform_system`` >> > - ``platform.system()`` >> > - ``Linux``, ``Windows``, ``Java`` >> > * - ``platform_version`` >> > - ``platform.version()`` >> > - ``#1 SMP Fri Apr 25 13:07:35 EDT 2014`` >> > ``Java HotSpot(TM) 64-Bit Server VM, 25.51-b03, Oracle >> Corporation`` >> > ``Darwin Kernel Version 14.5.0: Wed Jul 29 02:18:53 PDT 2015; >> > root:xnu-2782.40.9~2/RELEASE_X86_64`` >> > * - ``python_version`` >> > - ``platform.python_version()[:3]`` >> > - ``3.4``, ``2.7`` >> > * - ``python_full_version`` >> > - ``platform.python_version()`` >> > - ``3.4.0``, ``3.5.0b1`` >> > * - ``implementation_name`` >> > - ``sys.implementation.name`` >> > - ``cpython`` >> > * - ``implementation_version`` >> > - see definition below >> > - ``3.4.0``, ``3.5.0b1`` >> > * - ``extra`` >> > - An error except when defined by the context interpreting the >> > specification. >> > - ``test`` >> > >> > The ``implementation_version`` marker variable is derived from >> > ``sys.implementation.version``:: >> > >> > def format_full_version(info): >> > version = '{0.major}.{0.minor}.{0.micro}'.format(info) >> > kind = info.releaselevel >> > if kind != 'final': >> > version += kind[0] + str(info.serial) >> > return version >> > >> > if hasattr(sys, 'implementation'): >> > implementation_version = >> format_full_version(sys.implementation.version) >> > else: >> > implementation_version = "0" >> > >> > Backwards Compatibility >> > ======================= >> > >> > Most of this PEP is already widely deployed and thus offers no >> compatibiltiy >> > concerns. >> > >> > There are however a few points where the PEP differs from the deployed >> base. >> > >> > Firstly, PEP-440 direct references haven't actually been deployed in >> the wild, >> > but they were designed to be compatibly added, and there are no known >> > obstacles to adding them to pip or other tools that consume the existing >> > dependency metadata in distributions - particularly since they won't be >> > permitted to be present in PyPI uploaded distributions anyway. >> > >> > Secondly, PEP-426 markers which have had some reasonable deployment, >> > particularly in wheels and pip, will handle version comparisons with >> > ``python_version`` "2.7.10" differently. Specifically in 426 "2.7.10" >> is less >> > than "2.7.9". This backward incompatibility is deliberate. We are also >> > defining new operators - "~=" and "===", and new variables - >> > ``platform_release``, ``platform_system``, ``implementation_name``, and >> > ``implementation_version`` which are not present in older marker >> > implementations. The variables will error on those implementations. >> Users of >> > both features will need to make a judgement as to when support has >> become >> > sufficiently widespread in the ecosystem that using them will not cause >> > compatibility issues. 
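To make the quoted ``python_version`` point concrete: under the old string-style comparison "2.7.10" sorts before "2.7.9", while a PEP-440 aware comparison orders the releases numerically. A minimal sketch, assuming the `packaging` project's Version class as the PEP-440 implementation:

    from packaging.version import Version

    # String comparison (the PEP-426 era behaviour): '1' < '9' at the first
    # differing character, so "2.7.10" compares as the smaller value.
    print("2.7.10" < "2.7.9")                    # True

    # PEP-440 comparison (the behaviour this PEP specifies): 2.7.10 is the
    # newer release, so it is not less than 2.7.9.
    print(Version("2.7.10") < Version("2.7.9"))  # False
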
>> > >> > Thirdly, PEP-345 required brackets around version specifiers. In order >> to >> > accept PEP-345 dependency specifications, brackets are accepted, but >> they >> > should not be generated. >> > >> > Rationale >> > ========= >> > >> > In order to move forward with any new PEPs that depend on environment >> markers, >> > we needed a specification that included them in their modern form. This >> PEP >> > brings together all the currently unspecified components into a >> specified >> > form. >> > >> > The requirement specifier was adopted from the EBNF in the setuptools >> > pkg_resources documentation, since we wish to avoid depending on a >> defacto, vs >> > PEP specified, standard. >> > >> > Complete Grammar >> > ================ >> > >> > The complete parsley grammar:: >> > >> > wsp = ' ' | '\t' >> > version_cmp = wsp* <'<' | '<=' | '!=' | '==' | '>=' | '>' | '~=' | >> '==='> >> > version = wsp* <( letterOrDigit | '-' | '_' | '.' | '*' | >> > '+' | '!' )+> >> > version_one = version_cmp:op version:v wsp* -> (op, v) >> > version_many = version_one:v1 (wsp* ',' version_one)*:v2 -> [v1] + >> v2 >> > versionspec = ('(' version_many:v ')' ->v) | version_many >> > urlspec = '@' wsp* >> > marker_op = version_cmp | 'in' | 'not' wsp+ 'in' >> > python_str_c = (wsp | letter | digit | '(' | ')' | '.' | '{' | '}' | >> > '-' | '_' | '*' | '#') >> > dquote = '"' >> > squote = '\\'' >> > python_str = (squote <(python_str_c | dquote)*>:s squote | >> > dquote <(python_str_c | squote)*>:s dquote) -> s >> > env_var = ('python_version' | 'python_full_version' | >> > 'os_name' | 'sys_platform' | 'platform_release' | >> > 'platform_system' | 'platform_version' | >> > 'platform_machine' | 'python_implementation' | >> > 'implementation_name' | 'implementation_version' | >> > 'extra' # ONLY when defined by a containing layer >> > ):varname -> lookup(varname) >> > marker_var = env_var | python_str >> > marker_expr = (("(" wsp* marker:m wsp* ")" -> m) >> > | ((marker_var:l wsp* marker_op:o wsp* >> marker_var:r)) >> > -> (l, o, r)) >> > marker = (wsp* marker_expr:m ( wsp* ("and" | "or"):o wsp* >> > marker_expr:r -> (o, r))*:ms -> (m, ms)) >> > quoted_marker = ';' wsp* marker >> > identifier = > > letterOrDigit | >> > (( letterOrDigit | '-' | '_' | '.')* letterOrDigit ) >> )*> >> > name = identifier >> > extras_list = identifier:i (wsp* ',' wsp* identifier)*:ids -> [i] >> + ids >> > extras = '[' wsp* extras_list?:e wsp* ']' -> e >> > name_req = (name:n wsp* extras?:e wsp* versionspec?:v wsp* >> > quoted_marker?:m >> > -> (n, e or [], v or [], m)) >> > url_req = (name:n wsp* extras?:e wsp* urlspec:v wsp+ >> quoted_marker?:m >> > -> (n, e or [], v, m)) >> > specification = wsp* ( url_req | name_req ):s wsp* -> s >> > # The result is a tuple - name, list-of-extras, >> > # list-of-version-constraints-or-a-url, marker-ast or None >> > >> > >> > URI_reference = >> > URI = scheme ':' hier_part ('?' query )? ( '#' fragment)? >> > hier_part = ('//' authority path_abempty) | path_absolute | >> > path_rootless | path_empty >> > absolute_URI = scheme ':' hier_part ( '?' query )? >> > relative_ref = relative_part ( '?' query )? ( '#' fragment )? >> > relative_part = '//' authority path_abempty | path_absolute | >> > path_noscheme | path_empty >> > scheme = letter ( letter | digit | '+' | '-' | '.')* >> > authority = ( userinfo '@' )? host ( ':' port )? 
>> > userinfo = ( unreserved | pct_encoded | sub_delims | ':')* >> > host = IP_literal | IPv4address | reg_name >> > port = digit* >> > IP_literal = '[' ( IPv6address | IPvFuture) ']' >> > IPvFuture = 'v' hexdig+ '.' ( unreserved | sub_delims | ':')+ >> > IPv6address = ( >> > ( h16 ':'){6} ls32 >> > | '::' ( h16 ':'){5} ls32 >> > | ( h16 )? '::' ( h16 ':'){4} ls32 >> > | ( ( h16 ':')? h16 )? '::' ( h16 ':'){3} ls32 >> > | ( ( h16 ':'){0,2} h16 )? '::' ( h16 ':'){2} ls32 >> > | ( ( h16 ':'){0,3} h16 )? '::' h16 ':' ls32 >> > | ( ( h16 ':'){0,4} h16 )? '::' ls32 >> > | ( ( h16 ':'){0,5} h16 )? '::' h16 >> > | ( ( h16 ':'){0,6} h16 )? '::' ) >> > h16 = hexdig{1,4} >> > ls32 = ( h16 ':' h16) | IPv4address >> > IPv4address = dec_octet '.' dec_octet '.' dec_octet '.' Dec_octet >> > nz = ~'0' digit >> > dec_octet = ( >> > digit # 0-9 >> > | nz digit # 10-99 >> > | '1' digit{2} # 100-199 >> > | '2' ('0' | '1' | '2' | '3' | '4') digit # 200-249 >> > | '25' ('0' | '1' | '2' | '3' | '4' | '5') )# >> %250-255 >> > reg_name = ( unreserved | pct_encoded | sub_delims)* >> > path = ( >> > path_abempty # begins with '/' or is empty >> > | path_absolute # begins with '/' but not '//' >> > | path_noscheme # begins with a non-colon segment >> > | path_rootless # begins with a segment >> > | path_empty ) # zero characters >> > path_abempty = ( '/' segment)* >> > path_absolute = '/' ( segment_nz ( '/' segment)* )? >> > path_noscheme = segment_nz_nc ( '/' segment)* >> > path_rootless = segment_nz ( '/' segment)* >> > path_empty = pchar{0} >> > segment = pchar* >> > segment_nz = pchar+ >> > segment_nz_nc = ( unreserved | pct_encoded | sub_delims | '@')+ >> > # non-zero-length segment without any colon ':' >> > pchar = unreserved | pct_encoded | sub_delims | ':' | '@' >> > query = ( pchar | '/' | '?')* >> > fragment = ( pchar | '/' | '?')* >> > pct_encoded = '%' hexdig >> > unreserved = letter | digit | '-' | '.' | '_' | '~' >> > reserved = gen_delims | sub_delims >> > gen_delims = ':' | '/' | '?' | '#' | '(' | ')?' | '@' >> > sub_delims = '!' | '$' | '&' | '\\'' | '(' | ')' | '*' | '+' | >> > ',' | ';' | '=' >> > hexdig = digit | 'a' | 'A' | 'b' | 'B' | 'c' | 'C' | 'd' | >> > 'D' | 'e' | 'E' | 'f' | 'F' >> > >> > A test program - if the grammar is in a string ``grammar``:: >> > >> > import os >> > import sys >> > import platform >> > >> > from parsley import makeGrammar >> > >> > grammar = """ >> > wsp ... 
>> > """ >> > tests = [ >> > "A", >> > "aa", >> > "name", >> > "name>=3", >> > "name>=3,<2", >> > "name [fred,bar] @ http://foo.com ; python_version=='2.7'", >> > "name[quux, strange];python_version<'2.7' and >> platform_version=='2'", >> > "name; os_name=='dud' and (os_name=='odd' or os_name=='fred')", >> > "name; os_name=='dud' and os_name=='odd' or os_name=='fred'", >> > ] >> > >> > def format_full_version(info): >> > version = '{0.major}.{0.minor}.{0.micro}'.format(info) >> > kind = info.releaselevel >> > if kind != 'final': >> > version += kind[0] + str(info.serial) >> > return version >> > >> > if hasattr(sys, 'implementation'): >> > implementation_version = >> format_full_version(sys.implementation.version) >> > implementation_name = sys.implementation.name >> > else: >> > implementation_version = '0' >> > implementation_name = '' >> > bindings = { >> > 'implementation_name': implementation_name, >> > 'implementation_version': implementation_version, >> > 'os_name': os.name, >> > 'platform_machine': platform.machine(), >> > 'platform_release': platform.release(), >> > 'platform_system': platform.system(), >> > 'platform_version': platform.version(), >> > 'python_full_version': platform.python_version(), >> > 'python_implementation': platform.python_implementation(), >> > 'python_version': platform.python_version()[:3], >> > 'sys_platform': sys.platform, >> > } >> > >> > compiled = makeGrammar(grammar, {'lookup': bindings.__getitem__}) >> > for test in tests: >> > parsed = compiled(test).specification() >> > print(parsed) >> > >> > References >> > ========== >> > >> > .. [#pip] pip, the recommended installer for Python packages >> > (http://pip.readthedocs.org/en/stable/) >> > >> > .. [#pep345] PEP-345, Python distribution metadata version 1.2. >> > (https://www.python.org/dev/peps/pep-0345/) >> > >> > .. [#pep426] PEP-426, Python distribution metadata. >> > (https://www.python.org/dev/peps/pep-0426/) >> > >> > .. [#pep440] PEP-440, Python distribution metadata. >> > (https://www.python.org/dev/peps/pep-0440/) >> > >> > .. [#std66] The URL specification. >> > (https://tools.ietf.org/html/rfc3986) >> > >> > .. [#parsley] The parsley PEG library. >> > (https://pypi.python.org/pypi/parsley/) >> > >> > Copyright >> > ========= >> > >> > This document has been placed in the public domain. >> > >> > >> > >> > .. >> > Local Variables: >> > mode: indented-text >> > indent-tabs-mode: nil >> > sentence-end-double-space: t >> > fill-column: 70 >> > coding: utf-8 >> > End: >> > >> > >> > -- >> > Robert Collins >> > Distinguished Technologist >> > HP Converged Cloud >> > _______________________________________________ >> > Distutils-SIG maillist - Distutils-SIG at python.org >> > https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> ----------------- >> Donald Stufft >> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 >> DCFA >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From kim.walisch at gmail.com Sun Nov 29 10:44:58 2015
From: kim.walisch at gmail.com (Kim Walisch)
Date: Sun, 29 Nov 2015 16:44:58 +0100
Subject: [Distutils] distutils.command.build_clib: How to add additional compiler flags for cl.exe?
Message-ID: 

Hi,

For distutils.command.build_clib the commonly used code below does not work
for adding additional compiler flags (it works using
distutils.command.build_ext):

    extra_compile_args = '-fopenmp'

On Unix-like systems I found a workaround which allows specifying additional
compiler flags for distutils.command.build_clib:

    cflags = distutils.sysconfig.get_config_var('CFLAGS')
    distutils.sysconfig._config_vars['CFLAGS'] = cflags + " -fopenmp"

Unfortunately this does not work with Microsoft's C/C++ compiler cl.exe. Does
anybody know how I can add additional compiler flags for cl.exe and
distutils.command.build_clib?

Thanks and best regards!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From patter001 at gmail.com Mon Nov 30 18:59:31 2015
From: patter001 at gmail.com (KP)
Date: Mon, 30 Nov 2015 18:59:31 -0500
Subject: [Distutils] namespace_package
Message-ID: 

I'm not sure where the issue is, but when I specify a namespace_package in
the setup.py file, I can indeed have multiple packages with the same base
(foo.bar, foo.blah, etc.). The files all install into the same directory. It
drops the foo/__init__.py that would be doing the extend_path, and instead
adds a ".pth" file that is a bit over my head.

The problem is that it does not seem to traverse the entire sys.path to find
multiple foo packages. If I do not specify namespace_packages and instead
just use pkgutil.extend_path, then this seems to allow the packages to be in
multiple places on sys.path.

Is there something additional for the namespace_package that I need to
specify in order for all of sys.path to be checked?

I'm using setuptools 18.5, but I am not sure if this somehow ties in to
wheel/pip, since I'm using that for the actual install.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
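For readers coming to this thread from the archives: the pkgutil-style namespace package referred to above is the one-file convention sketched below. This is only a generic illustration of that approach (the package names are made up), not a diagnosis of why the setuptools namespace_packages/.pth mechanism is not picking up every sys.path entry in the install described above.

    # foo/__init__.py -- pkgutil-style namespace package.
    # Each distribution that ships part of the "foo" namespace (foo.bar,
    # foo.blah, ...) includes this same __init__.py, so the pieces can live
    # on different sys.path entries and still import as a single package.
    from pkgutil import extend_path
    __path__ = extend_path(__path__, __name__)

With that file in place, "import foo.bar" and "import foo.blah" work even when the two distributions were installed into different directories, because extend_path adds every "foo" subdirectory found on sys.path to the package's __path__.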