From guettliml at thomas-guettler.de Mon May 2 03:03:36 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Mon, 2 May 2016 09:03:36 +0200 Subject: [Distutils] Two ways to download python packages - I prefer one Message-ID: <5726FBC8.50308@thomas-guettler.de> I was told this: > `python setup.py develop` uses urllib2 to download distributions whereas pip uses requests Source: http://stackoverflow.com/a/36958874/633961 This can create confusing situations and I want to avoid this. Is there a way to use only **one** way to install python packages? Do wheels help here? Or is there a way to use npm for python packages? Regards, Thomas Güttler -- Thomas Guettler http://www.thomas-guettler.de/ From noah at coderanger.net Mon May 2 03:14:27 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Mon, 2 May 2016 00:14:27 -0700 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: <5726FBC8.50308@thomas-guettler.de> References: <5726FBC8.50308@thomas-guettler.de> Message-ID: The correct way to do that these days is `pip install -e .` AFAIK. Setuptools should be considered an implementation detail of installs at best, not really used directly anymore (though entry points are still used by some projects, so this isn't really a strict dichotomy). --Noah > On May 2, 2016, at 12:03 AM, Thomas Güttler wrote: > > I was told this: > > > `python setup.py develop` uses urllib2 to download distributions whereas pip uses requests > > Source: http://stackoverflow.com/a/36958874/633961 > > This can create confusing situations and I want to avoid this. > > Is there a way to use only **one** way to install python packages? > > Do wheels help here? > > Or is there a way to use npm for python packages? > > Regards, > Thomas Güttler > > -- > Thomas Guettler http://www.thomas-guettler.de/ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From alex.gronholm at nextday.fi Mon May 2 03:16:19 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Mon, 2 May 2016 10:16:19 +0300 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: References: <5726FBC8.50308@thomas-guettler.de> Message-ID: <5726FEC3.1060906@nextday.fi> You make it sound like there's a plausible alternative to setuptools entry points -- is there? 02.05.2016, 10:14, Noah Kantrowitz wrote: > The correct way to do that these days is `pip install -e .` AFAIK. Setuptools should be considered an implementation detail of installs at best, not really used directly anymore (though entry points are still used by some projects, so this isn't really a strict dichotomy). > > --Noah > >> On May 2, 2016, at 12:03 AM, Thomas Güttler wrote: >> >> I was told this: >> >>> `python setup.py develop` uses urllib2 to download distributions whereas pip uses requests >> Source: http://stackoverflow.com/a/36958874/633961 >> >> This can create confusing situations and I want to avoid this. >> >> Is there a way to use only **one** way to install python packages? >> >> Do wheels help here? >> >> Or is there a way to use npm for python packages?
>> >> Regards, >> Thomas G?ttler >> >> -- >> Thomas Guettler http://www.thomas-guettler.de/ >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Mon May 2 04:13:35 2016 From: tritium-list at sdamon.com (Alexander Walters) Date: Mon, 2 May 2016 04:13:35 -0400 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: <5726FEC3.1060906@nextday.fi> References: <5726FBC8.50308@thomas-guettler.de> <5726FEC3.1060906@nextday.fi> Message-ID: <57270C2F.7060903@sdamon.com> Hypothetically, the alternative is to break non-application entrypoints (the ones NOT used for console scripts or gui applications) into some other infrastructure. The people that use entrypoints for their plugin systems might be given a build system agnostic option if that were the case. Console scripts et. al. are still build/install system dependent. On 5/2/2016 03:16, Alex Gr?nholm wrote: > You make it sound like there's a plausible alternative to setuptools > entry points -- is there? From guettliml at thomas-guettler.de Mon May 2 05:25:48 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Mon, 2 May 2016 11:25:48 +0200 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: References: <5726FBC8.50308@thomas-guettler.de> Message-ID: <57271D1C.7000008@thomas-guettler.de> Am 02.05.2016 um 09:14 schrieb Noah Kantrowitz: > The correct way to do that these days is `pip install -e .` AFAIK. Setuptools should be considered an implementation detail of installs at best, not really used directly anymore (though entry points are still used by some projects, so this isn't really a strict dichotomy). You say it is an "implementation detail". That's ok, I am user, and I don't want to know everything. There are two ways to handle the implementation detail. I ask myself: why? If I use `pip install -e mylib` and mylib uses installs_requires=[...] in its setup.py, what happens? I guess there will be two ways the packages get installed on my system. Is my assumption correct? Is this current behaviour intentional? Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From ncoghlan at gmail.com Mon May 2 06:15:47 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 2 May 2016 20:15:47 +1000 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: <57271D1C.7000008@thomas-guettler.de> References: <5726FBC8.50308@thomas-guettler.de> <57271D1C.7000008@thomas-guettler.de> Message-ID: On 2 May 2016 at 19:25, Thomas G?ttler wrote: > > > Am 02.05.2016 um 09:14 schrieb Noah Kantrowitz: >> >> The correct way to do that these days is `pip install -e .` AFAIK. >> Setuptools should be considered an implementation detail of installs at >> best, not really used directly anymore (though entry points are still used >> by some projects, so this isn't really a strict dichotomy). > > > You say it is an "implementation detail". That's ok, I am user, and I don't > want to know everything. > > There are two ways to handle the implementation detail. I ask myself: why? > > If I use `pip install -e mylib` and mylib uses installs_requires=[...] 
in > its setup.py, what happens? > > I guess there will be two ways the packages get installed on my system. Is > my assumption correct? Sort of - pip knows how to read setuptools produced installation metadata, so even if something is installed via easy_install (explicitly or implicitly), command like pip list and pip freeze will still tell report it accurately. That's how tools like pip-compile are able to work: https://github.com/nvie/pip-tools#readme The initial run of "pip-compile" may need to fallback to easy_install in some cases, but the subsequent use of the generated complete requirements file should always install everything with pip (the exact versions are pinned, so new dependencies can't be introduced without another pip-compile run). > Is this current behaviour intentional? Not really, it's mainly a matter of it being a lot of work to migrate implicit installation in setuptools away from easy_install (without breaking anything), and there being cases where setuptools/easy_install encourage practices that we genuinely want people to migrate away from doing (not because they don't work when you know what you're doing, but because they tend to require near-encyclopaedic knowledge of Python's import system to debug when they don't behave as you expect). (Some of those aspects are already covered in https://packaging.python.org/en/latest/additional/ and https://github.com/pypa/python-packaging-user-guide/pull/159/files covers a couple more) Folks getting their Python runtime from a commercial redistributor can potentially help by filing tickets with their supplier and generally agitating for them to increase their level of upstream investment in shared community infrastructure, but it's currently tough for community contributors to help move things forward (since the biggest problems involve interactions between the import system, CPython interpreter initialisation, setuptools/easy_install, pip, virtualenv, project setup.py files, and sometimes even the Python Package Index, and the folks that already have all those moving pieces in their heads unfortunately tend to have a lot of competing priorities) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Mon May 2 10:15:42 2016 From: dholth at gmail.com (Daniel Holth) Date: Mon, 02 May 2016 14:15:42 +0000 Subject: [Distutils] Two ways to download python packages - I prefer one In-Reply-To: References: <5726FBC8.50308@thomas-guettler.de> <57271D1C.7000008@thomas-guettler.de> Message-ID: Entry points are separate from the build and install systems bundled with setuptools. Usually when we talk about replacing or deprecating setuptools we mean first the install part, then the build part. Entry points are fine. The core reason we want to use pip to install (including for development installations) instead of setup.py to install is that "setup.py install" is potentially different for every single version of every package, while "pip install" is consistent. On Mon, May 2, 2016 at 6:19 AM Nick Coghlan wrote: > On 2 May 2016 at 19:25, Thomas G?ttler > wrote: > > > > > > Am 02.05.2016 um 09:14 schrieb Noah Kantrowitz: > >> > >> The correct way to do that these days is `pip install -e .` AFAIK. > >> Setuptools should be considered an implementation detail of installs at > >> best, not really used directly anymore (though entry points are still > used > >> by some projects, so this isn't really a strict dichotomy). > > > > > > You say it is an "implementation detail". 
That's ok, I am user, and I > don't > > want to know everything. > > > > There are two ways to handle the implementation detail. I ask myself: > why? > > > > If I use `pip install -e mylib` and mylib uses installs_requires=[...] in > > its setup.py, what happens? > > > > I guess there will be two ways the packages get installed on my system. > Is > > my assumption correct? > > Sort of - pip knows how to read setuptools produced installation > metadata, so even if something is installed via easy_install > (explicitly or implicitly), command like pip list and pip freeze will > still tell report it accurately. That's how tools like pip-compile are > able to work: https://github.com/nvie/pip-tools#readme > > The initial run of "pip-compile" may need to fallback to easy_install > in some cases, but the subsequent use of the generated complete > requirements file should always install everything with pip (the exact > versions are pinned, so new dependencies can't be introduced without > another pip-compile run). > > > Is this current behaviour intentional? > > Not really, it's mainly a matter of it being a lot of work to migrate > implicit installation in setuptools away from easy_install (without > breaking anything), and there being cases where > setuptools/easy_install encourage practices that we genuinely want > people to migrate away from doing (not because they don't work when > you know what you're doing, but because they tend to require > near-encyclopaedic knowledge of Python's import system to debug when > they don't behave as you expect). (Some of those aspects are already > covered in https://packaging.python.org/en/latest/additional/ and > https://github.com/pypa/python-packaging-user-guide/pull/159/files > covers a couple more) > > Folks getting their Python runtime from a commercial redistributor can > potentially help by filing tickets with their supplier and generally > agitating for them to increase their level of upstream investment in > shared community infrastructure, but it's currently tough for > community contributors to help move things forward (since the biggest > problems involve interactions between the import system, CPython > interpreter initialisation, setuptools/easy_install, pip, virtualenv, > project setup.py files, and sometimes even the Python Package Index, > and the folks that already have all those moving pieces in their heads > unfortunately tend to have a lot of competing priorities) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheus235 at gmail.com Mon May 2 12:40:05 2016 From: prometheus235 at gmail.com (Nick Timkovich) Date: Mon, 2 May 2016 11:40:05 -0500 Subject: [Distutils] Basic Markdown Readme Support Message-ID: Markdown READMEs are becoming increasingly ubiquitous for many projects. GitHub, GitLab, Bitbucket, among others, happily detect .md readme files and render them in their web interfaces. rST is nice, but is generally overkill for single-page documents (as opposed to more intricate documentation). To get something done sooner, rather than later, I'd prefer to come up with a two-phase solution, one narrow and "opt-in" (status-quo for all existing packages unless the maintainer does something) for quick implementation with hopefully minimal pushback. 
The other, later, not-proposed-here, could be more feature-rich/heuristic. So, to get Markdown supported in some form, here's some talking points to debate: * Add a "long_description_filename" to setup (suggested by @msabramo/GH [1]), which does the usual boilerplate "[codecs.]open(x, 'r', encoding=y).read()". To determine the format look at an additional "long_description_content_type" field (if provided), otherwise look at the file extension and assume/require UTF-8. * As an alternative, if there is no long_description, and the fall-back to README.rst fails, look for README.md and grab that. Such a strategy wouldn't be fully opt-in, however. * Markdown (just like reStructuredText) allows arbitrary HTML to be added. The renderer must then be upstream of the (existing) clean (with bleach) step. * [Optional]: Use common extensions provided by the PyPI/Markdown library to support GFM/SO stuff: fenced_code, smart_strong, nl2br Nick Timkovich Amaral Lab, Northwestern University [1]: https://github.com/pypa/readme_renderer/pull/3#issuecomment-72302732 [2]: https://github.com/pypa/readme_renderer/pull/3#issuecomment-66569248 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Tue May 3 00:19:18 2016 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 3 May 2016 00:19:18 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: Message-ID: <572826C6.8060605@sdamon.com> I am -1 on this on the basis that the services mentioned also happily support restructured text READMEs On 5/2/2016 12:40, Nick Timkovich wrote: > Markdown READMEs are becoming increasingly ubiquitous for many > projects. GitHub, GitLab, Bitbucket, among others, happily detect .md > readme files and render them in their web interfaces. rST is nice, but > is generally overkill for single-page documents (as opposed to more > intricate documentation). To get something done sooner, rather than > later, I'd prefer to come up with a two-phase solution, one narrow and > "opt-in" (status-quo for all existing packages unless the maintainer > does something) for quick implementation with hopefully minimal > pushback. The other, later, not-proposed-here, could be more > feature-rich/heuristic. > > So, to get Markdown supported in some form, here's some talking points > to debate: > > * Add a "long_description_filename" to setup (suggested by > @msabramo/GH [1]), which does the usual boilerplate "[codecs.]open(x, > 'r', encoding=y).read()". To determine the format look at an > additional "long_description_content_type" field (if provided), > otherwise look at the file extension and assume/require UTF-8. > > * As an alternative, if there is no long_description, and the > fall-back to README.rst fails, look for README.md and grab that. Such > a strategy wouldn't be fully opt-in, however. > > * Markdown (just like reStructuredText) allows arbitrary HTML to be > added. The renderer must then be upstream of the (existing) clean > (with bleach) step. 
> > * [Optional]: Use common extensions provided by the PyPI/Markdown > library to support GFM/SO stuff: fenced_code, smart_strong, nl2br > > Nick Timkovich > Amaral Lab, Northwestern University > > [1]: https://github.com/pypa/readme_renderer/pull/3#issuecomment-72302732 > [2]: https://github.com/pypa/readme_renderer/pull/3#issuecomment-66569248 > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Tue May 3 00:27:15 2016 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 3 May 2016 16:27:15 +1200 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: <572826C6.8060605@sdamon.com> References: <572826C6.8060605@sdamon.com> Message-ID: On 3 May 2016 4:19 PM, "Alexander Walters" wrote: > > I am -1 on this on the basis that the services mentioned also happily support restructured text READMEs I don't understand why that makes you say no to the ability to support markdown. Rob -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Tue May 3 00:33:15 2016 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 3 May 2016 00:33:15 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> Message-ID: The justification was "Because Github et. al. support markdown, pypi should too", presumably for the purpose of allowing one to write their README once, and have it work in both places. This is already possible, and only adds unneeded complexity to an already complex system. If you want to make your README a write-once document, use the format already supported on both platforms. On 5/3/2016 00:27, Robert Collins wrote: > > > On 3 May 2016 4:19 PM, "Alexander Walters" > wrote: > > > > I am -1 on this on the basis that the services mentioned also > happily support restructured text READMEs > > I don't understand why that makes you say no to the ability to support > markdown. > > Rob > From ncoghlan at gmail.com Tue May 3 08:35:10 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 May 2016 22:35:10 +1000 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> Message-ID: On 3 May 2016 at 14:33, Alexander Walters wrote: > The justification was "Because Github et. al. support markdown, pypi should > too", presumably for the purpose of allowing one to write their README once, > and have it work in both places. This is already possible, and only adds > unneeded complexity to an already complex system. If you want to make your > README a write-once document, use the format already supported on both > platforms. As I understand it, it's more a matter of folks finding the context switch between Markdown and non-Sphinx reStructuredText a pain (with the main differences being double-backticks for inline code and `link text `_ instead of [link text](link target) for hyperlinks) and not being aware of (or caring about) the dire lack of commercial investment in tooling support for upstream Python software distribution (since all the redistributors have their own software distribution platforms that they recommend their users use instead). 
Encountering volunteer maintained community services can be quite a shock to the system for folks otherwise used to software tooling developed by venture capital backed companies making a landgrab for developer mindshare in the hopes of creating the next Oracle or Microsoft :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tritium-list at sdamon.com Tue May 3 08:46:58 2016 From: tritium-list at sdamon.com (tritium-list at sdamon.com) Date: Tue, 3 May 2016 08:46:58 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> Message-ID: <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> > -----Original Message----- > From: Nick Coghlan [mailto:ncoghlan at gmail.com] > As I understand it, it's more a matter of folks finding the context > switch between Markdown and non-Sphinx reStructuredText a pain (with > the main differences being double-backticks for inline code and `link > text `_ instead of [link text](link target) for > hyperlinks) and not being aware of (or caring about) the dire lack of > commercial investment in tooling support for upstream Python software > distribution (since all the redistributors have their own software > distribution platforms that they recommend their users use instead). As someone who writes quite a bit of non-sphinx Restructured Text, and a lot of Markdown (because I have to), I have zero sympathy for the argument of context switching. I can come up with an overblown analogy, but I will just reduce it to "it's not complicated to keep the two syntaxes straight". From fred at fdrake.net Tue May 3 09:18:31 2016 From: fred at fdrake.net (Fred Drake) Date: Tue, 3 May 2016 09:18:31 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: My perspective, for what it's worth, is that while I find Markdown a horrible pain, there are a lot of people who pick it up before picking up Python, and tools like GitHub and BitBucket encourage (and make it easier to add) README.md to a project. For someone who isn't familiar with reStructuredText, it's an easier on-ramp. So while I'm all for encouraging developers to prefer reStructuredText, I'm in favor of supporting Markdown as a long_description format. The format for a README file just doesn't seem such a big deal that alienating potential community members is worth it. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From jim at jimfulton.info Tue May 3 09:29:40 2016 From: jim at jimfulton.info (Jim Fulton) Date: Tue, 3 May 2016 09:29:40 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: On Tue, May 3, 2016 at 9:18 AM, Fred Drake wrote: > My perspective, for what it's worth, is that while I find Markdown a > horrible pain, But wait, it's worse. Unlike ReStructuredText, there's no Markdown standard. In my last job, I had to use a suite of tools (from a single company that I won't name but is easy to guess :) ) for which no 2 tools used the same dialect of Markdown. :( ... > So while I'm all for encouraging developers to prefer > reStructuredText, I'm in favor of supporting Markdown as a > long_description format. The format for a README file just doesn't > seem such a big deal that alienating potential community members is > worth it. 
Which begs the question, which dialect of Markdown are you suggesting we support. :) Jim -- Jim Fulton http://jimfulton.info From p.f.moore at gmail.com Tue May 3 09:35:04 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 May 2016 14:35:04 +0100 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: On 3 May 2016 at 14:18, Fred Drake wrote: > My perspective, for what it's worth, is that while I find Markdown a > horrible pain, there are a lot of people who pick it up before picking > up Python, and tools like GitHub and BitBucket encourage (and make it > easier to add) README.md to a project. For someone who isn't familiar > with reStructuredText, it's an easier on-ramp. > > So while I'm all for encouraging developers to prefer > reStructuredText, I'm in favor of supporting Markdown as a > long_description format. The format for a README file just doesn't > seem such a big deal that alienating potential community members is > worth it. Agreed. And like it or not, people get familiar with Markdown from things like github comment boxes, so it's very easy to just "go with the familiar option". Having said that, someone still needs to do the work to add Markdown support - and that's not trivial, as it will involve changes to how the metadata is held to include a content type for the long_description. And that means a discussion on how to include that. So while I don't mind if people want this support, it's not going to happen unless someone actually goes through the work of managing the proposal, co-ordinating the discussions, writing the code, etc. And I suspect that this is unlikely to be a priority of any of the PyPA team in the near future - so it'll need someone who's interested in seeing this feature added to take on that work. Paul From ncoghlan at gmail.com Tue May 3 09:47:58 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 3 May 2016 23:47:58 +1000 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: On 3 May 2016 at 23:18, Fred Drake wrote: > My perspective, for what it's worth, is that while I find Markdown a > horrible pain, there are a lot of people who pick it up before picking > up Python, and tools like GitHub and BitBucket encourage (and make it > easier to add) README.md to a project. For someone who isn't familiar > with reStructuredText, it's an easier on-ramp. > > So while I'm all for encouraging developers to prefer > reStructuredText, I'm in favor of supporting Markdown as a > long_description format. The format for a README file just doesn't > seem such a big deal that alienating potential community members is > worth it. Exactly. The lack of support for Markdown README files is mainly a matter of a historical quirk of the way the PyPI metadata upload API works making this way more work to implement than it seems at first glance, the current PyPI codebase being sufficiently fragile that we actively avoid changing it, and getting Warehouse to the point of being sufficiently feature complete for it to take over primary service responsibilities being a long hard slog for the folks working on it (it's a lot harder to find volunteers interested in working on paying down technical debt than it is to find folks that want to work on new user facing features). 
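For context, the de facto pattern today is for setup.py to read the README into long_description as an opaque, untyped blob of text -- a rough sketch of that boilerplate (file name and metadata below are purely illustrative):

    # Sketch of the common status quo: the README is read into long_description
    # and uploaded with no accompanying content type, so PyPI has to guess.
    import codecs
    from setuptools import setup

    with codecs.open("README.rst", encoding="utf-8") as f:
        long_description = f.read()

    setup(
        name="example-project",      # hypothetical project name
        version="1.0",
        description="One-line summary",
        long_description=long_description,
    )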
However, this SO answer provides some ideas on ways to convert from Markdown to reStructured Text when producing the sdist metadata, or to derive a checked-in .rst file from a README.md file: http://stackoverflow.com/questions/10718767/have-the-same-readme-both-in-markdown-and-restructuredtext It also seems plausible to me that a client-side solution could be designed that allowed the description metadata stored in the sdist to be overridden when uploading to PyPI (i.e. the description in PKG-INFO could be Markdown, but the upload tool could use pypandoc to convert that to reStructuredText in the uploaded metadata). I'm not sure if Donald would be open to that in twine (presumably via an extra to avoid having pypandoc as a standard dependency), but client-only changes are generally an easier pitch than changes to the interfaces between client tools and PyPI. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fred at fdrake.net Tue May 3 10:01:45 2016 From: fred at fdrake.net (Fred Drake) Date: Tue, 3 May 2016 10:01:45 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: On Tue, May 3, 2016 at 9:29 AM, Jim Fulton wrote: > But wait, it's worse. Unlike ReStructuredText, there's no Markdown standard. We agree that this is a problem, and it's why I don't use Markdown when tools don't force it. > In my last job, I had to use a suite of tools (from a single company > that I won't name but is easy to guess :) ) for which no 2 tools used > the same dialect of Markdown. :( Truly a joy. I'm glad things are better now. :-) > Which begs the question, which dialect of Markdown are you suggesting > we support. :) If we go with Nick's idea of a client-side solution, we really don't have to pick just one flavor, though I'd expect GFM would win out for practical purposes. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From waynejwerner at gmail.com Tue May 3 10:09:45 2016 From: waynejwerner at gmail.com (Wayne Werner) Date: Tue, 3 May 2016 09:09:45 -0500 (CDT) Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: On Tue, 3 May 2016, Jim Fulton wrote: > In my last job, I had to use a suite of tools (from a single company > that I won't name but is easy to guess :) ) for which no 2 tools used > the same dialect of Markdown. :( > > Which begs the question, which dialect of Markdown are you suggesting > we support. :) My personal preference is CommonMark - mainly because it's actually well-defined. There are only a couple of cases that I've encountered where CommonMark didn't render exactly what I expected. But at least it's consistent, so it doesn't take much to adjust to. For a simple README, I actually *prefer* CommonMark/Markdown - I find it has all the features I need. Of course for more complicated documentation, reStructuredText has a lot more power, so I'm down with the extra complexity. Plus, I have actually come across more than one project on pypi right now where the readme is in markdown format, so looks fine on Github, but pretty funky on pypi.
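In the meantime, the workaround that keeps coming up is the client-side conversion Nick mentioned above: turn README.md into reStructuredText before it ever reaches PyPI. A minimal sketch, assuming pypandoc (and the underlying pandoc binary) happen to be available, and falling back to the raw Markdown otherwise:

    # Sketch only; pypandoc/pandoc availability is an assumption, not a given.
    import io

    try:
        import pypandoc
        long_description = pypandoc.convert("README.md", "rst")
    except (ImportError, OSError):
        # No pypandoc or no pandoc binary: fall back to the raw Markdown,
        # which PyPI will end up treating as plain text.
        with io.open("README.md", encoding="utf-8") as f:
            long_description = f.read()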
-Wayne From donald at stufft.io Tue May 3 10:12:57 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 3 May 2016 10:12:57 -0400 Subject: [Distutils] Basic Markdown Readme Support In-Reply-To: References: <572826C6.8060605@sdamon.com> <000101d1a539$e1f2a6f0$a5d7f4d0$@hotmail.com> Message-ID: <7E3EFB74-22A6-4033-90CF-E8D5A31DDDC5@stufft.io> > On May 3, 2016, at 9:47 AM, Nick Coghlan wrote: > > On 3 May 2016 at 23:18, Fred Drake wrote: >> My perspective, for what it's worth, is that while I find Markdown a >> horrible pain, there are a lot of people who pick it up before picking >> up Python, and tools like GitHub and BitBucket encourage (and make it >> easier to add) README.md to a project. For someone who isn't familiar >> with reStructuredText, it's an easier on-ramp. >> >> So while I'm all for encouraging developers to prefer >> reStructuredText, I'm in favor of supporting Markdown as a >> long_description format. The format for a README file just doesn't >> seem such a big deal that alienating potential community members is >> worth it. > > Exactly. The lack of support for Markdown README files is mainly a > matter of a historical quirk of the way the PyPI metadata upload API > works making this way more work to implement than it seems at first > glance, the current PyPI codebase being sufficiently fragile that we > actively avoid changing it, and getting Warehouse to the point of > being sufficiently feature complete for it to take over primary > service responsibilities being a long hard slog for the folks working > on it (it's a lot harder to find volunteers interested in working on > paying down technical debt than it is to find folks that want to work > on new user facing features). This is basically the answer here. It looks like the original post by Nick Timkovich tried to start a discussion about what this might look like, but really I think it focused too much on the setup.py API, which isn't really the issue (we can do whatever there); the real issue is how that is represented in the metadata. Right now we have a singular metadata field which just contains all of the text of the long_description without any other information about it. The simplest thing to do is probably to just add a new field, something like:: Description-Markup: We'd need to define some values for it, like "txt", "md", "rst" or something along those lines. I'd suggest extensions so that in the future we can move the long description into its own file in the metadata and just move that value to the file extension (like `DESCRIPTION.[ext]`). We'd also want to declare what behavior should be expected when that value doesn't exist (codifying the current behavior: attempt to register as rst, fall back to txt). We'd probably also want some recommendations on what the different types of tools should do when encountering invalid markup for the declared markup language and also what they should do when encountering a markup language they don't know. For the server side (e.g. PyPI) I'd suggest erroring the upload whenever an invalid or unknown markup is attempted to be uploaded (where an undeclared markup is never invalid, it just does the fallback) and for anything client-side it just falls back to plaintext. This has a few benefits: * No more ugliness when plaintext (or markdown!) accidentally gets rendered wrongly as reStructuredText. * We can hard fail uploads when their rendering is broken, leading people to be able to fix the problems instead of ending up with a bunch of broken markup all over PyPI.
* We can allow markup languages other than txt and rst. However, I'm not going to have time or motivation to really work this into a valid spec or even a fully fleshed out idea. Nor will I be able to handle implementing this in PyPI or Warehouse right now since I'm primarily focused on trying to get Warehouse itself deployed for real. I am happy to review PRs and actual specs though, whether they take this idea or they use a different one. > > However, this SO answer provides some ideas on ways to convert from > Markdown to reStructured Text when producing the sdist metadata, or to > derive a checked in .rst file from a README.md file: > http://stackoverflow.com/questions/10718767/have-the-same-readme-both-in-markdown-and-restructuredtext > > It is also seems plausible to me that a client-side solution could be > designed that allowed the description metadata stored in the sdist to > be overridden when uploading to PyPI (i.e. the description in PKG-INFO > could be Markdown, but the upload tool could use pypandoc to convert > that to reStructuredText in the uploaded metadata). I'm not sure if > Donald would be open to that in twine (presumably via an extra to > avoid having pypandoc as a standard dependency), but client-only > changes are generally an easier pitch than changes to the interfaces > between client tools and PyPI. > I actually mostly don?t do much with twine anymore, Ian Cordasco has more or less taken over maintenance of it so it'd be up to him ultimately. That being said I think doing it in twine is the wrong layer. Ideally the metadata in PyPI matches the metadata in the file and people can "recreate" the PyPI database using nothing but the files [1]. I think if you want to shim over this on the client side, your best hope would be in setuptools, but I think if someone is motivated enough to actually do the spec and implementation work we can get proper support landed too. [1] Ok, this doesn't exactly work because of the dynamic nature of setup.py, but in practice you can get close, and we're moving closer to that day. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From brett at python.org Tue May 3 12:41:12 2016 From: brett at python.org (Brett Cannon) Date: Tue, 03 May 2016 16:41:12 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> Message-ID: On Wed, 27 Apr 2016 at 10:53 Donald Stufft wrote: > This isn't really a problem with what you're doing. Rather it's an issue > with the toolchain and and open question whether or not wheels should > conceptually be able to be produced from a checkout, or if they should only > be produced from a sdist. Problems like this are why I advocate the > Checkout -> sdist -> wheel being the only path, but others feel differently. > And Daniel Holth said he did feel differently, so obviously this hasn't moved forward in terms of finding consensus. Where does all of this sit in regards to trying to separate building from installation? 
From my perspective as mailing list lurker, it's sitting at an impasse as Nick hasn't made a final call nor has a BDFL delegate on the topic been chosen to settle the matter (obviously if I missed something then please let me know). Could we choose a Great Decider on this topic of build/installation separation and get final RFC/PEPs written so we can get passed this impasse and move forward? -Brett > > > ? Donald Stufft > > On Apr 27 2016, at 1:38 pm, Ethan Furman wrote: > >> On 04/26/2016 07:10 AM, Donald Stufft wrote: >> >> > Alternatively, he could have just produced a wheel from any checkout at >> > all if the MANIFEST.in excluded a file that would otherwise have been >> > installed. >> >> Yes. My MANIFEST.in starts with an 'exclude enum/*' and then includes >> all files it wants. >> >> > This sort of thing is why I'm an advocate that we should only >> > build sdists from checkouts, and wheels from sdists (at the low level >> > anyways, even if the UI allows people to appear to create a wheel >> > straight from a checkout). >> >> My current process is: >> >> python3.5 setup.py sdist --format=gztar,zip bdist_wheel upload >> >> What should I be doing instead? >> >> -- >> ~Ethan~ >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue May 3 12:47:20 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 3 May 2016 12:47:20 -0400 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> Message-ID: <427B161C-364C-4636-A577-5781098C8A61@stufft.io> It will likely get decided as part of the build system PEP, whenever that gets picked up again. > On May 3, 2016, at 12:41 PM, Brett Cannon wrote: > > > > On Wed, 27 Apr 2016 at 10:53 Donald Stufft > wrote: > This isn't really a problem with what you're doing. Rather it's an issue with the toolchain and and open question whether or not wheels should conceptually be able to be produced from a checkout, or if they should only be produced from a sdist. Problems like this are why I advocate the Checkout -> sdist -> wheel being the only path, but others feel differently. > > And Daniel Holth said he did feel differently, so obviously this hasn't moved forward in terms of finding consensus. > > Where does all of this sit in regards to trying to separate building from installation? From my perspective as mailing list lurker, it's sitting at an impasse as Nick hasn't made a final call nor has a BDFL delegate on the topic been chosen to settle the matter (obviously if I missed something then please let me know). Could we choose a Great Decider on this topic of build/installation separation and get final RFC/PEPs written so we can get passed this impasse and move forward? > > -Brett > > > > ? Donald Stufft > > On Apr 27 2016, at 1:38 pm, Ethan Furman > wrote: > On 04/26/2016 07:10 AM, Donald Stufft wrote: > > > Alternatively, he could have just produced a wheel from any checkout at > > all if the MANIFEST.in excluded a file that would otherwise have been > > installed. 
> > Yes. My MANIFEST.in starts with an 'exclude enum/*' and then includes > all files it wants. > > > This sort of thing is why I'm an advocate that we should only > > build sdists from checkouts, and wheels from sdists (at the low level > > anyways, even if the UI allows people to appear to create a wheel > > straight from a checkout). > > My current process is: > > python3.5 setup.py sdist --format=gztar,zip bdist_wheel upload > > What should I be doing instead? > > -- > ~Ethan~ > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Tue May 3 13:10:55 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 May 2016 18:10:55 +0100 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: <427B161C-364C-4636-A577-5781098C8A61@stufft.io> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 3 May 2016 at 17:47, Donald Stufft wrote: > It will likely get decided as part of the build system PEP, whenever that > gets picked up again. Yes, but on 15th March (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) Robert posted > Just to set expectations: this whole process seems stalled to me; I'm > going to context switch and focus on things that can move forward. > Someone please ping me when its relevant to put effort in again :). And I think that's right. The whole build system PEP issue appears stalled from a lack of someone willing (or with the time) to make a call on the approach we take. As far as I'm aware, the decision remains with Nick. With the possible exception of Donald's proposal (which AFAIK never got formally published as a PEP) everything that can be said on the other proposals has been said, and the remaining differences are ones of choice of approach rather than anything affecting capabilities. (Robert's message at https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html summarised the state of the 3 proposals at the time). I think this is something that should be resolved - we don't appear to be gaining anything by waiting, and until we have a decision on the approach that's being taken, we aren't going to get anyone writing code for their preferred option. Nick - do you have the time to pick this up? Or does it need someone to step up as BDFL-delegate? Robert, Nathaniel, do you have time to spend on a final round of discussion on this, on the assumption that the goal will be a final decision at the end of it? Donald, do you have the time and interest to complete and publish your proposal? 
Paul From graffatcolmingov at gmail.com Tue May 3 13:52:40 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 3 May 2016 12:52:40 -0500 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Tue, May 3, 2016 at 12:10 PM, Paul Moore wrote: > On 3 May 2016 at 17:47, Donald Stufft wrote: >> It will likely get decided as part of the build system PEP, whenever that >> gets picked up again. > > Yes, but on 15th March > (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) > Robert posted > >> Just to set expectations: this whole process seems stalled to me; I'm >> going to context switch and focus on things that can move forward. >> Someone please ping me when its relevant to put effort in again :). > > And I think that's right. The whole build system PEP issue appears > stalled from a lack of someone willing (or with the time) to make a > call on the approach we take. > > As far as I'm aware, the decision remains with Nick. With the possible > exception of Donald's proposal (which AFAIK never got formally > published as a PEP) everything that can be said on the other proposals > has been said, and the remaining differences are ones of choice of > approach rather than anything affecting capabilities. (Robert's > message at https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > summarised the state of the 3 proposals at the time). > > I think this is something that should be resolved - we don't appear to > be gaining anything by waiting, and until we have a decision on the > approach that's being taken, we aren't going to get anyone writing > code for their preferred option. > > Nick - do you have the time to pick this up? Or does it need someone > to step up as BDFL-delegate? Robert, Nathaniel, do you have time to > spend on a final round of discussion on this, on the assumption that > the goal will be a final decision at the end of it? Donald, do you > have the time and interest to complete and publish your proposal? > > Paul I was following that PEP and going to implement it in Twine for the PyPA. If it would help, I can help Nick with this process. I read both PEPs a while ago and I think updates have been made so I'd need to read them again, but I can probably make some time for this. -- Ian From dholth at gmail.com Tue May 3 14:04:10 2016 From: dholth at gmail.com (Daniel Holth) Date: Tue, 03 May 2016 18:04:10 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: We did separate build from install. Now we just want to be able to build without [having to emulate] distutils; just having some dependencies installed before setup.py runs would also be a great boon. I'm reading part of this conversation as "a simple bdist_wheel bug is a reason to do a lot of work standardizing file formats" which I find unfortunate. If he is still up for it let Robert implement his own PEP as the way forward for build system abstraction. The extra PEPs are just delaying action. 
On Tue, May 3, 2016 at 1:11 PM Paul Moore wrote: > On 3 May 2016 at 17:47, Donald Stufft wrote: > > It will likely get decided as part of the build system PEP, whenever that > > gets picked up again. > > Yes, but on 15th March > (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) > Robert posted > > > Just to set expectations: this whole process seems stalled to me; I'm > > going to context switch and focus on things that can move forward. > > Someone please ping me when its relevant to put effort in again :). > > And I think that's right. The whole build system PEP issue appears > stalled from a lack of someone willing (or with the time) to make a > call on the approach we take. > > As far as I'm aware, the decision remains with Nick. With the possible > exception of Donald's proposal (which AFAIK never got formally > published as a PEP) everything that can be said on the other proposals > has been said, and the remaining differences are ones of choice of > approach rather than anything affecting capabilities. (Robert's > message at > https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > summarised the state of the 3 proposals at the time). > > I think this is something that should be resolved - we don't appear to > be gaining anything by waiting, and until we have a decision on the > approach that's being taken, we aren't going to get anyone writing > code for their preferred option. > > Nick - do you have the time to pick this up? Or does it need someone > to step up as BDFL-delegate? Robert, Nathaniel, do you have time to > spend on a final round of discussion on this, on the assumption that > the goal will be a final decision at the end of it? Donald, do you > have the time and interest to complete and publish your proposal? > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Tue May 3 14:07:18 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 3 May 2016 21:07:18 +0300 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <5728E8D6.102@nextday.fi> Having setuptools process the setup requirements before parsing install requirements would be a good step forward. Had that been done before, we could've just added a setup requirement for a newer setuptools to enable PEP 508 conditional requirements. 03.05.2016, 21:04, Daniel Holth kirjoitti: > We did separate build from install. Now we just want to be able to > build without [having to emulate] distutils; just having some > dependencies installed before setup.py runs would also be a great boon. > > I'm reading part of this conversation as "a simple bdist_wheel bug is > a reason to do a lot of work standardizing file formats" which I find > unfortunate. > > If he is still up for it let Robert implement his own PEP as the way > forward for build system abstraction. The extra PEPs are just delaying > action. 
> > On Tue, May 3, 2016 at 1:11 PM Paul Moore > wrote: > > On 3 May 2016 at 17:47, Donald Stufft > wrote: > > It will likely get decided as part of the build system PEP, > whenever that > > gets picked up again. > > Yes, but on 15th March > (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) > Robert posted > > > Just to set expectations: this whole process seems stalled to > me; I'm > > going to context switch and focus on things that can move forward. > > Someone please ping me when its relevant to put effort in again :). > > And I think that's right. The whole build system PEP issue appears > stalled from a lack of someone willing (or with the time) to make a > call on the approach we take. > > As far as I'm aware, the decision remains with Nick. With the possible > exception of Donald's proposal (which AFAIK never got formally > published as a PEP) everything that can be said on the other proposals > has been said, and the remaining differences are ones of choice of > approach rather than anything affecting capabilities. (Robert's > message at > https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > summarised the state of the 3 proposals at the time). > > I think this is something that should be resolved - we don't appear to > be gaining anything by waiting, and until we have a decision on the > approach that's being taken, we aren't going to get anyone writing > code for their preferred option. > > Nick - do you have the time to pick this up? Or does it need someone > to step up as BDFL-delegate? Robert, Nathaniel, do you have time to > spend on a final round of discussion on this, on the assumption that > the goal will be a final decision at the end of it? Donald, do you > have the time and interest to complete and publish your proposal? > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From leorochael at gmail.com Tue May 3 14:26:52 2016 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Tue, 3 May 2016 15:26:52 -0300 Subject: [Distutils] moving things forward In-Reply-To: <5728E8D6.102@nextday.fi> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> Message-ID: On 3 May 2016 at 15:07, Alex Gr?nholm wrote: > Having setuptools process the setup requirements before parsing install > requirements would be a good step forward. Had that been done before, we > could've just added a setup requirement for a newer setuptools to enable > PEP 508 conditional requirements. > Setuptools does process setup requirements before install requirements. The "chicken and egg" issue with setuptools is that, most of the time, setup requires are needed to calculate information that is passed into the `setup()` call itself. For example information on header files coming from the C api of `numpy` which is used to build extensions. This usually means importing code from the packages in "setup requires" before setuptools has a chance to actually look at it. 
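A minimal sketch of that chicken-and-egg pattern (numpy is just the usual example here, and the package layout is hypothetical):

    # Illustrative only: the module-level import runs long before setup()
    # gets a chance to act on setup_requires, so the build dependency
    # cannot bootstrap itself.
    from setuptools import setup, Extension
    import numpy  # ImportError if numpy isn't already installed

    setup(
        name="example",
        setup_requires=["numpy"],  # too late: the import above already ran
        ext_modules=[
            Extension(
                "example._speedups",
                ["example/_speedups.c"],
                include_dirs=[numpy.get_include()],  # needs numpy's C headers
            ),
        ],
    )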
A simple fix would be to allow `setup()` keywords to accept functions as well as direct values and only invoke the functions when the values are actually needed, but this idea never gained traction. Of course, even if this was implemented, it wouldn't help directly with "setup requiring" a new version of setuptools itself, unless setuptools detected this situation and reinvoked setup.py from scratch. Regards, Leo 03.05.2016, 21:04, Daniel Holth kirjoitti: > > We did separate build from install. Now we just want to be able to build > without [having to emulate] distutils; just having some dependencies > installed before setup.py runs would also be a great boon. > > I'm reading part of this conversation as "a simple bdist_wheel bug is a > reason to do a lot of work standardizing file formats" which I find > unfortunate. > > If he is still up for it let Robert implement his own PEP as the way > forward for build system abstraction. The extra PEPs are just delaying > action. > > On Tue, May 3, 2016 at 1:11 PM Paul Moore < > p.f.moore at gmail.com> wrote: > >> On 3 May 2016 at 17:47, Donald Stufft wrote: >> > It will likely get decided as part of the build system PEP, whenever >> that >> > gets picked up again. >> >> Yes, but on 15th March >> (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) >> Robert posted >> >> > Just to set expectations: this whole process seems stalled to me; I'm >> > going to context switch and focus on things that can move forward. >> > Someone please ping me when its relevant to put effort in again :). >> >> And I think that's right. The whole build system PEP issue appears >> stalled from a lack of someone willing (or with the time) to make a >> call on the approach we take. >> >> As far as I'm aware, the decision remains with Nick. With the possible >> exception of Donald's proposal (which AFAIK never got formally >> published as a PEP) everything that can be said on the other proposals >> has been said, and the remaining differences are ones of choice of >> approach rather than anything affecting capabilities. (Robert's >> message at >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >> summarised the state of the 3 proposals at the time). >> >> I think this is something that should be resolved - we don't appear to >> be gaining anything by waiting, and until we have a decision on the >> approach that's being taken, we aren't going to get anyone writing >> code for their preferred option. >> >> Nick - do you have the time to pick this up? Or does it need someone >> to step up as BDFL-delegate? Robert, Nathaniel, do you have time to >> spend on a final round of discussion on this, on the assumption that >> the goal will be a final decision at the end of it? Donald, do you >> have the time and interest to complete and publish your proposal? >> >> Paul >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Tue May 3 14:28:10 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 3 May 2016 11:28:10 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Tue, May 3, 2016 at 10:10 AM, Paul Moore wrote: > On 3 May 2016 at 17:47, Donald Stufft wrote: >> It will likely get decided as part of the build system PEP, whenever that >> gets picked up again. > > Yes, but on 15th March > (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) > Robert posted > >> Just to set expectations: this whole process seems stalled to me; I'm >> going to context switch and focus on things that can move forward. >> Someone please ping me when its relevant to put effort in again :). > > And I think that's right. The whole build system PEP issue appears > stalled from a lack of someone willing (or with the time) to make a > call on the approach we take. No, no, Nick's not the blocker. I'm the blocker! (Sorry) Donald + Robert + I had a longish conversation about this on IRC a month ago [1]. I volunteered to summarize back to the mailing list, and then I flaked -- so I guess this is that belated email :-). Here's the tentative conclusions we came to: Blocker 1 is figuring out what to do about the sdist format. The debate is between keeping something that's basically the current format, versus somehow cleaning it up (e.g. Donald's "source wheel" ideas). To move forward: - I'll write up a PEP that attempts to just document/standardize the current de facto sdist format and send it to the mailing list (basically: filename convention, PKG-INFO + a list of which fields in PKG-INFO pypi actually cares about, presence of setup.py), and adds some sort of optional-defaulting-to-1 SDist-Version (I guess in a file called SDIST by analogy with WHEEL). And also contains a rationale section explaining the trade-offs of standardizing this versus creating a new extension.) - Donald will make his case for the new extension approach on the mailing list - We beg Nick to read over both things and make a ruling so we can move on Blocker 2 is figuring out whether the new pip <-> build system "hook" interface should be command-line based (like the current draft of PEP 516) or Python function call based (like the current draft of PEP 517). It sounds like currently Donald and I are in favor of the python hooks approach, and Robert is indifferent between them and just wants to move forward, so we agreed that unless anyone objects we'll drop the command-line approach and go ahead with refining the Python function call approach. So... if you want to object then speak up now. Then there are a bunch of details to work out about what hooks to provide exactly and what their semantics should be, but hopefully once we've settled the two issues above that will be an easier discussion to have. So yeah, basically the next step is for me [2] to write up a spec for how sdists currently (really) work. -n [1] Logs (split across two pages in the log viewer): http://chat-logs.dcpython.org/day/pypa-dev/2016-03-30#18.23.46.njs http://chat-logs.dcpython.org/day/pypa-dev/2016-03-31#00.29.32.lifeless [2] Or if someone else wants to raise their hand and volunteer I wouldn't object, obviously I am a bit swamped right now :-) -- Nathaniel J. 
Smith -- https://vorpus.org From alex.gronholm at nextday.fi Tue May 3 14:28:55 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 3 May 2016 21:28:55 +0300 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> Message-ID: <5728EDE7.2040807@nextday.fi> No, setuptools parses the install requirements before acting on setup requirements. That is the source of the problem. If setuptools only parsed and acted on setup requirements before even parsing install requirements, this wouldn't be an issue. 03.05.2016, 21:26, Leonardo Rochael Almeida kirjoitti: > > > On 3 May 2016 at 15:07, Alex Gr?nholm > wrote: > > Having setuptools process the setup requirements before parsing > install requirements would be a good step forward. Had that been > done before, we could've just added a setup requirement for a > newer setuptools to enable PEP 508 conditional requirements. > > > Setuptools does process setup requirements before install > requirements. The "chicken and egg" issue with setuptools is that, > most of the time, setup requires are needed to calculate information > that is passed into the `setup()` call itself. > > For example information on header files coming from the C api of > `numpy` which is used to build extensions. > > This usually means importing code from the packages in "setup > requires" before setuptools has a chance to actually look at it. > > A simple fix would be to allow `setup()` keywords to accept functions > as well as direct values and only invoke the functions when the values > are actually needed, but this idea never gained traction. > > Of course, even if this was implemented, it wouldn't help directly > with "setup requiring" a new version of setuptools itself, unless > setuptools detected this situation and reinvoked setup.py from scratch. > > Regards, > > Leo > > > 03.05.2016, 21:04, Daniel Holth kirjoitti: >> We did separate build from install. Now we just want to be able >> to build without [having to emulate] distutils; just having some >> dependencies installed before setup.py runs would also be a great >> boon. >> >> I'm reading part of this conversation as "a simple bdist_wheel >> bug is a reason to do a lot of work standardizing file formats" >> which I find unfortunate. >> >> If he is still up for it let Robert implement his own PEP as the >> way forward for build system abstraction. The extra PEPs are just >> delaying action. >> >> On Tue, May 3, 2016 at 1:11 PM Paul Moore > > wrote: >> >> On 3 May 2016 at 17:47, Donald Stufft > > wrote: >> > It will likely get decided as part of the build system PEP, >> whenever that >> > gets picked up again. >> >> Yes, but on 15th March >> (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) >> Robert posted >> >> > Just to set expectations: this whole process seems stalled >> to me; I'm >> > going to context switch and focus on things that can move >> forward. >> > Someone please ping me when its relevant to put effort in >> again :). >> >> And I think that's right. The whole build system PEP issue >> appears >> stalled from a lack of someone willing (or with the time) to >> make a >> call on the approach we take. >> >> As far as I'm aware, the decision remains with Nick. 
With the >> possible >> exception of Donald's proposal (which AFAIK never got formally >> published as a PEP) everything that can be said on the other >> proposals >> has been said, and the remaining differences are ones of >> choice of >> approach rather than anything affecting capabilities. (Robert's >> message at >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >> summarised the state of the 3 proposals at the time). >> >> I think this is something that should be resolved - we don't >> appear to >> be gaining anything by waiting, and until we have a decision >> on the >> approach that's being taken, we aren't going to get anyone >> writing >> code for their preferred option. >> >> Nick - do you have the time to pick this up? Or does it need >> someone >> to step up as BDFL-delegate? Robert, Nathaniel, do you have >> time to >> spend on a final round of discussion on this, on the >> assumption that >> the goal will be a final decision at the end of it? Donald, >> do you >> have the time and interest to complete and publish your proposal? >> >> Paul >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> _______________________________________________ >> Distutils-SIG maillist -Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Tue May 3 14:33:31 2016 From: dholth at gmail.com (Daniel Holth) Date: Tue, 03 May 2016 18:33:31 +0000 Subject: [Distutils] moving things forward In-Reply-To: <5728EDE7.2040807@nextday.fi> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> Message-ID: What happened is that only a half-dozen setuptools experts (I am not one of those six people) know how to write an extended command or whatever that would actually be able to take advantage of setup requirements as implemented by setuptools. Everyone else wants to "import x" at the top of setup.py and pass arguments to the setup() function. So it would be better to have the installer make that possible. On Tue, May 3, 2016 at 2:29 PM Alex Gr?nholm wrote: > No, setuptools parses the install requirements before acting on setup > requirements. That is the source of the problem. If setuptools only parsed > and acted on setup requirements before even parsing install requirements, > this wouldn't be an issue. > > > 03.05.2016, 21:26, Leonardo Rochael Almeida kirjoitti: > > > > On 3 May 2016 at 15:07, Alex Gr?nholm wrote: > >> Having setuptools process the setup requirements before parsing install >> requirements would be a good step forward. Had that been done before, we >> could've just added a setup requirement for a newer setuptools to enable >> PEP 508 conditional requirements. >> > > Setuptools does process setup requirements before install requirements. > The "chicken and egg" issue with setuptools is that, most of the time, > setup requires are needed to calculate information that is passed into the > `setup()` call itself. 
> > For example information on header files coming from the C api of `numpy` > which is used to build extensions. > > This usually means importing code from the packages in "setup requires" > before setuptools has a chance to actually look at it. > > A simple fix would be to allow `setup()` keywords to accept functions as > well as direct values and only invoke the functions when the values are > actually needed, but this idea never gained traction. > > Of course, even if this was implemented, it wouldn't help directly with > "setup requiring" a new version of setuptools itself, unless setuptools > detected this situation and reinvoked setup.py from scratch. > > Regards, > > Leo > > > 03.05.2016, 21:04, Daniel Holth kirjoitti: >> >> We did separate build from install. Now we just want to be able to build >> without [having to emulate] distutils; just having some dependencies >> installed before setup.py runs would also be a great boon. >> >> I'm reading part of this conversation as "a simple bdist_wheel bug is a >> reason to do a lot of work standardizing file formats" which I find >> unfortunate. >> >> If he is still up for it let Robert implement his own PEP as the way >> forward for build system abstraction. The extra PEPs are just delaying >> action. >> >> On Tue, May 3, 2016 at 1:11 PM Paul Moore wrote: >> >>> On 3 May 2016 at 17:47, Donald Stufft wrote: >>> > It will likely get decided as part of the build system PEP, whenever >>> that >>> > gets picked up again. >>> >>> Yes, but on 15th March >>> (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) >>> Robert posted >>> >>> > Just to set expectations: this whole process seems stalled to me; I'm >>> > going to context switch and focus on things that can move forward. >>> > Someone please ping me when its relevant to put effort in again :). >>> >>> And I think that's right. The whole build system PEP issue appears >>> stalled from a lack of someone willing (or with the time) to make a >>> call on the approach we take. >>> >>> As far as I'm aware, the decision remains with Nick. With the possible >>> exception of Donald's proposal (which AFAIK never got formally >>> published as a PEP) everything that can be said on the other proposals >>> has been said, and the remaining differences are ones of choice of >>> approach rather than anything affecting capabilities. (Robert's >>> message at >>> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >>> summarised the state of the 3 proposals at the time). >>> >>> I think this is something that should be resolved - we don't appear to >>> be gaining anything by waiting, and until we have a decision on the >>> approach that's being taken, we aren't going to get anyone writing >>> code for their preferred option. >>> >>> Nick - do you have the time to pick this up? Or does it need someone >>> to step up as BDFL-delegate? Robert, Nathaniel, do you have time to >>> spend on a final round of discussion on this, on the assumption that >>> the goal will be a final decision at the end of it? Donald, do you >>> have the time and interest to complete and publish your proposal? 
>>> >>> Paul >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue May 3 14:42:31 2016 From: brett at python.org (Brett Cannon) Date: Tue, 03 May 2016 18:42:31 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Thanks for the update! Glad this is still moving forward. I'll continue to prod the list if things stall again as I want to respond to "Python packaging is broken" with "actually your knowledge is just outdated, go read packaging.python.org". :) On Tue, 3 May 2016 at 11:28 Nathaniel Smith wrote: > On Tue, May 3, 2016 at 10:10 AM, Paul Moore wrote: > > On 3 May 2016 at 17:47, Donald Stufft wrote: > >> It will likely get decided as part of the build system PEP, whenever > that > >> gets picked up again. > > > > Yes, but on 15th March > > (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) > > Robert posted > > > >> Just to set expectations: this whole process seems stalled to me; I'm > >> going to context switch and focus on things that can move forward. > >> Someone please ping me when its relevant to put effort in again :). > > > > And I think that's right. The whole build system PEP issue appears > > stalled from a lack of someone willing (or with the time) to make a > > call on the approach we take. > > No, no, Nick's not the blocker. I'm the blocker! (Sorry) > > Donald + Robert + I had a longish conversation about this on IRC a > month ago [1]. I volunteered to summarize back to the mailing list, > and then I flaked -- so I guess this is that belated email :-). > > Here's the tentative conclusions we came to: > > Blocker 1 is figuring out what to do about the sdist format. The > debate is between keeping something that's basically the current > format, versus somehow cleaning it up (e.g. Donald's "source wheel" > ideas). To move forward: > - I'll write up a PEP that attempts to just document/standardize the > current de facto sdist format and send it to the mailing list > (basically: filename convention, PKG-INFO + a list of which fields in > PKG-INFO pypi actually cares about, presence of setup.py), and adds > some sort of optional-defaulting-to-1 SDist-Version (I guess in a file > called SDIST by analogy with WHEEL). And also contains a rationale > section explaining the trade-offs of standardizing this versus > creating a new extension.) 
> - Donald will make his case for the new extension approach on the mailing > list > - We beg Nick to read over both things and make a ruling so we can move on > > Blocker 2 is figuring out whether the new pip <-> build system "hook" > interface should be command-line based (like the current draft of PEP > 516) or Python function call based (like the current draft of PEP > 517). It sounds like currently Donald and I are in favor of the python > hooks approach, and Robert is indifferent between them and just wants > to move forward, so we agreed that unless anyone objects we'll drop > the command-line approach and go ahead with refining the Python > function call approach. So... if you want to object then speak up now. > > Then there are a bunch of details to work out about what hooks to > provide exactly and what their semantics should be, but hopefully once > we've settled the two issues above that will be an easier discussion > to have. > > So yeah, basically the next step is for me [2] to write up a spec for > how sdists currently (really) work. > > -n > > [1] Logs (split across two pages in the log viewer): > http://chat-logs.dcpython.org/day/pypa-dev/2016-03-30#18.23.46.njs > http://chat-logs.dcpython.org/day/pypa-dev/2016-03-31#00.29.32.lifeless > [2] Or if someone else wants to raise their hand and volunteer I > wouldn't object, obviously I am a bit swamped right now :-) > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue May 3 14:51:25 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 3 May 2016 19:51:25 +0100 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 3 May 2016 at 19:28, Nathaniel Smith wrote: > No, no, Nick's not the blocker. I'm the blocker! (Sorry) > > Donald + Robert + I had a longish conversation about this on IRC a > month ago [1]. I volunteered to summarize back to the mailing list, > and then I flaked -- so I guess this is that belated email :-). Not a problem - I'm glad my mail prompted some movement, but I completely understand that other things can get in the way. > Here's the tentative conclusions we came to: > > Blocker 1 is figuring out what to do about the sdist format. The > debate is between keeping something that's basically the current > format, versus somehow cleaning it up (e.g. Donald's "source wheel" > ideas). To move forward: > - I'll write up a PEP that attempts to just document/standardize the > current de facto sdist format and send it to the mailing list > (basically: filename convention, PKG-INFO + a list of which fields in > PKG-INFO pypi actually cares about, presence of setup.py), and adds > some sort of optional-defaulting-to-1 SDist-Version (I guess in a file > called SDIST by analogy with WHEEL). And also contains a rationale > section explaining the trade-offs of standardizing this versus > creating a new extension.) 
> - Donald will make his case for the new extension approach on the mailing list > - We beg Nick to read over both things and make a ruling so we can move on Even though I was one who wanted a properly defined sdist format, I'm inclined to be OK with a "whatever works for now" approach, so I don't see a problem with documenting and tidying up the current de facto standard. If Donald comes up with a good proposal, that's great - but if not, we can always revisit that side of things later. > Blocker 2 is figuring out whether the new pip <-> build system "hook" > interface should be command-line based (like the current draft of PEP > 516) or Python function call based (like the current draft of PEP > 517). It sounds like currently Donald and I are in favor of the python > hooks approach, and Robert is indifferent between them and just wants > to move forward, so we agreed that unless anyone objects we'll drop > the command-line approach and go ahead with refining the Python > function call approach. So... if you want to object then speak up now. Cool. No objections from me. > Then there are a bunch of details to work out about what hooks to > provide exactly and what their semantics should be, but hopefully once > we've settled the two issues above that will be an easier discussion > to have. > > So yeah, basically the next step is for me [2] to write up a spec for > how sdists currently (really) work. [...] > [2] Or if someone else wants to raise their hand and volunteer I > wouldn't object, obviously I am a bit swamped right now :-) I don't want to volunteer to take this on completely, as I'll probably not have the time either, but if I can help in any way (research, proofreading, writing parts of the document) let me know. Thanks for the update! Paul From alex.gronholm at nextday.fi Tue May 3 15:28:22 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 3 May 2016 22:28:22 +0300 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> Message-ID: <5728FBD6.8060805@nextday.fi> As I pointed out in my previous post, "extended commands" are not the only use case for setup_requires -- upgrading setuptools itself would enable support for PEP 508 style conditional requirements. This currently does not work because if you have such requirements in your setup(), setuptools will fail when parsing those before it even has a chance to act on the minimum setuptools requirement specification in setup_requires. 03.05.2016, 21:33, Daniel Holth kirjoitti: > What happened is that only a half-dozen setuptools experts (I am not > one of those six people) know how to write an extended command or > whatever that would actually be able to take advantage of setup > requirements as implemented by setuptools. Everyone else wants to > "import x" at the top of setup.py and pass arguments to the setup() > function. So it would be better to have the installer make that possible. > > On Tue, May 3, 2016 at 2:29 PM Alex Gr?nholm > wrote: > > No, setuptools parses the install requirements before acting on > setup requirements. That is the source of the problem. If > setuptools only parsed and acted on setup requirements before even > parsing install requirements, this wouldn't be an issue. 
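To spell out the PEP 508 point with a concrete, made-up example -- 20.2 is my recollection of roughly the first setuptools release that understands the marker syntax, so treat the exact version number as an assumption:

    # setup.py -- sketch of a PEP 508 style conditional requirement.
    # A setuptools that predates environment-marker support fails while
    # parsing install_requires, before it ever acts on setup_requires, so
    # the bootstrap pin below never gets a chance to take effect.
    from setuptools import setup

    setup(
        name='mypkg',
        version='1.0',
        setup_requires=['setuptools >= 20.2'],   # intended bootstrap, seen too late
        install_requires=[
            "enum34; python_version < '3.4'",    # PEP 508 environment marker
            'requests',
        ],
    )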
> > > 03.05.2016, 21:26, Leonardo Rochael Almeida kirjoitti: >> >> >> On 3 May 2016 at 15:07, Alex Gr?nholm > > wrote: >> >> Having setuptools process the setup requirements before >> parsing install requirements would be a good step forward. >> Had that been done before, we could've just added a setup >> requirement for a newer setuptools to enable PEP 508 >> conditional requirements. >> >> >> Setuptools does process setup requirements before install >> requirements. The "chicken and egg" issue with setuptools is >> that, most of the time, setup requires are needed to calculate >> information that is passed into the `setup()` call itself. >> >> For example information on header files coming from the C api of >> `numpy` which is used to build extensions. >> >> This usually means importing code from the packages in "setup >> requires" before setuptools has a chance to actually look at it. >> >> A simple fix would be to allow `setup()` keywords to accept >> functions as well as direct values and only invoke the functions >> when the values are actually needed, but this idea never gained >> traction. >> >> Of course, even if this was implemented, it wouldn't help >> directly with "setup requiring" a new version of setuptools >> itself, unless setuptools detected this situation and reinvoked >> setup.py from scratch. >> >> Regards, >> >> Leo >> >> >> 03.05.2016, 21:04, Daniel Holth kirjoitti: >>> We did separate build from install. Now we just want to be >>> able to build without [having to emulate] distutils; just >>> having some dependencies installed before setup.py runs >>> would also be a great boon. >>> >>> I'm reading part of this conversation as "a simple >>> bdist_wheel bug is a reason to do a lot of work >>> standardizing file formats" which I find unfortunate. >>> >>> If he is still up for it let Robert implement his own PEP as >>> the way forward for build system abstraction. The extra PEPs >>> are just delaying action. >>> >>> On Tue, May 3, 2016 at 1:11 PM Paul Moore >>> > wrote: >>> >>> On 3 May 2016 at 17:47, Donald Stufft >> > wrote: >>> > It will likely get decided as part of the build system >>> PEP, whenever that >>> > gets picked up again. >>> >>> Yes, but on 15th March >>> (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) >>> Robert posted >>> >>> > Just to set expectations: this whole process seems >>> stalled to me; I'm >>> > going to context switch and focus on things that can >>> move forward. >>> > Someone please ping me when its relevant to put effort >>> in again :). >>> >>> And I think that's right. The whole build system PEP >>> issue appears >>> stalled from a lack of someone willing (or with the >>> time) to make a >>> call on the approach we take. >>> >>> As far as I'm aware, the decision remains with Nick. >>> With the possible >>> exception of Donald's proposal (which AFAIK never got >>> formally >>> published as a PEP) everything that can be said on the >>> other proposals >>> has been said, and the remaining differences are ones of >>> choice of >>> approach rather than anything affecting capabilities. >>> (Robert's >>> message at >>> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >>> summarised the state of the 3 proposals at the time). >>> >>> I think this is something that should be resolved - we >>> don't appear to >>> be gaining anything by waiting, and until we have a >>> decision on the >>> approach that's being taken, we aren't going to get >>> anyone writing >>> code for their preferred option. 
>>> >>> Nick - do you have the time to pick this up? Or does it >>> need someone >>> to step up as BDFL-delegate? Robert, Nathaniel, do you >>> have time to >>> spend on a final round of discussion on this, on the >>> assumption that >>> the goal will be a final decision at the end of it? >>> Donald, do you >>> have the time and interest to complete and publish your >>> proposal? >>> >>> Paul >>> _______________________________________________ >>> Distutils-SIG maillist - Distutils-SIG at python.org >>> >>> https://mail.python.org/mailman/listinfo/distutils-sig >>> >>> >>> >>> _______________________________________________ >>> Distutils-SIG maillist -Distutils-SIG at python.org >>> https://mail.python.org/mailman/listinfo/distutils-sig >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue May 3 15:33:00 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 3 May 2016 15:33:00 -0400 Subject: [Distutils] moving things forward In-Reply-To: <5728FBD6.8060805@nextday.fi> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> <5728FBD6.8060805@nextday.fi> Message-ID: > On May 3, 2016, at 3:28 PM, Alex Gr?nholm wrote: > > As I pointed out in my previous post, "extended commands" are not the only use case for setup_requires -- upgrading setuptools itself would enable support for PEP 508 style conditional requirements. This currently does not work because if you have such requirements in your setup(), setuptools will fail when parsing those before it even has a chance to act on the minimum setuptools requirement specification in setup_requires. I don?t think this would work anyways without some terrible hacks since by the time setup_requires have been installed setuptools has already been imported so the *old* version will be in sys.modules. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Tue May 3 15:35:12 2016 From: dholth at gmail.com (Daniel Holth) Date: Tue, 03 May 2016 19:35:12 +0000 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> <5728FBD6.8060805@nextday.fi> Message-ID: Who cares exactly why it doesn't work? We know how to fix it by doing something different (put build dependencies in a static file and have them installed by pip before running setup.py). 
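Roughly what that would look like from the installer's side -- the file name and the overall flow here are made up for illustration, not a standard:

    # Hypothetical front-end sketch: install statically declared build
    # dependencies with pip, then (and only then) run setup.py.
    import os
    import subprocess
    import sys

    def build(source_dir):
        req_file = os.path.join(source_dir, 'build-requires.txt')  # made-up name
        if os.path.exists(req_file):
            with open(req_file) as f:
                requires = [line.strip() for line in f
                            if line.strip() and not line.startswith('#')]
            if requires:
                # pip does the installing, so even setuptools itself can be
                # upgraded before setup.py gets imported.
                subprocess.check_call(
                    [sys.executable, '-m', 'pip', 'install'] + requires)
        # Build deps are now importable from the very first line of setup.py.
        subprocess.check_call(
            [sys.executable, 'setup.py', 'bdist_wheel'], cwd=source_dir)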
On Tue, May 3, 2016 at 3:33 PM Donald Stufft wrote: > > On May 3, 2016, at 3:28 PM, Alex Gr?nholm > wrote: > > As I pointed out in my previous post, "extended commands" are not the only > use case for setup_requires -- upgrading setuptools itself would enable > support for PEP 508 style conditional requirements. This currently does not > work because if you have such requirements in your setup(), setuptools will > fail when parsing those before it even has a chance to act on the minimum > setuptools requirement specification in setup_requires. > > > I don?t think this would work anyways without some terrible hacks since by > the time setup_requires have been installed setuptools has already been > imported so the *old* version will be in sys.modules. > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Tue May 3 15:39:27 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 3 May 2016 15:39:27 -0400 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> <5728FBD6.8060805@nextday.fi> Message-ID: <9E8261DA-5707-45E6-9A40-643F6DA50326@stufft.io> > On May 3, 2016, at 3:35 PM, Daniel Holth wrote: > > Who cares exactly why it doesn't work? We know how to fix it by doing something different (put build dependencies in a static file and have them installed by pip before running setup.py). > Presumably Alex would like to know why we can?t implement his suggestion. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From alex.gronholm at nextday.fi Tue May 3 15:41:13 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 3 May 2016 22:41:13 +0300 Subject: [Distutils] moving things forward In-Reply-To: <9E8261DA-5707-45E6-9A40-643F6DA50326@stufft.io> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <5728E8D6.102@nextday.fi> <5728EDE7.2040807@nextday.fi> <5728FBD6.8060805@nextday.fi> <9E8261DA-5707-45E6-9A40-643F6DA50326@stufft.io> Message-ID: <5728FED9.4050502@nextday.fi> I certainly have no problem with Daniel's suggestion (and it would be much better than my solution) but would involve yet more standards work. Who's going to do that and when? 03.05.2016, 22:39, Donald Stufft kirjoitti: >> On May 3, 2016, at 3:35 PM, Daniel Holth wrote: >> >> Who cares exactly why it doesn't work? We know how to fix it by doing something different (put build dependencies in a static file and have them installed by pip before running setup.py). >> > Presumably Alex would like to know why we can?t implement his suggestion. 
> > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > From ncoghlan at gmail.com Wed May 4 01:44:16 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 4 May 2016 15:44:16 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 04:28, Nathaniel Smith wrote: > On Tue, May 3, 2016 at 10:10 AM, Paul Moore wrote: >> On 3 May 2016 at 17:47, Donald Stufft wrote: >>> It will likely get decided as part of the build system PEP, whenever that >>> gets picked up again. >> >> Yes, but on 15th March >> (https://mail.python.org/pipermail/distutils-sig/2016-March/028457.html) >> Robert posted >> >>> Just to set expectations: this whole process seems stalled to me; I'm >>> going to context switch and focus on things that can move forward. >>> Someone please ping me when its relevant to put effort in again :). >> >> And I think that's right. The whole build system PEP issue appears >> stalled from a lack of someone willing (or with the time) to make a >> call on the approach we take. > > No, no, Nick's not the blocker. I'm the blocker! (Sorry) It's been an interesting couple of months, so even if you had got this post together earlier, it's quite possible we would have ended up blocked on me anyway :) > Donald + Robert + I had a longish conversation about this on IRC a > month ago [1]. I volunteered to summarize back to the mailing list, > and then I flaked -- so I guess this is that belated email :-). > > Here's the tentative conclusions we came to: > > Blocker 1 is figuring out what to do about the sdist format. The > debate is between keeping something that's basically the current > format, versus somehow cleaning it up (e.g. Donald's "source wheel" > ideas). To move forward: > - I'll write up a PEP that attempts to just document/standardize the > current de facto sdist format and send it to the mailing list > (basically: filename convention, PKG-INFO + a list of which fields in > PKG-INFO pypi actually cares about, presence of setup.py), and adds > some sort of optional-defaulting-to-1 SDist-Version (I guess in a file > called SDIST by analogy with WHEEL). And also contains a rationale > section explaining the trade-offs of standardizing this versus > creating a new extension.) > - Donald will make his case for the new extension approach on the mailing list > - We beg Nick to read over both things and make a ruling so we can move on +1 for just documenting the sdist-we-have-today, and avoiding making the build system decoupling proposals dependent on any changes to that. 
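For anyone wondering what "the sdist-we-have-today" amounts to in practice, the de facto rules quoted above boil down to a naming convention plus a couple of well-known files. A rough checker, written against those informal rules only (not a spec):

    import tarfile

    def looks_like_current_sdist(path):
        """True if path matches the de facto layout: a {name}-{version}.tar.gz
        whose single top-level directory contains setup.py and PKG-INFO."""
        with tarfile.open(path, 'r:gz') as tf:
            names = tf.getnames()
        top_levels = {name.split('/')[0] for name in names}
        if len(top_levels) != 1:
            return False
        root = top_levels.pop()
        return ('%s/setup.py' % root in names) and ('%s/PKG-INFO' % root in names)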
(That's not to say defining a better source format isn't desirable - it's just a nice-to-have future enhancement, rather than being essential in the near term) One of the big reasons I stopped working on metadata 2.0 is that there's no way for us to force re-releases of everything on PyPI, which means we need to accommodate already published releases if we want a new approach to be widely adopted That's why PEP 440 (particularly its normalisation scheme) was checked for a high level of pragmatic compatibility against PyPI's existing contents, why PEP 508's dependency specifier syntax is closer to that defined by setuptools than it is the PEP 345 one, and why I think better documenting the current sdist format is the right way forward here. > Blocker 2 is figuring out whether the new pip <-> build system "hook" > interface should be command-line based (like the current draft of PEP > 516) or Python function call based (like the current draft of PEP > 517). It sounds like currently Donald and I are in favor of the python > hooks approach, and Robert is indifferent between them and just wants > to move forward, so we agreed that unless anyone objects we'll drop > the command-line approach and go ahead with refining the Python > function call approach. So... if you want to object then speak up now. While I'd previously expressed a preference for a command line based approach, the subsequent discussion persuaded me the Python API was at least as viable, if not preferable, so I'd be happy to endorse that approach. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Wed May 4 02:03:49 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 4 May 2016 18:03:49 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 05:10, Paul Moore wrote: > Nick - do you have the time to pick this up? Or does it need someone > to step up as BDFL-delegate? Robert, Nathaniel, do you have time to > spend on a final round of discussion on this, on the assumption that > the goal will be a final decision at the end of it? Donald, do you > have the time and interest to complete and publish your proposal? I'm currently going through a redundancy process @ HPE - while I remain convinced that sorting out the issues with packaging is crucial for Python going forward, my time to work on it is now rather more limited than it was. I'm not sure where I'm going to end up, nor how much work time I'll have for this going forward: it may end up being a personal-time-only thing. Planning wise, we have to work on 'personal time only' going forward - which means I"m going to be very careful about biting off more than I can chew :) Right now, I don't see any point updating the PEP at this point. The edits I'd expect to make if the conclusions I suggested in https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html are adopted are: - change to a Python API - BFDL call on the file format and name There is no need to issue a new sdist thing, because sdists today are *already* documented across PEPs 241, 314 and 345. 
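(To make the first bullet -- "change to a Python API" -- a little more concrete for people who haven't read the PEP drafts: the config file would name a Python object that the front end imports and calls, rather than a command line it shells out to. Every name and signature below is a placeholder; pinning them down is exactly the work that still needs the BDFL call.)

    # Illustrative only -- module name, hook name and signature are made up.
    # A build backend exposes a plain Python function:
    def build_wheel(source_dir, wheel_dir, config_settings=None):
        """Build a wheel from source_dir into wheel_dir; return its filename."""
        raise NotImplementedError('real backends (setuptools, flit, ...) go here')

    # ...and a front end such as pip imports and calls it:
    import importlib

    def call_build_hook(backend_name, source_dir, wheel_dir):
        backend = importlib.import_module(backend_name)  # named in the config file
        return backend.build_wheel(source_dir, wheel_dir, config_settings={})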
So - if Nick is ready to rule, there is basically about an hour of editing to switch the CLI to a Python API and update the file format bits, and then we can call it provisional and encourage implementations, update the thunk implementation and so on. If there's more debate to be had, thats fine too, but editing the PEP won't achieve that. -Rob From ncoghlan at gmail.com Wed May 4 03:39:46 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 4 May 2016 17:39:46 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 16:03, Robert Collins wrote: > The edits I'd expect to make if the conclusions I suggested in > https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > are adopted are: > > - change to a Python API > - BFDL call on the file format and name > > There is no need to issue a new sdist thing, because sdists today are > *already* documented across PEPs 241, 314 and 345. I already +1'ed using a Python API, but on the file name & format side, we have the following candidates and prior art floating around: pypa.json in PEP 516 pypackage.json in PEP 517 pydist.json in PEP 426 METADATA (Key: Value) in sdists and wheels WHEEL (Key: Value) in wheels My impression is that we're generally agreed on wanting to move from Key:Value to JSON as the baseline for interoperability formats, so my suggestion is to use the name "pybuild.json". The problem I have with pypa/pypackage/pydist is that they're all too broad - we're moving towards an explicitly multi-stage pipeline (tree -> sdist -> wheel -> installed) and additional metadata gets added at each step. The "pybuild.json" metadata specifically covers how to get from a source tree or sdist to a built wheel file, so I think it makes sense to use a name that reflects that. Cheers, Nick. P.S. If you search for "pybuild", there *are* some existing utilities out there by that name, including a package builder for Debian. If you search for "pybuild.json" instead, then the 3rd-ranked link, at least for me, is Antoine Pitrou's original suggestion of that name back in November. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Wed May 4 06:33:08 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 4 May 2016 06:33:08 -0400 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: > On May 4, 2016, at 3:39 AM, Nick Coghlan wrote: > > On 4 May 2016 at 16:03, Robert Collins wrote: >> The edits I'd expect to make if the conclusions I suggested in >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >> are adopted are: >> >> - change to a Python API >> - BFDL call on the file format and name >> >> There is no need to issue a new sdist thing, because sdists today are >> *already* documented across PEPs 241, 314 and 345. 
> > I already +1'ed using a Python API, but on the file name & format > side, we have the following candidates and prior art floating around: > > pypa.json in PEP 516 > pypackage.json in PEP 517 > pydist.json in PEP 426 > METADATA (Key: Value) in sdists and wheels > WHEEL (Key: Value) in wheels > > My impression is that we're generally agreed on wanting to move from > Key:Value to JSON as the baseline for interoperability formats, so my > suggestion is to use the name "pybuild.json?. I'd actually prefer not using JSON for something that is human editable/writable because I think it's a pretty poor format for that case. It is overly restrictive in what it allows (for instance, no trailing comma gets me every time) and the lack of comments I think make it a poor format for that. I think JSON is great for what gets included *IN* a sdist or a wheel, but for what sits inside of a VCS checkout that we expect human beings to edit, I think not. I'm +1 on tying this to a new extension because I feel like it fundamentally changes what it means to be a Python sdist. It eliminates the setup.py which is just about the only thing you actually can depend on existing inside of a Python sdist and there are a lot of things out there that make the assumption that Python + ".tar.gz/.zip/etc" == has a setup.py and are going to be broken from it. A new extension means those tools will ignore it and we can bake in versioning of the format itself right from the start (even if the new format looks remarkably like the old format with a different name). I also believe that we can't provide a replacement for setup.py without either purposely declaring we no longer support something that people used from it or providing a way to support that in the new, setup.py-less format. One thing I remarked to Nataniel yesterday was that it might be a good idea to drop the build system aspect of these for right now (since afaict all the invested parties are currently overloaded and/or have a lack of time) and focus soley on the part of the proposal that enables us to get a real setup_requires that doesn't involve needing to do the tricky delayed import thing that the current implementation of setup_requires needs. That would net a pretty huge win I think since people would be able to use abstractions in their setup.py (even if they still use setuptools) through a simple ``import statement``, including the ability to specify what version of setuptools they need to build. People could still implement non setuptools build systems by mimicing the setup.py interface (though it will still continue to be less then amazingly documented/defined) but some of the Numpy folks from one of the previous threads stated that mimicing setup.py wasn't really the hard part of making numpy.distutils anyways. The benefit of that is not only a smaller chunk, but also the chunk that I think (and I could be wrong) that there's no real disagreement on about how to go about doing it (besides some bikshedding things like what the filename should be). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From dholth at gmail.com Wed May 4 09:00:38 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 04 May 2016 13:00:38 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: +1 It would be great to start with a real setup_requires and probably would not interfere with later build system abstractions at all. On Wed, May 4, 2016 at 6:33 AM Donald Stufft wrote: > > > On May 4, 2016, at 3:39 AM, Nick Coghlan wrote: > > > > On 4 May 2016 at 16:03, Robert Collins > wrote: > >> The edits I'd expect to make if the conclusions I suggested in > >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > >> are adopted are: > >> > >> - change to a Python API > >> - BFDL call on the file format and name > >> > >> There is no need to issue a new sdist thing, because sdists today are > >> *already* documented across PEPs 241, 314 and 345. > > > > I already +1'ed using a Python API, but on the file name & format > > side, we have the following candidates and prior art floating around: > > > > pypa.json in PEP 516 > > pypackage.json in PEP 517 > > pydist.json in PEP 426 > > METADATA (Key: Value) in sdists and wheels > > WHEEL (Key: Value) in wheels > > > > My impression is that we're generally agreed on wanting to move from > > Key:Value to JSON as the baseline for interoperability formats, so my > > suggestion is to use the name "pybuild.json?. > > I'd actually prefer not using JSON for something that is human > editable/writable because I think it's a pretty poor format for that case. > It > is overly restrictive in what it allows (for instance, no trailing comma > gets > me every time) and the lack of comments I think make it a poor format for > that. > > I think JSON is great for what gets included *IN* a sdist or a wheel, but > for > what sits inside of a VCS checkout that we expect human beings to edit, I > think > not. > > I'm +1 on tying this to a new extension because I feel like it > fundamentally > changes what it means to be a Python sdist. It eliminates the setup.py > which > is just about the only thing you actually can depend on existing inside of > a > Python sdist and there are a lot of things out there that make the > assumption > that Python + ".tar.gz/.zip/etc" == has a setup.py and are going to be > broken > from it. A new extension means those tools will ignore it and we can bake > in > versioning of the format itself right from the start (even if the new > format > looks remarkably like the old format with a different name). > > I also believe that we can't provide a replacement for setup.py without > either > purposely declaring we no longer support something that people used from > it or > providing a way to support that in the new, setup.py-less format. 
> > One thing I remarked to Nataniel yesterday was that it might be a good > idea to > drop the build system aspect of these for right now (since afaict all the > invested parties are currently overloaded and/or have a lack of time) and > focus > soley on the part of the proposal that enables us to get a real > setup_requires > that doesn't involve needing to do the tricky delayed import thing that the > current implementation of setup_requires needs. That would net a pretty > huge > win I think since people would be able to use abstractions in their > setup.py > (even if they still use setuptools) through a simple ``import statement``, > including the ability to specify what version of setuptools they need to > build. > People could still implement non setuptools build systems by mimicing the > setup.py interface (though it will still continue to be less then amazingly > documented/defined) but some of the Numpy folks from one of the previous > threads stated that mimicing setup.py wasn't really the hard part of making > numpy.distutils anyways. The benefit of that is not only a smaller chunk, > but > also the chunk that I think (and I could be wrong) that there's no real > disagreement on about how to go about doing it (besides some bikshedding > things > like what the filename should be). > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed May 4 09:28:24 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 4 May 2016 23:28:24 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 23:00, Daniel Holth wrote: > +1 It would be great to start with a real setup_requires and probably would > not interfere with later build system abstractions at all. If we're going to go down that path, perhaps it might make sense to just define a standard [setup_requires] section in setup.cfg? Quite a few projects already have one of those thanks to distutiils2, d2to1 and pbr, which means the pragmatic approach here might be to ask what needs to change so the qualifier can be removed from this current observation in the PBR docs: "The setup.cfg file is an ini-like file that can mostly replace the setup.py file." The build system abstraction config could then also just be another setup.cfg section. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Wed May 4 09:32:23 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 04 May 2016 13:32:23 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Agree. 
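For readers skimming: the section being agreed to here is small enough to sketch. The section and option names below are placeholders until something is actually written down, but nothing beyond the standard library is needed to read it:

    # Hypothetical [setup_requires] section in setup.cfg, and how a tool
    # could read it.  Names are placeholders, not an agreed spec.
    import configparser
    import textwrap

    SAMPLE_SETUP_CFG = textwrap.dedent("""
        [setup_requires]
        requires =
            setuptools >= 20.2
            cython
            numpy
    """)

    parser = configparser.ConfigParser()
    parser.read_string(SAMPLE_SETUP_CFG)
    requires = [line.strip()
                for line in parser.get('setup_requires', 'requires').splitlines()
                if line.strip()]
    # These could now be pip-installed before setup.py runs, as sketched
    # earlier in the thread.
    print(requires)   # ['setuptools >= 20.2', 'cython', 'numpy']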
On Wed, May 4, 2016, 09:28 Nick Coghlan wrote: > On 4 May 2016 at 23:00, Daniel Holth wrote: > > +1 It would be great to start with a real setup_requires and probably > would > > not interfere with later build system abstractions at all. > > If we're going to go down that path, perhaps it might make sense to > just define a standard [setup_requires] section in setup.cfg? > > Quite a few projects already have one of those thanks to distutiils2, > d2to1 and pbr, which means the pragmatic approach here might be to ask > what needs to change so the qualifier can be removed from this current > observation in the PBR docs: "The setup.cfg file is an ini-like file > that can mostly replace the setup.py file." > > The build system abstraction config could then also just be another > setup.cfg section. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pombredanne at nexb.com Wed May 4 10:26:14 2016 From: pombredanne at nexb.com (Philippe Ombredanne) Date: Wed, 4 May 2016 16:26:14 +0200 Subject: [Distutils] [License-discuss] Trove Classifiers In-Reply-To: References: Message-ID: On Wed, May 4, 2016 at 4:02 PM, Paul R. Tagliamonte wrote: > Hey all, > > For those who don't know, Trove classifiers are used by the Python > world to talk about what is contained in the Python package. Stuff > like saying "It's under the MIT/Expat license!" or "It's beta!". > > > I was looking at the tags, and I saw one that made me "wat" a bit. > >> License :: OSI Approved :: GNU Free Documentation License (FDL) > > AFAIK the GFDL is *not* OSI approved, both due to it not being a > software license, as well as I'm sure the invariant clauses being an > issue. > > Has anyone come across this yet? Anyone have objections to me trying > to clean up the Trove list? Good catch! Cleaning the list is going to be easy on the Python.org side, especially since a new Pypi site is in the making. [1] The harder or impossible part would have be to clean up the 1000+ of packages using this faulty classifier.... But there is really only three of these [2] and all of them look either pretty old or abandoned and none has its packages effectively hosted or distributed on Pypi. [1] https://pypi.io/ [2] https://pypi.python.org/pypi?:action=browse&c=63 -- Cordially Philippe Ombredanne From robertc at robertcollins.net Wed May 4 16:25:05 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 5 May 2016 08:25:05 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 19:39, Nick Coghlan wrote: > On 4 May 2016 at 16:03, Robert Collins wrote: >> The edits I'd expect to make if the conclusions I suggested in >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html >> are adopted are: >> >> - change to a Python API >> - BFDL call on the file format and name >> >> There is no need to issue a new sdist thing, because sdists today are >> *already* documented across PEPs 241, 314 and 345. 
> > I already +1'ed using a Python API, but on the file name & format > side, we have the following candidates and prior art floating around: > > pypa.json in PEP 516 > pypackage.json in PEP 517 > pydist.json in PEP 426 > METADATA (Key: Value) in sdists and wheels > WHEEL (Key: Value) in wheels > > My impression is that we're generally agreed on wanting to move from > Key:Value to JSON as the baseline for interoperability formats, so my > suggestion is to use the name "pybuild.json". > > The problem I have with pypa/pypackage/pydist is that they're all too > broad - we're moving towards an explicitly multi-stage pipeline (tree > -> sdist -> wheel -> installed) and additional metadata gets added at > each step. The "pybuild.json" metadata specifically covers how to get > from a source tree or sdist to a built wheel file, so I think it makes > sense to use a name that reflects that. I don't think we have anything resembling consensus on that pipeline idea. pybuild.json would be fine as a name though :). -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From dholth at gmail.com Wed May 4 16:26:08 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 04 May 2016 20:26:08 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Just call it Steve On Wed, May 4, 2016, 16:25 Robert Collins wrote: > On 4 May 2016 at 19:39, Nick Coghlan wrote: > > On 4 May 2016 at 16:03, Robert Collins > wrote: > >> The edits I'd expect to make if the conclusions I suggested in > >> https://mail.python.org/pipermail/distutils-sig/2016-March/028437.html > >> are adopted are: > >> > >> - change to a Python API > >> - BFDL call on the file format and name > >> > >> There is no need to issue a new sdist thing, because sdists today are > >> *already* documented across PEPs 241, 314 and 345. > > > > I already +1'ed using a Python API, but on the file name & format > > side, we have the following candidates and prior art floating around: > > > > pypa.json in PEP 516 > > pypackage.json in PEP 517 > > pydist.json in PEP 426 > > METADATA (Key: Value) in sdists and wheels > > WHEEL (Key: Value) in wheels > > > > My impression is that we're generally agreed on wanting to move from > > Key:Value to JSON as the baseline for interoperability formats, so my > > suggestion is to use the name "pybuild.json". > > > > The problem I have with pypa/pypackage/pydist is that they're all too > > broad - we're moving towards an explicitly multi-stage pipeline (tree > > -> sdist -> wheel -> installed) and additional metadata gets added at > > each step. The "pybuild.json" metadata specifically covers how to get > > from a source tree or sdist to a built wheel file, so I think it makes > > sense to use a name that reflects that. > > I don't think we have anything resembling consensus on that pipeline idea. > > pybuild.json would be fine as a name though :). > > -Rob > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From robertc at robertcollins.net Wed May 4 16:28:26 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 5 May 2016 08:28:26 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 4 May 2016 at 22:33, Donald Stufft wrote: > ..> I also believe that we can't provide a replacement for setup.py without either > purposely declaring we no longer support something that people used from it or > providing a way to support that in the new, setup.py-less format. The thunk I wrote being insufficient? > One thing I remarked to Nataniel yesterday was that it might be a good idea to > drop the build system aspect of these for right now (since afaict all the > invested parties are currently overloaded and/or have a lack of time) and focus > soley on the part of the proposal that enables us to get a real setup_requires ... the only reason I got involved in build system discussions was pushback 18months or so back when I implemented a proof of concept for pip that just used setup.cfg. I'd be very happy to ignore all the build system stuff and just do bootstrap requirements in setup.cfg. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From chris.barker at noaa.gov Wed May 4 18:11:38 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 4 May 2016 15:11:38 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 3:33 AM, Donald Stufft wrote: > I'd actually prefer not using JSON for something that is human > editable/writable because I think it's a pretty poor format for that case. > It > is overly restrictive in what it allows (for instance, no trailing comma > gets > me every time) and the lack of comments I think make it a poor format for > that. > yup -- these are really annoying when JSON is used for a config format. but INI pretty much sucks, too. What about PYSON (my term) -- python literals -- could be evaluated with ast.literal_eval to be safe, and would give us comments, and trailing commas, and python's richer data types. or just plain Python -- the file would be imported and we'd specify particular variables that needed to be defined -- maybe as simple as: config = a_big_dict_with_lots_of_stuff_in_it. so it could be purely declarative, but users could also put code in there to customize the configuration on the fly, too. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
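To make that "python literals" idea concrete, it is roughly the following
(file name and keys are invented purely for illustration):

    # pybuild.cfg would contain a single Python literal, e.g.:
    #
    #     {
    #         # comments and trailing commas are fine here
    #         "setup_requires": ["setuptools >= 20.6", "cffi"],
    #     }

    import ast

    with open("pybuild.cfg") as f:
        config = ast.literal_eval(f.read())

    # literal_eval only accepts literals (dicts, lists, strings, numbers,
    # booleans, None), so no arbitrary code can run while parsing the file
    print(config["setup_requires"])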
URL:

From p.f.moore at gmail.com Wed May 4 18:28:31 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 4 May 2016 23:28:31 +0100
Subject: [Distutils] moving things forward (was: wheel including files it shouldn't)
In-Reply-To:
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io>
Message-ID:

On 4 May 2016 at 23:11, Chris Barker wrote:
> so it could be purely declarative, but users could also put code in there to
> customize the configuration on the fly, too.

That basically repeats the mistake that was made with setup.py. We
explicitly don't want an executable format for specifying build
configuration.
Paul

From ethan at stoneleaf.us Wed May 4 19:23:31 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 May 2016 16:23:31 -0700
Subject: [Distutils] moving things forward
In-Reply-To:
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io>
Message-ID: <572A8473.1000906@stoneleaf.us>

On 05/04/2016 03:28 PM, Paul Moore wrote:
> On 4 May 2016 at 23:11, Chris Barker wrote:
> That basically repeats the mistake that was made with setup.py. We
> explicitly don't want an executable format for specifying build
> configuration.

Executable code or not, we need to be able to specify different files
depending on the python version.

--
~Ethan~

From alex.gronholm at nextday.fi Wed May 4 19:29:13 2016
From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=)
Date: Thu, 5 May 2016 02:29:13 +0300
Subject: [Distutils] moving things forward
In-Reply-To: <572A8473.1000906@stoneleaf.us>
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572A8473.1000906@stoneleaf.us>
Message-ID: <572A85C9.6090306@nextday.fi>

Different files for what? Something not covered by PEP 508?

05.05.2016, 02:23, Ethan Furman wrote:
> On 05/04/2016 03:28 PM, Paul Moore wrote:
>> On 4 May 2016 at 23:11, Chris Barker wrote:
>
>
>> That basically repeats the mistake that was made with setup.py. We
>> explicitly don't want an executable format for specifying build
>> configuration.
>
> Executable code or not, we need to be able to specify different files
> depending on the python version.
>
> --
> ~Ethan~
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

From ethan at stoneleaf.us Wed May 4 19:58:06 2016
From: ethan at stoneleaf.us (Ethan Furman)
Date: Wed, 04 May 2016 16:58:06 -0700
Subject: [Distutils] moving things forward
In-Reply-To: <572A85C9.6090306@nextday.fi>
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572A8473.1000906@stoneleaf.us> <572A85C9.6090306@nextday.fi>
Message-ID: <572A8C8E.4030005@stoneleaf.us>

On 05/04/2016 04:29 PM, Alex Grönholm wrote:
> Different files for what? Something not covered by PEP 508?

Somebody will have to distill that PEP, I have only a small inkling of
what it's trying to say.
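For reference, the piece of PEP 508 that matters for this question is
environment markers: a requirement string can carry a condition on the
interpreter or platform, for example

    enum34; python_version < "3.4"
    pywin32 >= 1.0; sys_platform == "win32"

and such a marker can be evaluated programmatically with the packaging
library that pip vendors:

    from packaging.markers import Marker

    print(Marker('python_version < "3.4"').evaluate())  # False on 3.4+

Markers cover "install this dependency only on these Pythons"; whether a
single sdist can install *different files* per Python version is a separate
build-system question, which Paul picks up later in the thread.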
As for my specific use case: I have Python3-only files in my distribution, so they should only be installed on Python3 systems. Python2 systems generate useless errors. -- ~Ethan~ From dholth at gmail.com Wed May 4 20:09:12 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 05 May 2016 00:09:12 +0000 Subject: [Distutils] moving things forward In-Reply-To: <572A8C8E.4030005@stoneleaf.us> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572A8473.1000906@stoneleaf.us> <572A85C9.6090306@nextday.fi> <572A8C8E.4030005@stoneleaf.us> Message-ID: The only part that needs to be static is the metadata. The actual build can have code without hindering our dependency resolution aspirations. On Wed, May 4, 2016, 19:59 Ethan Furman wrote: > On 05/04/2016 04:29 PM, Alex Gr?nholm wrote: > > > Different files for what? Something not covered by PEP 508? > > Somebody will have to distill that PEP, I have only an small inkling of > what it's trying to say. > > As for my specific use case: I have Python3-only files in my > distribution, so they should only be installed on Python3 systems. > Python2 systems generate useless errors. > > -- > ~Ethan~ > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1chardj0n3s at gmail.com Wed May 4 20:28:06 2016 From: r1chardj0n3s at gmail.com (Richard Jones) Date: Wed, 4 May 2016 19:28:06 -0500 Subject: [Distutils] Calling for a volunteer to help admin PyPI Message-ID: Hi all, I've fallen seriously behind in trying to admin PyPI by myself, and I'm calling for someone to help. Generally this means helping people reset their email address for account recovery, or trying to contact owners of packages to facilitate ownership changes. The *ahem* "tools" available aren't the best, and will require privileged access to a system to do some of this work, so you'll need to be someone I personally trust, or at least vouched for by someone I personally trust in the Python community. Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed May 4 22:45:54 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 5 May 2016 12:45:54 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 08:28, Paul Moore wrote: > On 4 May 2016 at 23:11, Chris Barker wrote: >> so it could be purely declarative, but users could also put code in there to >> customize the configuration on the fly, too. > > That basically repeats the mistake that was made with setup.py. We > explicitly don't want an executable format for specifying build > configuration. 
This configuration vs customisation distinction is probably worth spelling out for folks without a formal software engineering or computer science background, so: Customisation is programming: you're writing plugins in a Turing complete language (where "Turing complete" is a computer science classification that says "any program that can be expressed at all, can be expressed in this language", although it doesn't make any promises about readability). As such, Python "config files" like setup.py in distutils and settings.py in Django are actually software *customisation* interfaces - the only way to reliably interact with them is to run them and see what happens. Anything you do via static analysis rather than code execution is a *heuristic* that may fail on some code that "works" when executed. Configuration is different: you're choosing amongst a set of possibilities that have been constrained in some way, and those constraints are structurally enforced. Usually that enforcement is handled by making the configuration declarative - it's in some passive format like an ini file or JSON, and if it gets too repetitive then you introduce a config generator, rather than making the format itself more sophisticated. The big advantage of configuration over customisation is that you substantially increase the degrees of freedom in how *consumers* of that configuration are implemented - no longer do you need a full Python runtime (or whatever), you just need an ini file parser, or a JSON decoder, and then you can look at just the bits you care about for your particular use case and ignore the rest. Regards, Nick. P.S. Fun fact: XML (as formally specified) is a customisation language, rather than a configuration language. If you want to safely use it as a configuration language, you need to use a library like defusedxml to cut it down to a non-Turing complete subset of the spec. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed May 4 23:09:31 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 5 May 2016 13:09:31 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 06:28, Robert Collins wrote: > the only reason I got involved in build system discussions was > pushback 18months or so back when I implemented a proof of concept for > pip that just used setup.cfg. I'd be very happy to ignore all the > build system stuff and just do bootstrap requirements in setup.cfg. I know I'm one of the folks that has historically been dubious of the "just use setup.cfg" idea, due to the assorted problems with the ini-style format not extending particularly well to tree-structured data (beyond the single level of file sections). However, my point of view has changed over that time: 1. We've repeatedly run up against the "JSON is good for programs talking to each other, but lousy as a human-facing interface" problem 2. By way of PEPs 440 and 508, we've got a lot more experience in figuring out how to effectively bless de facto practices as properly documented standards (even when it makes the latter more complicated than we'd like) 3. 
The ongoing popularity of setup.cfg shows that while ini-style may not be perfect for this use case, it clearly makes it over the threshold of "good enough" 4. Folks that *really* want a different input format (whether that's YAML, TOML, or something else entirely) will still be free to treat setup.cfg as a generated file, just as setup.py can be a generated file today The last couple of years have also given me a whole range of opportunities (outside distutils-sig) to apply the mantra "the goal is to make things better than the status quo, not to make them perfect", and that has meant getting better at distinguishing what I would do given unlimited development resources from what makes the most sense given the development resources that are actually available. So when I ask myself now "What's the *simplest* thing we could do that will make things better than the status quo?", then the answer I come up with today is your original idea: bless setup.cfg (or at least a subset of it) as a standardised interface. Don't get me wrong, I still think that answer has significant downsides - I've just come around to the view that "is likely to be easy to implement and adopt" are upsides that can outweigh a whole lot of downsides :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Wed May 4 23:36:34 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 5 May 2016 15:36:34 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Ok so, if i draft a pep for said proposal, will it die under the weight of a thousand bike sheds? On 5 May 2016 3:09 PM, "Nick Coghlan" wrote: > On 5 May 2016 at 06:28, Robert Collins wrote: > > the only reason I got involved in build system discussions was > > pushback 18months or so back when I implemented a proof of concept for > > pip that just used setup.cfg. I'd be very happy to ignore all the > > build system stuff and just do bootstrap requirements in setup.cfg. > > I know I'm one of the folks that has historically been dubious of the > "just use setup.cfg" idea, due to the assorted problems with the > ini-style format not extending particularly well to tree-structured > data (beyond the single level of file sections). > > However, my point of view has changed over that time: > > 1. We've repeatedly run up against the "JSON is good for programs > talking to each other, but lousy as a human-facing interface" problem > 2. By way of PEPs 440 and 508, we've got a lot more experience in > figuring out how to effectively bless de facto practices as properly > documented standards (even when it makes the latter more complicated > than we'd like) > 3. The ongoing popularity of setup.cfg shows that while ini-style may > not be perfect for this use case, it clearly makes it over the > threshold of "good enough" > 4. 
Folks that *really* want a different input format (whether that's > YAML, TOML, or something else entirely) will still be free to treat > setup.cfg as a generated file, just as setup.py can be a generated > file today > > The last couple of years have also given me a whole range of > opportunities (outside distutils-sig) to apply the mantra "the goal is > to make things better than the status quo, not to make them perfect", > and that has meant getting better at distinguishing what I would do > given unlimited development resources from what makes the most sense > given the development resources that are actually available. > > So when I ask myself now "What's the *simplest* thing we could do that > will make things better than the status quo?", then the answer I come > up with today is your original idea: bless setup.cfg (or at least a > subset of it) as a standardised interface. > > Don't get me wrong, I still think that answer has significant > downsides - I've just come around to the view that "is likely to be > easy to implement and adopt" are upsides that can outweigh a whole lot > of downsides :) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu May 5 00:22:46 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 4 May 2016 21:22:46 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 6:28 AM, Nick Coghlan wrote: > On 4 May 2016 at 23:00, Daniel Holth wrote: >> +1 It would be great to start with a real setup_requires and probably would >> not interfere with later build system abstractions at all. > > If we're going to go down that path, perhaps it might make sense to > just define a standard [setup_requires] section in setup.cfg? > > Quite a few projects already have one of those thanks to distutiils2, > d2to1 and pbr, which means the pragmatic approach here might be to ask > what needs to change so the qualifier can be removed from this current > observation in the PBR docs: "The setup.cfg file is an ini-like file > that can mostly replace the setup.py file." > > The build system abstraction config could then also just be another > setup.cfg section. I'm sympathetic to the general approach, but on net I think I prefer a slightly different proposal. Downsides to just standardizing [setup_requires]: - if projects have existing ones, that's actually kinda good / kinda bad -- pip has never been respecting these before, so if we suddenly start doing that then existing stuff could break. I don't know how likely this is, but experience suggests that *something* will break and make someone angry. (I'm still blinking at getting angry complaints arguing that uploading a wheel to pypi, where before there was only an sdist, should be treated as a compatibility-breaking change that requires a new version etc.) - IMO an extremely valuable aspect of this new declarative setup-requirements thing is that it gives us an opportunity to switch to enforcing the accuracy of this metadata. Right now we have a swamp we need to drain, where there's really no way to know what environment any given setup.py needs to run. 
Sometimes there are setup_requires, sometimes not; if there are setup_requires then sometimes they're real, sometimes they're lies. We'd like to reach the point where for a random package on pypi, either we can tell it's a legacy package, or else we can be confident that the setup-requirements it declares are actually accurate and sufficient to building the package. This is important because it unblocks ecosystem improvements like automated wheel builders, autoconfiguring CI systems, etc. And the only way we'll get there, AFAICT, is if at some point we start enforcing build isolation, so that by default a build *can't* import anything that wasn't declared as a setup-requirement. And the only way I can see for us to make this transition smoothly -- without any horrible flag days, and with a nice carrot so projects feel like we're giving them a gift instead of punishing them -- is if we make the rule be "projects that use the declarative setup-requirements feature also get isolated build environments". (Then the messaging is: "see, this helps you check that you actually set it up right! if your new metadata works for you in testing, it'll also work for your users!) But this again would mean we can't reuse the existing [setup_requires] section. - setup.cfg is kind of a terrible format for standardizing things because the only definition of the format is "read the ConfigParser source". You cannot parse setup.cfg from a non-Python language. And the *only* benefit is that it already exists; teaching pip to read pybuild.json or pybuild.toml instead would be completely trivial. So if we aren't going to try and grandfather in existing [setup_requires] sections, then we might as well switch to a better file format at the same time -- this is not the hard part. So I like the idea of splitting out the declarative setup-requirements PEP from the new build system hook interface PEP, but I think that the way we should do this is by defining a new pybuild.whatever file like the ones in PEP 516 / PEP 517 (they're identical in this regard) that *only* has schema-version and bootstrap-requirements, and then we can add the build backend key as a second step. -n -- Nathaniel J. Smith -- https://vorpus.org From donald at stufft.io Thu May 5 01:09:05 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 May 2016 01:09:05 -0400 Subject: [Distutils] Look for wonky serials on PyPI Message-ID: <9E3BE468-F528-4447-AB69-E6AFB0E765C8@stufft.io> Hey all, Just a heads up that due to hitting query timeouts when attempting to lookup serials I changed the way serials work and are queried on PyPI. This should have no visible changes to end users but keep an eye out for serials that don't look correct, particularly via bandersnatch failures. The good news is the new stuff is significantly faster (125x speed up on my local machine with no lock contention-- but more like 1250x given that it was timing out at 30s under load on PyPI) so API calls like XMLRLC list_packages_with_serial should be much faster now. Anyways. Let me know if anyone notices any broken bandersnatches. 
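For anyone who wants to spot-check their mirror, the XML-RPC call mentioned
above can be exercised straight from the standard library (a rough sketch;
the URL is the public PyPI XML-RPC endpoint):

    try:
        from xmlrpc.client import ServerProxy   # Python 3
    except ImportError:
        from xmlrpclib import ServerProxy       # Python 2

    client = ServerProxy("https://pypi.python.org/pypi")
    # maps each project name to the serial of its last change
    serials = client.list_packages_with_serial()
    print(len(serials), max(serials.values()))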
Sent from my iPhone From robertc at robertcollins.net Thu May 5 01:42:04 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 5 May 2016 17:42:04 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 16:22, Nathaniel Smith wrote: ... > I'm sympathetic to the general approach, but on net I think I prefer a > slightly different proposal. > > Downsides to just standardizing [setup_requires]: If I write a PEP, it won't be standardising setup_requires, it will be standardising bootstrap requirements. The distinction is nuance but critical: - we don't expect setuptools to ever need to honour it (of course, it could, should it choose to, and ease of adoption may suggest that it should) - as a new feature, making it opt in allows folk to adopt it when they are ready; if it was literally just a new home for setup_requires, we may see build systems auto-populating it, and the potential swamp you describe below would then apply. > - if projects have existing ones, that's actually kinda good / kinda > bad -- pip has never been respecting these before, so if we suddenly > start doing that then existing stuff could break. I don't know how > likely this is, but experience suggests that *something* will break > and make someone angry. (I'm still blinking at getting angry > complaints arguing that uploading a wheel to pypi, where before there > was only an sdist, should be treated as a compatibility-breaking > change that requires a new version etc.) Yes, things will break: anyone using this will need a new pip, by definition. Not everyone will be willing to wait 10 years before using it :). > - IMO an extremely valuable aspect of this new declarative > setup-requirements thing is that it gives us an opportunity to switch > to enforcing the accuracy of this metadata. Right now we have a swamp > we need to drain, where there's really no way to know what environment > any given setup.py needs to run. Sometimes there are setup_requires, > sometimes not; if there are setup_requires then sometimes they're Huh? I've not encountered any of this, ever. I'd love some examples to go look at. The only issue I've ever had with setup_requires is the easy_install stuff it ties into. ... > instead of punishing them -- is if we make the rule be "projects that > use the declarative setup-requirements feature also get isolated build > environments". (Then the messaging is: "see, this helps you check that > you actually set it up right! if your new metadata works for you in > testing, it'll also work for your users!) But this again would mean we > can't reuse the existing [setup_requires] section. I'm very much against forcing isolated build environments as part of this effort. I get where you are coming from, but really it conflates two entirely separate things, and you'll *utterly* break building anything with dependencies that are e.g. SWIG based unless you increase the size of the PEP by about 10-fold. (Thats not hyperbole, I think). Working around that will require a bunch of UX work, and its transitive: you have to expose how-to-workaround-the-fact-that-all-our-deps-are-not-installable all the way up the chain. That's probably a good thing to do (e.g. 
see bindep, or the aborted callout-to-system-packages we proposed after PyCon AU last year), but tying these two things together is not necessary, though I can certainly see the appeal; the main impact I see is that it will just impede adoption. The reality, AFAICT, is that most projects with undeclared build deps today get corrected fairly quickly: a bug is filed, folk fix it, and we move on. A robotic system that isolates everything such that folk *cannot* fix it is much less usable, and I'm very much in favour of pragmatism here. > - setup.cfg is kind of a terrible format for standardizing things > because the only definition of the format is "read the ConfigParser > source". You cannot parse setup.cfg from a non-Python language. And > the *only* benefit is that it already exists; teaching pip to read > pybuild.json or pybuild.toml instead would be completely trivial. So > if we aren't going to try and grandfather in existing [setup_requires] > sections, then we might as well switch to a better file format at the > same time -- this is not the hard part. The patch to read a single list-valued key out of setup.cfg is trivial and shallow. We've not managed to settle on consensus on a file format choice in a year of debate. I hold zero confidence we will going forward either. If the BDFL delegate makes a call - fine. I read Nick's earlier email in the thread as such a thing TBH :). > So I like the idea of splitting out the declarative setup-requirements > PEP from the new build system hook interface PEP, but I think that the > way we should do this is by defining a new pybuild.whatever file like > the ones in PEP 516 / PEP 517 (they're identical in this regard) that > *only* has schema-version and bootstrap-requirements, and then we can > add the build backend key as a second step. I think the risk there is that that presumes the answer, and without the abstract build effort actually moving forward we may be years before we actually see the file being used for anything else. If ever. -Rob From njs at pobox.com Thu May 5 02:32:33 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 4 May 2016 23:32:33 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 10:42 PM, Robert Collins wrote: > On 5 May 2016 at 16:22, Nathaniel Smith wrote: > ... >> I'm sympathetic to the general approach, but on net I think I prefer a >> slightly different proposal. >> >> Downsides to just standardizing [setup_requires]: > > If I write a PEP, it won't be standardising setup_requires, it will be > standardising bootstrap requirements. The distinction is nuance but > critical: > > - we don't expect setuptools to ever need to honour it (of course, it > could, should it choose to, and ease of adoption may suggest that it > should) > - as a new feature, making it opt in allows folk to adopt it when > they are ready; if it was literally just a new home for > setup_requires, we may see build systems auto-populating it, and the > potential swamp you describe below would then apply. The main argument I was making there was that it needs to be a new opt-in thing, so if we're agreed on that then great :-). 
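For concreteness, the minimal file being described there would be on the
order of (name, keys and exact values are placeholders, not an agreed spec):

    {
        "schema": 1,
        "bootstrap_requires": [
            "setuptools >= 20.6",
            "wheel"
        ]
    }

i.e. just enough for an installer to know what to install before it runs
whatever does the build, with the build-backend key deferred to a later
step.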
>> - if projects have existing ones, that's actually kinda good / kinda >> bad -- pip has never been respecting these before, so if we suddenly >> start doing that then existing stuff could break. I don't know how >> likely this is, but experience suggests that *something* will break >> and make someone angry. (I'm still blinking at getting angry >> complaints arguing that uploading a wheel to pypi, where before there >> was only an sdist, should be treated as a compatibility-breaking >> change that requires a new version etc.) > > Yes, things will break: anyone using this will need a new pip, by > definition. Not everyone will be willing to wait 10 years before using > it :). Just to clarify (since we seem to agree): I meant that if pip starts interpreting an existing setup.cfg thing, then the new-pip/old-package situation could break, which would be bad. >> - IMO an extremely valuable aspect of this new declarative >> setup-requirements thing is that it gives us an opportunity to switch >> to enforcing the accuracy of this metadata. Right now we have a swamp >> we need to drain, where there's really no way to know what environment >> any given setup.py needs to run. Sometimes there are setup_requires, >> sometimes not; if there are setup_requires then sometimes they're > > Huh? I've not encountered any of this, ever. I'd love some examples to > go look at. The only issue I've ever had with setup_requires is the > easy_install stuff it ties into. I don't think I've ever seen a package that had accurate setup_requires (outside the trivial case of packages where setup_requires=[] is accurate). Scientific packages in particular universally have undeclared setup requirements. > ... >> instead of punishing them -- is if we make the rule be "projects that >> use the declarative setup-requirements feature also get isolated build >> environments". (Then the messaging is: "see, this helps you check that >> you actually set it up right! if your new metadata works for you in >> testing, it'll also work for your users!) But this again would mean we >> can't reuse the existing [setup_requires] section. > > I'm very much against forcing isolated build environments as part of > this effort. I get where you are coming from, but really it conflates > two entirely separate things, and you'll *utterly* break building > anything with dependencies that are e.g. SWIG based unless you > increase the size of the PEP by about 10-fold. (Thats not hyperbole, I > think). Okay, now's my turn to be baffled :-). I literally have no idea what you're talking about here. What would this 10x longer PEP be talking about? Why would this be needed? > Working around that will require a bunch of UX work, and its > transitive: you have to expose > how-to-workaround-the-fact-that-all-our-deps-are-not-installable all > the way up the chain. That's probably a good thing to do (e.g. see > bindep, or the aborted callout-to-system-packages we proposed after > PyCon AU last year), but tying these two things together is not > necessary, though I can certainly see the appeal; the main impact I > see is that it will just impede adoption. What are these things that aren't pip-installable and why isn't the solution to fix that? I definitely don't want to break things that work now, but providing new features that incentivize folks to clean up their stuff is a good thing, surely? Yeah, it means that the bootstrap-requirements stuff will take some time and cleanup to spread, but that's life. 
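To make the "undeclared setup requirement" point concrete, a representative
(invented) offender looks like this:

    # setup.py
    from setuptools import setup, Extension
    import numpy   # build-time dependency, declared nowhere

    setup(
        name="example",
        version="1.0",
        ext_modules=[
            Extension("example._fast", ["example/_fast.c"],
                      include_dirs=[numpy.get_include()]),
        ],
    )

In a clean environment the module-level import fails before setup() even
runs, and setup_requires=["numpy"] would not help because setuptools only
processes it once setup() is already executing -- which is exactly the
"tricky delayed import thing" mentioned earlier in the thread, and what a
declared bootstrap requirement installed up front would fix.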
We've spent a huge amount of effort on reaching the point where pretty much everything *can* be made pip installable. Heck, *PyQt5*, which is my personal benchmark for a probably-totally-unpackageable package, announced last week that they now have binary wheels on pypi for all of Win/Mac/Linux: https://pypi.python.org/pypi/PyQt5/5.6 I want to work towards a world where this stuff just works, not keep holding the entire ecosystem back with compromise hacks to work around a minority of broken packages. > The reality, AFAICT, is that most projects with undeclared build deps > today get corrected fairly quickly: a bug is filed, folk fix it, and > we move on. A robotic system that isolates everything such that folk > *cannot* fix it is much less usable, and I'm very much in favour of > pragmatism here. Again, in my world ~100% of packages have undeclared build deps... >> - setup.cfg is kind of a terrible format for standardizing things >> because the only definition of the format is "read the ConfigParser >> source". You cannot parse setup.cfg from a non-Python language. And >> the *only* benefit is that it already exists; teaching pip to read >> pybuild.json or pybuild.toml instead would be completely trivial. So >> if we aren't going to try and grandfather in existing [setup_requires] >> sections, then we might as well switch to a better file format at the >> same time -- this is not the hard part. > > The patch to read a single list-valued key out of setup.cfg is trivial > and shallow. We've not managed to settle on consensus on a file format > choice in a year of debate. I hold zero confidence we will going > forward either. If the BDFL delegate makes a call - fine. I read > Nick's earlier email in the thread as such a thing TBH :). Oh sure, I think everyone agrees that the file format choice is not a make-or-break decision and that ultimately this is a good case for of BDFL-style pronouncement rather than endless consensus seeking. But I'm still allowed to make technical arguments for why I think the BDFL-delegate should pronounce one way or another :-). >> So I like the idea of splitting out the declarative setup-requirements >> PEP from the new build system hook interface PEP, but I think that the >> way we should do this is by defining a new pybuild.whatever file like >> the ones in PEP 516 / PEP 517 (they're identical in this regard) that >> *only* has schema-version and bootstrap-requirements, and then we can >> add the build backend key as a second step. > > I think the risk there is that that presumes the answer, and without > the abstract build effort actually moving forward we may be years > before we actually see the file being used for anything else. If ever. I see the risk, but I think it's minimal. We're not presuming anything about the answer except that somehow it will involve adding some kind of static metadata, which seems very safe. -n -- Nathaniel J. Smith -- https://vorpus.org From robertc at robertcollins.net Thu May 5 02:57:54 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 5 May 2016 18:57:54 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 18:32, Nathaniel Smith wrote: > On Wed, May 4, 2016 at 10:42 PM, Robert Collins >... 
>> Yes, things will break: anyone using this will need a new pip, by >> definition. Not everyone will be willing to wait 10 years before using >> it :). > > Just to clarify (since we seem to agree): I meant that if pip starts > interpreting an existing setup.cfg thing, then the new-pip/old-package > situation could break, which would be bad. No. Old pip new package will break, new pip old package is entirely safe AFAICT. >>> - IMO an extremely valuable aspect of this new declarative >>> setup-requirements thing is that it gives us an opportunity to switch >>> to enforcing the accuracy of this metadata. Right now we have a swamp >>> we need to drain, where there's really no way to know what environment >>> any given setup.py needs to run. Sometimes there are setup_requires, >>> sometimes not; if there are setup_requires then sometimes they're >> >> Huh? I've not encountered any of this, ever. I'd love some examples to >> go look at. The only issue I've ever had with setup_requires is the >> easy_install stuff it ties into. > > I don't think I've ever seen a package that had accurate > setup_requires (outside the trivial case of packages where > setup_requires=[] is accurate). Scientific packages in particular > universally have undeclared setup requirements. Are those requirements pip installable? .. >> I'm very much against forcing isolated build environments as part of >> this effort. I get where you are coming from, but really it conflates >> two entirely separate things, and you'll *utterly* break building >> anything with dependencies that are e.g. SWIG based unless you >> increase the size of the PEP by about 10-fold. (Thats not hyperbole, I >> think). > > Okay, now's my turn to be baffled :-). I literally have no idea what > you're talking about here. What would this 10x longer PEP be talking > about? Why would this be needed? Take an i386 linux machine, and build something needing pyqt5 on it :). Currently, you apt-get/yum/dnf/etc install python-pyqt5, then run pip install. If using a virtualenv you enable system site packages. When you introduce isolation, the build will only have the standard library + whatever is declared as a dep: and pyqt5 has no source on PyPI. So the 10x thing is defining how the thing doing the isolation (e.g. pip) should handle things that can't be installed but are already available on the system. And that has to tunnel all the way out to the user, because its context specific, its not an attribute of the dependencies per se (since new releases can add or remove this situation), nor of the consuming thing (same reason). Ultimately, its not even an interopability question: pip could do isolated builds now, if it chose, and it has no ramifications as far as PEPs etc are concerned. ... > What are these things that aren't pip-installable and why isn't the > solution to fix that? I definitely don't want to break things that > work now, but providing new features that incentivize folks to clean > up their stuff is a good thing, surely? Yeah, it means that the > bootstrap-requirements stuff will take some time and cleanup to > spread, but that's life. We've a history in this group of biting off too much and things not getting executed. We're *still* in the final phases of deploying PEP-508, and it was conceptually trivial. I'm not arguing that we shouldn't make things better, I'm arguing that tying two separate things together because we *can* seems, based on the historical record, to be unwise. 
> We've spent a huge amount of effort on reaching the point where pretty > much everything *can* be made pip installable. Heck, *PyQt5*, which is > my personal benchmark for a probably-totally-unpackageable package, > announced last week that they now have binary wheels on pypi for all > of Win/Mac/Linux: > > https://pypi.python.org/pypi/PyQt5/5.6 > > I want to work towards a world where this stuff just works, not keep > holding the entire ecosystem back with compromise hacks to work around > a minority of broken packages. Sure, but the underlying problem here is that manylinux is a 90% solve: its great for the common cases but it doesn't actually solve the actual baseline problem: we can't represent the actual system dependencies needed to rebuild many Python packages. pyqt5 not having i386 is just a trivial egregious case. ARM32 and 64 is going to be super common, Power8 another one, let alone less common but still extant and used architectures like PPC, itanium, or new ones like x86_32 [If I remember the abbreviation right - no, its not i386]. Solve that underlying problem - great, then isolation becomes an optimisation question for things without manylinux wheels. But if we don't solve it then isolation becomes a 'Can build X at all' question, which is qualitatively different. I'm all for solving the underlying problem, but not at the cost of *not solving* the 'any easy-install is triggered' problem for another X months while that work takes place. >> The reality, AFAICT, is that most projects with undeclared build deps >> today get corrected fairly quickly: a bug is filed, folk fix it, and >> we move on. A robotic system that isolates everything such that folk >> *cannot* fix it is much less usable, and I'm very much in favour of >> pragmatism here. > > Again, in my world ~100% of packages have undeclared build deps... So - put a patch forward to pip to do isolated builds. If/when bug reports come in, we can solve them there. There's no standards body work involved in that as far as I can see.... -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From marius at gedmin.as Thu May 5 04:00:58 2016 From: marius at gedmin.as (Marius Gedminas) Date: Thu, 5 May 2016 11:00:58 +0300 Subject: [Distutils] Things that are not pip-installable (was Re: moving things forward) shouldn't) In-Reply-To: References: <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <20160505080058.GA17571@platonas> On Wed, May 04, 2016 at 11:32:33PM -0700, Nathaniel Smith wrote: > What are these things that aren't pip-installable and why isn't the > solution to fix that? Things that are not pip-installable that I've personally missed include: - pygame (there are a bunch of tickets in their bug tracker, and upstream is working slowly to fix them, just ... very slowly) - pygobject (plus you need gobject-introspection files for all the libraries you want to actually use, like GLib, Pango, Cairo, and GTK itself, and those aren't on PyPI either) > We've spent a huge amount of effort on reaching the point where pretty > much everything *can* be made pip installable. 
Heck, *PyQt5*, which is > my personal benchmark for a probably-totally-unpackageable package, > announced last week that they now have binary wheels on pypi for all > of Win/Mac/Linux: > > https://pypi.python.org/pypi/PyQt5/5.6 Doesn't seem to work for me, with pip 8.1.1 on a 64-bit Linux machine: $ pip install pyqt5 Collecting pyqt5 Could not find a version that satisfies the requirement pyqt5 (from versions: ) No matching distribution found for pyqt5 Marius Gedminas -- Shift happens. -- Doppler -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 173 bytes Desc: not available URL: From p.f.moore at gmail.com Thu May 5 04:36:47 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 May 2016 09:36:47 +0100 Subject: [Distutils] moving things forward In-Reply-To: <572A8C8E.4030005@stoneleaf.us> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572A8473.1000906@stoneleaf.us> <572A85C9.6090306@nextday.fi> <572A8C8E.4030005@stoneleaf.us> Message-ID: On 5 May 2016 at 00:58, Ethan Furman wrote: > Somebody will have to distill that PEP, I have only an small inkling of what > it's trying to say. The relevant point for this discussion is "you can request that particular packages are installed before the build step for your package, and precisely what gets installed can depend on the Python version". If you're not intending to use a build system other than setuptools, this is probably not relevant to you (other than to require a new enough version of setuptools that has the bug fixes you need). > As for my specific use case: I have Python3-only files in my distribution, > so they should only be installed on Python3 systems. Python2 systems > generate useless errors. That's a build system issue, so not directly relevant here. Specifically, assuming it's the issue you reported here previously, it's a problem in setuptools (so when it's fixed there, with this new proposal you can specify that your build requires setuptools >= X.Y, to ensure the bug fix is present). Paul From njs at pobox.com Thu May 5 04:58:22 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 5 May 2016 01:58:22 -0700 Subject: [Distutils] Things that are not pip-installable (was Re: moving things forward) shouldn't) In-Reply-To: <20160505080058.GA17571@platonas> References: <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <20160505080058.GA17571@platonas> Message-ID: On Thu, May 5, 2016 at 1:00 AM, Marius Gedminas wrote: > On Wed, May 04, 2016 at 11:32:33PM -0700, Nathaniel Smith wrote: >> What are these things that aren't pip-installable and why isn't the >> solution to fix that? > > Things that are not pip-installable that I've personally missed include: > > - pygame (there are a bunch of tickets in their bug tracker, and > upstream is working slowly to fix them, just ... very slowly) > > - pygobject (plus you need gobject-introspection files for all the > libraries you want to actually use, like GLib, Pango, Cairo, and GTK > itself, and those aren't on PyPI either) 1) I don't think either of these provide build-time services, i.e., they wouldn't be showing up in setup_requires? 2) The question isn't "are there things that aren't pip-installable", obviously that's true :-). SciPy still doesn't have windows wheels on pypi. 
But we're fixing that by making a windows compiler that can build scipy [1]. NumPy has a build-time dependency on BLAS and its headers. But we're fixing that by making BLAS and its headers pip-installable [2]. Building wheels for binary packages is still something of a black art. So we're building tools like "auditwheel" to make it easier. Basically this is because we are fed up with putting up with all the brokenness and want to fix it right instead of continually compromising and piling up hacks. >> We've spent a huge amount of effort on reaching the point where pretty >> much everything *can* be made pip installable. Heck, *PyQt5*, which is >> my personal benchmark for a probably-totally-unpackageable package, >> announced last week that they now have binary wheels on pypi for all >> of Win/Mac/Linux: >> >> https://pypi.python.org/pypi/PyQt5/5.6 > > Doesn't seem to work for me, with pip 8.1.1 on a 64-bit Linux machine: > > $ pip install pyqt5 > Collecting pyqt5 > Could not find a version that satisfies the requirement pyqt5 (from versions: ) > No matching distribution found for pyqt5 Your python is out of date ;-) (For some reason they have only uploaded for py35 so far. I find this a bit frustrating, but I suspect it is a solvable problem.) -n [1] https://mingwpy.github.io/proposal_december2015.html [2] http://thread.gmane.org/gmane.comp.python.wheel-builders/91 -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Thu May 5 05:02:28 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 May 2016 10:02:28 +0100 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 07:57, Robert Collins wrote: > We've a history in this group of biting off too much and things not > getting executed. We're *still* in the final phases of deploying > PEP-508, and it was conceptually trivial. I'm not arguing that we > shouldn't make things better, I'm arguing that tying two separate > things together because we *can* seems, based on the historical > record, to be unwise. This is a very good point, and ties in nicely with Nick's comment about taking small steps to make things better than they currently are. On that basis, I'd be +1 on a simple proposal to add a new "install this stuff before we do the build" capability that sits in setup.cfg. Let's keep build isolation off the table for now. There's probably enough substantive detail (I'll do my best to avoid bikeshedding over trivia :-)) to thrash out in that simple proposal. For example, if package foo specifies that it needs a new version of setuptools to build, is it OK for "pip install foo" to automatically upgrade setuptools, or should it fail with an error "your setuptools is too old"? If it does auto-upgrade, then if the build of foo fails, is it OK that we won't be able to revert the upgrade of setuptools? How should we handle cases where a package specifies that the it needs an *older* version of setuptools? I'd expect we simply bail and report an error for that one - it should never really happen, so why waste time on "clever" solutions? Anyway, we can have these sorts of debate when we get down to details. 
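Concretely, the kind of declaration being debated would look something like
(section and key names are placeholders, nothing is agreed yet):

    # setup.cfg
    [bootstrap_requires]
    requires =
        setuptools >= 20.6
        wheel >= 0.29

and the open questions above are then about what the installer does when the
running environment has an older setuptools than the one requested, or when
a project asks for an older one than is already installed.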
Paul From njs at pobox.com Thu May 5 05:10:31 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 5 May 2016 02:10:31 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 2:02 AM, Paul Moore wrote: > On 5 May 2016 at 07:57, Robert Collins wrote: >> We've a history in this group of biting off too much and things not >> getting executed. We're *still* in the final phases of deploying >> PEP-508, and it was conceptually trivial. I'm not arguing that we >> shouldn't make things better, I'm arguing that tying two separate >> things together because we *can* seems, based on the historical >> record, to be unwise. > > This is a very good point, and ties in nicely with Nick's comment > about taking small steps to make things better than they currently > are. > > On that basis, I'd be +1 on a simple proposal to add a new "install > this stuff before we do the build" capability that sits in setup.cfg. > Let's keep build isolation off the table for now. > > There's probably enough substantive detail (I'll do my best to avoid > bikeshedding over trivia :-)) to thrash out in that simple proposal. > For example, if package foo specifies that it needs a new version of > setuptools to build, is it OK for "pip install foo" to automatically > upgrade setuptools, or should it fail with an error "your setuptools > is too old"? If it does auto-upgrade, then if the build of foo fails, > is it OK that we won't be able to revert the upgrade of setuptools? > How should we handle cases where a package specifies that the it needs > an *older* version of setuptools? I'd expect we simply bail and report > an error for that one - it should never really happen, so why waste > time on "clever" solutions? Uh, realistically speaking I'm pretty sure that setuptools will at some point in the future contain at least 1 regression. I.e. it will totally happen that at some point someone will pin an old version of setuptools in their build requirements. ...The main thing I want to point out though, is that all of these problems you're raising are complications caused entirely by wanting to avoid build isolation in the name of simplicity. If each package gets its own isolated build environment, then it can depend on whatever it wants without any danger of collision with the ambient environment. -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Thu May 5 05:47:29 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 5 May 2016 02:47:29 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 11:57 PM, Robert Collins wrote: > On 5 May 2016 at 18:32, Nathaniel Smith wrote: >> On Wed, May 4, 2016 at 10:42 PM, Robert Collins >>... >>> Yes, things will break: anyone using this will need a new pip, by >>> definition. Not everyone will be willing to wait 10 years before using >>> it :). 
>> >> Just to clarify (since we seem to agree): I meant that if pip starts >> interpreting an existing setup.cfg thing, then the new-pip/old-package >> situation could break, which would be bad. > > No. Old pip new package will break, new pip old package is entirely safe AFAICT. We're talking past each other... I'm saying, *if* pip started reinterpreting some existing thing as indicating setup-requirements, *then* things would break. You're saying, pip isn't going to do that, so they won't. So we're good :-) >>>> - IMO an extremely valuable aspect of this new declarative >>>> setup-requirements thing is that it gives us an opportunity to switch >>>> to enforcing the accuracy of this metadata. Right now we have a swamp >>>> we need to drain, where there's really no way to know what environment >>>> any given setup.py needs to run. Sometimes there are setup_requires, >>>> sometimes not; if there are setup_requires then sometimes they're >>> >>> Huh? I've not encountered any of this, ever. I'd love some examples to >>> go look at. The only issue I've ever had with setup_requires is the >>> easy_install stuff it ties into. >> >> I don't think I've ever seen a package that had accurate >> setup_requires (outside the trivial case of packages where >> setup_requires=[] is accurate). Scientific packages in particular >> universally have undeclared setup requirements. > > Are those requirements pip installable? Either they are, or they will be soon. > .. >>> I'm very much against forcing isolated build environments as part of >>> this effort. I get where you are coming from, but really it conflates >>> two entirely separate things, and you'll *utterly* break building >>> anything with dependencies that are e.g. SWIG based unless you >>> increase the size of the PEP by about 10-fold. (Thats not hyperbole, I >>> think). >> >> Okay, now's my turn to be baffled :-). I literally have no idea what >> you're talking about here. What would this 10x longer PEP be talking >> about? Why would this be needed? > > Take an i386 linux machine, and build something needing pyqt5 on it > :). Currently, you apt-get/yum/dnf/etc install python-pyqt5, then run > pip install. If using a virtualenv you enable system site packages. > > When you introduce isolation, the build will only have the standard > library + whatever is declared as a dep: and pyqt5 has no source on > PyPI. > > So the 10x thing is defining how the thing doing the isolation (e.g. > pip) should handle things that can't be installed but are already > available on the system. > > And that has to tunnel all the way out to the user, because its > context specific, its not an attribute of the dependencies per se > (since new releases can add or remove this situation), nor of the > consuming thing (same reason). # User experience today on i386 $ pip install foo <... error: missing pyqt5 ...> $ apt install python-pyqt5 $ pip install foo # User experience with build isolation on i386 $ pip install foo <... error: missing pyqt5 ...> $ apt install python-pyqt5 $ pip install --no-isolated-environment foo It'd even be straightforward for pip to notice that the requirement that it failed to satisfy is already satisfied by the ambient environment, and suggest --no-isolated-environment as a solution. > Ultimately, its not even an interopability question: pip could do > isolated builds now, if it chose, and it has no ramifications as far > as PEPs etc are concerned. That's not true. 
In fact, it seems dangerously naive :-/ If pip just went ahead and flipped a switch to start doing isolated builds now, then everything would burst into flame and there would be a howling mob in the bug tracker. Sure, there's no PEP saying we *can't* do that, but in practice it's utterly impossible. If we roll out this feature without build isolation, then next year we'll still be in the exact same situation we are today -- we'll have the theoretical capability of enabling build isolation, but everything would break, so in practice, we won't be able to. The reason I'm being so intense about this is that AFAICT these are all true: Premise 1: Without build isolation enabled by default, then in practice everyone will putter along putting up with broken builds all the time. It's *incredibly* easy to forget to declare a build dependency, it's the kind of mistake that every new user makes, and experienced users too. Premise 2: We can either enable build isolation together with the new static bootstrap requirements, or we can never enable build isolation at all, ever. Conclusion: If we want to ever reach a state where builds are reliable, we need to tie build isolation to the new static metadata. If you have some clever plan for how we could practically transition to build isolation without having them opt-in via a new feature, then that would be an interesting counter-argument; or an alternative plan for how to reach a point where build requirements are accurate without being enforced; or ...? > ... >> What are these things that aren't pip-installable and why isn't the >> solution to fix that? I definitely don't want to break things that >> work now, but providing new features that incentivize folks to clean >> up their stuff is a good thing, surely? Yeah, it means that the >> bootstrap-requirements stuff will take some time and cleanup to >> spread, but that's life. > > We've a history in this group of biting off too much and things not > getting executed. We're *still* in the final phases of deploying > PEP-508, and it was conceptually trivial. I'm not arguing that we > shouldn't make things better, I'm arguing that tying two separate > things together because we *can* seems, based on the historical > record, to be unwise. My argument is not that we can, it's that we have to :-). >> We've spent a huge amount of effort on reaching the point where pretty >> much everything *can* be made pip installable. Heck, *PyQt5*, which is >> my personal benchmark for a probably-totally-unpackageable package, >> announced last week that they now have binary wheels on pypi for all >> of Win/Mac/Linux: >> >> https://pypi.python.org/pypi/PyQt5/5.6 >> >> I want to work towards a world where this stuff just works, not keep >> holding the entire ecosystem back with compromise hacks to work around >> a minority of broken packages. > > Sure, but the underlying problem here is that manylinux is a 90% > solve: its great for the common cases but it doesn't actually solve > the actual baseline problem: we can't represent the actual system > dependencies needed to rebuild many Python packages. > > pyqt5 not having i386 is just a trivial egregious case. ARM32 and 64 > is going to be super common, Power8 another one, let alone less common > but still extant and used architectures like PPC, itanium, or new ones > like x86_32 [If I remember the abbreviation right - no, its not i386]. 
(it's x32) manylinux is helpful here, but it's not necessary -- build isolation just requires that the dependencies be pip installable, could be from source or whatever. In practice the wheel cache will kick in and handle most of the work. > Solve that underlying problem - great, then isolation becomes an > optimisation question for things without manylinux wheels. But if we > don't solve it then isolation becomes a 'Can build X at all' question, > which is qualitatively different. More like 'can build X at all (without adding one command line option)'. And even this is only if you're in some kind of environment that X upstream doesn't support -- no developer is going to make a release of X with build isolation turned on unless build isolation works on the platforms they care about. > I'm all for solving the underlying problem, but not at the cost of > *not solving* the 'any easy-install is triggered' problem for another > X months while that work takes place. > > >>> The reality, AFAICT, is that most projects with undeclared build deps >>> today get corrected fairly quickly: a bug is filed, folk fix it, and >>> we move on. A robotic system that isolates everything such that folk >>> *cannot* fix it is much less usable, and I'm very much in favour of >>> pragmatism here. >> >> Again, in my world ~100% of packages have undeclared build deps... > > So - put a patch forward to pip to do isolated builds. If/when bug > reports come in, we can solve them there. There's no standards body > work involved in that as far as I can see.... See above... -n -- Nathaniel J. Smith -- https://vorpus.org From ncoghlan at gmail.com Thu May 5 06:05:00 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 5 May 2016 20:05:00 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 14:22, Nathaniel Smith wrote: > On Wed, May 4, 2016 at 6:28 AM, Nick Coghlan wrote: >> On 4 May 2016 at 23:00, Daniel Holth wrote: >>> +1 It would be great to start with a real setup_requires and probably would >>> not interfere with later build system abstractions at all. >> >> If we're going to go down that path, perhaps it might make sense to >> just define a standard [setup_requires] section in setup.cfg? >> >> Quite a few projects already have one of those thanks to distutiils2, >> d2to1 and pbr, which means the pragmatic approach here might be to ask >> what needs to change so the qualifier can be removed from this current >> observation in the PBR docs: "The setup.cfg file is an ini-like file >> that can mostly replace the setup.py file." >> >> The build system abstraction config could then also just be another >> setup.cfg section. > > I'm sympathetic to the general approach, but on net I think I prefer a > slightly different proposal. > > Downsides to just standardizing [setup_requires]: > > - if projects have existing ones, that's actually kinda good / kinda > bad -- pip has never been respecting these before, so if we suddenly > start doing that then existing stuff could break. I don't know how > likely this is, but experience suggests that *something* will break > and make someone angry. 
(Robert & Nathaniel worked through this while I was writing it, but I figured it was worth clarifying my expectations anyway) I'm not aware of any current widely adopted systems that use a "setup_requires" or comparable bootstrapping section in setup.cfg - they pass setup_requires to the setup() call in setup.py. Examples: pbr: http://docs.openstack.org/developer/pbr/#setup-py d2to1: https://pypi.python.org/pypi/d2to1 I didn't find any clear examples like those for numpy.distutils, but it seems it works the same way. That's the whole problem: as long as people have to do their build dependency bootstrapping *from* setup.py, module level imports in setup.py don't work, and any replaced packages (such as a newer or older setuptools) need to be injected into an already running Python process. The closest thing that we have today to a bootstrapping system that's independent of easy_install that I'm aware of is Daniel Holth's concept demonstrator that uses setup.py to read a multi-valued key from setup.cfg, uses pip to install the extracted modules, and then goes on to execute a "real-setup.py" in a subprocess: https://bitbucket.org/dholth/setup-requires/src The fact that the "Python-style ini format" used for setup.cfg is an implementation defined format defined by the standard library's configparser module rather than a properly specified language independent format is certainly a downside, but we recently decided we're OK with assuming "Python related build tools will be implemented in Python (or at least have a Python shim)" when opting for a Python API rather than a CLI as the preferred build system abstraction. Changing that one assumption (i.e. that we're actually OK with using Python-centric interoperability formats where they make pragmatic sense) means that a number of other previously rejected ideas (like "let's just use a new section or multi-valued key in setup.cfg") can also be reconsidered. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu May 5 06:28:47 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 5 May 2016 20:28:47 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 19:47, Nathaniel Smith wrote: > The reason I'm being so intense about this is that AFAICT these are all true: > > Premise 1: Without build isolation enabled by default, then in > practice everyone will putter along putting up with broken builds all > the time. It's *incredibly* easy to forget to declare a build > dependency, it's the kind of mistake that every new user makes, and > experienced users too. > > Premise 2: We can either enable build isolation together with the new > static bootstrap requirements, or we can never enable build isolation > at all, ever. > > Conclusion: If we want to ever reach a state where builds are > reliable, we need to tie build isolation to the new static metadata. OK, I think I see where we're talking past each other here. Yes, being able to do isolated builds is important, but we don't need to invent a Python specific solution to build isolation, as build isolation can already be handled by running a build in a fresh VM, or in a container, and continuous integration systems already let people do exactly that. 
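As a rough illustration (the project path, the requirement list and the helper
name below are all made up, and this is not a proposed pip feature, just what
such a CI job typically scripts by hand), the "rebuild from source on a clean
system" check amounts to:

    import os
    import subprocess
    import sys
    import tempfile

    def rebuild_in_fresh_env(source_dir, declared_build_requires):
        # Try to build a wheel using *only* the declared build requirements.
        env_dir = tempfile.mkdtemp(prefix="clean-build-")
        subprocess.check_call([sys.executable, "-m", "venv", env_dir])
        python = os.path.join(env_dir, "bin", "python")  # Scripts\python.exe on Windows
        subprocess.check_call([python, "-m", "pip", "install"] + list(declared_build_requires))
        # If this step fails, the declared build requirements are incomplete.
        subprocess.check_call([python, "setup.py", "bdist_wheel"], cwd=source_dir)

    rebuild_in_fresh_env("./myproject", ["setuptools", "wheel"])

Anything that can't be expressed in that requirements list (a C compiler, SWIG,
Qt development headers, ...) is exactly the external dependency gap raised
elsewhere in the thread.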
This means that even if the original publisher of a package doesn't regularly run a "Can I reliably rebuild this from source on a clean system?" check, plenty of consumers of their software will, and those folks will complain if the build dependencies are wrong. (And folks consuming pre-built binaries won't care in the first place). Longer term, as an example of what increasing automation makes possible, folks in Fedora are exploring what would be involved in doing automatic mass rebuilds of PyPI as RPM packages [1], and I assume they'll eventually get to a point where the problems in the automation pipeline are ironed out, so they'll instead be looking at problems like expressing external deps in the upstream metadata [2], as well as finding errors in the dependency definitions of individual packages. The only thing that absolutely *has* to be handled centrally by distutils-sig is ensuring that build requirements can be expressed accurately enough to allow for fully automated builds on a clean system. Everything else (including quality assurance on build dependencies) is more amenable to distributed effort. Cheers, Nick. [1] http://miroslav.suchy.cz/blog/archives/2016/04/21/wip_rebuilding_all_pypi_modules_as_rpm_packages/index.html [2] https://github.com/pypa/interoperability-peps/pull/30 -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Thu May 5 06:42:42 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 May 2016 11:42:42 +0100 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 10:10, Nathaniel Smith wrote: > ...The main thing I want to point out though, is that all of these > problems you're raising are complications caused entirely by wanting > to avoid build isolation in the name of simplicity. If each package > gets its own isolated build environment, then it can depend on > whatever it wants without any danger of collision with the ambient > environment. Understood. But that doesn't mean they can *only* be solved by build isolation. One relatively naive approach might be: 1. Collect build requirements. If they are satisfied, fine, move on to build. 2. If the only unsatisfied requirements are new installs, install them and move on to build. 3. If there are unsatisfied requirements needing an upgrade, do the upgrades (possibly require a flag for the pip invocation to allow silent upgrades), and move on to build. 4. Anything more complicated (downgrades, conflicts) report the issue and stop to let the user resolve it manually. Sure, build isolation might be better (probably is, IMO, but that's clearly a topic for some serious debate) but the whole point here is to do something incremental, and "good enough to make things better", not to stall on a completely perfect solution. > Premise 1: Without build isolation enabled by default, then in > practice everyone will putter along putting up with broken builds all > the time. It's *incredibly* easy to forget to declare a build > dependency, it's the kind of mistake that every new user makes, and > experienced users too. That's certainly possible, but "some missing dependencies" is still better than "no way to specify dependencies at all". 
And all it needs is an end user to hit the problem, raise an issue, and the dependency gets added and we're closer to perfection. Why is that worse than the current status quo? > Premise 2: We can either enable build isolation together with the new > static bootstrap requirements, or we can never enable build isolation > at all, ever. I have no idea why you feel that's the case. Why can't we add build isolation later? You clearly have more experience in this area than I do, so I'm genuinely trying to see what I'm missing here. You mention "we can't add it later without breaking everything". But the same applies now surely? And why can't we introduce the feature gradually - initially as opt-in via an "--isolated-build" flag, then in a later release change the default to "--isolated-build" with "--no-isolated-build" as a fallback for people who haven't fixed their builds yet, and ultimately removing "--no-isolated-build" and making isolated builds the only option? That's the standard approach we use, and I see no reason it wouldn't work here for introducing isolated builds at *any* time. Paul From dholth at gmail.com Thu May 5 08:36:42 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 05 May 2016 12:36:42 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Here's the kind of thing that you should expect. Someone will write setup.cfg: [bootstrap_requires] pbr pip installs pbr in a directory that is added to PYTHONPATH for that build. The package builds. And there was much rejoicing. The big gain from this simple feature is that people will be able to re-use code and have actual abstractions in setup.py. People have been desperate for this feature for years. Imagine that someone forgets to declare their build dependency. Do enough hand-holding to build a locally cached wheel of the package and submit a patch to the author. Easy. In exchange for this potential inconvenience you will get dramatically easier to write setup.py. Here is my own implementation of the feature in 34 lines to be prepended to setup.py: https://bitbucket.org/dholth/setup-requires/src/tip/setup.py , I see Nick linked to it already, it only executes pip in a subprocess if the requirements are not already installed but your regularly scheduled setup.py can just go at the bottom of the file and runs in the same process. Pedantic note setup_requires is a setuptools parameter used to install packages after setup() is called. Even though very many people expect or want those packages to be installed before setup.py executes. I think it is reasonable to call the new feature setup_requires but some prefer to eliminate uncertainty by calling it bootstrap_requires. On Thu, May 5, 2016 at 6:42 AM Paul Moore wrote: > On 5 May 2016 at 10:10, Nathaniel Smith wrote: > > ...The main thing I want to point out though, is that all of these > > problems you're raising are complications caused entirely by wanting > > to avoid build isolation in the name of simplicity. If each package > > gets its own isolated build environment, then it can depend on > > whatever it wants without any danger of collision with the ambient > > environment. > > Understood. But that doesn't mean they can *only* be solved by build > isolation. 
One relatively naive approach might be: > > 1. Collect build requirements. If they are satisfied, fine, move on to > build. > 2. If the only unsatisfied requirements are new installs, install them > and move on to build. > 3. If there are unsatisfied requirements needing an upgrade, do the > upgrades (possibly require a flag for the pip invocation to allow > silent upgrades), and move on to build. > 4. Anything more complicated (downgrades, conflicts) report the issue > and stop to let the user resolve it manually. > > Sure, build isolation might be better (probably is, IMO, but that's > clearly a topic for some serious debate) but the whole point here is > to do something incremental, and "good enough to make things better", > not to stall on a completely perfect solution. > > > Premise 1: Without build isolation enabled by default, then in > > practice everyone will putter along putting up with broken builds all > > the time. It's *incredibly* easy to forget to declare a build > > dependency, it's the kind of mistake that every new user makes, and > > experienced users too. > > That's certainly possible, but "some missing dependencies" is still > better than "no way to specify dependencies at all". And all it needs > is an end user to hit the problem, raise an issue, and the dependency > gets added and we're closer to perfection. Why is that worse than the > current status quo? > > > Premise 2: We can either enable build isolation together with the new > > static bootstrap requirements, or we can never enable build isolation > > at all, ever. > > I have no idea why you feel that's the case. Why can't we add build > isolation later? You clearly have more experience in this area than I > do, so I'm genuinely trying to see what I'm missing here. You mention > "we can't add it later without breaking everything". But the same > applies now surely? And why can't we introduce the feature gradually - > initially as opt-in via an "--isolated-build" flag, then in a later > release change the default to "--isolated-build" with > "--no-isolated-build" as a fallback for people who haven't fixed their > builds yet, and ultimately removing "--no-isolated-build" and making > isolated builds the only option? That's the standard approach we use, > and I see no reason it wouldn't work here for introducing isolated > builds at *any* time. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 5 08:45:59 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 5 May 2016 22:45:59 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 22:36, Daniel Holth wrote: > Pedantic note > > setup_requires is a setuptools parameter used to install packages after > setup() is called. Even though very many people expect or want those > packages to be installed before setup.py executes. I think it is reasonable > to call the new feature setup_requires but some prefer to eliminate > uncertainty by calling it bootstrap_requires. 
The main advantage of a new feature name is that when someone searches the internet for "python bootstrap_requires", they won't find a decade+ worth of inapplicable documentation of the setuptools feature :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From p.f.moore at gmail.com Thu May 5 08:51:42 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 5 May 2016 13:51:42 +0100 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 13:36, Daniel Holth wrote: > Here's the kind of thing that you should expect. Someone will write > > setup.cfg: > > [bootstrap_requires] > pbr > > pip installs pbr in a directory that is added to PYTHONPATH for that build. Ah, so we don't install into the environment, just to a temporary location which we add to PYTHONPATH. That is indeed a much better approach than what I assumed, as it avoids any messing with the user's actual environment. (We may need to be a bit more clever than just adding the temporary directory to PYTHONPATH, to ensure that what we install *overrides* anything in the user's environment, such as a newer version of setuptools, but that's just implementation details). Paul From donald at stufft.io Thu May 5 11:47:10 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 5 May 2016 11:47:10 -0400 Subject: [Distutils] Calling for a volunteer to help admin PyPI In-Reply-To: References: Message-ID: > On May 4, 2016, at 8:28 PM, Richard Jones wrote: > > The *ahem* "tools" available aren't the best, and will require privileged access to a system to do some of this work I should really do something about this. Would you be able to give me a prioritized list of the admin tasks in terms of how helpful it would be if there was a more limited and streamlined web process to handle them? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Thu May 5 16:18:57 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 13:18:57 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 3:28 PM, Paul Moore wrote: > On 4 May 2016 at 23:11, Chris Barker wrote: > > so it could be purely declarative, but users could also put code in > there to > > customize the configuration on the fly, too. > > That basically repeats the mistake that was made with setup.py. We > explicitly don't want an executable format for specifying build > configuration. > I don't think it's the same thing -- setup.py is supposed to actually CALL setup() when it is imported or run. It is not declarative in the least. 
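As a made-up sketch of the "declarative, but still Python" idea (the file name
"pybuild_config.py" and all of the keys and values are invented purely for
illustration): importing such a file does nothing except define a single dict,
but a line or two of ordinary Python can still specialize it on the fly:

    # hypothetical "pybuild_config.py" -- names and keys invented for illustration
    import sys

    BUILD_CONFIG = {
        "name": "example",
        "version": "1.0",
        "bootstrap_requires": ["setuptools>=20.0", "wheel"],
        "install_requires": ["requests"],
    }

    # the escape hatch: a little on-the-fly customization, without the file
    # *doing* anything beyond filling in the dict
    if sys.platform == "win32":
        BUILD_CONFIG["install_requires"].append("colorama")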
what I'm suggesting is that the API be purely declarative: "These values will be declared" (probably a single dict of stuff) but if you make it an executable python file, then users can write code to actually create their setup. Alternatively, you make the API purely declarative, and then folks have to write external scripts that create/populate the configuration files. I guess where I'm coming from is that I'm not sure we CAN make a purely, completely declarative API to a build system -- folks are always going to find some weird corner case where they need to do something special. making the configuration file executable is an easy way to allow this. Otherwise, you need to provide hooks to plug in custom functionality, which is jsut alot more awkward, though I suppose more robust. I've found it very handy to use an executable python file for configuration for a number of my projects -- it's really nice to just throw in a little calculation for something on the fly -- even if it could be done purely declaratively Anyway, if you stick with purely declarative, you can still use Python literals as an alternative to JSON (Or INI, or, ....). Python literals are richer, more flexible, and, of course, familiar to anyone developing a python package :-) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 5 16:30:20 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 13:30:20 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan wrote: > This configuration vs customisation distinction is probably worth > spelling out for folks without a formal software engineering or > computer science background, so: > fair enough -- good to be clear on the terms. > Configuration is different: you're choosing amongst a set of > possibilities that have been constrained in some way, and those > constraints are structurally enforced. That's a key point here -- I guess I'm skeptical that we can have the flexibility we need with a purely configuration-based system -- we probably don't WANT to constrain the options completely. If you think about it, while distutils has it's many, many flaws, what made it possible for it to be as useful as it is, and last as long as it has because is CAN be customized -- users are NOT constrained to the built-in functionality. I suspect the idea of this thread is to keep the API to a build system constrained -- and let the build systems themselves be as customizable as the want to be. And I haven't thought it out carefully, but I have a feeling that we're going to hit a wall that way .. but maybe not. > Usually that enforcement is > handled by making the configuration declarative - it's in some passive > format like an ini file or JSON, and if it gets too repetitive then > you introduce a config generator, rather than making the format itself > more sophisticated. 
> OK -- that's more or less my thought -- if it's python that gets run, then you've got your config generator built in -- why not? > The big advantage of configuration over customisation is that you > substantially increase the degrees of freedom in how *consumers* of > that configuration are implemented - no longer do you need a full > Python runtime (or whatever), you just need an ini file parser, or a > JSON decoder, and then you can look at just the bits you care about > for your particular use case and ignore the rest. > Sure -- but do we care? this is about python packaging -- is it too big a burden to say you need python to read the configuration? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Thu May 5 16:34:38 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Thu, 5 May 2016 23:34:38 +0300 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <572BAE5E.2010604@nextday.fi> I think it would be best to gather a few extreme examples of setup.py files from real world projects and figure out if they can be implemented in a declarative fashion. That at least would help us identify the pain points. For starters, gevent's setup.py looks like it needs a fair bit of custom logic: https://github.com/gevent/gevent/blob/master/setup.py 05.05.2016, 23:30, Chris Barker kirjoitti: > On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan > wrote: > > This configuration vs customisation distinction is probably worth > spelling out for folks without a formal software engineering or > computer science background, so: > > > fair enough -- good to be clear on the terms. > > Configuration is different: you're choosing amongst a set of > possibilities that have been constrained in some way, and those > constraints are structurally enforced. > > > That's a key point here -- I guess I'm skeptical that we can have the > flexibility we need with a purely configuration-based system -- we > probably don't WANT to constrain the options completely. If you think > about it, while distutils has it's many, many flaws, what made it > possible for it to be as useful as it is, and last as long as it has > because is CAN be customized -- users are NOT constrained to the > built-in functionality. > > I suspect the idea of this thread is to keep the API to a build system > constrained -- and let the build systems themselves be as customizable > as the want to be. And I haven't thought it out carefully, but I have > a feeling that we're going to hit a wall that way .. but maybe not. > > Usually that enforcement is > handled by making the configuration declarative - it's in some passive > format like an ini file or JSON, and if it gets too repetitive then > you introduce a config generator, rather than making the format itself > more sophisticated. > > > OK -- that's more or less my thought -- if it's python that gets run, > then you've got your config generator built in -- why not? 
>
> The big advantage of configuration over customisation is that you
> substantially increase the degrees of freedom in how *consumers* of
> that configuration are implemented - no longer do you need a full
> Python runtime (or whatever), you just need an ini file parser, or a
> JSON decoder, and then you can look at just the bits you care about
> for your particular use case and ignore the rest.
>
>
> Sure -- but do we care? this is about python packaging -- is it too
> big a burden to say you need python to read the configuration?
>
> -CHB
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R (206) 526-6959 voice
> 7600 Sand Point Way NE (206) 526-6329 fax
> Seattle, WA 98115 (206) 526-6317 main reception
>
> Chris.Barker at noaa.gov
>
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.barker at noaa.gov Thu May 5 16:41:49 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Thu, 5 May 2016 13:41:49 -0700
Subject: [Distutils] moving things forward (was: wheel including files it shouldn't)
In-Reply-To: 
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io>
Message-ID: 

On Wed, May 4, 2016 at 8:09 PM, Nick Coghlan wrote:

> I know I'm one of the folks that has historically been dubious of the
> "just use setup.cfg" idea, due to the assorted problems with the
> ini-style format not extending particularly well to tree-structured
> data (beyond the single level of file sections).
>

me too :-)

> 1. We've repeatedly run up against the "JSON is good for programs talking to each other, but lousy as a human-facing interface" problem
>

yeah, JSON is annoying that way -- but not much worse than INI -- except
for lack of comments. (side note, I'd really like it if the json module
would accept "JSON extended with comments" as an option...)

> 3. The ongoing popularity of setup.cfg shows that while ini-style may
> not be perfect for this use case, it clearly makes it over the
> threshold of "good enough"
>

it's only popular because it's what's there -- if we're using that
standard, we could make the same argument about setuptools ;-)

> So when I ask myself now "What's the *simplest* thing we could do that
> will make things better than the status quo?", then the answer I come
> up with today is your original idea: bless setup.cfg (or at least a
> subset of it) as a standardised interface.
>

IIUC, we would be changing, or at least adding to, the current setup.cfg
spec. So this is a change, no matter how you slice it; saying
"configuration will be specified in setup.something, in some other format"
is the least significant part of all this change. And maybe it's good to
keep "new style" configuration clearly separate.

-CHB

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dholth at gmail.com Thu May 5 16:53:43 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 05 May 2016 20:53:43 +0000 Subject: [Distutils] moving things forward In-Reply-To: <572BAE5E.2010604@nextday.fi> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> Message-ID: This is a recurring point of confusion. setup.py is not ever going away. In general it is necessary for you to be able to write software to build your software, and there is no intention to take that feature away. People repeatedly come to the conclusion that static metadata means the entire build is static. It's only the dependencies that need to be static to enable better dependency resolution in pip. The build does not need to be static. The proposed feature means you will be able to have a simpler setup.py or no setup.py it by using something like flit or pbr that are configured with .cfg or .ini. setup.py is not going away. Static metadata means the list of dependencies, author name, trove classifiers is static. Not the build itself. Enforced staticness of the build script is right out. On Thu, May 5, 2016 at 4:34 PM Alex Gr?nholm wrote: > I think it would be best to gather a few extreme examples of setup.py > files from real world projects and figure out if they can be implemented in > a declarative fashion. That at least would help us identify the pain points. > > For starters, gevent's setup.py looks like it needs a fair bit of custom > logic: > https://github.com/gevent/gevent/blob/master/setup.py > > 05.05.2016, 23:30, Chris Barker kirjoitti: > > On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan wrote: > > >> This configuration vs customisation distinction is probably worth >> spelling out for folks without a formal software engineering or >> computer science background, so: >> > > fair enough -- good to be clear on the terms. > > >> Configuration is different: you're choosing amongst a set of >> possibilities that have been constrained in some way, and those >> constraints are structurally enforced. > > > That's a key point here -- I guess I'm skeptical that we can have the > flexibility we need with a purely configuration-based system -- we probably > don't WANT to constrain the options completely. If you think about it, > while distutils has it's many, many flaws, what made it possible for it to > be as useful as it is, and last as long as it has because is CAN be > customized -- users are NOT constrained to the built-in functionality. > > I suspect the idea of this thread is to keep the API to a build system > constrained -- and let the build systems themselves be as customizable as > the want to be. And I haven't thought it out carefully, but I have a > feeling that we're going to hit a wall that way .. but maybe not. > > >> Usually that enforcement is >> handled by making the configuration declarative - it's in some passive >> format like an ini file or JSON, and if it gets too repetitive then >> you introduce a config generator, rather than making the format itself >> more sophisticated. >> > > OK -- that's more or less my thought -- if it's python that gets run, > then you've got your config generator built in -- why not? 
> > > >> The big advantage of configuration over customisation is that you >> substantially increase the degrees of freedom in how *consumers* of >> that configuration are implemented - no longer do you need a full >> Python runtime (or whatever), you just need an ini file parser, or a >> JSON decoder, and then you can look at just the bits you care about >> for your particular use case and ignore the rest. >> > > Sure -- but do we care? this is about python packaging -- is it too big a > burden to say you need python to read the configuration? > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 5 16:54:35 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 13:54:35 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: We've spent a huge amount of effort on reaching the point where pretty > much everything *can* be made pip installable. Heck, *PyQt5*, which is > my personal benchmark for a probably-totally-unpackageable package, > announced last week that they now have binary wheels on pypi for all > of Win/Mac/Linux: > that's thanks to binary wheels being a simple archive -- by definition, if you can build and install it *somehow* you can make a binary wheel out of it -- which is a great thing -- so nice to have wheel completely independent of the build system. isn't that where we are trying to go -- have pip be independent of the build system? > If the BDFL delegate makes a call - fine. I read > > Nick's earlier email in the thread as such a thing TBH :). > > Oh sure, I think everyone agrees that the file format choice is not a > make-or-break decision Am I the only one that thinks we're conflating two things here? choice of file format is separate from "use setup.cfg" -- I think we should use something NEW -- rather than try to patch on top of existing setup.cfg files -- that way we're free to change the rules, and no one (and no tools) will be confused about what version of what system is in play. In short -- we should do something new, not just extend setup.cfg, and not because we want a new file format. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex.gronholm at nextday.fi Thu May 5 17:05:29 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Fri, 6 May 2016 00:05:29 +0300 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> Message-ID: <572BB599.8090409@nextday.fi> OK, so which setup() arguments do we want to leave out of the static metadata? 05.05.2016, 23:53, Daniel Holth kirjoitti: > This is a recurring point of confusion. setup.py is not ever going > away. In general it is necessary for you to be able to write software > to build your software, and there is no intention to take that feature > away. > > People repeatedly come to the conclusion that static metadata means > the entire build is static. It's only the dependencies that need to be > static to enable better dependency resolution in pip. The build does > not need to be static. > > The proposed feature means you will be able to have a simpler setup.py > or no setup.py it by using something like flit or pbr that are > configured with .cfg or .ini. setup.py is not going away. > > Static metadata means the list of dependencies, author name, trove > classifiers is static. Not the build itself. > > Enforced staticness of the build script is right out. > > On Thu, May 5, 2016 at 4:34 PM Alex Gr?nholm > wrote: > > I think it would be best to gather a few extreme examples of > setup.py files from real world projects and figure out if they can > be implemented in a declarative fashion. That at least would help > us identify the pain points. > > For starters, gevent's setup.py looks like it needs a fair bit of > custom logic: > https://github.com/gevent/gevent/blob/master/setup.py > > 05.05.2016, 23:30, Chris Barker kirjoitti: >> On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan > > wrote: >> >> This configuration vs customisation distinction is probably worth >> spelling out for folks without a formal software engineering or >> computer science background, so: >> >> >> fair enough -- good to be clear on the terms. >> >> Configuration is different: you're choosing amongst a set of >> possibilities that have been constrained in some way, and those >> constraints are structurally enforced. >> >> >> That's a key point here -- I guess I'm skeptical that we can have >> the flexibility we need with a purely configuration-based system >> -- we probably don't WANT to constrain the options completely. If >> you think about it, while distutils has it's many, many flaws, >> what made it possible for it to be as useful as it is, and last >> as long as it has because is CAN be customized -- users are NOT >> constrained to the built-in functionality. >> >> I suspect the idea of this thread is to keep the API to a build >> system constrained -- and let the build systems themselves be as >> customizable as the want to be. And I haven't thought it out >> carefully, but I have a feeling that we're going to hit a wall >> that way .. but maybe not. >> >> Usually that enforcement is >> handled by making the configuration declarative - it's in >> some passive >> format like an ini file or JSON, and if it gets too >> repetitive then >> you introduce a config generator, rather than making the >> format itself >> more sophisticated. 
>> >> >> OK -- that's more or less my thought -- if it's python that gets >> run, then you've got your config generator built in -- why not? >> >> The big advantage of configuration over customisation is that you >> substantially increase the degrees of freedom in how >> *consumers* of >> that configuration are implemented - no longer do you need a full >> Python runtime (or whatever), you just need an ini file >> parser, or a >> JSON decoder, and then you can look at just the bits you care >> about >> for your particular use case and ignore the rest. >> >> >> Sure -- but do we care? this is about python packaging -- is it >> too big a burden to say you need python to read the configuration? >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> >> _______________________________________________ >> Distutils-SIG maillist -Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Thu May 5 17:10:53 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 05 May 2016 21:10:53 +0000 Subject: [Distutils] moving things forward In-Reply-To: <572BB599.8090409@nextday.fi> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <572BB599.8090409@nextday.fi> Message-ID: C extensions, py-modules, ... On Thu, May 5, 2016, 17:05 Alex Gr?nholm wrote: > OK, so which setup() arguments do we want to leave out of the static > metadata? > > > 05.05.2016, 23:53, Daniel Holth kirjoitti: > > This is a recurring point of confusion. setup.py is not ever going away. > In general it is necessary for you to be able to write software to build > your software, and there is no intention to take that feature away. > > People repeatedly come to the conclusion that static metadata means the > entire build is static. It's only the dependencies that need to be static > to enable better dependency resolution in pip. The build does not need to > be static. > > The proposed feature means you will be able to have a simpler setup.py or > no setup.py it by using something like flit or pbr that are configured with > .cfg or .ini. setup.py is not going away. > > Static metadata means the list of dependencies, author name, trove > classifiers is static. Not the build itself. > > Enforced staticness of the build script is right out. > > On Thu, May 5, 2016 at 4:34 PM Alex Gr?nholm > wrote: > >> I think it would be best to gather a few extreme examples of setup.py >> files from real world projects and figure out if they can be implemented in >> a declarative fashion. That at least would help us identify the pain points. 
>> >> For starters, gevent's setup.py looks like it needs a fair bit of custom >> logic: >> https://github.com/gevent/gevent/blob/master/setup.py >> >> 05.05.2016, 23:30, Chris Barker kirjoitti: >> >> On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan wrote: >> >> >>> This configuration vs customisation distinction is probably worth >>> spelling out for folks without a formal software engineering or >>> computer science background, so: >>> >> >> fair enough -- good to be clear on the terms. >> >> >>> Configuration is different: you're choosing amongst a set of >>> possibilities that have been constrained in some way, and those >>> constraints are structurally enforced. >> >> >> That's a key point here -- I guess I'm skeptical that we can have the >> flexibility we need with a purely configuration-based system -- we probably >> don't WANT to constrain the options completely. If you think about it, >> while distutils has it's many, many flaws, what made it possible for it to >> be as useful as it is, and last as long as it has because is CAN be >> customized -- users are NOT constrained to the built-in functionality. >> >> I suspect the idea of this thread is to keep the API to a build system >> constrained -- and let the build systems themselves be as customizable as >> the want to be. And I haven't thought it out carefully, but I have a >> feeling that we're going to hit a wall that way .. but maybe not. >> >> >>> Usually that enforcement is >>> handled by making the configuration declarative - it's in some passive >>> format like an ini file or JSON, and if it gets too repetitive then >>> you introduce a config generator, rather than making the format itself >>> more sophisticated. >>> >> >> OK -- that's more or less my thought -- if it's python that gets run, >> then you've got your config generator built in -- why not? >> >> >> >>> The big advantage of configuration over customisation is that you >>> substantially increase the degrees of freedom in how *consumers* of >>> that configuration are implemented - no longer do you need a full >>> Python runtime (or whatever), you just need an ini file parser, or a >>> JSON decoder, and then you can look at just the bits you care about >>> for your particular use case and ignore the rest. >>> >> >> Sure -- but do we care? this is about python packaging -- is it too big a >> burden to say you need python to read the configuration? >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Thu May 5 18:15:32 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 15:15:32 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Wed, May 4, 2016 at 11:57 PM, Robert Collins wrote: > > No. Old pip new package will break, new pip old package is entirely safe > AFAICT. That's the goal, yes? So I think we need to get less obsessed with backward compatibility: pip will retain (for along time) the ability to install, build, whatever any packages that it currently works for -- nothing on PyPi breaks. New versions of pip can "detect" that a given package is "new style", and it will do the right thing with that -- maybe following a completely different code path. If we take this approach, then we can design the new pip interface however we want, rather than trying to keep compatibility. but we need to be abel to provide the build tool to go with it, yes? I think the way to do that is to have a new build tool that is forked from setuptools, and is essentially the same thing (in version 1), but with the functionality that it "shouldn't" have stripped out -- i.e. no easy_install! Pacakge developers that want to adpt the new system only need to change an import ln their setup.py: from setuptools-lite import setup and then do the right things with meta-data, and it will work with new versions of pip. Right now we have a swamp > >>> we need to drain, exactly -- but rather than trying to drain that swamp, let's just move over to the dry land next door.... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 5 18:18:35 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 15:18:35 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 2:10 AM, Nathaniel Smith wrote: > ...The main thing I want to point out though, is that all of these > problems you're raising are complications caused entirely by wanting > to avoid build isolation in the name of simplicity. If each package > gets its own isolated build environment, then it can depend on > whatever it wants without any danger of collision with the ambient > environment. You do know that we're on our way to re-writing conda, now, don't you? :-) I think we need to be careful of scope-creep... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg.ewing at canterbury.ac.nz Thu May 5 18:21:11 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 06 May 2016 10:21:11 +1200 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <572BC757.5020702@canterbury.ac.nz> Chris Barker wrote: > OK -- that's more or less my thought -- if it's python that gets run, > then you've got your config generator built in -- why not? The difference is that the config generator only gets run once, when the config info needs to get produced. If the config file itself is executable, it needs to be run each time the config info gets *used* -- which could be by a tool for which running arbitrary python code is awkward or undesirable. -- Greg From chris.barker at noaa.gov Thu May 5 18:24:34 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 15:24:34 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 2:47 AM, Nathaniel Smith wrote: > > When you introduce isolation, the build will only have the standard > > library + whatever is declared as a dep: and pyqt5 has no source on > > PyPI. > so build isolation isolates you from arbitrary system libs? now you NEED to ship all your non-trivial dependent libs. which I guess is the point, but again, we're re-implementing conda here. And the only time where it makes sense to force this build requirement is when you are building binary wheels to put up on PyPi. So maybe part of bdist_wheel?? OH -- and you need to make sure there is an isolated test environment too -- which is a bit of pain, particularly when the tests aren't installed as part of the package. And, of course this means a version of virtualenv that isolates you from system libs -- that might be quite hard to do, actually. and again, conda :-) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 5 18:35:37 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 15:35:37 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Last post! sorry to have not kept up last night.... > to call the new feature setup_requires but some prefer to eliminate > > uncertainty by calling it bootstrap_requires. > > The main advantage of a new feature name is that when someone searches > the internet for "python bootstrap_requires", they won't find a > decade+ worth of inapplicable documentation of the setuptools feature > :) new name for sure -- but I don't like bootstrap-requires -- is this bootstrapping isn't it jsut building? 
why not build_requires? BTW, conda has: requires: build: run: test: (I think that's it) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Thu May 5 18:47:27 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Thu, 5 May 2016 15:47:27 -0700 Subject: [Distutils] moving things forward In-Reply-To: <572BC757.5020702@canterbury.ac.nz> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> Message-ID: On Thu, May 5, 2016 at 3:21 PM, Greg Ewing wrote: > Chris Barker wrote: > > OK -- that's more or less my thought -- if it's python that gets run, >> then you've got your config generator built in -- why not? >> > > The difference is that the config generator only gets run > once, when the config info needs to get produced. If the > config file itself is executable, it needs to be run each > time the config info gets *used* exactly -- maybe some run-time configuration specialization? though maybe that's exactly what we want to avoid. > -- which could be by a > tool for which running arbitrary python code is awkward or > undesirable. This is python packaging, I'm still trying to see why you'd need to read the config without python available. But I think the key point is -- do we want to enforce that the config data is static -- it WILL not change depending on when or how or where it is read? If so, then yeah, no runnable code. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu May 5 19:32:03 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 5 May 2016 16:32:03 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 3:18 PM, Chris Barker wrote: > On Thu, May 5, 2016 at 2:10 AM, Nathaniel Smith wrote: >> >> ...The main thing I want to point out though, is that all of these >> problems you're raising are complications caused entirely by wanting >> to avoid build isolation in the name of simplicity. If each package >> gets its own isolated build environment, then it can depend on >> whatever it wants without any danger of collision with the ambient >> environment. > > > You do know that we're on our way to re-writing conda, now, don't you? :-) > > I think we need to be careful of scope-creep... conda did not invent the idea of creating separate python environments for different tasks :-) -n -- Nathaniel J. 
Smith -- https://vorpus.org From dholth at gmail.com Thu May 5 19:33:33 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 05 May 2016 23:33:33 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: Clearly it should spin up a Gentoo VM from scratch each time. No half measures. On Thu, May 5, 2016, 19:32 Nathaniel Smith wrote: > On Thu, May 5, 2016 at 3:18 PM, Chris Barker > wrote: > > On Thu, May 5, 2016 at 2:10 AM, Nathaniel Smith wrote: > >> > >> ...The main thing I want to point out though, is that all of these > >> problems you're raising are complications caused entirely by wanting > >> to avoid build isolation in the name of simplicity. If each package > >> gets its own isolated build environment, then it can depend on > >> whatever it wants without any danger of collision with the ambient > >> environment. > > > > > > You do know that we're on our way to re-writing conda, now, don't you? > :-) > > > > I think we need to be careful of scope-creep... > > conda did not invent the idea of creating separate python environments > for different tasks :-) > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robertc at robertcollins.net Thu May 5 21:37:04 2016 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 6 May 2016 13:37:04 +1200 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 5 May 2016 at 21:47, Nathaniel Smith wrote: > On Wed, May 4, 2016 at 11:57 PM, Robert Collins ... >>> I don't think I've ever seen a package that had accurate >>> setup_requires (outside the trivial case of packages where >>> setup_requires=[] is accurate). Scientific packages in particular >>> universally have undeclared setup requirements. >> >> Are those requirements pip installable? > > Either they are, or they will be soon. Thats good. It occurs to me that scientific builds may be univerally broken because folk want to avoid easy-install and the high cost of double builds of things. E.g. adding bootstrap_requires will let folk fix things? But the main question is : why are these packages staying inaccurate? Even in the absence of a systematic solution I'd expect bug reports and pull requests to converge on good dependencies. ... >> So the 10x thing is defining how the thing doing the isolation (e.g. >> pip) should handle things that can't be installed but are already >> available on the system. >> >> And that has to tunnel all the way out to the user, because its >> context specific, its not an attribute of the dependencies per se >> (since new releases can add or remove this situation), nor of the >> consuming thing (same reason). > > # User experience today on i386 > $ pip install foo > <... 
error: missing pyqt5 ...> > $ apt install python-pyqt5 > $ pip install foo > > # User experience with build isolation on i386 > $ pip install foo > <... error: missing pyqt5 ...> > $ apt install python-pyqt5 > $ pip install --no-isolated-environment foo So thats one way that isolation can be controlled yes. > It'd even be straightforward for pip to notice that the requirement > that it failed to satisfy is already satisfied by the ambient > environment, and suggest --no-isolated-environment as a solution. > >> Ultimately, its not even an interopability question: pip could do >> isolated builds now, if it chose, and it has no ramifications as far >> as PEPs etc are concerned. > > That's not true. In fact, it seems dangerously naive :-/ > > If pip just went ahead and flipped a switch to start doing isolated > builds now, then everything would burst into flame and there would be > a howling mob in the bug tracker. Sure, there's no PEP saying we > *can't* do that, but in practice it's utterly impossible. We've done similarly omg it could be bad changes before - e.g. introducing wheel caching. A couple of iterations later and we're in good shape. Paul seems to think we could introduce it gracefully - opt in, then default with fallback, then solely opt-out. > If we roll out this feature without build isolation, then next year > we'll still be in the exact same situation we are today -- we'll have > the theoretical capability of enabling build isolation, but everything > would break, so in practice, we won't be able to. > > The reason I'm being so intense about this is that AFAICT these are all true: > > Premise 1: Without build isolation enabled by default, then in > practice everyone will putter along putting up with broken builds all > the time. It's *incredibly* easy to forget to declare a build > dependency, it's the kind of mistake that every new user makes, and > experienced users too. I know lots of projects that already build in [mostly] clean environments all the time - e.g. anything using tox[*], most things that use Travis, Appveyor, Jenkins etc. Yes, if its not there by default it requires effort to opt-in, and there will be a tail of some sort. I don't disagree with the effect of the premise, but I do disagree about the intensity. [*]: yes, to runs setup.py sdist outside the environment, so setup_requires doesn't get well exercised. IIRC tox is adding a feature to build in a venv so they will be exercised. > Premise 2: We can either enable build isolation together with the new > static bootstrap requirements, or we can never enable build isolation > at all, ever. I don't agree with this one at all. We can enable it now if we want: yes, its a behavioural change, and we'd need to phase it in, but the logistics are pretty direct. > Conclusion: If we want to ever reach a state where builds are > reliable, we need to tie build isolation to the new static metadata. > > If you have some clever plan for how we could practically transition > to build isolation without having them opt-in via a new feature, then > that would be an interesting counter-argument; or an alternative plan > for how to reach a point where build requirements are accurate without > being enforced; or ...? As Paul said, add it off by default, phase it in over a couple of releases. 0: opt-in 1: opt-out but when builds fail disable and retry (and warn) 2: opt out >> pyqt5 not having i386 is just a trivial egregious case. 
ARM32 and 64 >> is going to be super common, Power8 another one, let alone less common >> but still extant and used architectures like PPC, itanium, or new ones >> like x86_32 [If I remember the abbreviation right - no, its not i386]. > > (it's x32) Thanks :) > manylinux is helpful here, but it's not necessary -- build isolation > just requires that the dependencies be pip installable, could be from > source or whatever. In practice the wheel cache will kick in and > handle most of the work. *only* for things that honour setup.py - which the vast majority of SWIG and SWIG style extension modules do not. >> Solve that underlying problem - great, then isolation becomes an >> optimisation question for things without manylinux wheels. But if we >> don't solve it then isolation becomes a 'Can build X at all' question, >> which is qualitatively different. > > More like 'can build X at all (without adding one command line > option)'. And even this is only if you're in some kind of environment > that X upstream doesn't support -- no developer is going to make a > release of X with build isolation turned on unless build isolation > works on the platforms they care about. There's some thinking here I don't follow: upstreams don't opt into-or-outof build isolation AFAICT, *other* than by your proposal to couple it to declaring bootstrap requirements. The idea that upstream should care about whether their thing is built in an isolated environment or not doesn't make any sense to me. Upstream *should* make sure their thing can build in an isolated fashion - and then not care anymore. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From dholth at gmail.com Thu May 5 23:13:41 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 06 May 2016 03:13:41 +0000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: >From my point of view mandatory build isolation will make building thinks slow and bad, besides being totally unrelated to the idea of a working bootstrap requirements feature. On Thu, May 5, 2016 at 9:37 PM Robert Collins wrote: > On 5 May 2016 at 21:47, Nathaniel Smith wrote: > > On Wed, May 4, 2016 at 11:57 PM, Robert Collins > ... > >>> I don't think I've ever seen a package that had accurate > >>> setup_requires (outside the trivial case of packages where > >>> setup_requires=[] is accurate). Scientific packages in particular > >>> universally have undeclared setup requirements. > >> > >> Are those requirements pip installable? > > > > Either they are, or they will be soon. > > Thats good. It occurs to me that scientific builds may be univerally > broken because folk want to avoid easy-install and the high cost of > double builds of things. E.g. adding bootstrap_requires will let folk > fix things? > > But the main question is : why are these packages staying inaccurate? > Even in the absence of a systematic solution I'd expect bug reports > and pull requests to converge on good dependencies. > > ... > >> So the 10x thing is defining how the thing doing the isolation (e.g. > >> pip) should handle things that can't be installed but are already > >> available on the system. 
> >> > >> And that has to tunnel all the way out to the user, because its > >> context specific, its not an attribute of the dependencies per se > >> (since new releases can add or remove this situation), nor of the > >> consuming thing (same reason). > > > > # User experience today on i386 > > $ pip install foo > > <... error: missing pyqt5 ...> > > $ apt install python-pyqt5 > > $ pip install foo > > > > # User experience with build isolation on i386 > > $ pip install foo > > <... error: missing pyqt5 ...> > > $ apt install python-pyqt5 > > $ pip install --no-isolated-environment foo > > So thats one way that isolation can be controlled yes. > > > It'd even be straightforward for pip to notice that the requirement > > that it failed to satisfy is already satisfied by the ambient > > environment, and suggest --no-isolated-environment as a solution. > > > >> Ultimately, its not even an interopability question: pip could do > >> isolated builds now, if it chose, and it has no ramifications as far > >> as PEPs etc are concerned. > > > > That's not true. In fact, it seems dangerously naive :-/ > > > > If pip just went ahead and flipped a switch to start doing isolated > > builds now, then everything would burst into flame and there would be > > a howling mob in the bug tracker. Sure, there's no PEP saying we > > *can't* do that, but in practice it's utterly impossible. > > We've done similarly omg it could be bad changes before - e.g. > introducing wheel caching. A couple of iterations later and we're in > good shape. Paul seems to think we could introduce it gracefully - opt > in, then default with fallback, then solely opt-out. > > > If we roll out this feature without build isolation, then next year > > we'll still be in the exact same situation we are today -- we'll have > > the theoretical capability of enabling build isolation, but everything > > would break, so in practice, we won't be able to. > > > > The reason I'm being so intense about this is that AFAICT these are all > true: > > > > Premise 1: Without build isolation enabled by default, then in > > practice everyone will putter along putting up with broken builds all > > the time. It's *incredibly* easy to forget to declare a build > > dependency, it's the kind of mistake that every new user makes, and > > experienced users too. > > I know lots of projects that already build in [mostly] clean > environments all the time - e.g. anything using tox[*], most things > that use Travis, Appveyor, Jenkins etc. Yes, if its not there by > default it requires effort to opt-in, and there will be a tail of some > sort. I don't disagree with the effect of the premise, but I do > disagree about the intensity. > > [*]: yes, to runs setup.py sdist outside the environment, so > setup_requires doesn't get well exercised. IIRC tox is adding a > feature to build in a venv so they will be exercised. > > > Premise 2: We can either enable build isolation together with the new > > static bootstrap requirements, or we can never enable build isolation > > at all, ever. > > I don't agree with this one at all. We can enable it now if we want: > yes, its a behavioural change, and we'd need to phase it in, but the > logistics are pretty direct. > > > Conclusion: If we want to ever reach a state where builds are > > reliable, we need to tie build isolation to the new static metadata. 
> > > > If you have some clever plan for how we could practically transition > > to build isolation without having them opt-in via a new feature, then > > that would be an interesting counter-argument; or an alternative plan > > for how to reach a point where build requirements are accurate without > > being enforced; or ...? > > As Paul said, add it off by default, phase it in over a couple of releases. > 0: opt-in > 1: opt-out but when builds fail disable and retry (and warn) > 2: opt out > > >> pyqt5 not having i386 is just a trivial egregious case. ARM32 and 64 > >> is going to be super common, Power8 another one, let alone less common > >> but still extant and used architectures like PPC, itanium, or new ones > >> like x86_32 [If I remember the abbreviation right - no, its not i386]. > > > > (it's x32) > > Thanks :) > > > manylinux is helpful here, but it's not necessary -- build isolation > > just requires that the dependencies be pip installable, could be from > > source or whatever. In practice the wheel cache will kick in and > > handle most of the work. > > *only* for things that honour setup.py - which the vast majority of > SWIG and SWIG style extension modules do not. > > >> Solve that underlying problem - great, then isolation becomes an > >> optimisation question for things without manylinux wheels. But if we > >> don't solve it then isolation becomes a 'Can build X at all' question, > >> which is qualitatively different. > > > > More like 'can build X at all (without adding one command line > > option)'. And even this is only if you're in some kind of environment > > that X upstream doesn't support -- no developer is going to make a > > release of X with build isolation turned on unless build isolation > > works on the platforms they care about. > > There's some thinking here I don't follow: upstreams don't opt > into-or-outof build isolation AFAICT, *other* than by your proposal to > couple it to declaring bootstrap requirements. The idea that upstream > should care about whether their thing is built in an isolated > environment or not doesn't make any sense to me. > Upstream *should* make sure their thing can build in an isolated > fashion - and then not care anymore. > > -Rob > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Fri May 6 01:45:29 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 06 May 2016 17:45:29 +1200 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> Message-ID: <572C2F79.3010906@canterbury.ac.nz> Chris Barker wrote: > This is python packaging, I'm still trying to see why you'd need to read > the config without python available. Even if python is available, you might not want to run arbitrary code just to install a package. If a config file can contain executable code, then it can contain bugs. Debugging is something the developer of a package should have to do, not the user. 
In my experience, fixing someone else's buggy setup.py is about as much fun as pulling one's own teeth out with a blunt screwdriver. -- Greg From ncoghlan at gmail.com Fri May 6 08:41:44 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 6 May 2016 22:41:44 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 6 May 2016 at 06:30, Chris Barker wrote: > On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan wrote: >> Usually that enforcement is >> handled by making the configuration declarative - it's in some passive >> format like an ini file or JSON, and if it gets too repetitive then >> you introduce a config generator, rather than making the format itself >> more sophisticated. > > > OK -- that's more or less my thought -- if it's python that gets run, then > you've got your config generator built in -- why not? The immediate reason is because Python allows imports, and if imports are permitted in the config script, people will use them, and if they're not permitted, they'll complain about their absence. The "Python-with-imports" case is the status quo with setup.py, and we already know that's a pain because you need to set up an environment that already has the right dependencies installed to enable your module level imports in order to run the script and find out what dependencies you need to install to let you run the script in the first place. The "Python-without-imports" approach would just be confusing - while it would avoid the dependency bootstrapping problem, it would only be kinda-sorta-Python rather than actual Python. So rather than saying "the bootstrapping dependency declaration file is Python-but-not-really", it's easier to say "it's an ini-file format that can be parsed with the configparser module" or "it's JSON" (I'm ruling out any options that don't have a stdlib parser in Python 2.7) The "future benefit" reason is that it's a lot easier to be confident that reading a config file from an uploaded artifact isn't going to compromise a web service, so a future version of PyPI can readily pull information out of the config file and republish it via an API. Once you have that kind of information available via an API, you can resolve it before downloading *anything* (which is especially useful when your goal is dependency graph analysis rather than downloading the whole of PyPI and running a Python script from every package you downloaded). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri May 6 08:55:30 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 6 May 2016 22:55:30 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 6 May 2016 at 06:41, Chris Barker wrote: > On Wed, May 4, 2016 at 8:09 PM, Nick Coghlan wrote: >> 3. 
The ongoing popularity of setup.cfg shows that while ini-style may >> not be perfect for this use case, it clearly makes it over the >> threshold of "good enough" > > it's only popular because it's what's there -- if we're using that standard, > we could make the same argument about setuptools ;-) That's exactly the course change we made following the release of Python 3.3 - rather than trying to replace setuptools directly, the scope of our ambitions was narrowed to eliminating the need to run "./setup.py install", while keeping setuptools as the default build system used to enable things like generating wheel files ("./setup.py bdist_wheel" doesn't work if you're importing setup from distutils.core - you have to be importing it from setuptools. If you have a setup.py file that imports from distutils.core, you have to use "pip wheel" if you want to get a wheel file) >> So when I ask myself now "What's the *simplest* thing we could do that >> will make things better than the status quo?", then the answer I come >> up with today is your original idea: bless setup.cfg (or at least a >> subset of it) as a standardised interface. > > IIUC, we would be changing, or at least adding to the current setup.cfg spec. > So this is a change, no matter how you slice it, saying "configuration will > be specified in setup.something, in some other format" is the least > significant part of all this change. > > And maybe it's good to keep "new style" configuration clearly separate. Part of my motivation for suggesting re-using setup.cfg is the proliferation of packaging related config sprawl in project root directories - setup.py won't be going anywhere any time soon for most projects, some folks already have a setup.cfg (e.g. to specify universal wheel creation), and there's also MANIFEST.in to control sdist generation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From benzolius at yahoo.com Fri May 6 03:11:48 2016 From: benzolius at yahoo.com (Benedek Zoltan) Date: Fri, 6 May 2016 07:11:48 +0000 (UTC) Subject: [Distutils] ez_setup.py can not get setuptools References: <1683253402.314191.1462518708750.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <1683253402.314191.1462518708750.JavaMail.yahoo@mail.yahoo.com> Hi, I don't know what happened recently. Usually I install setuptools by a script using the ez_setup.py script. Recently I get an error:

Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-21.0.0.zip
Traceback (most recent call last):
  File "downloads/ez_setup.py", line 415, in <module>
    sys.exit(main())
  File "downloads/ez_setup.py", line 411, in main
    archive = download_setuptools(**_download_args(options))
  File "downloads/ez_setup.py", line 336, in download_setuptools
    downloader(url, saveto)
  File "downloads/ez_setup.py", line 287, in download_file_insecure
    src = urlopen(url)
  File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.4/urllib/request.py", line 469, in open
    response = meth(req, response)
  File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.4/urllib/request.py", line 507, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.4/urllib/request.py", line 587, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

For now I can copy the package from an old virtualenv, but I'd appreciate a better solution/advice. Thanks, Zoltan -------------- next part -------------- An HTML attachment was scrubbed... URL: From cosimo.lupo at daltonmaag.com Thu May 5 04:14:52 2016 From: cosimo.lupo at daltonmaag.com (Cosimo Lupo) Date: Thu, 5 May 2016 09:14:52 +0100 Subject: [Distutils] Things that are not pip-installable (was Re: moving things forward) In-Reply-To: <20160505080058.GA17571@platonas> References: <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <20160505080058.GA17571@platonas> Message-ID: On 5 May 2016 at 09:00, Marius Gedminas wrote: > pip install pyqt5 You need Python 3.5, and you also need to ensure you are calling the `pip` command for Python 3.5, and not the default `pip` which may be linked to a different Python version. Try this for example: python3.5 -m pip install --user pyqt5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 6 11:26:07 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 6 May 2016 08:26:07 -0700 Subject: [Distutils] moving things forward In-Reply-To: <572C2F79.3010906@canterbury.ac.nz> References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> Message-ID: On Thu, May 5, 2016 at 10:45 PM, Greg Ewing wrote: > Even if python is available, you might not want to run > arbitrary code just to install a package. > > If a config file can contain executable code, then it > can contain bugs. Debugging is something the developer of > a package should have to do, not the user. In my experience, > fixing someone else's buggy setup.py is about as much fun > as pulling one's own teeth out with a blunt screwdriver. well, in my experience, debugging my OWN buggy setup.py is about equally pleasant! But I think there is consensus here that build systems need to be customisable -- which means arbitrary code may have to be run. So I don't think this is going to help anyone avoid dealing with buggy builds :-( or are we talking about a config file that would be delivered with a binary wheel? In which case, yes, it should not have any executable code in it. And anyway, it seems folks want to go with static config anyway -- clearly separating configuration from customization, so I'll stop now. I'd still like to be able to use Python literals, though :-) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
URL: From chris.barker at noaa.gov Fri May 6 11:54:10 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 6 May 2016 08:54:10 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 4:32 PM, Nathaniel Smith wrote: > > You do know that we're on our way to re-writing conda, now, don't you? > :-) > > > > I think we need to be careful of scope-creep... > > conda did not invent the idea of creating separate python environments > for different tasks :-) I'm not suggesting conda invented anything -- I'm suggesting it has implemented many of the things being talked about here -- the truth is conda was designed to solve exactly the problems that scientific python packages had that pip+wheel+setuptools do not currently solve. So my point is about scope-creep -- if you want the PyPa tools to solve all these problems, then you are re-inventing conda -- better to simply adopt conda (or fork it and fix issues that I'm sure are there....) On the other hand, improving the PyPa tools while maintaining their current scope is a lovely idea -- but that means leaving isolation of build environments etc, to external tools: Docker, VMs, conda...... Actually, conda has a good lesson here: conda is about packaging, NOT building -- I was quite disappointed that conda provided NO support for cross platform building at all -- but after using it a bit I realized that that was a great decision -- if you want to support a wide variety of packages, you really need to let the package authors use whatever the heck build system they want -- you really don't want to have to implement (or even debug) everyone else's builds. And IIUC, that is the direction we are trying to go with pip now -- making pip+wheel build-system independent -- good goal. Which actually give me an idea: it seems we are very bogged down in struggles with keeping backward compatibility, and coming up with a API for an arbitrary build system. Maybe we can take a lesson from conda and essentially punt: conda works by reading a meta.yaml -- a purely declarative package configuration. The actual building is done by calling a build script -- conda itself does not need to know or care what the heck is in that build script -- all it needs to know is that it can be run. Why not keep the API that simple for pip? we could do total backward compatible, and simply say: pip will call "python setup.py install". And that's it, end of story -- individual packages could do ANYTHING in that setup.py script -- current packages would be using setuptools, but pip wouldn't need to care at all what was in there -- only know that it would install the package. And, I suppose, a setup.py bdist_wheel for building wheels, and setup.py sdist for creating source distributions), though I'm not sure that those need to be a standardized API -- or at least not the SAME standardized API. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
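A rough sketch of the "keep the installer interface that simple" idea above: the front end treats the build step as an opaque command and relies only on it succeeding, much the way a conda recipe pairs declarative metadata with an arbitrary build script. The function name and the exact command below are only the status-quo convention, not a proposed standard:

    # sketch: an installer front end that shells out to an opaque build step
    import subprocess
    import sys

    def build_and_install(source_dir):
        # the front end neither knows nor cares what setup.py does internally;
        # it only requires that the command completes successfully
        subprocess.check_call(
            [sys.executable, "setup.py", "install"],
            cwd=source_dir,
        )

Whether the opaque command is "setup.py install", "setup.py bdist_wheel", or something a future spec names, the shape of the contract stays the same.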
URL: From chris.barker at noaa.gov Fri May 6 12:03:55 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 6 May 2016 09:03:55 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Thu, May 5, 2016 at 6:37 PM, Robert Collins wrote: > > Thats good. It occurs to me that scientific builds may be univerally > broken because folk want to avoid easy-install and the high cost of > double builds of things. E.g. adding bootstrap_requires will let folk > fix things? > scientific builds are universally "broken" because their dependencies are often non-python (and oftne complex and hard to build) external libs. that is simply a problem that setuptools and pip were never designed to address. > But the main question is : why are these packages staying inaccurate? > Even in the absence of a systematic solution I'd expect bug reports > and pull requests to converge on good dependencies. > the problem is not that people haven't declared the proper dependencies in their packages -- it's because it's impossible to declare the proper dependencies in their packages... The solution is to do one of three things: 1) punt -- and let folks use Canopy, or conda, or third party collections of wheels like the Gohlke repository, or let folks build it themselves (a lot of packages have all kinds of custom code to go look in macports, or homebrew, or fink locations for libs on OS-X for instance. 2) build binary wheels that statically link in everything needed (or include the dlls in the package) 3) make binary wheels out of C libs, so that other packages can depend on them -- this is kind of a kludgy hack (abuse?) of pip/wheel but should be do-able. Some folks are making a valiant effort to do a combination of (2) and (3) for Windows, OS-X, and now many linux -- but it is too bad that the tools don't make it easy. And so far, all we really have are the core scipy stack (and pyQT5???) In short -- building fully supported binary packages is possible, but requires a fair bit of expertise and a valiant effort on the part of someone... -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 6 12:12:28 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 6 May 2016 09:12:28 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Fri, May 6, 2016 at 5:55 AM, Nick Coghlan wrote: > > And maybe it's good to keep "new style" configuration clearly separate. > > Part of my motivation for suggesting re-using setup.cfg is the > proliferation of packaging related config sprawl in project root > directories - setup.py won't be going anywhere any time soon for most > projects, some folks already have a setup.cfg (e.g. 
to specify > universal wheel creation), and there's also MANIFEST.in to control > sdist generation. yeah -- ugly, but will one more file make a difference? now that I think about it -- IIUC the goals here, we want to separate packaging from building -- I'd argue that setup.cfg is about building -- we really should keep the package configuration separate. Some day, somehow, we're hoping to have a new build system (or multiple build systems), and they won't use setup.cfg to configure the build. So we probably shouldn't marry ourselves to setup.cfg for package configuration. The other issue is social -- if we stick with setup.cfg, we are implying that this is all about tweaking distutils/setuptools, not doing something different -- seemingly keeping the mingling of building and packaging forevermore.... But this isn't that big a deal -- enough bike-shedding, time to make a declaration. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 6 12:15:42 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 6 May 2016 09:15:42 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Fri, May 6, 2016 at 5:41 AM, Nick Coghlan wrote: > The "Python-with-imports" case is the status quo with setup.py, and we > already know that's a pain because you need to set up an environment > that already has the right dependencies installed to enable your > module level imports in order to run the script and find out what > dependencies you need to install to let you run the script in the > first place. > good point -- this is really key. The "Python-without-imports" approach would just be confusing - I agree -- I never suggested that -- it's full python or fully declarative. > So rather than saying "the bootstrapping dependency declaration file > is Python-but-not-really", it's easier to say "it's an ini-file format > that can be parsed with the configparser module" or "it's JSON" (I'm > ruling out any options that don't have a stdlib parser in Python 2.7) > Last time, I promise :-) "python literals" is perfectly well defined -- both by the language reference, and by "can be parsed by ast.literal_eval" and it addresses the limitations of JSON and is fully declarative. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed...
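For reference, the "python literals" option mentioned above needs nothing beyond the standard library and does not execute any project code; the keys shown are illustrative, not a proposed schema:

    # sketch: reading a Python-literal configuration without running it
    import ast

    text = '{"bootstrap_requires": ["setuptools", "wheel"], "platforms": ("win32", "linux")}'
    config = ast.literal_eval(text)  # tuples are allowed, unlike in JSON
    print(config["bootstrap_requires"])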
Any reason you can't: python -m pip install setuptools ? -CHB On Fri, May 6, 2016 at 12:11 AM, Benedek Zoltan via Distutils-SIG < distutils-sig at python.org> wrote: > Hi, > > I don't know what happened recently. Usually I install setuptools by a > script using the ez_setup.py script. > > Recently I get an error: > > Downloading > https://pypi.python.org/packages/source/s/setuptools/setuptools-21.0.0.zip > Traceback (most recent call last): > File "downloads/ez_setup.py", line 415, in > sys.exit(main()) > File "downloads/ez_setup.py", line 411, in main > archive = download_setuptools(**_download_args(options)) > File "downloads/ez_setup.py", line 336, in download_setuptools > downloader(url, saveto) > File "downloads/ez_setup.py", line 287, in download_file_insecure > src = urlopen(url) > File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen > return opener.open(url, data, timeout) > File "/usr/lib/python3.4/urllib/request.py", line 469, in open > response = meth(req, response) > File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response > 'http', request, response, code, msg, hdrs) > File "/usr/lib/python3.4/urllib/request.py", line 507, in error > return self._call_chain(*args) > File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain > result = func(*args) > File "/usr/lib/python3.4/urllib/request.py", line 587, in > http_error_default > raise HTTPError(req.full_url, code, msg, hdrs, fp) > urllib.error.HTTPError: HTTP Error 404: Not Found > > For now I can copy the package from an old virtualenv, but I'd appreciate > a better solution/advise. > > Thanks > Zoltan > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Fri May 6 12:31:19 2016 From: tritium-list at sdamon.com (tritium-list at sdamon.com) Date: Fri, 6 May 2016 12:31:19 -0400 Subject: [Distutils] ez_setup.py can not get setuptools In-Reply-To: References: <1683253402.314191.1462518708750.JavaMail.yahoo.ref@mail.yahoo.com> <1683253402.314191.1462518708750.JavaMail.yahoo@mail.yahoo.com> Message-ID: <16c5701d1a7b4$b8737020$295a5060$@hotmail.com> If you are using ez_setup in your setup.py, presumably you have guarded against the presence of setuptools in the target environment. If you don?t have setuptools, you don?t have pip. From: Distutils-SIG [mailto:distutils-sig-bounces+tritium-list=sdamon.com at python.org] On Behalf Of Chris Barker Sent: Friday, May 6, 2016 12:17 PM To: Benedek Zoltan Cc: distutils-sig at python.org Subject: Re: [Distutils] ez_setup.py can not get setuptools ez_setup.py is pretty darn old. Any reason you can't: python -m pip install setuptools ? -CHB On Fri, May 6, 2016 at 12:11 AM, Benedek Zoltan via Distutils-SIG > wrote: Hi, I don't know what happened recently. Usually I install setuptools by a script using the ez_setup.py script. 
Recently I get an error: Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-21.0.0.zip Traceback (most recent call last): File "downloads/ez_setup.py", line 415, in sys.exit(main()) File "downloads/ez_setup.py", line 411, in main archive = download_setuptools(**_download_args(options)) File "downloads/ez_setup.py", line 336, in download_setuptools downloader(url, saveto) File "downloads/ez_setup.py", line 287, in download_file_insecure src = urlopen(url) File "/usr/lib/python3.4/urllib/request.py", line 161, in urlopen return opener.open(url, data, timeout) File "/usr/lib/python3.4/urllib/request.py", line 469, in open response = meth(req, response) File "/usr/lib/python3.4/urllib/request.py", line 579, in http_response 'http', request, response, code, msg, hdrs) File "/usr/lib/python3.4/urllib/request.py", line 507, in error return self._call_chain(*args) File "/usr/lib/python3.4/urllib/request.py", line 441, in _call_chain result = func(*args) File "/usr/lib/python3.4/urllib/request.py", line 587, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found For now I can copy the package from an old virtualenv, but I'd appreciate a better solution/advise. Thanks Zoltan _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri May 6 12:36:12 2016 From: brett at python.org (Brett Cannon) Date: Fri, 06 May 2016 16:36:12 +0000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> Message-ID: The emails seem to have reached an equilibrium point of bikeshedding on the (bootstrap|setup)_requires issue that is being discussed (as Daniel points out below, this has nothing to do with how building works and instead is only about statically declaring what tools need to be installed to simply run your builder to do whatever the heck it wants; this is the first baby step to build decoupling/customization). So who is the BDFL on this decision? It seems we need someone to stop the bikeshedding on the field name and what file is going to house this configuration data. And do we need someone to write a PEP for this proposal to have something to target? On Thu, 5 May 2016 at 13:54 Daniel Holth wrote: > This is a recurring point of confusion. setup.py is not ever going away. > In general it is necessary for you to be able to write software to build > your software, and there is no intention to take that feature away. > > People repeatedly come to the conclusion that static metadata means the > entire build is static. It's only the dependencies that need to be static > to enable better dependency resolution in pip. The build does not need to > be static. 
> > The proposed feature means you will be able to have a simpler setup.py or > no setup.py it by using something like flit or pbr that are configured with > .cfg or .ini. setup.py is not going away. > > Static metadata means the list of dependencies, author name, trove > classifiers is static. Not the build itself. > > Enforced staticness of the build script is right out. > > On Thu, May 5, 2016 at 4:34 PM Alex Gr?nholm > wrote: > >> I think it would be best to gather a few extreme examples of setup.py >> files from real world projects and figure out if they can be implemented in >> a declarative fashion. That at least would help us identify the pain points. >> >> For starters, gevent's setup.py looks like it needs a fair bit of custom >> logic: >> https://github.com/gevent/gevent/blob/master/setup.py >> >> 05.05.2016, 23:30, Chris Barker kirjoitti: >> >> On Wed, May 4, 2016 at 7:45 PM, Nick Coghlan wrote: >> >> >>> This configuration vs customisation distinction is probably worth >>> spelling out for folks without a formal software engineering or >>> computer science background, so: >>> >> >> fair enough -- good to be clear on the terms. >> >> >>> Configuration is different: you're choosing amongst a set of >>> possibilities that have been constrained in some way, and those >>> constraints are structurally enforced. >> >> >> That's a key point here -- I guess I'm skeptical that we can have the >> flexibility we need with a purely configuration-based system -- we probably >> don't WANT to constrain the options completely. If you think about it, >> while distutils has it's many, many flaws, what made it possible for it to >> be as useful as it is, and last as long as it has because is CAN be >> customized -- users are NOT constrained to the built-in functionality. >> >> I suspect the idea of this thread is to keep the API to a build system >> constrained -- and let the build systems themselves be as customizable as >> the want to be. And I haven't thought it out carefully, but I have a >> feeling that we're going to hit a wall that way .. but maybe not. >> >> >>> Usually that enforcement is >>> handled by making the configuration declarative - it's in some passive >>> format like an ini file or JSON, and if it gets too repetitive then >>> you introduce a config generator, rather than making the format itself >>> more sophisticated. >>> >> >> OK -- that's more or less my thought -- if it's python that gets run, >> then you've got your config generator built in -- why not? >> >> >> >>> The big advantage of configuration over customisation is that you >>> substantially increase the degrees of freedom in how *consumers* of >>> that configuration are implemented - no longer do you need a full >>> Python runtime (or whatever), you just need an ini file parser, or a >>> JSON decoder, and then you can look at just the bits you care about >>> for your particular use case and ignore the rest. >>> >> >> Sure -- but do we care? this is about python packaging -- is it too big a >> burden to say you need python to read the configuration? >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. 
>> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Fri May 6 12:39:10 2016 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 May 2016 12:39:10 -0400 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <717248AF-10EA-4C71-8B57-B4200A75F318@stufft.io> > On May 6, 2016, at 11:54 AM, Chris Barker wrote: > > So my point is about scope-creep -- if you want the PyPa tools to solve all these problems, then you are re-inventing conda -- better to simply adopt conda (or fork it and fix issues that I'm sure are there....) Adopting Conda is unlikely to ever happen for the default tooling. The problems that the default tooling attempts to solve are significantly harder than the problems that Conda attempts to solve, and switching to it would be a regression. The primary benefit of Conda's packages over Wheels is that they have a curated repository with people who are ensuring that things build correctly, and since they don't rely on authors to do it correctly, they don't have to wait for, or convince, authors to do this fresh new thing. The problem is, none of those benefits are something that would apply if we decided to throw away the 588,074 files that are currently able to be installed on PyPI. Us deciding that Conda is the thing to use isn't going to magically create an army of volunteers to go through and take all 80,000 packages on PyPI and ensure that we get a correctly compiled package on every platform for each version. If we could do that, we could just convince everyone to go out and build binary packages -- we could just do that with Wheels without requiring forklifting an entire ecosystem. While wheels are optimized for the pure Python case, there is nothing preventing them from being used to install anything else (Erlang or R or even Python itself). The pynativelib stuff is proof of the ability to do just that -- and in distutils-sig land we tend to care a lot more about how these things will affect our downstream consumers like Debian and such than Conda needs to (or should need to!). Now, this isn't to say that Conda is bad or anything, but its use as a replacement for the current ecosystem is about as interesting as suggesting we all adopt RPM, or apt-get, or Chocolatey, or Ports, or Pacman, or whatever flavor of downstream you wish to insert here. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From donald at stufft.io Fri May 6 12:40:48 2016
From: donald at stufft.io (Donald Stufft)
Date: Fri, 6 May 2016 12:40:48 -0400
Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward)
In-Reply-To:
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi>
Message-ID: <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io>

> On May 6, 2016, at 12:36 PM, Brett Cannon wrote:
>
> So who is the BDFL on this decision? It seems we need someone to stop the bikeshedding on the field name and what file is going to house this configuration data. And do we need someone to write a PEP for this proposal to have something to target?

We need someone to write the PEP and someone to offer to be BDFL for it. For this particular change the default would be Nick for BDFL but if he's busy someone else can take it over for this PEP. Though I think we need someone writing an actual PEP first.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From donald at stufft.io Fri May 6 12:43:32 2016
From: donald at stufft.io (Donald Stufft)
Date: Fri, 6 May 2016 12:43:32 -0400
Subject: [Distutils] ez_setup.py can not get setuptools
In-Reply-To: <16c5701d1a7b4$b8737020$295a5060$@hotmail.com>
References: <1683253402.314191.1462518708750.JavaMail.yahoo.ref@mail.yahoo.com> <1683253402.314191.1462518708750.JavaMail.yahoo@mail.yahoo.com> <16c5701d1a7b4$b8737020$295a5060$@hotmail.com>
Message-ID: <82087DDD-654D-4BF7-993C-D6D3BDB637DC@stufft.io>

> On May 6, 2016, at 12:31 PM, tritium-list at sdamon.com wrote:
>
> If you don't have setuptools, you don't have pip.

Not true anymore, pip is perfectly capable of running and installing things without setuptools nowadays. The only time you *need* setuptools installed is if you're installing from an sdist (and setuptools has wheels, so you can install setuptools with pip without setuptools already being installed).

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From leorochael at gmail.com Fri May 6 12:48:13 2016 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Fri, 6 May 2016 13:48:13 -0300 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 6 May 2016 at 13:15, Chris Barker wrote: > On Fri, May 6, 2016 at 5:41 AM, Nick Coghlan wrote: > >> [...] > > > >> So rather than saying "the bootstrapping dependency declaration file >> is Python-but-not-really", it's easier to say "it's an ini-file format >> that can be parsed with the configparser module" or "it's JSON" (I'm >> ruling out any options that don't have a stdlib parser in Python 2.7) >> > > Last time, I promise :-) > > "python literals" is perfectly well defined -- both by the language > reference, and by "can be parsed by ast.literal_eval" > > and it addresses the limitations of JSON and is fully declarative. > There is actually prior art for that kind of use. Odoo uses such a language for its addons system, including package dependencies. See example file here: https://github.com/OCA/maintainer-tools/blob/master/template/module/__openerp__.py Notice the `depends` key, that lists other addons, and the `external_dependencies` key that can list both python distribution dependencies as well as external program dependencies. ISTM the reluctance in adopting this idea comes from the desire of using other programming languages to parse the bootstraping information (imagine an sdist -> deb converter implemented in go. It will have to exec python eventually, during the build process, but not before it has prepared the build environment). And even though, unlike JSON, the ConfigParser format of setup.cfg is not officially defined anywhere, there are sufficiently compatible ini parsers in other languages that the idea still has merit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Fri May 6 13:40:36 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 06 May 2016 10:40:36 -0700 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: <572CD714.70705@stoneleaf.us> On 05/06/2016 09:48 AM, Leonardo Rochael Almeida wrote: > On 6 May 2016 at 13:15, Chris Barker wrote: >> "python literals" is perfectly well defined -- both by the language >> reference, and by "can be parsed by ast.literal_eval" and it addresses >> the limitations of JSON and is fully declarative. > > There is actually prior art for that kind of use. Odoo uses such a > language for its addons system, including package dependencies. See > example file here: > > https://github.com/OCA/maintainer-tools/blob/master/template/module/__openerp__.py > > Notice the `depends` key, that lists other addons, and the > `external_dependencies` key that can list both python distribution > dependencies as well as external program dependencies. This is one of the very few things Odoo got right. Let's not look to them for other examples of good coding practices. 
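(Credit where due, though: the mechanism Leonardo is pointing at is easy to illustrate. A manifest in that style is nothing but a Python dict literal, so a tool can read it with ast.literal_eval() without ever executing project code. A minimal sketch -- the file name and keys below are made up for illustration, not anything Odoo or any PEP actually specifies:

    import ast
    import io

    # The manifest file is expected to contain a single dict literal and
    # nothing else -- no imports, no function calls, no conditionals.
    with io.open("package_manifest.py", encoding="utf-8") as f:
        manifest = ast.literal_eval(f.read())

    print(manifest["depends"])                          # e.g. a list of requirement strings
    print(manifest.get("external_dependencies", {}))    # e.g. {"bin": [...], "python": [...]}

ast.literal_eval() only accepts plain literals (strings, numbers, tuples, lists, dicts, booleans, None), so unlike exec()ing a setup.py there is no way for such a file to run arbitrary code, on either Python 2 or 3.)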
-- ~Ethan~

From brett at python.org Fri May 6 14:14:40 2016
From: brett at python.org (Brett Cannon)
Date: Fri, 06 May 2016 18:14:40 +0000
Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward)
In-Reply-To: <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io>
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io>
Message-ID:

On Fri, 6 May 2016 at 09:40 Donald Stufft wrote:
>
> On May 6, 2016, at 12:36 PM, Brett Cannon wrote:
>
> So who is the BDFL on this decision? It seems we need someone to stop the
> bikeshedding on the field name and what file is going to house this
> configuration data. And do we need someone to write a PEP for this proposal
> to have something to target?
>
>
> We need someone to write the PEP and someone to offer to be BDFL for it.
> For this particular change the default would be Nick for BDFL but if he's
> busy someone else can take it over for this PEP. Though I think we need
> someone writing an actual PEP first.
>

OK, assuming that Nick will be pronouncing, who wants to write the PEP?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov Fri May 6 17:49:56 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Fri, 6 May 2016 14:49:56 -0700
Subject: [Distutils] moving things forward (was: wheel including files it shouldn't)
In-Reply-To: <717248AF-10EA-4C71-8B57-B4200A75F318@stufft.io>
References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <717248AF-10EA-4C71-8B57-B4200A75F318@stufft.io>
Message-ID:

On Fri, May 6, 2016 at 9:39 AM, Donald Stufft wrote:

> On May 6, 2016, at 11:54 AM, Chris Barker wrote:
>
> So my point is about scope-creep -- if you want the PyPa tools to solve
> all these problems, then you are re-inventing conda -- better to simply
> adopt conda (or fork it and fix issues that I'm sure are there....)
>
>
> Adopting Conda is unlikely to ever happen for the default tooling.
>

I didn't put the emphasis right in that sentence -- I was not actually suggesting that conda be adopted -- what I was suggesting was that we DON'T expand the scope of pip et al to be able to do everything that conda does.

That being said, I think there are some misunderstandings here that may be relevant to this discussion:

> The problems that the default tooling attempts to solve are significantly
> harder than the problems that Conda attempts to solve
>

kinda-sorta -- conda is a packaging system. period, end of story (actually it is an isolated environment system, too -- is that packaging?) -- it is NOT a build system.

Right now, you can invoke pip to:

* Find and install a binary package (wheel) -- this conda does
* Find, download, build and install a source package -- nope, conda doesn't do anything like that.
* Build and install a local package from source
* Install a local package from source in "develop mode" (editable mode) -- actually conda does have that -- though I'm not sure how well it works, or if it's python specific.
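(Concretely, that's roughly the difference between invocations like these -- "foo" stands in for any project name, and the flags are just one way to spell each case:

    pip install foo                    # find and install a wheel, if one matches your platform
    pip install --no-binary foo foo    # force a download-and-build from the sdist
    pip install ./foo                  # build and install a local checkout
    pip install -e ./foo               # "develop mode" / editable install

all driven through the same front end.)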
But the fact that this all is (apparently) done by one tool is actually a problem -- we have a tangled mess of binary and source and building and installing, and it isn't obvious to the user what they are getting (often they don't care -- as long as it works :-) ) And, under the hood, it's all driven by setuptools, which also has duplicate and slightly different functionality that can be accidentally invoked (fun with easy_install!).

So I thought the goal now was to clean up and untangle this mess, yes?

> and switching to it would be a regression.

only in the sense that switching to pip (and getting rid of setuptools) would be a regression....

> The primary benefit of Conda's packages over Wheels is that they have a
> curated repository with people who are ensuring that things build correctly
> and since they don't rely on authors to do it correctly, they don't have to
> wait for, or convince authors to do this fresh new thing.
>

That simply is not true. Let's not mix social issues with technical ones.

Continuum set out to create a curated set of packages, yes -- but they didn't put the enormous amount of work into conda for no reason -- they did it because pip + wheel didn't meet their needs (and it doesn't meet a lot of our needs, either).

And, as I understand it, they came to the Python community, and said something along the lines of "the packaging tooling doesn't meet the needs of the scientific community, can we contribute changes, improvements" -- and were told something along the lines of: "We don't want the standard tooling to solve those problems -- go make what you need, and then maybe we'll talk" (I can dig out the message from Travis Oliphant about this if you like)

And they did go out and make something else, and they have released it as an open source project, and a number of folks are adopting it to solve their problems, too.

> The problem is, none of those benefits are something that would apply if
> we decided to throw away the 588,074 files that are currently able to be
> installed on PyPI.
>

conda aside, I think there is a real hang-up here. yes, of course you don't want to throw away all that -- we absolutely need to maintain the current packaging, building structure to support existing projects. But that doesn't mean something new and better can't be built to be used in parallel -- as folks move forward with package development, they can adopt the new system or they can keep using the same old setup.py they have. Maybe decades later something will get deprecated. But this approach is what got us in the ugly mess to begin with.

> Us deciding that Conda is the thing to use isn't going to magically create
> an army of volunteers to go through and take all 80,000 packages on PyPI
> and ensure that we get a correctly compiled package on every platform for
> each version. If we could do that we could just convince everyone to go out
> and build binary packages, we could just do that with Wheels without
> requiring forklifting an entire ecosystem.
>

Again, conda packages and wheels are not the same thing -- there are technical differences, and those differences really are why Continuum uses it for Anaconda, and why a lot of folks are adopting it for community-led efforts, like conda-forge. And there are a LOT of packages on conda-forge despite it being very new and only a small handful of people contributing.
https://conda-forge.github.io/feedstocks.html > While wheels are optimized for the pure Python case, there is nothing > preventing them from being used to install anything else (Erlang or R or > even Python itself). > OK, wheel is essentially "just" a archive -- so are conda packages -- they are the same in that regard. but the tools that build and install wheels aren't set up to support non-python. (and virtualenv vs. conda environments, etc) > The pynativelib stuff is proof of the ability to do just that? and in > distutils-sig land we tend to care a lot more about how these things will > affect our downstream consumers like Debian and such than Conda needs to > (or should need to!). > if you split out building from package management, then the package management has no impact on Debian et al. They use their package management, and python's build system -- isn't that how it's done now? Now, this isn?t to say that Conda is bad or anything, but it?s use as a > replacement for the current ecosystem is about as interesting as suggesting > we all adopt RPM, or apt-get, or Chocolatey, or Ports, or Pacman or > whatever flavor of downstream you wish to insert here. > Again -- not suggesting that -- though I honestly think it's more driven by "not invented here" than real technical reasons. A couple years ago, the Python community very clearly said they didn't want to support non-python libs, etc, etc. Now apparetnly we've changed our minds, but want to keep down the road we started out on... Now I see scope creep pulling in -- we really want the "standard" python packaging system to be able to support complex packages like numpy, scipy, etc. So If the community has changed its mind, and wants to support that, then it makes sense to step back a second and ask yourselves what the architecture of the future system should look like? What do you want to support? If nothing else, there have got to be lessons that can be learned from conda (and of course, rpm, deb etc, too) BTW, if PyPa does eventually get around to re-inventing conda -- great! I was pretty unhappy when conda came out -- I was putting in real efforts to try to get wheels built for various things on OS-X, and looking at how to support my users with pip, pypi, and wheel. But in the end, it turned out that conda made a whole lot of this a whole lot easier - it really did (some of that technical, some of it social, but certainly mostly technical). So I've been putting efforts into using conda for my stuff, and supporting the community when I can. But, of course, it turns out not all of my users want to use conda, so now I need to put efforts into building wheels of my stuff, too -- and I am NOT looking forward to that. So one system to rule them all would be great. So what is the scope of all this? Do you want to support non-python C libs as first class citizens? (and Fortran, and...) Do you want to support isolated environments (for not just python) Do you want to support non-python systems as well -- R, perl, Julia, who knows? (I'm sure there are others....) If that's the long view, great -- but start thinking about it now. OK -- didn't intend that to be so long! Carry on, I'll shut up now. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.f.moore at gmail.com Fri May 6 18:21:07 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 6 May 2016 23:21:07 +0100 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On 6 May 2016 at 19:14, Brett Cannon wrote: > On Fri, 6 May 2016 at 09:40 Donald Stufft wrote: >> >> >> On May 6, 2016, at 12:36 PM, Brett Cannon wrote: >> >> So who is the BDFL on this decision? It seems we need someone to stop the >> bikeshedding on the field name and what file is going to house this >> configuration data. And do we need someone to write a PEP for this proposal >> to have something to target? >> >> >> We need someone to write the PEP and someone to offer to be BDFL for it. >> For this particular change the default would be Nick for BDFL but if he?s >> busy someone else can take it over for this PEP. Though I think we need >> someone writing an actual PEP first. > > > OK, assuming the Nick will be pronouncing, who wants to write the PEP? ... and if Nick doesn't want to pronounce, I'm willing to offer to be BDFL for this one. But a PEP is the first thing. (And IMO the key point of the PEP is to be very clear on what is in scope and what isn't - the discussions have covered a *lot* of ground and being clear on what's excluded will be at least as important as stating what's in scope). Paul From njs at pobox.com Fri May 6 19:58:45 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 May 2016 16:58:45 -0700 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Fri, May 6, 2016 at 11:14 AM, Brett Cannon wrote: > > > On Fri, 6 May 2016 at 09:40 Donald Stufft wrote: >> >> >> On May 6, 2016, at 12:36 PM, Brett Cannon wrote: >> >> So who is the BDFL on this decision? It seems we need someone to stop the >> bikeshedding on the field name and what file is going to house this >> configuration data. And do we need someone to write a PEP for this proposal >> to have something to target? >> >> >> We need someone to write the PEP and someone to offer to be BDFL for it. >> For this particular change the default would be Nick for BDFL but if he?s >> busy someone else can take it over for this PEP. Though I think we need >> someone writing an actual PEP first. > > > OK, assuming the Nick will be pronouncing, who wants to write the PEP? I've just been writing up a comparison of the different file formats, partly in case it's useful to others and partly just for my own use in looking at them against each other and seeing how much it actually matters. This might also be reusable for the rationale/rejected-alternatives section of a PEP, if anyone wants it, or I could go ahead and add a few paragraphs to turn it into a proper PEP. -n -- Nathaniel J. 
Smith -- https://vorpus.org From njs at pobox.com Fri May 6 22:59:22 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 May 2016 19:59:22 -0700 Subject: [Distutils] comparison of configuration languages Message-ID: Here's that one-stop writeup/comparison of all the major configuration languages that I mentioned: https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f -n -- Nathaniel J. Smith -- https://vorpus.org From donald at stufft.io Fri May 6 23:14:13 2016 From: donald at stufft.io (Donald Stufft) Date: Fri, 6 May 2016 23:14:13 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: > On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: > > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig While I personally prefer YAML to any of the options on a purely syntax based level, when you weigh in all the other considerations for this I think that it makes sense to go with TOML for it. The only other option I think that could work is what Chris (I think?) suggested and just use a Python literal evaluated using ``ast.literal_eval()`` this is safe to do but it would make it harder for other languages to parse our files. It's similar to the approach taken by Lua Rocks for how their packaging system works (although their uses variables instead of one big dictionary which I think looks nicer) but Lua is much better suited for trying to execute safely outside of ``ast.literal_eval()`` too. All in all, I think TOML is the right answer (and that's why my partially written PEP used TOML). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Fri May 6 23:29:51 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 6 May 2016 20:29:51 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On Fri, May 6, 2016 at 8:14 PM, Donald Stufft wrote: [...] > The only other option I think that could work is what Chris (I think?) > suggested and just use a Python literal evaluated using ``ast.literal_eval()`` Oh, good point, that should definitely be on the list of options considered regardless of whether we go for it. I won't add it now because I just switched to Windows to play games with my wife and my emacs is on the other partition, but I'll add it later. Does anyone know how literal_eval handles Unicode on py2? -n -- Nathaniel J. 
Smith -- https://vorpus.org From greg.ewing at canterbury.ac.nz Fri May 6 20:55:42 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 07 May 2016 12:55:42 +1200 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> Message-ID: <572D3D0E.3050307@canterbury.ac.nz> Chris Barker wrote: > But I think there is consensus here that build systems need to be > customisable -- which means arbitrary code may have to be run. I think different people are using the word "build" in different ways here. To my mind, "building" is what the developer of a package does, and a "build system" is what he uses to do it. I don't care how much arbitrary code gets run during that process. But when I do "python setup.py install" or "pip install" or whatever the recommended way is going to be, from my point of view I'm not "building" the package, I'm *installing* it. Confusion arises because the process of installation may require running a C compiler to generate extension modules. But figuring out how to do that shouldn't require running arbitrary code supplied by the developer. All the tricky stuff should have been done before the package was released. If it's having trouble finding some library or header file or whatever on my system, I'd much rather have a nice, clear declarative config file that I can edit to tell it where to find them, than some overly clever piece of python code that's great when it works but a pain to unravel when it doesn't. -- Greg From fred at fdrake.net Sat May 7 02:25:45 2016 From: fred at fdrake.net (Fred Drake) Date: Sat, 7 May 2016 02:25:45 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f Thank you for this! A very nice summary. On Fri, May 6, 2016 at 11:14 PM, Donald Stufft wrote: > While I personally prefer YAML to any of the options on a purely syntax based > level, when you weigh in all the other considerations for this I think that it > makes sense to go with TOML for it. I expect either YAML or TOML would be acceptable, based on this. I'll admit that I'd not heard of TOML before, but it warrants consideration (possibly for some of my projects as well). I've spent a fair bit of time using YAML with Ansible lately, as well as some time looking at RAML, and don't find myself worried about the syntax. Every oddness I've run across has been handled with an error when the content couldn't be parsed correctly, rather than unexpected behavior resulting from misunderstanding how it would be parsed. It's entirely possible I just haven't run across the particular problems Donald has run across, though. (The embedded Jinja2 in Ansible playbooks is another matter; let's not make that mistake.) -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From alex.gronholm at nextday.fi Sat May 7 04:28:43 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Sat, 7 May 2016 11:28:43 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <572DA73B.1080805@nextday.fi> +1. 
I don't think the pathological cases of YAML syntax are of any concern in this context. Plus it has excellent tooling support, unlike TOML. 07.05.2016, 09:25, Fred Drake kirjoitti: > On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: >> Here's that one-stop writeup/comparison of all the major configuration >> languages that I mentioned: >> >> https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > Thank you for this! A very nice summary. > > On Fri, May 6, 2016 at 11:14 PM, Donald Stufft wrote: >> While I personally prefer YAML to any of the options on a purely syntax based >> level, when you weigh in all the other considerations for this I think that it >> makes sense to go with TOML for it. > I expect either YAML or TOML would be acceptable, based on this. I'll > admit that I'd not heard of TOML before, but it warrants consideration > (possibly for some of my projects as well). > > I've spent a fair bit of time using YAML with Ansible lately, as well > as some time looking at RAML, and don't find myself worried about the > syntax. Every oddness I've run across has been handled with an error > when the content couldn't be parsed correctly, rather than unexpected > behavior resulting from misunderstanding how it would be parsed. It's > entirely possible I just haven't run across the particular problems > Donald has run across, though. > > (The embedded Jinja2 in Ansible playbooks is another matter; let's not > make that mistake.) > > > -Fred > From tritium-list at sdamon.com Sat May 7 05:55:18 2016 From: tritium-list at sdamon.com (tritium-list at sdamon.com) Date: Sat, 7 May 2016 05:55:18 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <572DA73B.1080805@nextday.fi> References: <572DA73B.1080805@nextday.fi> Message-ID: <16def01d1a846$90866940$b1933bc0$@hotmail.com> I am +1 to TOML; it's INI (a human editable format) with data-types (I think it is even valid INI). I find the format pleasant to work with both in the available libraries and in the editor. -----Original Message----- From: Distutils-SIG [mailto:distutils-sig-bounces+tritium-list=sdamon.com at python.org] On Behalf Of Alex Gr?nholm Sent: Saturday, May 7, 2016 4:29 AM To: distutils-sig at python.org Subject: Re: [Distutils] comparison of configuration languages +1. I don't think the pathological cases of YAML syntax are of any concern in this context. Plus it has excellent tooling support, unlike TOML. 07.05.2016, 09:25, Fred Drake kirjoitti: > On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: >> Here's that one-stop writeup/comparison of all the major configuration >> languages that I mentioned: >> >> https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > Thank you for this! A very nice summary. > > On Fri, May 6, 2016 at 11:14 PM, Donald Stufft wrote: >> While I personally prefer YAML to any of the options on a purely syntax based >> level, when you weigh in all the other considerations for this I think that it >> makes sense to go with TOML for it. > I expect either YAML or TOML would be acceptable, based on this. I'll > admit that I'd not heard of TOML before, but it warrants consideration > (possibly for some of my projects as well). > > I've spent a fair bit of time using YAML with Ansible lately, as well > as some time looking at RAML, and don't find myself worried about the > syntax. 
Every oddness I've run across has been handled with an error > when the content couldn't be parsed correctly, rather than unexpected > behavior resulting from misunderstanding how it would be parsed. It's > entirely possible I just haven't run across the particular problems > Donald has run across, though. > > (The embedded Jinja2 in Ansible playbooks is another matter; let's not > make that mistake.) > > > -Fred > _______________________________________________ Distutils-SIG maillist - Distutils-SIG at python.org https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Sat May 7 06:00:27 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 May 2016 11:00:27 +0100 Subject: [Distutils] moving things forward In-Reply-To: <572D3D0E.3050307@canterbury.ac.nz> References: <571F7134.80709@stoneleaf.us> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> <572D3D0E.3050307@canterbury.ac.nz> Message-ID: tl;dr version I think you're right that terminology can be confusing. I think the definitions people typically work to are: 1. The "packaging" or "release" process - the process (run on a developer's machine) of creating files that get published for users to download and install. 2. The "install" process - the process (run on a user's machine) of taking a published file and making it available in their environment. This consists of separate steps: 2a. Optional, and to be avoided wherever possible (by distribution of wheels) - the "build" step that takes a published file and configures (compiles) it for the user's environment 2b. The "install" step (confusion alert! - the "install" step is only one step of the "install" *process*) that puts the files in the correct places on the user's machine. We're not interested in trying to dictate the "packaging" process - pip isn't involved in that process at all (see flit for a system that lets projects build releases in a completely different way). Sigh. Even the tl;dr version is too long :-) On 7 May 2016 at 01:55, Greg Ewing wrote: > Chris Barker wrote: >> >> But I think there is consensus here that build systems need to be >> customisable -- which means arbitrary code may have to be run. > > I think different people are using the word "build" in > different ways here. > > To my mind, "building" is what the developer of a package > does, and a "build system" is what he uses to do it. I > don't care how much arbitrary code gets run during that > process. That is correct, and I agree with you that making a build process like this declarative is not particularly useful. However... > But when I do "python setup.py install" or "pip install" > or whatever the recommended way is going to be, from my > point of view I'm not "building" the package, I'm > *installing* it. Unfortunately, "python setup.py install" does not work that way - it builds the project and then installs the files. So whether you want to or not, you're building. That's basically why we're trying to make "pip install foo" the canonical way of installing packages. So let's ignore "setup.py install" for the rest of this discussion. Now, for "pip install foo", *if the foo project provides a wheel compatible with your system* then what you expect is what you get - a pure install with no build step. The problem lies with projects that don't supply wheels, only source. 
Or unusual systems that we can't expect projects to have wheels for. Or local checkouts ("pip install ."). In those cases, it's necessary to do a build before you can install. So while we're aiming for 80% or more of the time "pip install" to do a pure install from a binary distribution, we can't avoid the fact that occasionally the install will need to run an implicit build step. > Confusion arises because the process of installation may > require running a C compiler to generate extension modules. > But figuring out how to do that shouldn't require > running arbitrary code supplied by the developer. All the > tricky stuff should have been done before the package > was released. I'm not sure I follow what you are suggesting here. Do you expect that projects should be able to publish something (it's not exactly a sdist, but it's not a wheel either as it doesn't contain everything built) should (somehow) contain simplified instructions on how to build the various C/Fortran extensions supplied in the bundle as source code? That's an interesting idea, but how would it work in practice? The bundles would need to be platform specific, I assume? Or would the user need to put all the various details of his system into a configuration file somewhere (and update that file whenever he installs a new library, updates his compiler, or whatever)? How would this cope with (for example) projects on Windows that *have* to be compiled with mingw, and not with MSVC? > If it's having trouble finding some library or header > file or whatever on my system, I'd much rather have a > nice, clear declarative config file that I can edit to > tell it where to find them, than some overly clever > piece of python code that's great when it works but > a pain to unravel when it doesn't. This sounds to me more like an attempt to replace the "build" part of distutils/setuptools with a more declarative system. While that may be a worthwhile goal (I genuinely have no opinion on that) it's largely orthogonal to the current discussions. Except in the sense that if you were to build such a system, the proposals currently on the table would allow you to ask pip to use that build system rather than setuptools: # Invented syntax, because syntax is what we *haven't* agreed on yet :-) [buildsystem] requires=gregsbuild build_command=gregsbuild --make-wheel Then if a user on a system for which the project doesn't have a binary wheel installed tries to install the project, your build system will be downloaded, and the "gregsbuild --make-wheel" command will be run to make a wheel. That's a one-off event - the wheel will be cached on the user's system for future use. I think the key point to understand is that of necessity, "pip install" runs two *steps* - one is the obvious install step, the other is a build step. We're working to reduce the number of cases where a build step is needed as far as we can, but the discussion here is about making life easier for projects who can't provide wheels for all their users, and need the build *step* (the one run on the user's machine, not the build *process* - which I'd describe as a "release" or "packaging" process, as it creates the distribution files made available to users - that runs on the developer's machine. 
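(And just to connect this to the configuration-language thread: if something like TOML ends up being the chosen format, that same invented section might be spelled roughly

    [buildsystem]
    requires = ["gregsbuild"]
    build_command = "gregsbuild --make-wheel"

but the file name, section name and key names are all exactly the things still being bikeshedded, so treat that purely as illustration.)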
Paul From p.f.moore at gmail.com Sat May 7 06:22:34 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 May 2016 11:22:34 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On 7 May 2016 at 04:14, Donald Stufft wrote: > While I personally prefer YAML to any of the options on a purely syntax based > level, when you weigh in all the other considerations for this I think that it > makes sense to go with TOML for it. > > The only other option I think that could work is what Chris (I think?) > suggested and just use a Python literal evaluated using ``ast.literal_eval()`` > this is safe to do but it would make it harder for other languages to parse our > files. It's similar to the approach taken by Lua Rocks for how their packaging > system works (although their uses variables instead of one big dictionary which > I think looks nicer) but Lua is much better suited for trying to execute safely > outside of ``ast.literal_eval()`` too. > > All in all, I think TOML is the right answer (and that's why my partially > written PEP used TOML). FWIW (and because obsessing about config formats is a long-running hobby of mine :-)) my view is: - JSON sucks as a human-editable format, because it's too strict over things like commas and has no comments. It's supported by the stdlib, though, which is nice. - YAML ought to be wonderful, but it ended up over-engineered (yes, we can ignore the bits we don't care about). Also, pyYAML is a bit of an annoying dependency (big, reportedly slow unless you use the C version) - not something I'd want pip to have to vendor. - INI (ConfigParser) format is too simple for many use cases. It has stdlib support, though. - Python literals are good, but they define values, they aren't a file format. We'd need to write a parser for "the rest" - TOML seems pretty good, but is immature (I remember when YAML seemed like TOML does now...) Also, it's unfamiliar to people (I wasn't aware of the use in Rust) - Just for a laugh, can I mention XML? :-) Overall, *any* choice feels like I'm choosing the "best of a bad job" :-( With all that said, I'm inclined to agree that TOML looks good. Paul From cournape at gmail.com Sat May 7 07:20:58 2016 From: cournape at gmail.com (David Cournapeau) Date: Sat, 7 May 2016 12:20:58 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: A missing dimension for comparison: round tripping support. It is quite useful for formats when used as a configuration. The best I know in that dimension is yaml (if using ruamel.yaml), which round trip comments. OTOH, adding round tripping to something like toml should not be too hard if the need arises. David On Sat, May 7, 2016 at 3:59 AM, Nathaniel Smith wrote: > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wes.turner at gmail.com Sat May 7 08:06:46 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 7 May 2016 07:06:46 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: <572DA73B.1080805@nextday.fi> References: <572DA73B.1080805@nextday.fi> Message-ID: +1 for YAML YAML-LD (YAML & JSONLD) would make expressing the actual graphs for what could be "#PEP426JSONLD" much easier. https://github.com/pypa/interoperability-peps/issues/31 On Saturday, May 7, 2016, Alex Gr?nholm wrote: > +1. I don't think the pathological cases of YAML syntax are of any concern > in this context. Plus it has excellent tooling support, unlike TOML. > > 07.05.2016, 09:25, Fred Drake kirjoitti: > >> On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: >> >>> Here's that one-stop writeup/comparison of all the major configuration >>> languages that I mentioned: >>> >>> https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f >>> >> Thank you for this! A very nice summary. >> >> On Fri, May 6, 2016 at 11:14 PM, Donald Stufft wrote: >> >>> While I personally prefer YAML to any of the options on a purely syntax >>> based >>> level, when you weigh in all the other considerations for this I think >>> that it >>> makes sense to go with TOML for it. >>> >> I expect either YAML or TOML would be acceptable, based on this. I'll >> admit that I'd not heard of TOML before, but it warrants consideration >> (possibly for some of my projects as well). >> >> I've spent a fair bit of time using YAML with Ansible lately, as well >> as some time looking at RAML, and don't find myself worried about the >> syntax. Every oddness I've run across has been handled with an error >> when the content couldn't be parsed correctly, rather than unexpected >> behavior resulting from misunderstanding how it would be parsed. It's >> entirely possible I just haven't run across the particular problems >> Donald has run across, though. >> >> (The embedded Jinja2 in Ansible playbooks is another matter; let's not >> make that mistake.) >> >> >> -Fred >> >> > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Sat May 7 09:17:52 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 08 May 2016 01:17:52 +1200 Subject: [Distutils] moving things forward In-Reply-To: References: <571F7134.80709@stoneleaf.us> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> <572D3D0E.3050307@canterbury.ac.nz> Message-ID: <572DEB00.20401@canterbury.ac.nz> Paul Moore wrote: > Do you expect that > projects ... should (somehow) contain simplified instructions on how to > build the various C/Fortran extensions supplied in the bundle as > source code? Essentially, yes. I'm not sure how achievable it would be, but ideally that's what I'd like. > would the user need to put all the various details of his system into > a configuration file somewhere (and update that file whenever he > installs a new library, updates his compiler, or whatever)? On a unix system, most of the time they would all be in well-known locations. If I install something in an unusual place or in an unusual way, I'm going to have to tell something about it anyway. 
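(For the common cases that kind of "telling" can already be done declaratively: distutils picks up per-command options from a setup.cfg sitting next to setup.py, or from a personal pydistutils.cfg, so something along these lines -- the paths are made up, obviously --

    [build_ext]
    include_dirs=/opt/mylibs/include
    library_dirs=/opt/mylibs/lib

    [build]
    compiler=mingw32

is enough to point the compile step at headers and libraries in odd places, or to pick a different compiler. That's the flavour of config file I'd like to see be the *normal* interface, rather than a fallback.)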
I don't see how an executable setup file provided by the package author is going to magically figure out all the weird stuff I've done. I don't know if there are conventions for such things on Windows. I suspect not, in which case manual input is going to be needed one way or another. > How would > this cope with (for example) projects on Windows that *have* to be > compiled with mingw, and not with MSVC? An option to specify which compiler to use would be a fairly obvious thing that the config format should provide. As would a way to include different option settings for different platforms. Combine them in the obvious way. > This sounds to me more like an attempt to replace the "build" part of > distutils/setuptools with a more declarative system. Not all of it, only the parts that strictly have to be performed as part of the build step of the install process, to use your terminology. That's a fairly restricted subset of the whole problem of compiling software. (Ideally, I would make it *very* restricted, such as only compiling C code (and possibly C++, not sure). For anything else you would have to write a C wrapper or use a tool that generates C. But that might be a step too far.) Could a purely declarative config file be flexible enough to handle this? I don't know. The distutils API is actually pretty declarative already if you use it in a straightforward way. The trouble is that there's nothing to prevent, or even discourage, writing a setup.py that works in non-straightforward ways. I've seen some pretty convoluted setup.py code that worked great when your system happened to match one of the possibilites that the author had in mind, but was very difficult to deal with otherwise. If I'd been able to just set a few compile options and library paths directly, it would have been a lot easier. There's also nothing to prevent the setup.py from doing things that don't strictly need to be done at install time. Running Pyrex to generate .c files from .pyx files is one that I've encountered. (I encouraged that one myself by including a distutils extension for it, which I later decided had been a mistake.) > the proposals currently on the table > would allow you to ask pip to use that build system rather than > setuptools: > > [buildsystem] > requires=gregsbuild > build_command=gregsbuild --make-wheel That's nice, but it wouldn't help me when I encounter a package that *hadn't* been set up to use gregsbuild. :-( -- Greg From ncoghlan at gmail.com Sat May 7 09:51:55 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 7 May 2016 23:51:55 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 7 May 2016 01:55, "Chris Barker" wrote: > So my point is about scope-creep -- if you want the PyPa tools to solve all these problems, then you are re-inventing conda -- better to simply adopt conda (or fork it and fix issues that I'm sure are there....) 
conda doesn't solve these problems either - it solves the *end user* problem for data analysts (install the Python library they want to use), by ignoring the system integrator one (interoperate with the system integrator's existing platform level package management systems, of which we all tend to have our own with no plans for consolidation) That's by design, though - conda was specifically created as a language independent cross-platform platform, not as a cross-platform plugin management system for Python runtimes. For a long time I was convinced the existence of conda and Linux containers as end user tools meant we wouldn't ever need to solve these problems at the language runtime layer, but it's since become clear to me that there's significant value in solving these problems in a way that doesn't care about how you obtained your Python runtime. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sat May 7 09:55:48 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 7 May 2016 14:55:48 +0100 Subject: [Distutils] moving things forward In-Reply-To: <572DEB00.20401@canterbury.ac.nz> References: <571F7134.80709@stoneleaf.us> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> <572D3D0E.3050307@canterbury.ac.nz> <572DEB00.20401@canterbury.ac.nz> Message-ID: On 7 May 2016 at 14:17, Greg Ewing wrote: > I don't know if there are conventions for such things on > Windows. I suspect not, in which case manual input is > going to be needed one way or another. There aren't. You typically need to specify the exact locations of all non-system libraries you use when you link. Judging by the various complex systems in use (autoconf, pkg_config, ...) I don't think it's as simple as you're suggesting on Unix either. The complexities of configuring and building C libraries are a hard problem, and one that people have been trying to solve for years. It's not an area we (the Python packaging community) have any interest or intention of getting involved in. Distutils went there and provided a solution that did a fantastic job of solving the simpler parts of the problem[1], but these days, the limitations of that approach are clear - and we'd much rather enable specialists to build better (or more specific) solutions and plug them into pip, than (as non-experts) try to write those solutions for them. > That's nice, but it wouldn't help me when I encounter > a package that *hadn't* been set up to use gregsbuild. :-( And ultimately that's the key issue that pip has to deal with - we have to provide support for the many thousands of packages on PyPI and elsewhere (including in closed-source in-house projects that we aren't even theoretically capable of finding out about) that as you say haven't been set up to use gregsbuild. You suggest a declarative way of specifying compilation details. But I'm not clear how *that* would help when I encounter a project that hasn't been set up to use that new system, either. Paul [1] I was around before distutils, and it's incredibly easy these days, with such a high awareness of the limitations of distutils, to forget what a huge improvement it was over the previous situation. I strongly believe that distutils - and the way it enabled all of Python's subsequent packaging infrastructure - is one of the key reasons why Python has become as popular as it is today. The debt we owe to Greg Ward, who wrote distutils, is huge. 
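P.S. To make the "exact locations" point at the top of this mail concrete, here is the sort of thing a setup.py has to spell out today for a C extension that links against a non-system library on Windows -- the module name, paths and library name are all invented for the example:

    from setuptools import setup, Extension

    ext = Extension(
        "example._speedups",                    # hypothetical extension module
        sources=["src/_speedups.c"],
        include_dirs=[r"C:\libs\foo\include"],  # where the headers happen to live
        library_dirs=[r"C:\libs\foo\lib"],      # where the link library lives
        libraries=["foo"],                      # link against foo.lib
    )

    setup(name="example", version="0.0", ext_modules=[ext])

There's no pkg-config equivalent to consult on Windows, so those paths have to come from *somewhere* - either hard-coded guesses in the project's setup.py, or input from whoever is running the build.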
From waynejwerner at gmail.com Sat May 7 10:03:21 2016 From: waynejwerner at gmail.com (Wayne Werner) Date: Sat, 7 May 2016 09:03:21 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On May 6, 2016 10:14 PM, "Donald Stufft" wrote: > > While I personally prefer YAML to any of the options on a purely syntax based > level, when you weigh in all the other considerations for this I think that it > makes sense to go with TOML for it. I feel the same way. I use YAML fairly extensively with Salt, and while most of the basic cases are usually fine, any time I try to do something advanced, I find that it takes a few tries to get right. Also note with YAML that whichever library we picked would *also* become a pip upstream, FWIW. > The only other option I think that could work is what Chris (I think?) > suggested and just use a Python literal evaluated using ``ast.literal_eval()`` > this is safe to do but it would make it harder for other languages to parse our > files. It's similar to the approach taken by Lua Rocks for how their packaging > system works (although their uses variables instead of one big dictionary which > I think looks nicer) but Lua is much better suited for trying to execute safely > outside of ``ast.literal_eval()`` too. I'd be interested to see what that option looked like, though I *suspect* that the vagaries in unicode/non-unicode between 2.x and 3.x may produce the same weakness that we saw in the ConfigParser option. -W -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sat May 7 10:46:53 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sat, 7 May 2016 09:46:53 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <572DA73B.1080805@nextday.fi> Message-ID: TOML-LD might work for representing JSONLD, as well. http://json-ld.org/#developers * https://github.com/RDFLib/rdflib-jsonld * https://github.com/digitalbazaar/pyld JSON-LD as a target makes sense because we're describing nodes (with attributes) and edges in a package graph. On Sat, May 7, 2016 at 7:06 AM, Wes Turner wrote: > +1 for YAML > > YAML-LD (YAML & JSONLD) would make expressing the actual graphs for what > could be "#PEP426JSONLD" much easier. > > https://github.com/pypa/interoperability-peps/issues/31 > > > On Saturday, May 7, 2016, Alex Gr?nholm wrote: > >> +1. I don't think the pathological cases of YAML syntax are of any >> concern in this context. Plus it has excellent tooling support, unlike TOML. >> >> 07.05.2016, 09:25, Fred Drake kirjoitti: >> >>> On May 6, 2016, at 10:59 PM, Nathaniel Smith wrote: >>> >>>> Here's that one-stop writeup/comparison of all the major configuration >>>> languages that I mentioned: >>>> >>>> https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f >>>> >>> Thank you for this! A very nice summary. >>> >>> On Fri, May 6, 2016 at 11:14 PM, Donald Stufft wrote: >>> >>>> While I personally prefer YAML to any of the options on a purely syntax >>>> based >>>> level, when you weigh in all the other considerations for this I think >>>> that it >>>> makes sense to go with TOML for it. >>>> >>> I expect either YAML or TOML would be acceptable, based on this. I'll >>> admit that I'd not heard of TOML before, but it warrants consideration >>> (possibly for some of my projects as well). 
>>> >>> I've spent a fair bit of time using YAML with Ansible lately, as well >>> as some time looking at RAML, and don't find myself worried about the >>> syntax. Every oddness I've run across has been handled with an error >>> when the content couldn't be parsed correctly, rather than unexpected >>> behavior resulting from misunderstanding how it would be parsed. It's >>> entirely possible I just haven't run across the particular problems >>> Donald has run across, though. >>> >>> (The embedded Jinja2 in Ansible playbooks is another matter; let's not >>> make that mistake.) >>> >>> >>> -Fred >>> >>> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat May 7 10:48:57 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 May 2016 00:48:57 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On 7 May 2016 13:00, "Nathaniel Smith" wrote: > > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f Thanks for that, and "yikes" on the comment handling variations in ConfigParser - you can tell I've never even tried to use end-of-line comments in INI files, and apparently neither has anyone I've worked with :) For YAML, my main concern isn't quirkiness of the syntax, or code quality in PyYAML, it's the ease with which you can expose yourself to security problems (even if *pip* loads the config file safely, that doesn't mean every other tool will). Since we don't need the extra power, the easiest way to reduce the collective attack surface is to use a strictly less powerful (but still sufficient) format. For ast.literal_eval, we'd still need to come up with a way to do sections, key:value mappings and define rules for comments. For completeness, I'll note that XML combines even more user unfriendly syntax than JSON with similar security risks to YAML. So with the trade-offs laid out like that (and particularly the inconsistent comment and Unicode handling in ConfigParser), I'm prompted to favour following Rust in adopting TOML. Cheers, Nick. P.S. I particularly like the idea of using extension sections to eventually consolidate other static config into a common file - that nicely addresses my concern with config file proliferation, since it opens the door to eventually subsuming other files like MANIFEST.in and setup.cfg as archiving and build systems are updated > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Sat May 7 12:32:35 2016 From: brett at python.org (Brett Cannon) Date: Sat, 07 May 2016 16:32:35 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On Sat, 7 May 2016 at 07:49 Nick Coghlan wrote: > > On 7 May 2016 13:00, "Nathaniel Smith" wrote: > > > > Here's that one-stop writeup/comparison of all the major configuration > > languages that I mentioned: > > > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > Thanks for that, and "yikes" on the comment handling variations in > ConfigParser - you can tell I've never even tried to use end-of-line > comments in INI files, and apparently neither has anyone I've worked with :) > Yeah, that's pretty bad. :/ I checked when ConfigParser was added to Python and it's late 1997: https://hg.python.org/cpython/rev/5b24cbb1f99b, so rather old and predates our stricter code quality rules for additions to the stdlib. > For YAML, my main concern isn't quirkiness of the syntax, or code quality > in PyYAML, it's the ease with which you can expose yourself to security > problems (even if *pip* loads the config file safely, that doesn't mean > every other tool will). Since we don't need the extra power, the easiest > way to reduce the collective attack surface is to use a strictly less > powerful (but still sufficient) format. > > For ast.literal_eval, we'd still need to come up with a way to do > sections, key:value mappings and define rules for comments. > > For completeness, I'll note that XML combines even more user unfriendly > syntax than JSON with similar security risks to YAML. > > So with the trade-offs laid out like that (and particularly the > inconsistent comment and Unicode handling in ConfigParser), I'm prompted to > favour following Rust in adopting TOML. > +1 for TOML from me as well. I know Paul brought up the lack of familiarity, but the format is simple and the Rust community is already fully dependent on it so at worst Rust + us could always just ignore future format versions if necessary. If TOML is the chosen format we could ask how long until a 1.0 release to know if we waited a month or so to implement we could make sure we're compliant with that version. I also checked pytoml at https://github.com/avakar/pytoml and it looks like it's pretty stable; no changes in the past 5 months except to support Python 3.5 and only 3 issues. And the format is simple enough that if someone had to fork the code like Nathaniel suggested or we did it from scratch it wouldn't be a huge burden. -Brett > Cheers, > Nick. > > P.S. I particularly like the idea of using extension sections to > eventually consolidate other static config into a common file - that nicely > addresses my concern with config file proliferation, since it opens the > door to eventually subsuming other files like MANIFEST.in and setup.cfg as > archiving and build systems are updated > > > > > -n > > > > -- > > Nathaniel J. Smith -- https://vorpus.org > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Sat May 7 14:15:50 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Sat, 7 May 2016 11:15:50 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Sat, May 7, 2016 at 6:51 AM, Nick Coghlan wrote: > On 7 May 2016 01:55, "Chris Barker" wrote: > > So my point is about scope-creep -- if you want the PyPa tools to solve > all these problems, then you are re-inventing conda -- better to simply > adopt conda (or fork it and fix issues that I'm sure are there....) > > conda doesn't solve these problems either - it solves the *end user* > problem for data analysts (install the Python library they want to use), > I really need to make this clear -- there is NOTHING "data analyst" specific about these problems -- they do come up more in the computational programming domain, but there are any number of other application that have the same problems (pyQT has come up in this conversation, yes?) -- and we're finding conda to be a good solution for our web development needs, too -- a conda environment is kinda like a lighter-weight, platform independent docker container. And of course, there is more an more data analysis going on behind web services these days too -- any python developer is going to run into these issues eventually... > by ignoring the system integrator one (interoperate with the system > integrator's existing platform level package management systems, of which > we all tend to have our own with no plans for consolidation) > > That's by design, though - conda was specifically created as a language > independent cross-platform platform, not as a cross-platform plugin > management system for Python runtimes. > exactly! > For a long time I was convinced the existence of conda and Linux > containers as end user tools meant we wouldn't ever need to solve these > problems at the language runtime layer, but it's since become clear to me > that there's significant value in solving these problems in a way that > doesn't care about how you obtained your Python runtime. > yup -- that would be the scope creep I'm referring too :-) But now I'm confused about what problems we're talking about. From above: """ interoperate with the system integrator's existing platform level package management systems """ you mean rpm, .deb, homebrew, etc? distutils did have (does it still) bdist_rpm (and, for that matter bdist_wininst) -- which I see as an attempt to inter-operate with the system integrator's platform. But I think it turns out that this is a hopeless task -- there are just too many different systems to consider and maintain -- much better to let the external systems handle that. And it got really ugly when you wanted to mingle virtualenv with rpm, etc.... The "solution" to that is to actually not do it -- use Docker or Conda if you want an isolated environment. What all this means to me is that we really do need to keep the building separate from the packaging -- so that the building tools can be used / supported by the systems integrator. But people shouldn't be using rpm to manage their system, and expect to install binary wheels... The trick is that the pip/wheel system is really handy -- certainly for Windows and OS-X that don't provide a standard system package manager. 
It is nice to be able to "just pip install" various packages. But the problem is that it is a half way solution -- it only works well for python-centered packages -- pure python is trivial of course, but custom written C extensions work great too -- but as soon as you get external dependencies, it all goes to heck. We can improve that -- defining the manylinux platform is a great step, but to really get there, there needs to be a way to handle third party libs (which we could probably do with the existing tools by establishing conventions), but then you get to non-python support tools: Fortran compilers, other language run times, all sorts of stuff. Oh, and then we want isolated environments, and they need to isolate these other non-python tools, so .... I think it's WAY beyond the current PEP under discussion, but we really should map out a "where we'd like to get" plan. "significant value in solving these problems in a way that doesn't care about how you obtained your Python runtime." I don't get this -- I don't think it's possible to not care about the python run-time -- we've only gotten this far because most folks have more or less standardized on the python.org builds as the standard python build. Even when you get your python elsewhere (Anaconda, for instance) they take pains to keep it compatible. I think we should count on python itself being managed by the same package manager as the packages -- or at least conforming to a standard (which I think we have with teh ABI tags, yes?) BTW, I only recently learned that there are more "other" python builds out there than I thought. Apparently the Visual Effect folks are on the habit of building their own python2.7 on Windows, so they can use a newer MS compiler, and then build their extensions to be compatible (all so you can use modern C++ in the extension code). I don't know that they are careful about setting the ABI tags, but they could be. > For a long time I was convinced the existence of conda and Linux containers as end user > tools meant we wouldn't ever need to solve these problems at the language runtime layer to an extent that is true -- but it does mean a proliferation of options, and as I think you pointed out earlier, that sucks because you need someone to support all those options -- for instance, most conda packages are built by third parties now - not the original package author. This is (very slowly) starting to change (Ned Batchelder recently asked about building conda packages for coverage...). and of course, the same is true with homebrew, and rpm, and deb, and.... So yeah, it would be nice to have ONE thing that python package developers need to do to make their work available to everyone. So I'll leave off with this: If there is even going to be one way for python package developers to provide binary packages for their users -- that system will need to support all (most) of what conda supports. Every feature in conda was added because it was needed. And THAT's what I've been meaning every time i say "reimpliment conda" -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Sat May 7 14:18:18 2016 From: brett at python.org (Brett Cannon) Date: Sat, 07 May 2016 18:18:18 +0000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Fri, 6 May 2016 at 16:58 Nathaniel Smith wrote: > On Fri, May 6, 2016 at 11:14 AM, Brett Cannon wrote: > > > > > > On Fri, 6 May 2016 at 09:40 Donald Stufft wrote: > >> > >> > >> On May 6, 2016, at 12:36 PM, Brett Cannon wrote: > >> > >> So who is the BDFL on this decision? It seems we need someone to stop > the > >> bikeshedding on the field name and what file is going to house this > >> configuration data. And do we need someone to write a PEP for this > proposal > >> to have something to target? > >> > >> > >> We need someone to write the PEP and someone to offer to be BDFL for it. > >> For this particular change the default would be Nick for BDFL but if > he?s > >> busy someone else can take it over for this PEP. Though I think we need > >> someone writing an actual PEP first. > > > > > > OK, assuming the Nick will be pronouncing, who wants to write the PEP? > And Paul also stepped forward to pronounce if Nick didn't want to, so we have the role of Great Decider covered one way or another. > > I've just been writing up a comparison of the different file formats, > partly in case it's useful to others and partly just for my own use in > looking at them against each other and seeing how much it actually > matters. This might also be reusable for the > rationale/rejected-alternatives section of a PEP, if anyone wants it, > or I could go ahead and add a few paragraphs to turn it into a proper > PEP. > What does the PEP need to cover? 1. The syntax of the file (which based on the replies to your great overview, Nathaniel, looks to be TOML). 2. The name of the file (although I'm assuming it will be setup.* out of tradition, probably setup.toml if TOML wins the format battle). 3. What fields there will be and their semantics ... 1. Format version (so just deciding on a name -- which also includes whether it should be top-level or in a subsection -- and initial value) 2. The actual bootstrap field (so its name and what to do if a dependency is already installed but at a version that doesn't match what the bootstrap specification asks for) Am I missing anything? And since I keep pushing on this I'm willing to be a co-author on any PEP if there's no hard deadline in getting the PEP written (i.e. I can help write the prose, but I don't have the time to do the coding as this will be the fourth PEP I have going in some state; got to catch up to Nick's 35 PEPs somehow ;). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Sat May 7 15:11:15 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Sat, 7 May 2016 12:11:15 -0700 Subject: [Distutils] moving things forward In-Reply-To: <572DEB00.20401@canterbury.ac.nz> References: <571F7134.80709@stoneleaf.us> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BC757.5020702@canterbury.ac.nz> <572C2F79.3010906@canterbury.ac.nz> <572D3D0E.3050307@canterbury.ac.nz> <572DEB00.20401@canterbury.ac.nz> Message-ID: On Sat, May 7, 2016 at 6:17 AM, Greg Ewing wrote: > Do you expect that > >> projects ... should (somehow) contain simplified instructions on how to >> build the various C/Fortran extensions supplied in the bundle as >> source code? >> > > Essentially, yes. I'm not sure how achievable it would > be, but ideally that's what I'd like. > I think we've all come to conclusion that that's simply not possible -- build configuration cannot be purely declarative in the general case -- if you are building a package you are going to be running arbitrary code (and really, if you're running a compiler, you're doing that anyway, so there isn't an additional security issue here -- if you trust the code, you might as well trust the build script) > On a unix system, most of the time they would all be in > well-known locations. If I install something in an unusual > place or in an unusual way, I'm going to have to tell > something about it anyway. I don't see how an executable > setup file provided by the package author is going to > magically figure out all the weird stuff I've done. > You can look at any number of packages -- essentially, they do what configure scripts do with autotools -- various hacky ways to find the stuff it's looking for. and users really want this -- maybe on a "normal" *nix system there isn't much to it, but on OS-X there sure is -- people may have hand installed the dependencies, or they may have used fink, or macports, or homebrew, or... and, except for the hand-install case, they may have no idea what the heck they did and where it is. I don't know if there are conventions for such things on > Windows. I suspect not, in which case manual input is > going to be needed one way or another. > Windows is even worse -- not only no conventions, but also no common package manager, either (at least OS-X is limited to four :-) ) Not all of it, only the parts that strictly have to be > performed as part of the build step of the install > process, to use your terminology. That's a fairly > restricted subset of the whole problem of compiling > software. > I don't think it's possible (or desirable) to make a clear distinction between "the build step of the install process" and building in general. Could a purely declarative config file be flexible > enough to handle this? I don't know. The distutils > API is actually pretty declarative already if you > use it in a straightforward way. > indeed -- and for the common cases, that works fine, but there's always SOMETHING That some weird case is going to need. You could, I suppose, separate out the configuration step from the build step -- analogous to ./configure, vs make. So the configure step would generate a purely declarative config file of some sort, and then the build step would use that. In the simple case, there might be no need for a configure step at all. though I'm not sure even this is possible -- for instance, numpy used a heavily enhanced distutils to do it's thing. Cython extends distutils to understand the "build the C from the pyx" step, etc.... 
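For example, a minimal sketch of what that hand-off looks like today with Cython
(``example.pyx`` is just a placeholder module name, and this assumes Cython is
installed):

    from setuptools import setup
    from Cython.Build import cythonize

    # The setup() call itself stays declarative; cythonize() is the piece of
    # third-party code that quietly adds the .pyx -> .c translation step to
    # the extension build.
    setup(
        name="example",
        ext_modules=cythonize(["example.pyx"]),
    )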
This is still used declaratively, but it's third party code that is doing part
of the build step -- so in order to make the build system extendable, you need
it to run code....

Anyway, I thought it was clear that we need to make the distinction between
building and installing/packaging, etc. clear -- both from an API perspective
and a user perspective. So at this point, we need to establish an API to the
build system (probably compatible with what we have (setup.py build)), but we
leave it up to the build system to figure out how to do its thing -- maybe
someone will come up with a purely declarative system -- who knows?

> Running Pyrex to generate .c files
> from .pyx files is one that I've encountered.
> (I encouraged that one myself by including a distutils
> extension for it, which I later decided had been a
> mistake.)

I don't think it was a mistake :-) -- that's got to be done some time -- why
add another layer???

> That's nice, but it wouldn't help me when I encounter
> a package that *hadn't* been set up to use gregsbuild. :-(

sure -- but we can't have a one-build-system-to-rule-them-all until we can
first have a way to have ANY build system other than setuptools :-)

Key issue here:

Right now, in order to make things easier for users, and due to history, the
whole build/package/install processes are fully intermingled. I think this is
a BAD THING, for two reasons:

1) you can't replace any of the tools individually (except I suppose the
packaging part, as it's the last step)

2) users don't know what's going on, or what's going to happen when they try
to "install" something: It's great that you can "just run pip install" and it
often "just works" -- but when it doesn't it's kind of ugly. And there are
security concerns:

When a user runs:

    pip install some_package

they don't know what's going to happen. pip goes and looks on PyPI to see if
it's there. If it's there, it looks for a binary wheel that's compatible with
the current system. If there is one, then it gets installed (and hopefully
works :-)

If it's not there, then it downloads the source archive from PyPI, and tries
to build and install it, by monkey patching setuptools, and then running
"setup.py install". At this point, essentially arbitrary code is being run
(which makes my IT folks nervous, though that code came from the same place a
binary would have...)

This whole thing is great for pure python packages (which don't really need
"building" anyway), but not so great for anything that needs compilation, and
particularly anything that needs third-party libs to compile.

I'd like to see binary packaging more clearly separated from source
(needs-to-be-built) packaging -- so the systems can be more flexible, and so
it's clear to users what exactly is happening, or what's wrong when it doesn't
work.

-CHB

--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov  Sat May 7 15:15:48 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Sat, 7 May 2016 12:15:48 -0700
Subject: [Distutils] who is BDFL for the boostrap/requires declaration?
(was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Sat, May 7, 2016 at 11:18 AM, Brett Cannon wrote: > What fields there will be and their semantics ... > > 1. Format version (so just deciding on a name -- which also includes > whether it should be top-level or in a subsection -- and initial value) > 2. The actual bootstrap field (so its name and what to do if a > dependency is already installed but at a version that doesn't match what > the bootstrap specification asks for) > > Am I missing anything? > So what is this new configuration file supposed to cover? All the package meta-data? i.e. everything that would be needed by a package manager to properly install the package? or the build meta-data: everything needed by the build system to build the package? both in one file? And am missing something? how is this about "bootstrapping" -- to me, bootstrapping is when you need X to build X. Isn't this just regular old configuration: you need x,y to build z? -CHB > And since I keep pushing on this I'm willing to be a co-author on any PEP > if there's no hard deadline in getting the PEP written (i.e. I can help > write the prose, but I don't have the time to do the coding as this will be > the fourth PEP I have going in some state; got to catch up to Nick's 35 > PEPs somehow ;). > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukasz at langa.pl Sat May 7 16:42:59 2016 From: lukasz at langa.pl (=?utf-8?Q?=C5=81ukasz_Langa?=) Date: Sat, 7 May 2016 13:42:59 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: > On May 7, 2016, at 9:32 AM, Brett Cannon wrote: > > On Sat, 7 May 2016 at 07:49 Nick Coghlan > wrote: > > On 7 May 2016 13:00, "Nathaniel Smith" > wrote: > > > > Here's that one-stop writeup/comparison of all the major configuration > > languages that I mentioned: > > > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > Thanks for that, and "yikes" on the comment handling variations in ConfigParser - you can tell I've never even tried to use end-of-line comments in INI files, and apparently neither has anyone I've worked with :) > > Yeah, that's pretty bad. :/ I checked when ConfigParser was added to Python and it's late 1997: https://hg.python.org/cpython/rev/5b24cbb1f99b , so rather old and predates our stricter code quality rules for additions to the stdlib. Some context: treating just semicolons as inline comments was broken anyway as there were legitimate cases where people wanted to use semicolons inside values and it didn?t work. There were more examples of similar features with lurking edge cases. We?ve plowed through them with Fred in 2010 and so configparser in 3.2+ should have less surprising characteristics. 
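For example, in the 3.2+ configparser (and its backport) inline comments are
off by default and have to be switched on explicitly -- a minimal sketch, with
made-up section and option names:

    from configparser import ConfigParser

    SAMPLE = """
    [metadata]
    name = example  ; trailing text
    """

    parser = ConfigParser()
    parser.read_string(SAMPLE)
    # 'example  ; trailing text' -- the ';' stays part of the value
    print(parser["metadata"]["name"])

    parser = ConfigParser(inline_comment_prefixes=(";", "#"))
    parser.read_string(SAMPLE)
    # 'example' -- now treated as an inline comment and stripped
    print(parser["metadata"]["name"])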
Some things we sadly had to leave as they are due to unsolvable backwards
compatibility issues.

We don't have to consider using the legacy ConfigParser even for 2.7 compat at
all as there is a backport of the new version which handles Unicode, etc. I
know, it's due a release, I'm on it. But what I'm saying is, for Python 2.7, we
can and should just bundle the backport and be done with it. If there's issues
with it, let me know and I'll fix 'em.

I was involved in standardizing TOML to some extent, seeing it as a clean way
out from the undefined INI world. Unfortunately it doesn't seem like it got
very popular and the tooling around it is still lacking. This is why I'd still
prefer YAML. Rust is depending on TOML already but that's still very limited
adoption and more cognitive churn for users. OTOH, us becoming serious TOML
users might just change the landscape of its adoption?

--
Lukasz Langa | Facebook
Production Engineer | The Ministry of Silly Walks
(+1) 650-681-7811

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From robertc at robertcollins.net  Sat May 7 17:05:25 2016
From: robertc at robertcollins.net (Robert Collins)
Date: Sun, 8 May 2016 09:05:25 +1200
Subject: [Distutils] comparison of configuration languages
In-Reply-To:
References:
Message-ID:

Couple thoughts.

Firstly, the human-editable bit: who in the last *decade* has been writing
code using a non-syntax-aware/helping editor? It's a supremely uninteresting
aspect IMO.

On ConfigParser - yes, it's horrid. OTOH we do get all the lines reliably, and
setuptools will need to cover the unicode aspect itself in its own time. All
we need to do is permit # inline as a comment - like a requirements.txt file
for pip - and it becomes trivial to parse in all cases that we need *now*.

Either we are defining the long term thing now, in which case that huge pile
of complexity lands on us, and we have to get everything right.

Or we are defining a thing which solves the present bug, and as long as we
make sure it does not bind us in future, we're not hamstrung.

E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name, as
pybuild is a thing in the debian space, and this will confuse the heck out of
folk). https://wiki.debian.org/Python/Pybuild

-Rob

--
Robert Collins
Distinguished Technologist
HP Converged Cloud

From lukasz at langa.pl  Sat May 7 17:42:45 2016
From: lukasz at langa.pl (=?utf-8?Q?=C5=81ukasz_Langa?=)
Date: Sat, 7 May 2016 14:42:45 -0700
Subject: [Distutils] comparison of configuration languages
In-Reply-To:
References:
Message-ID: <86B99E99-358E-432D-83A5-8E1DD5097901@langa.pl>

> On May 7, 2016, at 2:05 PM, Robert Collins wrote:
>
> Couple thoughts.
>
> Firstly, the human-editable bit: who in the last *decade* has been
> writing code using a non-syntax-aware/helping editor? It's a supremely
> uninteresting aspect IMO.

Unless you're faced with adding that syntax highlighting yourself because your
favorite language du jour is not supported.

> On ConfigParser - yes, it's horrid. OTOH we do get all the lines
> reliably, and setuptools will need to cover the unicode aspect itself
> in its own time. All we need to do is permit # inline as a comment -
> like a requirements.txt file for pip - and it becomes trivial to parse
> in all cases that we need *now*.
Yeah, the point about removing implicit inline comment handling was that it?s possible by the application to strip the inline comments itself if needed, whereas it?s impossible to add that information back if it wasn?t really supposed to be a comment. In other words, we can wrap a ConfigParser class around and add this ourselves. -- Lukasz Langa | Facebook Production Engineer | The Ministry of Silly Walks (+1) 650-681-7811 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Sat May 7 18:31:18 2016 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 7 May 2016 15:31:18 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: To further explore what would be involved if we did go down the TOML route, I posted an issue to give the pytoml developer(s) a heads up about this conversation: https://github.com/avakar/pytoml/issues/15 On Fri, May 6, 2016 at 7:59 PM, Nathaniel Smith wrote: > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org -- Nathaniel J. Smith -- https://vorpus.org From donald at stufft.io Sat May 7 18:46:23 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 May 2016 18:46:23 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: > On May 7, 2016, at 5:05 PM, Robert Collins wrote: > > Either we are defining the long term thing now, in which case that > huge pile of complexity lands on us, and we have to get everything > right. > > Or we are defining a thing which solves the present bug, and as long > as we make sure it does not bind us in future, we're not hamstrung. > > E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name, > as pybuild is a thing in the debian space, and this will confuse the > heck out of folk). https://wiki.debian.org/Python/Pybuild I think this is roughly true, we could either do the simplest thing and just add ``setup_requires`` to ``setup.cfg`` and teach pip how to understand them and then worry about a new format later, or we can do a new format now and add a bit of complexity to what we need to specify (though I don't think _too_ much complexity, we don't have to define the build system stuff now, just make sure we don't back ourselves into a corner with that). I think either answer is OK, just the second one is a bit more work and we might either get the start of a better format _now_ or end up regretting what we pick when we add more things to it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From alex.gronholm at nextday.fi Sat May 7 19:05:54 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Sun, 8 May 2016 02:05:54 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <572E74D2.9040308@nextday.fi> 07.05.2016, 17:48, Nick Coghlan kirjoitti: > > > On 7 May 2016 13:00, "Nathaniel Smith" > wrote: > > > > Here's that one-stop writeup/comparison of all the major configuration > > languages that I mentioned: > > > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > Thanks for that, and "yikes" on the comment handling variations in > ConfigParser - you can tell I've never even tried to use end-of-line > comments in INI files, and apparently neither has anyone I've worked > with :) > > For YAML, my main concern isn't quirkiness of the syntax, or code > quality in PyYAML, it's the ease with which you can expose yourself to > security problems (even if *pip* loads the config file safely, that > doesn't mean every other tool will). Since we don't need the extra > power, the easiest way to reduce the collective attack surface is to > use a strictly less powerful (but still sufficient) format. > Sounds like a far-fetched hypothetical problem. You're concerned about the custom tags provided by PyYAML? Do you happen to know a tool that defaults to loading files in unsafe mode? > > For ast.literal_eval, we'd still need to come up with a way to do > sections, key:value mappings and define rules for comments. > > For completeness, I'll note that XML combines even more user > unfriendly syntax than JSON with similar security risks to YAML. > > So with the trade-offs laid out like that (and particularly the > inconsistent comment and Unicode handling in ConfigParser), I'm > prompted to favour following Rust in adopting TOML. > > Cheers, > Nick. > > P.S. I particularly like the idea of using extension sections to > eventually consolidate other static config into a common file - that > nicely addresses my concern with config file proliferation, since it > opens the door to eventually subsuming other files like MANIFEST.in > and setup.cfg as archiving and build systems are updated > > > > > -n > > > > -- > > Nathaniel J. Smith -- https://vorpus.org > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > > https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald at stufft.io Sat May 7 19:08:25 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 7 May 2016 19:08:25 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <572E74D2.9040308@nextday.fi> References: <572E74D2.9040308@nextday.fi> Message-ID: > On May 7, 2016, at 7:05 PM, Alex Gr?nholm wrote: > > 07.05.2016, 17:48, Nick Coghlan kirjoitti: >> >> On 7 May 2016 13:00, "Nathaniel Smith" < njs at pobox.com > wrote: >> > >> > Here's that one-stop writeup/comparison of all the major configuration >> > languages that I mentioned: >> > >> > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f >> Thanks for that, and "yikes" on the comment handling variations in ConfigParser - you can tell I've never even tried to use end-of-line comments in INI files, and apparently neither has anyone I've worked with :) >> >> For YAML, my main concern isn't quirkiness of the syntax, or code quality in PyYAML, it's the ease with which you can expose yourself to security problems (even if *pip* loads the config file safely, that doesn't mean every other tool will). Since we don't need the extra power, the easiest way to reduce the collective attack surface is to use a strictly less powerful (but still sufficient) format. >> > Sounds like a far-fetched hypothetical problem. You're concerned about the custom tags provided by PyYAML? Do you happen to know a tool that defaults to loading files in unsafe mode? Yea, pyYAML itself does (yaml.load() does it unsafely, you have to use yaml.safe_load()). I don?t think it?s that big of a deal though, we could easily add a thing to PyPI that rejects any YAML file that can?t be parsed in safe mode. The bigger deal to me is just that the library to work with it is a bit of a bear to use as a dependency. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Sat May 7 19:11:23 2016 From: robertc at robertcollins.net (Robert Collins) Date: Sun, 8 May 2016 11:11:23 +1200 Subject: [Distutils] comparison of configuration languages In-Reply-To: <572E74D2.9040308@nextday.fi> References: <572E74D2.9040308@nextday.fi> Message-ID: Actually, Nathaniel didn't test vendorability of the libraries, and pip needs that. Pyyaml isn't in good shape there. On 8 May 2016 11:06 AM, "Alex Gr?nholm" wrote: > 07.05.2016, 17:48, Nick Coghlan kirjoitti: > > > On 7 May 2016 13:00, "Nathaniel Smith" wrote: > > > > Here's that one-stop writeup/comparison of all the major configuration > > languages that I mentioned: > > > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > Thanks for that, and "yikes" on the comment handling variations in > ConfigParser - you can tell I've never even tried to use end-of-line > comments in INI files, and apparently neither has anyone I've worked with :) > > For YAML, my main concern isn't quirkiness of the syntax, or code quality > in PyYAML, it's the ease with which you can expose yourself to security > problems (even if *pip* loads the config file safely, that doesn't mean > every other tool will). 
Since we don't need the extra power, the easiest > way to reduce the collective attack surface is to use a strictly less > powerful (but still sufficient) format. > > Sounds like a far-fetched hypothetical problem. You're concerned about the > custom tags provided by PyYAML? Do you happen to know a tool that defaults > to loading files in unsafe mode? > > For ast.literal_eval, we'd still need to come up with a way to do > sections, key:value mappings and define rules for comments. > > For completeness, I'll note that XML combines even more user unfriendly > syntax than JSON with similar security risks to YAML. > > So with the trade-offs laid out like that (and particularly the > inconsistent comment and Unicode handling in ConfigParser), I'm prompted to > favour following Rust in adopting TOML. > > Cheers, > Nick. > > P.S. I particularly like the idea of using extension sections to > eventually consolidate other static config into a common file - that nicely > addresses my concern with config file proliferation, since it opens the > door to eventually subsuming other files like MANIFEST.in and setup.cfg as > archiving and build systems are updated > > > > > -n > > > > -- > > Nathaniel J. Smith -- https://vorpus.org > > _______________________________________________ > > Distutils-SIG maillist - Distutils-SIG at python.org > > https://mail.python.org/mailman/listinfo/distutils-sig > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.orghttps://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Sat May 7 19:19:43 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Sun, 8 May 2016 02:19:43 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <572E74D2.9040308@nextday.fi> Message-ID: <572E780F.6000605@nextday.fi> 08.05.2016, 02:08, Donald Stufft kirjoitti: > >> On May 7, 2016, at 7:05 PM, Alex Gr?nholm > > wrote: >> >> 07.05.2016, 17:48, Nick Coghlan kirjoitti: >>> >>> >>> On 7 May 2016 13:00, "Nathaniel Smith" wrote: >>> > >>> > Here's that one-stop writeup/comparison of all the major configuration >>> > languages that I mentioned: >>> > >>> >https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f >>> >>> Thanks for that, and "yikes" on the comment handling variations in >>> ConfigParser - you can tell I've never even tried to use end-of-line >>> comments in INI files, and apparently neither has anyone I've worked >>> with :) >>> >>> For YAML, my main concern isn't quirkiness of the syntax, or code >>> quality in PyYAML, it's the ease with which you can expose yourself >>> to security problems (even if *pip* loads the config file safely, >>> that doesn't mean every other tool will). Since we don't need the >>> extra power, the easiest way to reduce the collective attack surface >>> is to use a strictly less powerful (but still sufficient) format. >>> >> Sounds like a far-fetched hypothetical problem. You're concerned >> about the custom tags provided by PyYAML? Do you happen to know a >> tool that defaults to loading files in unsafe mode? > > Yea, pyYAML itself does (yaml.load() does it unsafely, you have to use > yaml.safe_load()). 
> > I don?t think it?s that big of a deal though, we could easily add a > thing to PyPI that rejects any YAML file that can?t be parsed in safe > mode. The bigger deal to me is just that the library to work with it > is a bit of a bear to use as a dependency. Sounds like we'd need an alternate implementation of YAML then (I'd love to see a "yaml" module in the standard library too, but PyYAML isn't a good candidate for that, agreed). > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 > 3372 DCFA > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat May 7 19:23:30 2016 From: brett at python.org (Brett Cannon) Date: Sat, 07 May 2016 23:23:30 +0000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Sat, May 7, 2016, 12:16 Chris Barker wrote: > On Sat, May 7, 2016 at 11:18 AM, Brett Cannon wrote: > >> What fields there will be and their semantics ... >> >> 1. Format version (so just deciding on a name -- which also includes >> whether it should be top-level or in a subsection -- and initial value) >> 2. The actual bootstrap field (so its name and what to do if a >> dependency is already installed but at a version that doesn't match what >> the bootstrap specification asks for) >> >> Am I missing anything? >> > > So what is this new configuration file supposed to cover? > How to specify what has to be installed to simply build a project, e.g. is setuptools needed to run setup.py, and if so what version? All the package meta-data? i.e. everything that would be needed by a > package manager to properly install the package? > > or the build meta-data: everything needed by the build system to build the > package? > > both in one file? > > And am missing something? > You're missing that you're talking several PEPs down the road. :) Right now all we are discussing is how to specify what build tools a project needs (historically setuptools, but in the future it could be flit or something else). how is this about "bootstrapping" -- to me, bootstrapping is when you need > X to build X. Isn't this just regular old configuration: you need x,y to > build z? > Sure, if you don't like the term "bootstrap" then you can call it "build requirements". We have not been calling it " configuration" in a general sense as this doesn't cover how to invoke the build step (that will probably be the next PEP), just what needs to be installed to even potentially do a build. -Brett > -CHB > > > > > > > >> And since I keep pushing on this I'm willing to be a co-author on any PEP >> if there's no hard deadline in getting the PEP written (i.e. I can help >> write the prose, but I don't have the time to do the coding as this will be >> the fourth PEP I have going in some state; got to catch up to Nick's 35 >> PEPs somehow ;). >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> >> > > > -- > > Christopher Barker, Ph.D. 
>> the fourth PEP I have going in some state; got to catch up to Nick's 35
>> PEPs somehow ;).
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker at noaa.gov
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brett at python.org  Sat May 7 21:04:24 2016
From: brett at python.org (Brett Cannon)
Date: Sun, 08 May 2016 01:04:24 +0000
Subject: [Distutils] comparison of configuration languages
In-Reply-To:
References:
Message-ID:

On Sat, May 7, 2016, 15:47 Donald Stufft wrote:

>
> > On May 7, 2016, at 5:05 PM, Robert Collins wrote:
> >
> > Either we are defining the long term thing now, in which case that
> > huge pile of complexity lands on us, and we have to get everything
> > right.
> >
> > Or we are defining a thing which solves the present bug, and as long
> > as we make sure it does not bind us in future, we're not hamstrung.
> >
> > E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name,
> > as pybuild is a thing in the debian space, and this will confuse the
> > heck out of folk). https://wiki.debian.org/Python/Pybuild
>
> I think this is roughly true, we could either do the simplest thing and just
> add ``setup_requires`` to ``setup.cfg`` and teach pip how to understand them
> and then worry about a new format later, or we can do a new format now and
> add a bit of complexity to what we need to specify (though I don't think
> _too_ much complexity, we don't have to define the build system stuff now,
> just make sure we don't back ourselves into a corner with that).
>
> I think either answer is OK, just the second one is a bit more work and we
> might either get the start of a better format _now_ or end up regretting
> what we pick when we add more things to it.
>

For both options I hear "pick a new format", which suggests we might as well
do it from the get-go for clear separation of the new stuff and just bite the
bullet instead of simply postponing a decision; it isn't like our format
options are going to significantly change between now and later in the year.

-Brett

> -----------------
> Donald Stufft
> PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.barker at noaa.gov  Sun May 8 01:07:06 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Sat, 7 May 2016 22:07:06 -0700
Subject: [Distutils] comparison of configuration languages
In-Reply-To:
References:
Message-ID:

On Sat, May 7, 2016 at 6:04 PM, Brett Cannon wrote:

> For both options I hear "pick a new format", which suggests we might as
> well do it from the get-go for clear separation of the new stuff and just
> bite the bullet instead of simply postponing a decision; it isn't like our
> format options are going to significantly change between now and later in
> the year.
>

Agreed. However, in another thread, I understood you to say that ALL we are
talking about now is how to specify the build requirements. If that's the
case, then we might as well just go with setup.cfg.

However, I'd rather we were setting the stage for greater things -- in which
case, let's go with a new config file.
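Just to make that comparison concrete, the same declaration could look roughly
like this in either place (the section and key names below are placeholders
only, not anything that has been decided):

    # setup.cfg (today's ini-style syntax)
    [bootstrap]
    requires =
        setuptools
        wheel>=1.0

    # a new file (TOML syntax)
    [bootstrap]
    requires = ["setuptools", "wheel>=1.0"]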
BTW, IIRC, there seemed to a consensus moving toward using a Python API, rather than a command line API for the mythical pluggable build systems.... In which case, we can require python, and could use python literals for configuration. With the discussion of PyYaml, I"m thinking more and more that something that can be parsed with only the stdlib is a good idea. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Sun May 8 01:10:15 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Sun, 8 May 2016 08:10:15 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <572ECA37.2010303@nextday.fi> This is fine as long as developer convenience does not suffer. Underlying implementations can always be improved, but if we decide on a sucky format, we'll have to live with that for a long time. 08.05.2016, 08:07, Chris Barker kirjoitti: > On Sat, May 7, 2016 at 6:04 PM, Brett Cannon > wrote: > > For both options I hear "pick a new format", which suggests we > might as well do it from the get-go for clear separation of the > new stuff and just bite the bullet instead of simply postponing a > decision; it isn't like our format options are going to > significantly change between now and later in the year. > > > Agreed. However, in another thread, I understood you to say that ALL > we are talking about now is how to specify the build requirements. If > that's the case, then we might a well as well just go with setup.cfg. > > However, I'd rather we were setting the stage for grater things -- in > which case, let's go with a new config file. > > BTW, IIRC, there seemed to a consensus moving toward using a Python > API, rather than a command line API for the mythical pluggable build > systems.... > > In which case, we can require python, and could use python literals > for configuration. With the discussion of PyYaml, I"m thinking more > and more that something that can be parsed with only the stdlib is a > good idea. > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun May 8 04:42:25 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 8 May 2016 03:42:25 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On Sat, May 7, 2016 at 4:05 PM, Robert Collins wrote: > Couple thoughts. > > Firstly, the human-editable bit: who in the last *decade* has been > writing code using a non-syntax-aware/helping editor? Its a supremely > uninteresting aspect IMO. > > On ConfigParser - yes, its horrid. OTOH we do get all the lines > reliably, and setuptools will need to cover the unicode aspect itself > in its own time. 
All we need to do is permit # inline as a comment - > line a requirements.txt file for pip - and it becomes trivial to parse > in all cases that we need *now*. > ConfigObj is read/write with full Unicode support (and type validation) | Src: https://github.com/DiffSK/configobj | PyPI: https://pypi.python.org/pypi/configobj/5.0.6 | Docs: https://configobj.readthedocs.io/en/latest/ *Having read/write support avoids the primary issue (injection) with templating config files* >From https://github.com/DiffSK/configobj/blob/master/setup.py LONG_DESCRIPTION = """**ConfigObj** is a simple but powerful config file reader and writer: an *ini file round tripper*. Its main feature is that it is very easy to use, with a straightforward programmer's interface and a simple syntax for config files. It has lots of other features though : * Nested sections (subsections), to any level * List values * Multiple line values * Full Unicode support * String interpolation (substitution) * Integrated with a powerful validation system - including automatic type checking/conversion - and allowing default values - repeated sections * All comments in the file are preserved * The order of keys/sections is preserved * Powerful ``unrepr`` mode for storing/retrieving Python data-types > > Either we are defining the long term thing now, in which case that > huge pile of complexity lands on us, and we have to get everything > right. > > Or we are defining a thing which solves the present bug, and as long > as we make sure it does not bind us in future, we're not hamstrung. > > E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name, > as pybuild is a thing in the debian space, and this will confuse the > heck out of folk). https://wiki.debian.org/Python/Pybuild > > > -Rob > > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun May 8 08:31:25 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 May 2016 22:31:25 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 8 May 2016 at 04:15, Chris Barker wrote: > On Sat, May 7, 2016 at 6:51 AM, Nick Coghlan wrote: >> >> On 7 May 2016 01:55, "Chris Barker" wrote: >> > So my point is about scope-creep -- if you want the PyPa tools to solve >> > all these problems, then you are re-inventing conda -- better to simply >> > adopt conda (or fork it and fix issues that I'm sure are there....) >> >> conda doesn't solve these problems either - it solves the *end user* >> problem for data analysts (install the Python library they want to use), > > I really need to make this clear -- there is NOTHING "data analyst" specific > about these problems -- they do come up more in the computational > programming domain, but there are any number of other application that have > the same problems (pyQT has come up in this conversation, yes?) 
-- and we're > finding conda to be a good solution for our web development needs, too -- a > conda environment is kinda like a lighter-weight, platform independent > docker container. And of course, there is more an more data analysis going > on behind web services these days too -- any python developer is going to > run into these issues eventually... Aye, I know - conda was one of the systems I evaluated as a possible general purpose tool for user level package management in Fedora and derivatives (see https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement#Directions_to_be_Explored for details). The problem with it is a practical one related to barriers to adoption: to push any of the three systems I looked at, we'd face significant challenges on the system administrator side (providing capabilities comparable to what they're used to with RPMs) *and* on the developer side (interoperating with the existing language specific tooling for all of the language runtimes provided in Fedora). Hence my comment about conda not currently solving *system integrator* problems in the general case - it's fine as a platform for running software *on top of* Fedora and for DIY integration within an organisation, but as a tool for helping to *build Fedora*, it turned out to introduce a whole slew of new problems (like needing to add support to COPR/Koji/Pulp), without giving us any significant capabilities that can't be addressed just as well by increasing the level of automation in the upstream -> RPM pipeline and simplifying the end user experience for container images. However, the kinds of enhancements we're now considering upstream in pip should improve things for conda users as well - just as some folks in Fedora are exploring the feasibility of automatically rebuilding the whole of PyPI as RPMs (twice, actually, anchored on /usr/bin/python and /usr/bin/python3), it should eventually be feasible to have the upstream->conda pipeline fully automated as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun May 8 08:49:56 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 May 2016 22:49:56 +1000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On 8 May 2016 at 09:23, Brett Cannon wrote: > On Sat, May 7, 2016, 12:16 Chris Barker wrote: >> how is this about "bootstrapping" -- to me, bootstrapping is when you need >> X to build X. Isn't this just regular old configuration: you need x,y to >> build z? > > Sure, if you don't like the term "bootstrap" then you can call it "build > requirements". We have not been calling it " configuration" in a general > sense as this doesn't cover how to invoke the build step (that will probably > be the next PEP), just what needs to be installed to even potentially do a > build. The reason I think "bootstrap" is a better name at this point than "build" is that there are actually three commands we're installing the relevant dependencies for: * egg_info/dist_info (i.e. metadata generation) * sdist (i.e. archive generation) * bdist_wheel (i.e. 
building) The bootstrapping at the moment is taken care of by "assume everything uses setuptools, install setuptools by default, if you want to use something else, use setuptools to define and retrieve it". The new metadata aims to take the place of setuptools in that bootstrapping process: if the software publisher so chooses, they'll be able to both create an sdist from a source tree and a wheel archive from an sdist without ever installing setuptools. (Of course, one or more of their dependencies are likely to bring in setuptools anyway for the foreseeable future, but at the level of their own project they'll be able to ignore it) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun May 8 09:13:24 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 8 May 2016 23:13:24 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On 8 May 2016 at 08:46, Donald Stufft wrote: > >> On May 7, 2016, at 5:05 PM, Robert Collins wrote: >> >> Either we are defining the long term thing now, in which case that >> huge pile of complexity lands on us, and we have to get everything >> right. >> >> Or we are defining a thing which solves the present bug, and as long >> as we make sure it does not bind us in future, we're not hamstrung. >> >> E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name, >> as pybuild is a thing in the debian space, and this will confuse the >> heck out of folk). https://wiki.debian.org/Python/Pybuild > > I think this is roughly true, we could either do the simplest thing and just > add ``setup_requires`` to ``setup.cfg`` and teach pip how to understand them > and then worry about a new format later, or we can do a new format now and add > a bit of complexity to what we need to specify (though I don't think _too_ much > complexity, we don't have to define the build system stuff now, just make sure > we don't back ourselves into a corner with that). > > I think either answer is OK, just the second one is a bit more work and we > might either get the start of a better format _now_ or end up regretting what > we pick when we add more things to it. Hmm, are we perhaps forcing a false choice on ourselves here? 1. setup_requires in setup.py is the de facto baseline 2. d2to1 and pbr already use setup.cfg, so any arguments about the readability of the format are moot for users of those projects 3. we're not sure we're comfortable mandating the use of an ini-style format for all *future* Python projects So let's reduce our scope to: "We want *current users* of d2to1 and pbr to be able to declare those dependencies to pip and other installation tools in a way that avoids the implicit invocation of easy_install by setuptools" For *that* design goal, the conclusions I'd draw from Nathaniel's write-up would be those ?ukasz suggested: - use https://pypi.python.org/pypi/configparser in Python 2.7 - explicitly define how to handle end-of-line comments We should perhaps also recommend that "#" be used for comments (even if the parser didn't enforce that), as I haven't seen anyone using ';' for comments since I stopped writing assembly code by hand. Improving the status quo for d2to1 and pbr users doesn't lock us into anything in terms of the "future of Python packaging as a whole". Individual build system authors may *choose* to take advantage of the setup.cfg mechanism, but they wouldn't be obliged to - they could keep using setup.py based bootstrapping if they preferred to do so. 
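To illustrate, the consuming side could stay very small. A rough sketch only --
the section and option names are placeholders, and on 2.7 ``configparser``
would be the PyPI backport rather than the stdlib module:

    from configparser import ConfigParser

    def read_bootstrap_requires(path="setup.cfg"):
        # restrict comments to "#" so end-of-line comments behave the same
        # everywhere
        parser = ConfigParser(comment_prefixes=("#",),
                              inline_comment_prefixes=("#",))
        parser.read(path)
        raw = parser.get("bootstrap", "requires", fallback="")
        # one PEP 508 style requirement per line
        return [line.strip() for line in raw.splitlines() if line.strip()]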
This approach would also provide the ability to iterate on a TOML based dependency declaration system *independent of pip itself*: setup.cfg could be used to bootstrap that system while it was still experimental, letting the idiosyncrasies get worked out before we commit to anything in pip itself.

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From donald at stufft.io Sun May 8 11:43:16 2016
From: donald at stufft.io (Donald Stufft)
Date: Sun, 8 May 2016 11:43:16 -0400
Subject: [Distutils] comparison of configuration languages
In-Reply-To:
References:
Message-ID: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io>

> On May 8, 2016, at 9:13 AM, Nick Coghlan wrote:
>
> So let's reduce our scope to: "We want *current users* of d2to1 and
> pbr to be able to declare those dependencies to pip and other
> installation tools in a way that avoids the implicit invocation of
> easy_install by setuptools"

I think it's less pbr and d2to1 that we're designing for here (although they'll benefit from it) but more like numpy.distutils. The reasoning is that pbr and such have already been able to architect themselves so that they work within the current system. One of the goals here is to enable projects that haven't been (or can't reasonably be) written to work as a setuptools extension to be used without requiring a manual pre-installation step.

The other side of this is that anytime you have dependencies that aren't installed by pip (such as setup_requires being installed by setuptools) you end up with a confusing situation where settings don't get passed down into the "inner" installer (like ``--index-url``), or where they have different resolution algorithms (like setuptools supporting pre-releases by default), or where they support different formats (like pip not supporting eggs, but setuptools not supporting wheels).

However, while I think that a new format is more work just to make sure we don't back ourselves into a corner, I also don't think it's so much more work that it's worth skipping now. There aren't a whole lot of ways to solve the very narrow problem of "we need some dependencies installed prior to building". We need a field in a file that takes a list of PEP 508 style specifiers. We can skip any of the build system support right now and add it in later (and when we add it in, we can just make it so that a lack of a declaration implies setuptools). So the only real things we need to decide on are:

1) What format to use? For this I think it's been pretty clear that TOML has had the greatest amount of support here and I think it represents the best set of trade-offs for us.

2) What do we want to call this file? This one is pure bikeshed but also a bit important since this is one of the things that will be the hardest to change once we actually pick the name. Ideas I've seen so far (using the toml extension) are:

* pypa.toml - This one is specific to us which is good, but it's also a bit bad in that I don't think it's very widely known what the PyPA even is, and I think that the PyPA works better if it's just sort of the thing in the background.

* pybuild.toml - This one might be a bit too oriented towards building rather than all of the possible uses of this file, but the bigger problem with it I think is that the name clashes with pybuild on Debian, which is their tool used for building Python packages.

* pip.toml - Also specific to us which is good, but I think it's a bit overly specific maybe?
One of our goals has been to make this stuff not dependent on a specific implementation so that we can replace it if we need to (as we more or less replaced distutils with setuptools, or easy_install with pip) however this also isn't exactly depending on an implementation-- just the name of a particular implementation. It does have the benefit that for a lot of people they associate "pip" with everything packaging related (e.g. I see people calling them "pip packages" now). * pymeta.toml - This one might be reasonable, it's a bit generic but at least it has the ``py`` prefix to tie it to us. Not much to say about it otherwise. Overall from those names, I think I probably like pymeta.toml the best (or maybe just ``meta.toml`` if we don't like the py prefix) but maybe other people have ideas/opinions that I haven't seen yet. 3) How do we structure the file and what keys do we use for just this part of the overall feature we're hoping to eventually get (and what semantics do we give these keys). We could just call it ``setup_requires``, but I think that's a bad name for something we're not planning on getting rid of since one of the eventual goals is to make ``setup.py`` itself optional. Another option is ``build_requires`` but that isn't quite right because we don't *just* need these dependencies for the build step (``setup.py bdist_wheel``) but also for the metadata step (``setup.py egg_info``). Although perhaps it doesn't really matter. Another option is to call them ``bootstrap_requires`` though it's a bit wonky to cram *build* requirements into that. I guess if I were deciding this I would just call it build requirements because the nature of ``setup.py`` is that pretty much anything you need to execute ``setup.py`` is going to be a requirement for all ways you invoke ``setup.py`` (metadata and build and the legacyish direct to install). We could then say that if a project is missing this new file, then there is an implicit ``build_requires`` of ``["setuptools", "wheel"]`` but that as soon as someone defines this file that they are expected to accurately list all of their build dependencies in it. Overall, my suggestion here would be to have a file called ``pymeta.toml`` (or ``meta.toml``) and have it look like:: [dependencies] build = [ "setuptools", "wheel>=1.0", ] If at some point we decide we need to add a bootstrap_requires (and then the ability to add dynamic build requirements) we can do that by just saying that if you plan on having dynamic build requirements, you need to omit the build key under the [dependencies] section. This same thing could be used for other kinds of dependencies too (I've come around to the idea that you can't always declare a static set of dependencies in a sdist, but you can in a wheel) so that at some point in the future we could add additional keys like ``runtime`` to this where, if declared, will be assumed to be a static set of dependencies for that type of dependency. This also doesn't prevent us from moving more of the metadata to this file in the future either. 
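As an illustration of how small the consumer side of that proposal would be (this is only a sketch, not pip code -- the file name is still being bikeshedded and pytoml is just one candidate parser), an installer could do something like::

    import pytoml  # or whichever TOML parser the tools standardise on

    with open("pymeta.toml") as f:   # file name still under discussion
        meta = pytoml.load(f)

    # A missing file or key would fall back to the implicit legacy default
    # described above.
    build_requires = meta.get("dependencies", {}).get(
        "build", ["setuptools", "wheel"])
    # build_requires is now a list of PEP 508 style specifiers to install
    # into the build environment before setup.py is invoked.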
For an example, we could (at some point in the future, not now-- we shouldn't get bogged down in this now, it's just an example of this not tying us down too much) add something like:: [package] name = "mycoolpackage" version = "1.0" [dependencies] build = ["setuptools", "wheel>=1.0"] runtime = ["requests==2.*"] ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From nicholas.chammas at gmail.com Sun May 8 17:38:54 2016 From: nicholas.chammas at gmail.com (Nicholas Chammas) Date: Sun, 08 May 2016 21:38:54 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On Sat, May 7, 2016 at 6:23 AM Paul Moore p.f.moore at gmail.com wrote: - YAML ought to be wonderful, but it ended up over-engineered (yes, we > can ignore the bits we don't care about). Also, pyYAML is a bit of an > annoying dependency (big, reportedly slow unless you use the C > version) - not something I'd want pip to have to vendor. > Perhaps a more limited YAML library like Poyo would address some of the concerns we have about using PyYAML. https://github.com/hackebrot/poyo#readme Poyo in particular may be *too* limited, but I wonder how much it would sway people if we used YAML but not PyYAML. Anyway, here are some relevant snippets from Poyo?s README: Please note that Poyo supports only a chosen subset of the YAML format. It can only read but not write and is not compatible with JSON. ? Poyo is 100% Python and does not require any additional libs. Nick ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Sun May 8 17:57:03 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 8 May 2016 17:57:03 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <2417C313-A579-4412-997B-09CB3B227B20@stufft.io> > On May 8, 2016, at 5:38 PM, Nicholas Chammas wrote: > > Poyo in particular may be too limited, but I wonder how much it would sway people if we used YAML but not PyYAML. If we found a reasonable library for parsing YAML it might be a more reasonable alternative. Though I think we really need something that supports most if not all of YAML otherwise I think it starts to get confusing because you have to figure out what the specific subset of YAML that the library that whatever packaging tool you?re using supports. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Sun May 8 18:50:05 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 8 May 2016 23:50:05 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On 8 May 2016 at 22:38, Nicholas Chammas wrote: > Perhaps a more limited YAML library like Poyo would address some of the > concerns we have about using PyYAML. I'm -1 on using "a subset of" a standard format. It immediately invites debate and confusion over *precisely* which subset is supported. 
I like (basic) YAML, but IMO it made the mistake of allowing itself to get too complex. Paul From brett at python.org Sun May 8 19:44:37 2016 From: brett at python.org (Brett Cannon) Date: Sun, 08 May 2016 23:44:37 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: Based on this email and Nathaniel's evaluation I've gone ahead and taken it upon myself to start writing a PEP so we have something concrete to work from. I'm hoping to have it done some time this week. On Sun, 8 May 2016 at 08:43 Donald Stufft wrote: > > > On May 8, 2016, at 9:13 AM, Nick Coghlan wrote: > > > > So let's reduce our scope to: "We want *current users* of d2to1 and > > pbr to be able to declare those dependencies to pip and other > > installation tools in a way that avoids the implicit invocation of > > easy_install by setuptools" > > I think it's less pbr and d2to1 that we're designing for here (although > they'll > benefit from it) but more like numpy.distutils. The reasoning is that pbr > and > such have already been able to architecture themselves so that they work > within > the current system. One of the goals here is to enable projects that > haven't > been able to (or can't reasonably) be written to work as a setuptools > extension > to be used here without requiring a manual pre-installation step. > > The other side of this is anytime you have dependencies that aren't > installed > by pip (such as setup_requires being installed by setuptools) you end up > with > a confusing situation where settings don't get passed down into the "inner" > installer (like ``--index-url``) or when they have different resolution > algorithms (like setuptools supporting pre-releases by default) or when > they > support different formats (like pip not supporting eggs, but setuptools not > supporting wheels). > > However, while I think that a new format is more work just to make sure we > don't back ourselves into a corner, I also don't think it's so much more > work > that it's really worth it to skip doing it now. Their isn't a whole lot of > ways > to solve the very narrow problem of "we need some dependencies installed > prior > to building". We need a field in a file that takes a list of PEP 508 style > specifiers. We can skip any of the build system support right now and add > it in > later (and when we add it in, we can just make it so that a lack of a > declaration is an implicit setuptool). So the only real things we need to > decide on are: > > 1) What format to use? For this I think that it's been pretty clearly that > TOML > has had the greatest amount of support here and I think it represents > the > best set of trade offs for us. > > 2) What do we want to call this file? This one is pure bikeshed but also a > bit > important since this is one of the things that will be the hardest to > change > once we actually pick the name. Ideas I've seen so far (using the toml > extension) are: > > * pypa.toml - This one is specific to us which is good, but it's also a > bit > bad in that I don't think it's very widely known what the PyPA even > is and > I think that the PyPA works better if it's just sort of the thing in > the > background. 
> > * pybuild.toml - This one might be a bit too oriented towards building > rather than all of the possible uses of this file, but the bigger > problem > with it I think is that hte name clashes with pybuild on Debian which > is > their tool used for building Python packages. > > * pip.toml - Also specific to us which is good, but I think it's a bit > too > overly specific maybe? One of our goals has been to make this stuff > not > dependent on a specific implementation so that we can replace it if we > need to (as we more or less replaced distutils with setuptools, or > easy_install with pip) however this also isn't exactly depending on an > implementation-- just the name of a particular implementation. It does > have the benefit that for a lot of people they associate "pip" with > everything packaging related (e.g. I see people calling them > "pip packages" now). > > * pymeta.toml - This one might be reasonable, it's a bit generic but at > least it has the ``py`` prefix to tie it to us. Not much to say about > it > otherwise. > > Overall from those names, I think I probably like pymeta.toml the best > (or > maybe just ``meta.toml`` if we don't like the py prefix) but maybe other > people have ideas/opinions that I haven't seen yet. > > 3) How do we structure the file and what keys do we use for just this part > of > the overall feature we're hoping to eventually get (and what semantics > do > we give these keys). We could just call it ``setup_requires``, but I > think > that's a bad name for something we're not planning on getting rid of > since > one of the eventual goals is to make ``setup.py`` itself optional. > Another > option is ``build_requires`` but that isn't quite right because we don't > *just* need these dependencies for the build step (``setup.py > bdist_wheel``) > but also for the metadata step (``setup.py egg_info``). Although > perhaps it > doesn't really matter. Another option is to call them > ``bootstrap_requires`` > though it's a bit wonky to cram *build* requirements into that. > > I guess if I were deciding this I would just call it build requirements > because the nature of ``setup.py`` is that pretty much anything you > need to > execute ``setup.py`` is going to be a requirement for all ways you > invoke > ``setup.py`` (metadata and build and the legacyish direct to install). > We > could then say that if a project is missing this new file, then there > is an > implicit ``build_requires`` of ``["setuptools", "wheel"]`` but that as > soon > as someone defines this file that they are expected to accurately list > all > of their build dependencies in it. > > > Overall, my suggestion here would be to have a file called ``pymeta.toml`` > (or > ``meta.toml``) and have it look like:: > > [dependencies] > build = [ > "setuptools", > "wheel>=1.0", > ] > > If at some point we decide we need to add a bootstrap_requires (and then > the > ability to add dynamic build requirements) we can do that by just saying > that > if you plan on having dynamic build requirements, you need to omit the > build > key under the [dependencies] section. This same thing could be used for > other > kinds of dependencies too (I've come around to the idea that you can't > always > declare a static set of dependencies in a sdist, but you can in a wheel) so > that at some point in the future we could add additional keys like > ``runtime`` > to this where, if declared, will be assumed to be a static set of > dependencies > for that type of dependency. 
> > This also doesn't prevent us from moving more of the metadata to this file > in > the future either. For an example, we could (at some point in the future, > not > now-- we shouldn't get bogged down in this now, it's just an example of > this > not tying us down too much) add something like:: > > [package] > name = "mycoolpackage" > version = "1.0" > > [dependencies] > build = ["setuptools", "wheel>=1.0"] > runtime = ["requests==2.*"] > > > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benzolius at yahoo.com Sun May 8 13:48:26 2016 From: benzolius at yahoo.com (Benedek Zoltan) Date: Sun, 8 May 2016 17:48:26 +0000 (UTC) Subject: [Distutils] ez_setup.py can not get setuptools In-Reply-To: <82087DDD-654D-4BF7-993C-D6D3BDB637DC@stufft.io> References: <1683253402.314191.1462518708750.JavaMail.yahoo.ref@mail.yahoo.com> <1683253402.314191.1462518708750.JavaMail.yahoo@mail.yahoo.com> <16c5701d1a7b4$b8737020$295a5060$@hotmail.com> <82087DDD-654D-4BF7-993C-D6D3BDB637DC@stufft.io> Message-ID: <715620173.637502.1462729706727.JavaMail.yahoo@mail.yahoo.com> Hi, Thanks for all the comments. I tried the get-pip.py script and installs successfully setuptools and pip. Usually I installed virtualenv and setuptools from the install scripts: virtualenv.py, ez_setup.py and pip from pip-*.*.*.tar.gz.I liked this approach, because in this way I didn't affect the system by any of those packages, everything remained in the created virtualenv. Now I have to change only my install script in order to use get-pip.py Best regardsZoltan From: Donald Stufft To: tritium-list at sdamon.com Cc: Chris Barker ; Benedek Zoltan ; distutils-sig at python.org Sent: Friday, May 6, 2016 7:43 PM Subject: Re: [Distutils] ez_setup.py can not get setuptools On May 6, 2016, at 12:31 PM, tritium-list at sdamon.com wrote: If you don?t have setuptools, you don?t have pip. Not true anymore, pip is perfectly capable of running and installing things without setuptools now days. The only time you *need* setuptools installed is if you?re installing from a sdist (and setuptools has wheels, so you can install setuptools with pip withouth setuptools already being installed). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B?7C5D 5E2B 6356 A926 F04F 6E3C?BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at realpath.org Mon May 9 08:37:32 2016 From: sebastian at realpath.org (Sebastian Krause) Date: Mon, 09 May 2016 14:37:32 +0200 Subject: [Distutils] setuptools >= 20.2 may break applications using pyparsing Message-ID: <8c2811c23dba9ccac564fd3422518c02@leo.uberspace.de> Hi, I'm developing an application that uses pyparsing and after upgrading setuptools to the newest version I noticed some tests failing. In my main parser module I define an alias for the ParseBaseException which I then use in other parts of the application to catch exceptions: # definition of the ParseException ParseException = pyparsing.ParseBaseException # importing this alias in another module from ...filterreader.parser import ParseException Now my tests were failing because the ParseException was never actually caught. 
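(For anyone following along, the failure mode can be distilled to something like the following -- this is not the application's actual code, just an illustration assuming both setuptools' vendored copy and the standalone pyparsing are importable)::

    # Two copies of pyparsing means two distinct ParseBaseException classes.
    from pkg_resources.extern import pyparsing as vendored_pyparsing
    import pyparsing

    ParseException = vendored_pyparsing.ParseBaseException  # alias bound to one copy

    try:
        # ...while the exception is raised by the *other* copy.
        raise pyparsing.ParseBaseException("", 0, "boom")
    except ParseException:
        print("caught")      # never reached: same-named, but a different class
    except Exception:
        print("not caught")  # this is what the failing tests were seeing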
Some investigation by comparing the id() of the objects showed that the ParseException alias was no longer the same object as pyparsing.ParseBaseException. This was because the module "pyparsing" at the time of the alias definition was not the same "pyparsing" module which is later used for parsing. Looking at sys.module I can see that I have two pyparsing modules: pyparsing: pkg_resources.extern.pyparsing: At the time of the alias definition id(pyparsing) is equal to the id() of pkg_resources.extern.pyparsing. When I later import pyparsing I get the other module. This whole problem only happens when I use the application packaged by cx_Freeze, so maybe some kind of race condition happens when importing from a ZIP file. I'm using 64 bit Python 3.4.4 on Windows. The first version of setuptools where I can see this problem is 20.2, until 20.1 everything is fine. Looking at the source I can see that starting with 20.2 setuptools also includes its own pyparsing copy, so most likely that change is related to my problem. Is there a simple way in which I can guarantee that there will only ever be a single "pyparsing" module in my application? Of course I could just stop using the alias and use the pyparsing exceptions directly, but I feel a bit uneasy when a module just changes its identity at some point between imports. Sebastian From donald at stufft.io Mon May 9 09:17:37 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 May 2016 09:17:37 -0400 Subject: [Distutils] setuptools >= 20.2 may break applications using pyparsing In-Reply-To: <8c2811c23dba9ccac564fd3422518c02@leo.uberspace.de> References: <8c2811c23dba9ccac564fd3422518c02@leo.uberspace.de> Message-ID: <0667A134-27B9-4DC1-B8E0-7D88A145DA8B@stufft.io> > On May 9, 2016, at 8:37 AM, Sebastian Krause wrote: > > Hi, > > I'm developing an application that uses pyparsing and after upgrading setuptools to the newest version I noticed some tests failing. In my main parser module I define an alias for the ParseBaseException which I then use in other parts of the application to catch exceptions: > > # definition of the ParseException > ParseException = pyparsing.ParseBaseException > > # importing this alias in another module > from ...filterreader.parser import ParseException > > Now my tests were failing because the ParseException was never actually caught. Some investigation by comparing the id() of the objects showed that the ParseException alias was no longer the same object as pyparsing.ParseBaseException. This was because the module "pyparsing" at the time of the alias definition was not the same "pyparsing" module which is later used for parsing. Looking at sys.module I can see that I have two pyparsing modules: > > pyparsing: > pkg_resources.extern.pyparsing: > > At the time of the alias definition id(pyparsing) is equal to the id() of pkg_resources.extern.pyparsing. When I later import pyparsing I get the other module. This whole problem only happens when I use the application packaged by cx_Freeze, so maybe some kind of race condition happens when importing from a ZIP file. I'm using 64 bit Python 3.4.4 on Windows. > > The first version of setuptools where I can see this problem is 20.2, until 20.1 everything is fine. Looking at the source I can see that starting with 20.2 setuptools also includes its own pyparsing copy, so most likely that change is related to my problem. > > Is there a simple way in which I can guarantee that there will only ever be a single "pyparsing" module in my application? 
Of course I could just stop using the alias and use the pyparsing exceptions directly, but I feel a bit uneasy when a module just changes its identity at some point between imports. > > Sebastian > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig This sounds like something you should open as a bug with setuptools, the problem lies with pkg_resources.extern:VendorImporter. Probably it should stop trying to be as tricky and do something more like what pip does here. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Mon May 9 09:28:49 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 9 May 2016 23:28:49 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: On 9 May 2016 at 01:43, Donald Stufft wrote: > Overall, my suggestion here would be to have a file called ``pymeta.toml`` (or > ``meta.toml``) pymeta.toml would be fine by me. I don't really buy the "collision with Debian build tool" argument against "pybuild" (if I did, I'd be objecting to "pymeta" colliding with an existing PyPI package), so it's mainly the fact the metadata in this file covers more than just building has soured me on it. > and have it look like:: > > [dependencies] > build = [ > "setuptools", > "wheel>=1.0", > ] > > If at some point we decide we need to add a bootstrap_requires (and then the > ability to add dynamic build requirements) we can do that by just saying that > if you plan on having dynamic build requirements, you need to omit the build > key under the [dependencies] section. Looking at my previous ideas for semantic dependencies in PEP 426, what if we start in the near term by defining development requirements? That can then be used to hold arbitrary development dependencies (metadata generation, sdist creation, test execution, wheel build, docs build, etc), everything that you need to work on, build, and test the software, but don't need if you just want to run the published releases. We may later decide that we want to break that down and separate out specific requirements for sdist creation and for wheel creation, but we can handle that by saying that if there's no more specific dependency definition for an operation, then the tools will fall back to pre-installing all the listed development dependencies. That is, someone might write: [dependencies] develop = [ "setuptools", "wheel>=1.0", "sphinx", "pytest", ] And it would be useful not only for running setup.py commands, but setting up their local dev environment in general. (Having docs build and test dependencies listed would be entirely acceptable for the RPM spec autogeneration case, since we just have the BuildRequires/Requires split at the RPM level, and often need the docs dependency to generate man pages, and the test dependencies to run the unit tests in the %check scriptlet) Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Mon May 9 09:38:28 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 May 2016 09:38:28 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: <8E273E55-041C-418B-B305-F41F88FE4FFD@stufft.io> > On May 9, 2016, at 9:28 AM, Nick Coghlan wrote: > > Looking at my previous ideas for semantic dependencies in PEP 426, > what if we start in the near term by defining development > requirements? I think the biggest reason not to do this, but instead do something like build requirements is that development dependencies is already reasonably well addressed in a way that something other than setuptools can access it using extras. It's not as great as a dedicated key for it, but it works pretty OK. The thing that is really painful is setup_requires and how it forces you to delay importing until *during* the execution of the setup() function. We could try and lump setup_requires and development dependencies together, but that seems less than optimal to me. Unless someone's setup.py uses pytest, I'm not sure I see a reason for pytest to be installed anytime pip builds that project. A more concrete example would be pyca/cryptography, which has a development dependency that consists of 27MB of data which was purposely kept separate from cryptography itself so as not to incur an additional 27MB of download just to install cryptography. I think this *could* make sense, if we could reasonably assume that the 95% case would always be handled by wheels, but I don't think that we can. A lot of projects have compiled C code as part of them, and as soon as you do that you end up where installation from sdist is one of your main supported methods (yes Wheel will cover a lot of them, but only for the most popular platforms). ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Mon May 9 09:36:49 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 9 May 2016 23:36:49 +1000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On 7 May 2016 at 08:21, Paul Moore wrote: > On 6 May 2016 at 19:14, Brett Cannon wrote: >> OK, assuming the Nick will be pronouncing, who wants to write the PEP? > > ... and if Nick doesn't want to pronounce, I'm willing to offer to be > BDFL for this one. But a PEP is the first thing. (And IMO the key > point of the PEP is to be very clear on what is in scope and what > isn't - the discussions have covered a *lot* of ground and being clear > on what's excluded will be at least as important as stating what's in > scope). Answering this specifically: I'm happy to be the arbiter-of-consensus for this one, as my schedule's pretty clear right now (at least until I head to PyCon US on the 27th). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Mon May 9 09:52:49 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 9 May 2016 23:52:49 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: <8E273E55-041C-418B-B305-F41F88FE4FFD@stufft.io> References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> <8E273E55-041C-418B-B305-F41F88FE4FFD@stufft.io> Message-ID: On 9 May 2016 at 23:38, Donald Stufft wrote: > >> On May 9, 2016, at 9:28 AM, Nick Coghlan wrote: >> >> Looking at my previous ideas for semantic dependencies in PEP 426, >> what if we start in the near term by defining development >> requirements? > > I think the biggest reason not to do this, but instead do something like build > requirements is that development dependencies is already reasonably well > addressed in a way that something other than setuptools can access it using > extras. It's not as great as a dedicated key for it, but it works pretty OK. > The thing that is really painful is setup_requires and how it forces you to > delay importing until *during* the execution of the setup() function. We could > try and lump setup_requires and development dependencies together, but that > seems less than optimal to me. Unless someone's setup.py uses pytest, I'm not > sure I see a reason for pytest to be installed anytime pip builds that project. > > A more concrete example would be pyca/cryptography, which has a development > dependency that consists of 27MB of data which was purposely kept separate from > cryptography itself so as not to incur an additional 27MB of download just to > install cryptography. OK, that makes sense to me as a rationale for prioritising build dependencies over more general dev environment dependencies. To feel confident we haven't overlooked a near term need by doing that, I think the main question I'd like to see the PEP explicitly cover is "when will these new requirements be implicitly installed?" I currently believe the answers to that are: 1. When pip* is asked to install from an sdist 2. When pip* is asked to install from a VCS URL 3. When pip* is asked to install from a local directory 4. When pip* is asked to create a wheel file for any of those * - or another compatible installer If those answers are correct, then the metadata consumer will always be proceeding on to do a build in the use cases we currently care about, so it's OK that it may also be relying on those build requirements to generate metadata or create an sdist archive. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Mon May 9 10:30:33 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 May 2016 07:30:33 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> <8E273E55-041C-418B-B305-F41F88FE4FFD@stufft.io> Message-ID: On Mon, May 9, 2016 at 6:52 AM, Nick Coghlan wrote: > On 9 May 2016 at 23:38, Donald Stufft wrote: >> >>> On May 9, 2016, at 9:28 AM, Nick Coghlan wrote: >>> >>> Looking at my previous ideas for semantic dependencies in PEP 426, >>> what if we start in the near term by defining development >>> requirements? >> >> I think the biggest reason not to do this, but instead do something like build >> requirements is that development dependencies is already reasonably well >> addressed in a way that something other than setuptools can access it using >> extras. 
It's not as great as a dedicated key for it, but it works pretty OK. >> The thing that is really painful is setup_requires and how it forces you to >> delay importing until *during* the execution of the setup() function. We could >> try and lump setup_requires and development dependencies together, but that >> seems less than optimal to me. Unless someone's setup.py uses pytest, I'm not >> sure I see a reason for pytest to be installed anytime pip builds that project. >> >> A more concrete example would be pyca/cryptography, which has a development >> dependency that consists of 27MB of data which was purposely kept separate from >> cryptography itself so as not to incur an additional 27MB of download just to >> install cryptography. > > OK, that makes sense to me as a rationale for prioritising build > dependencies over more general dev environment dependencies. > > To feel confident we haven't overlooked a near term need by doing > that, I think the main question I'd like to see the PEP explicitly > cover is "when will these new requirements be implicitly installed?" > > I currently believe the answers to that are: > > 1. When pip* is asked to install from an sdist > 2. When pip* is asked to install from a VCS URL > 3. When pip* is asked to install from a local directory > 4. When pip* is asked to create a wheel file for any of those > > * - or another compatible installer > > If those answers are correct, then the metadata consumer will always > be proceeding on to do a build in the use cases we currently care > about, so it's OK that it may also be relying on those build > requirements to generate metadata or create an sdist archive. I think what we want is precisely, "what stuff do you need before you can invoke setup.py", and in the future that will be replaced with "what stuff do you need before you can invoke [our awesome hypothetical build system abstraction interface TBD, which will default to a wrapper around setup.py]". We all agree that there will be some sort of "build system" concept, and that invoking a build system is something that requires running arbitrary code, some of which might come from PyPI. It remains to be determined exactly what the boundary of that "build system" concept is (e.g., it definitely will have 'build a wheel'; not as obvious whether it will have 'build an sdist' or 'run tests'). But I think for now that's fine -- for now we can just say that eventually we will fill in the details of this interface, and the point of this part of the config file is that it lets you bootstrap into whatever that ends up being. (Notice that both the PEP 516/517 proposals actually distinguish between bootstrap-requires, which are what you need before invoking the build system, and build-requires, which is a dynamic hook for querying the build system to ask it what else it needs before it can build a wheel. Example of where the wheel-build requirements might vary at runtime: when building an extension that needs a C library, your build system might provide envvar that lets you pick between 'I'm building an RPM, so please just link to the system-provided copy of this library' versus 'I'm building a wheel, so please install the relevant pynativelib package -- making it a build requirement -- and then link to that'. I think we should focus narrowly on the bootstrapping problem right now, because it's not yet clear whether things like test dependencies actually make more sense as static metadata or as something you query the build system for dynamically. 
And a single static list of build-system-requirements is already enough to bandage over most of the really egregious setup_requires problems.) So I'd suggest: [build-system] requires = [ ... ] # Later we will add something like: # entry-point = ... -n -- Nathaniel J. Smith -- https://vorpus.org From njs at pobox.com Mon May 9 11:01:04 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 May 2016 08:01:04 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: On Mon, May 9, 2016 at 6:28 AM, Nick Coghlan wrote: > On 9 May 2016 at 01:43, Donald Stufft wrote: >> Overall, my suggestion here would be to have a file called ``pymeta.toml`` (or >> ``meta.toml``) > > pymeta.toml would be fine by me. > > I don't really buy the "collision with Debian build tool" argument > against "pybuild" (if I did, I'd be objecting to "pymeta" colliding > with an existing PyPI package), so it's mainly the fact the metadata > in this file covers more than just building has soured me on it. Re: filename bikeshedding: "pymeta" feels very "inessentially weird" to me [1]. This file is going to front and center for newcomers, many of whom will never have encountered the word "metadata" and especially not the hacker fetish for the "meta" morpheme. I like meta-things in general! But I don't like the image of trying to explain what a "pymeta" is over and over and over again when teaching :-). pymetadata would be better, but it seems like there must be something less jargony available? pypackage pypackaging pydevelop pysource pytools pysettings ...? Or if we're really daring and wasteful of characters, I guess we could even go for something like python-tools.toml :-) -n (Tangent, but I'll write it down so I don't forget when we circle back to the question of adding config for third-party tools into this thing: [tool.flit], [tool.coverage] is probably a lot more obvious to newcomers than [extension.flit], [extension.coverage]!) [1] https://www.crummy.com/sumana/2014/08/10/1 -- Nathaniel J. Smith -- https://vorpus.org From brett at python.org Mon May 9 12:06:54 2016 From: brett at python.org (Brett Cannon) Date: Mon, 09 May 2016 16:06:54 +0000 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Mon, 9 May 2016 at 06:42 Nick Coghlan wrote: > On 7 May 2016 at 08:21, Paul Moore wrote: > > On 6 May 2016 at 19:14, Brett Cannon wrote: > >> OK, assuming the Nick will be pronouncing, who wants to write the PEP? > > > > ... and if Nick doesn't want to pronounce, I'm willing to offer to be > > BDFL for this one. But a PEP is the first thing. (And IMO the key > > point of the PEP is to be very clear on what is in scope and what > > isn't - the discussions have covered a *lot* of ground and being clear > > on what's excluded will be at least as important as stating what's in > > scope). > > Answering this specifically: I'm happy to be the arbiter-of-consensus > for this one, as my schedule's pretty clear right now (at least until > I head to PyCon US on the 27th). > OK, I'll list Nick as the BDFL Delegate in the PEP then. 
Hoping to have a draft to send to the list today or tomorrow. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon May 9 12:14:53 2016 From: brett at python.org (Brett Cannon) Date: Mon, 09 May 2016 16:14:53 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: On Mon, 9 May 2016 at 06:29 Nick Coghlan wrote: > On 9 May 2016 at 01:43, Donald Stufft wrote: > > Overall, my suggestion here would be to have a file called > ``pymeta.toml`` (or > > ``meta.toml``) > > pymeta.toml would be fine by me. > > I don't really buy the "collision with Debian build tool" argument > against "pybuild" (if I did, I'd be objecting to "pymeta" colliding > with an existing PyPI package), so it's mainly the fact the metadata > in this file covers more than just building has soured me on it. > > > and have it look like:: > > > > [dependencies] > > build = [ > > "setuptools", > > "wheel>=1.0", > > ] > > > > If at some point we decide we need to add a bootstrap_requires (and then > the > > ability to add dynamic build requirements) we can do that by just saying > that > > if you plan on having dynamic build requirements, you need to omit the > build > > key under the [dependencies] section. > > Looking at my previous ideas for semantic dependencies in PEP 426, > what if we start in the near term by defining development > requirements? > > That can then be used to hold arbitrary development dependencies > (metadata generation, sdist creation, test execution, wheel build, > docs build, etc), everything that you need to work on, build, and test > the software, but don't need if you just want to run the published > releases. > > We may later decide that we want to break that down and separate out > specific requirements for sdist creation and for wheel creation, but > we can handle that by saying that if there's no more specific > dependency definition for an operation, then the tools will fall back > to pre-installing all the listed development dependencies. > > That is, someone might write: > > [dependencies] > develop = [ > "setuptools", > "wheel>=1.0", > "sphinx", > "pytest", > ] > > And it would be useful not only for running setup.py commands, but > setting up their local dev environment in general. > I'm not going to touch the concept of development dependencies to keep the PEP focused. But one thing this discussion does bring up is everyone is assuming there is simply going to be a [dependencies] section to the configuration file where build, development, and/or installation dependencies will eventually end up. My current plan for the PEP is to not do that. Instead, I'm going to have a [build] section that will have a `dependencies` field. My thinking is that we don't need to have any end-users who choose to look at this file care about anything related to building or developing a project. Plus if we expose any details about the Future Awesome Build API (aka the FAB API ;) then it should probably be kept near all of the other build-related details, including the dependencies for building instead of inherently spanning multiple sections (yes, tools will quite possibly have their own section, but we can try to keep it to [build] and e.g. [tools.setuptools] and not [dependencies], [build], and [tools.setuptools]). -------------- next part -------------- An HTML attachment was scrubbed... 
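Spelled out purely as an illustration of the plan described above (not text from the draft PEP -- the exact key names are still up in the air), that layout might look something like::

    # Hypothetical layout following the description above.
    [build]
    dependencies = [
        "setuptools",
        "wheel >= 1.0",
    ]

    [tools.setuptools]
    # tool-specific settings would live under their own table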
URL: From chris.barker at noaa.gov Mon May 9 13:41:18 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 May 2016 10:41:18 -0700 Subject: [Distutils] who is BDFL for the boostrap/requires declaration? (was: moving things forward) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> <572BAE5E.2010604@nextday.fi> <6E21E2B4-CF22-4BC3-AE92-E06704DF2CDE@stufft.io> Message-ID: On Sun, May 8, 2016 at 5:49 AM, Nick Coghlan wrote: > The reason I think "bootstrap" is a better name at this point I *really* don't want to add to the bike-shedding of the name at this point -- I really don't care. I was just trying to see if I was misunderstanding something, as it didn't seem to be bootstrapping anything to me, and I see I was kinda missing some of the scope and complexity -- thanks. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon May 9 14:20:26 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 May 2016 11:20:26 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: > > "pymeta" feels very "inessentially weird" to me [1]. yeah... setup.toml ??? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon May 9 15:11:04 2016 From: brett at python.org (Brett Cannon) Date: Mon, 09 May 2016 19:11:04 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: On Mon, 9 May 2016 at 11:21 Chris Barker wrote: > "pymeta" feels very "inessentially weird" to me [1]. > > > yeah... > > setup.toml > > ??? > You can all stop guessing at file names. The PEP will have a recommendation and you all can either agree or disagree at that point. Please don't give me more names to list in the rejected section. :) -Brett > > -CHB > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon May 9 15:19:23 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 May 2016 15:19:23 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <6C58B59F-9761-4314-8202-BFD18C50C4D2@stufft.io> Message-ID: <5E286424-5BA8-41A8-991D-5D6ABBA7E2EC@stufft.io> dstufft.toml imo Sent from my iPhone > On May 9, 2016, at 3:11 PM, Brett Cannon wrote: > > You can all stop guessing at file names. 
The PEP will have a recommendation and you all can either agree or disagree at that point. Please don't give me more names to list in the rejected section. :) From barry at python.org Mon May 9 15:30:33 2016 From: barry at python.org (Barry Warsaw) Date: Mon, 9 May 2016 14:30:33 -0500 Subject: [Distutils] comparison of configuration languages References: Message-ID: <20160509143033.29dcc3ea@anarchist.wooz.org> On May 08, 2016, at 09:05 AM, Robert Collins wrote: >E.g. use setup.cfg now. Add pybuild.toml later. (btw, terrible name, >as pybuild is a thing in the debian space, and this will confuse the >heck out of folk). https://wiki.debian.org/Python/Pybuild Yes, please don't call it pybuild ;) Also, I think it makes a lot of sense to go with YAML even if it isn't the best most readable option. It's much more common than TOML so the learning curve will be lessened. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From chris.barker at noaa.gov Mon May 9 17:08:10 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 May 2016 14:08:10 -0700 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On Sun, May 8, 2016 at 5:31 AM, Nick Coghlan wrote: > > any python developer is going to > > run into these issues eventually... > > Aye, I know - conda was one of the systems I evaluated as a possible > general purpose tool for user level package management in Fedora and > derivatives (see > > https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement#Directions_to_be_Explored > for details). > > Hence my comment about conda not currently solving *system integrator* > problems in the general case - it's fine as a platform for running > software *on top of* Fedora and for DIY integration within an > organisation, but as a tool for helping to *build Fedora*, it turned > out to introduce a whole slew of new problems Sure -- I don't think it was ever intended to support rpm et. all -- it's to be used INSTEAD of rpm et. all -- for the most part, rpm has solved the problems that conda is trying to solve -- but only for rpm-based Linux systems. And I'm going to guess that Continuum didn't want to: build packages for rpm systems (which ones?) build packages for deb-based systems (which ones?) build packages for gentoo build packages for arch.. ..... build packages for homebrew build packages for cygwin build packages for Windows.. OOPS, there IS not package manager for Windows!! And, AFAICT, none of those package management systems support "environments", either. Clearly -- a tool like conda was required to meet Continuum's goals -- and those goals are very much the same as PyPa's goals, actually. (except the curated collection of packages part, but conda itself is not about the Curation...) 
However, the kinds of enhancements we're now considering upstream in > pip should improve things for conda users as well - just as some folks > in Fedora are exploring the feasibility of automatically rebuilding > the whole of PyPI as RPMs yes -- that's a good analogy -- for python packages, conda relies entirely on distutils/setuptools/pip -- so yes, making those tools better and more flexible is great. but I'm still confused about what "the kinds of enhancements we're now considering upstream in pip" are. Here are a few: More flexibility about the build system used Easier to get metadata without actually initiating a build These are great! But I started this whole line of conversation because it seemed that there was desire for: Ability to isolate the build environment. Ability to better handle/manage non-python dependencies These are what I was referring to as mission-creep, and that overlap with conda (and other tools). > (twice, actually, anchored on > /usr/bin/python and /usr/bin/python3), it should eventually be > feasible to have the upstream->conda pipeline fully automated as well. yeah -- there's been talk for ages of automatically building conda packages (on the fly, maybe) from PyPi packages. But currently on conda-forge we've decided to NOT try to do that -- it's turned out in practice that enough pypi packages end up needing some hand-tweaking to build. So teh planned workflow is now: Auto-build a conda build script for a PyPi package Test it Tweak it as required Add it to conda-forge. Then -- *maybe* write a tool that auto-updates the PyPi based packages in a chron job or whatever. So not quite a automated conda-PyPi bridge, but not a bad start. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From sebastian at realpath.org Mon May 9 17:35:24 2016 From: sebastian at realpath.org (Sebastian Krause) Date: Mon, 9 May 2016 23:35:24 +0200 Subject: [Distutils] setuptools >= 20.2 may break applications using pyparsing In-Reply-To: <0667A134-27B9-4DC1-B8E0-7D88A145DA8B@stufft.io> References: <8c2811c23dba9ccac564fd3422518c02@leo.uberspace.de> <0667A134-27B9-4DC1-B8E0-7D88A145DA8B@stufft.io> Message-ID: <1137758E-B35B-456C-AF3F-62C236266871@realpath.org> On 09.05.2016, at 15:17, Donald Stufft wrote: > This sounds like something you should open as a bug with setuptools, the problem lies with pkg_resources.extern:VendorImporter. Probably it should stop trying to be as tricky and do something more like what pip does here. Done: https://github.com/pypa/setuptools/issues/580 Sebastian From ethan at stoneleaf.us Mon May 9 19:56:42 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 16:56:42 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <573123BA.3000108@stoneleaf.us> On 05/06/2016 07:59 PM, Nathaniel Smith wrote: > Here's that one-stop writeup/comparison of all the major configuration > languages that I mentioned: > > https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f Very nice work-up, thanks! However, you didn't include XML -- which, while absolutely horrid, can be quite readable with the appropriate preprocessor, such as xaml [1] : --- 8< whatever.xaml --------------------------------------------------- !!! 
xml1.0 ~base ~schema // optional ~version: 1 ~bootstrap ~requirements // Temporarily commented out 2016-01-10 // magic-build-helper ~setuptools ~version: >= 27 // for the new frobnicate feature ~numpy ~version: >= 1.10 //Pinned until we get a fix for // @https://github.com/cyberdyne/the-versionator/issues/123 ~the-versionator ~version: 0.13 // The owner of pypi name "flit" decides what goes under the // extension: flit: // key ~extensions ~flit ~whatever: true --- 8< ----------------------------------------------------------------- which ends up as: --- 8< whatever.xml ---------------------------------------------------- 1 >= 27 >= 1.10 0.13 true --- 8< ----------------------------------------------------------------- -- ~Ethan~ [1] https://pypi.python.org/pypi/xaml From ethan at stoneleaf.us Mon May 9 20:19:58 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 17:19:58 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <5731292E.4030601@stoneleaf.us> On 05/07/2016 09:32 AM, Brett Cannon wrote: > +1 for TOML from me as well. I know Paul brought up the lack of > familiarity, but the format is simple and the Rust community is already > fully dependent on it so at worst Rust + us could always just ignore > future format versions if necessary. > > If TOML is the chosen format we could ask how long until a 1.0 release > to know if we waited a month or so to implement we could make sure we're > compliant with that version. > > I also checked pytoml at https://github.com/avakar/pytoml and it looks > like it's pretty stable; no changes in the past 5 months except to > support Python 3.5 and only 3 issues. And the format is simple enough > that if someone had to fork the code like Nathaniel suggested or we did > it from scratch it wouldn't be a huge burden. I would prefer TOML over anything else mentioned so far, and taking over pytoml would probably be beneficial, given the author's comments. -- ~Ethan~ From ethan at stoneleaf.us Mon May 9 20:21:17 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 17:21:17 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <572E74D2.9040308@nextday.fi> Message-ID: <5731297D.6090300@stoneleaf.us> On 05/07/2016 04:11 PM, Robert Collins wrote: > Actually, Nathaniel didn't test vendorability of the libraries, and pip > needs that. Pyyaml isn't in good shape there. Um, what does "vendorability" mean? -- ~Ethan~ From ethan at stoneleaf.us Mon May 9 20:22:47 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 17:22:47 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160509143033.29dcc3ea@anarchist.wooz.org> References: <20160509143033.29dcc3ea@anarchist.wooz.org> Message-ID: <573129D7.3090401@stoneleaf.us> On 05/09/2016 12:30 PM, Barry Warsaw wrote: > Also, I think it makes a lot of sense to go with YAML even if it isn't the > best most readable option. It's much more common than TOML so the learning > curve will be lessened. I'd rather learn some new syntax that is readable than be stuck with a pain in my eyes and in my brain. 
;) -- ~Ethan~ From donald at stufft.io Mon May 9 20:30:18 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 9 May 2016 20:30:18 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731297D.6090300@stoneleaf.us> References: <572E74D2.9040308@nextday.fi> <5731297D.6090300@stoneleaf.us> Message-ID: <7EF22EDA-3D94-438F-84F8-15C0B6975F21@stufft.io> > On May 9, 2016, at 8:21 PM, Ethan Furman wrote: > > Um, what does "vendorability" mean? How hard is it to bundle it with pip by copying the source files into pip._vendor.* ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ethan at stoneleaf.us Mon May 9 22:22:55 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 19:22:55 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731292E.4030601@stoneleaf.us> References: <5731292E.4030601@stoneleaf.us> Message-ID: <573145FF.50108@stoneleaf.us> On 05/09/2016 05:19 PM, Ethan Furman wrote: > On 05/07/2016 09:32 AM, Brett Cannon wrote: >> I also checked pytoml at https://github.com/avakar/pytoml and it looks >> like it's pretty stable; no changes in the past 5 months except to >> support Python 3.5 and only 3 issues. And the format is simple enough >> that if someone had to fork the code like Nathaniel suggested or we did >> it from scratch it wouldn't be a huge burden. After further consideration, and pytoml's author's comment about the spec changing without a version increase, I think we might be better off rolling our own. I like the general simplicity, and would stick with that, but I'd be a lot more comfortable if we had our spec that was more consistent. If there is interest I'll write the PEP and the tool. -- ~Ethan~ From ethan at stoneleaf.us Mon May 9 22:34:13 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 19:34:13 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: <573148A5.8020803@stoneleaf.us> I just found this on StackOverflow: http://stackoverflow.com/a/648487/208880 tl;dr ----- > Recently I was working upon a project and I realised that I wanted to > have conditionals inside my configuration file [...] > > I didn't want to write a mini-language, because unless I did it very > carefully I couldn't allow the flexibility that would be useful. > > Instead I decided that I'd have two forms: If the file started with > "#!" and was executable I'd parse the result of running it; otherwise > I'd read it as-is That approach seems like a win-win: the plain-vanilla static file can be promoted as best-practice, yet we have a fall-back for the complicated and edge cases. 
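To make that concrete, here is a minimal sketch of the two-form idea; the function name and the use of JSON for the static/emitted form are illustrative assumptions only, not anything the StackOverflow answer prescribes:

import json
import os
import subprocess

def load_config(path):
    # Executable form: a script starting with "#!" prints the effective
    # configuration on stdout; we run it and parse what it emits.
    with open(path, "rb") as f:
        starts_with_shebang = f.read(2) == b"#!"
    if starts_with_shebang and os.access(path, os.X_OK):
        output = subprocess.check_output([os.path.abspath(path)])
        return json.loads(output.decode("utf-8"))
    # Static form: the file itself is the configuration, read as-is.
    with open(path) as f:
        return json.load(f)

The static form stays the documented best practice; the executable form is only the escape hatch for the genuinely complicated cases.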
-- ~Ethan~ From lukasz at langa.pl Mon May 9 22:52:41 2016 From: lukasz at langa.pl (=?utf-8?Q?=C5=81ukasz_Langa?=) Date: Mon, 9 May 2016 19:52:41 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <573148A5.8020803@stoneleaf.us> References: <573148A5.8020803@stoneleaf.us> Message-ID: <8B042D62-888A-4D4F-98ED-10364206380D@langa.pl> Next thing you know we end up with a new setup.py, with imports, PYTHONPATH hacking et al ;-) > On May 9, 2016, at 7:34 PM, Ethan Furman wrote: > > I just found this on StackOverflow: > > http://stackoverflow.com/a/648487/208880 > > tl;dr > ----- > > > Recently I was working upon a project and I realised that I wanted to > > have conditionals inside my configuration file [...] > > > > I didn't want to write a mini-language, because unless I did it very > > carefully I couldn't allow the flexibility that would be useful. > > > > Instead I decided that I'd have two forms: If the file started with > > "#!" and was executable I'd parse the result of running it; otherwise > > I'd read it as-is > > That approach seems like a win-win: the plain-vanilla static file can be promoted as best-practice, yet we have a fall-back for the complicated and edge cases. > > -- > ~Ethan~ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Mon May 9 23:35:15 2016 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 9 May 2016 20:35:15 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <573145FF.50108@stoneleaf.us> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> Message-ID: On Mon, May 9, 2016 at 7:22 PM, Ethan Furman wrote: > On 05/09/2016 05:19 PM, Ethan Furman wrote: >> >> On 05/07/2016 09:32 AM, Brett Cannon wrote: > > >>> I also checked pytoml at https://github.com/avakar/pytoml and it looks >>> like it's pretty stable; no changes in the past 5 months except to >>> support Python 3.5 and only 3 issues. And the format is simple enough >>> that if someone had to fork the code like Nathaniel suggested or we did >>> it from scratch it wouldn't be a huge burden. > > > After further consideration, and pytoml's author's comment about the spec > changing without a version increase, I think we might be better off rolling > our own. He's a bit confused -- they didn't change 0.4.0; they made changes in their dev branch working towards 1.0.0 (some cleanups related to the date/time stuff I think?"). But of course when you go to github it shows you the current dev version, and the dev version has a prominent link at the top to the 0.4.0 tag, so if you're skimming it's easy to misread it as saying that what you're looking at is 0.4.0. -n -- Nathaniel J. Smith -- https://vorpus.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon May 9 23:37:18 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 9 May 2016 20:37:18 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <573145FF.50108@stoneleaf.us> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> Message-ID: Really? writing Yet Another Markup Language (YAML :-) ) CAN'T be the simplest, best option. 
> After further consideration, and pytoml's author's comment about the spec changing without a version increase, I think we might be better off rolling our own. > > I like the general simplicity, and would stick with that, but I'd be a lot > more comfortable if we had our spec that was more consistent. > If we're going to do that, then why not the 'simple part of yaml'. or Python literals. (if I recall, the main reason not to do that was that no other language has a lib to read it -- rolling out own does not solve that!) Or just go with JSON -- I'm annoyed by it at times, but it's not SO bad. (and you can kinda-sorta simulate comments with useless keys :-) { "comment": "this is just something i wanted to say here", ... } or we could do "JSON with comments" -- not hard to write a tiny pre-processor before passing it off to the json lib. Anyway -- let's avoid the temptation to role your own everything, and use something standard! -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Mon May 9 23:40:25 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Mon, 09 May 2016 20:40:25 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> Message-ID: <57315829.1090107@stoneleaf.us> On 05/09/2016 08:35 PM, Nathaniel Smith wrote: > On Mon, May 9, 2016 at 7:22 PM, Ethan Furman wrote: >> After further consideration, and pytoml's author's comment about the spec >> changing without a version increase, I think we might be better off >> rolling our own. > > He's a bit confused -- they didn't change 0.4.0; they made changes in > their dev branch working towards 1.0.0 (some cleanups related to the > date/time stuff I think?"). But of course when you go to github it shows > you the current dev version, and the dev version has a prominent link at > the top to the 0.4.0 tag, so if you're skimming it's easy to misread it > as saying that what you're looking at is 0.4.0. Ah, okay. Thanks for clarifying! -- ~Ethan~ From alex.gronholm at nextday.fi Tue May 10 03:38:51 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 10:38:51 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> Message-ID: <5731900B.3020200@nextday.fi> A few facts: * YAML is good enough for Salt, Ansible and numerous other common tools * The YAML standard has been stable for many years, unlike TOML which still hasn't even reached 1.0 * YAML has widespread tooling support, unlike TOML We all agree that JSON is not the solution. No comments, trailing commas etc. TOML isn't much better than ConfigParser in terms of representing nested structures. So far the ONLY objective problems with YAML seems to be the problematic implementation named PyYAML. If this is really the case, I'd gladly help build a better one just to prevent TOML from being chosen for this task. That we're even /considering/ building something as important as this on an unstable standard is pretty horrifying to me in itself. 10.05.2016, 06:37, Chris Barker kirjoitti: > Really? > > writing Yet Another Markup Language (YAML :-) ) CAN'T be the simplest, > best option. 
> > > After further consideration, and pytoml's author's comment about the > spec changing without a version increase, I think we might be better > off rolling our own. > > > I like the general simplicity, and would stick with that, but I'd > be a lot more comfortable if we had our spec that was more consistent. > > > If we're going to do that, then why not the 'simple part of yaml'. > > or Python literals. (if I recall, the main reason not to do that was > that no other language has a lib to read it -- rolling out own does > not solve that!) > > Or just go with JSON -- I'm annoyed by it at times, but it's not SO bad. > > (and you can kinda-sorta simulate comments with useless keys :-) > > { "comment": "this is just something i wanted to say here", > ... > } > > or we could do "JSON with comments" -- not hard to write a tiny > pre-processor before passing it off to the json lib. > > Anyway -- let's avoid the temptation to role your own everything, and > use something standard! > > -CHB > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Tue May 10 04:54:24 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 10 May 2016 09:54:24 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731900B.3020200@nextday.fi> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On 10 May 2016 at 08:38, Alex Gr?nholm wrote: > A few facts: > > YAML is good enough for Salt, Ansible and numerous other common tools > The YAML standard has been stable for many years, unlike TOML which still > hasn't even reached 1.0 > YAML has widespread tooling support, unlike TOML > > We all agree that JSON is not the solution. No comments, trailing commas > etc. > TOML isn't much better than ConfigParser in terms of representing nested > structures. Just as another data point, Cookiecutter started off using pyYAML, moved to ruamel.yaml, and ended up with their own (separated out into an independent project, but written specifically for Cookicutter) implementation of "some of" YAML, called poyo. I believe the reason they did this was repeated issues with the existing YAML libraries. And while the YAML-subset provided by poyo *looks* reasonable, I don't see any documentation of precisely *what* subset of YAML is implemented (apart from some examples). The poyo library is also very new, and I doubt it's seen much usage/testing outside of cookiecutter so far. I would love to use YAML. I really would. But for pip, we need a robust, easy to vendor Python implementation (with no C dependency) that is safe (see http://community.embarcadero.com/blogs/entry/yaml-and-remote-code-execution-38738). Writing our own is simply a way to end up with additional maintenance work, that we really don't have the resources for. > So far the ONLY objective problems with YAML seems to be the problematic > implementation named PyYAML. If this is really the case, I'd gladly help > build a better one just to prevent TOML from being chosen for this task. 
If you can get a robust, stable YAML library written fast enough to be an option for the PEP, then that would certainly be a possibility. But given that poyo has been under development for 5 months, are you going to be able to do better than that in a few weeks? (And see my comments above on poyo). > That we're even considering building something as important as this on an > unstable standard is pretty horrifying to me in itself. Well, IMO, the state of things in terms of config file formats (and not just in Python) is itself pretty dreadful - every time I write an application, I am astounded that there are no good options for something as basic as a configuration file format. Paul From alex.gronholm at nextday.fi Tue May 10 05:01:52 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 12:01:52 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <5731A380.9080009@nextday.fi> 10.05.2016, 11:54, Paul Moore kirjoitti: > On 10 May 2016 at 08:38, Alex Gr?nholm wrote: >> A few facts: >> >> YAML is good enough for Salt, Ansible and numerous other common tools >> The YAML standard has been stable for many years, unlike TOML which still >> hasn't even reached 1.0 >> YAML has widespread tooling support, unlike TOML >> >> We all agree that JSON is not the solution. No comments, trailing commas >> etc. >> TOML isn't much better than ConfigParser in terms of representing nested >> structures. > Just as another data point, Cookiecutter started off using pyYAML, > moved to ruamel.yaml, and ended up with their own (separated out into > an independent project, but written specifically for Cookicutter) > implementation of "some of" YAML, called poyo. I believe the reason > they did this was repeated issues with the existing YAML libraries. > And while the YAML-subset provided by poyo *looks* reasonable, I don't > see any documentation of precisely *what* subset of YAML is > implemented (apart from some examples). The poyo library is also very > new, and I doubt it's seen much usage/testing outside of cookiecutter > so far. > > I would love to use YAML. I really would. But for pip, we need a > robust, easy to vendor Python implementation (with no C dependency) > that is safe (see > http://community.embarcadero.com/blogs/entry/yaml-and-remote-code-execution-38738). > Writing our own is simply a way to end up with additional maintenance > work, that we really don't have the resources for. > >> So far the ONLY objective problems with YAML seems to be the problematic >> implementation named PyYAML. If this is really the case, I'd gladly help >> build a better one just to prevent TOML from being chosen for this task. > If you can get a robust, stable YAML library written fast enough to be > an option for the PEP, then that would certainly be a possibility. But > given that poyo has been under development for 5 months, are you going > to be able to do better than that in a few weeks? (And see my comments > above on poyo). Probably not all on my own, but if I get collaborators, then why not. The most important thing would be to have support for this so I know I'm not wasting my time. If PyYAML's test suite can be reused, that too would help a lot. Any new implementation would have to make sure they catch all the difficult corner cases. 
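The safety point is worth pinning down with a tiny example (this assumes PyYAML is installed; the "requires" key is purely illustrative):

import yaml  # PyYAML

DOC = """\
requires:
  - setuptools >= 27
  - numpy >= 1.10
"""

# safe_load restricts the output to plain Python data (dicts, lists,
# strings, numbers, ...), which is the behaviour any vendored subset
# would need to copy.
print(yaml.safe_load(DOC)["requires"])

# yaml.load() with the default Loader, by contrast, will construct
# arbitrary Python objects from tags such as
# !!python/object/apply:os.system -- the remote-code-execution issue
# linked above -- so it is a non-starter for untrusted package metadata.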
>> That we're even considering building something as important as this on an >> unstable standard is pretty horrifying to me in itself. > Well, IMO, the state of things in terms of config file formats (and > not just in Python) is itself pretty dreadful - every time I write an > application, I am astounded that there are no good options for > something as basic as a configuration file format. > > Paul I've used PyYAML in safe mode with no problems, but the API is...awkward, to say the least. I'm not wondering why Donald is -1 on vendoring it to packaging tools. From contact at ionelmc.ro Tue May 10 05:43:50 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Tue, 10 May 2016 12:43:50 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731900B.3020200@nextday.fi> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Tue, May 10, 2016 at 10:38 AM, Alex Gr?nholm wrote: > So far the ONLY objective problems with YAML seems to be the problematic > implementation named PyYAML. If this is really the case, I'd gladly help > build a better one just to prevent TOML from being chosen for this task. > That we're even *considering* building something as important as this on > an unstable standard is pretty horrifying to me in itself. > ?Just my two cents here: every time, but every every time,? I have to google around about how to create a multi-line string in YAML. There are too many ways to write the same thing. And lets not forget those damn sexagesimal literals. The complexity of that language is beyond repair, it's not a *library* problem. Just look at insanities like this ?or this ?. Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.gronholm at nextday.fi Tue May 10 06:12:08 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 13:12:08 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <5731B3F8.9040906@nextday.fi> 10.05.2016, 12:43, Ionel Cristian M?rie? kirjoitti: > > On Tue, May 10, 2016 at 10:38 AM, Alex Gr?nholm > > wrote: > > So far the ONLY objective problems with YAML seems to be the > problematic implementation named PyYAML. If this is really the > case, I'd gladly help build a better one just to prevent TOML from > being chosen for this task. That we're even /considering/ building > something as important as this on an unstable standard is pretty > horrifying to me in itself. > > > ?Just my two cents here: every time, but every every time,? I have to > google around about how to create a multi-line string in YAML. There > are too many ways to write the same thing. And lets not forget those > damn sexagesimal literals. The complexity of that language is beyond > repair, it's not a *library* problem. Just look at insanities like > this > > ?or this ? . > I have no problem with any of the examples you linked to. > Thanks, > -- IonelCristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex.gronholm at nextday.fi Tue May 10 07:09:59 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 14:09:59 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <573123BA.3000108@stoneleaf.us> References: <573123BA.3000108@stoneleaf.us> Message-ID: <5731C187.2010406@nextday.fi> This looks very close to what I'd like to have, but then we'd have the situation of an uncommon format with no tooling support, won't we? Assuming the actual config file is in xaml format. 10.05.2016, 02:56, Ethan Furman kirjoitti: > On 05/06/2016 07:59 PM, Nathaniel Smith wrote: > >> Here's that one-stop writeup/comparison of all the major configuration >> languages that I mentioned: >> >> https://gist.github.com/njsmith/78f68204c5d969f8c8bc645ef77d4a8f > > Very nice work-up, thanks! > > > However, you didn't include XML -- which, while absolutely horrid, can > be quite readable with the appropriate preprocessor, such as xaml [1] > : > > --- 8< whatever.xaml --------------------------------------------------- > !!! xml1.0 > ~base > > ~schema > // optional > ~version: 1 > > ~bootstrap > ~requirements > // Temporarily commented out 2016-01-10 > // magic-build-helper > ~setuptools > ~version: >= 27 > // for the new frobnicate feature > ~numpy > ~version: >= 1.10 > //Pinned until we get a fix for > // @https://github.com/cyberdyne/the-versionator/issues/123 > ~the-versionator > ~version: 0.13 > > // The owner of pypi name "flit" decides what goes under the > // extension: flit: > // key > ~extensions > ~flit > ~whatever: true > --- 8< ----------------------------------------------------------------- > > which ends up as: > > --- 8< whatever.xml ---------------------------------------------------- > > > > > > 1 > > > > > > > >= 27 > > > > >= 1.10 > > > > 0.13 > > > > > > > > > > true > > > --- 8< ----------------------------------------------------------------- > > -- > ~Ethan~ > > [1] https://pypi.python.org/pypi/xaml > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ncoghlan at gmail.com Tue May 10 08:26:35 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 10 May 2016 22:26:35 +1000 Subject: [Distutils] moving things forward (was: wheel including files it shouldn't) In-Reply-To: References: <571F7134.80709@stoneleaf.us> <8n801vq4ljpalzetnxnpjhx6i-0@mailer.nylas.com> <5720F940.4020307@stoneleaf.us> <1n09a68i2n7eap7zl9yf12k9a-0@mailer.nylas.com> <427B161C-364C-4636-A577-5781098C8A61@stufft.io> Message-ID: On 10 May 2016 at 07:08, Chris Barker wrote: > But I started this whole line of conversation because it seemed that there > was desire for: > > Ability to isolate the build environment. > Ability to better handle/manage non-python dependencies I don't care about the first one - between disposable VMs and Linux containers, we're already spoiled for choice when it comes to supporting isolated build environments, and every project still gaining net new contributors gets a natural test of this whenever someone tries to set up their own local build environment. 
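As a lighter-weight illustration of the same "disposable environment" idea (a venv rather than a VM or container, and POSIX paths assumed), a build can already be isolated with a few lines of standard library plus pip:

import os
import subprocess
import sys
import tempfile

# Create a throwaway virtual environment, then build the wheel inside it
# so the build neither sees nor pollutes the host's installed packages.
with tempfile.TemporaryDirectory() as env_dir:
    subprocess.check_call([sys.executable, "-m", "venv", env_dir])
    env_python = os.path.join(env_dir, "bin", "python")  # Scripts\ on Windows
    subprocess.check_call([env_python, "-m", "pip", "install", "wheel"])
    subprocess.check_call([env_python, "-m", "pip", "wheel", ".", "-w", "dist/"])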
I do care about the second one - Tennessee Leeuwenburg's draft PEP for that is something he put together at the PyCon Australia sprints, and it's the cornerstone of eventually being able to publish to PyPI and have RPMs, Debian packages, conda packages, homebrew packages, etc, "just happen" without any need for human intervention, even if your package has external binary dependencies. The fact that people would potentially be able to do "pip wheel" more easily (since they'd be presented with a clean "you need , but don't have it" error message rather than a cryptic build failure) is a nice bonus, but it's not the reason I personally care about the feature - I care about making more efficient use of distro packager's time, by only asking them to do things a computer couldn't be doing instead. The more complete we're able to make upstream dependency metadata, the less people will need to manually tweak the output of pyp2rpm (and similar tools for other platforms). >> (twice, actually, anchored on >> /usr/bin/python and /usr/bin/python3), it should eventually be >> feasible to have the upstream->conda pipeline fully automated as well. > > yeah -- there's been talk for ages of automatically building conda packages > (on the fly, maybe) from PyPi packages. But currently on conda-forge we've > decided to NOT try to do that -- it's turned out in practice that enough > pypi packages end up needing some hand-tweaking to build. So teh planned > workflow is now: > > Auto-build a conda build script for a PyPi package > Test it > Tweak it as required > Add it to conda-forge. > > Then -- *maybe* write a tool that auto-updates the PyPi based packages in a > chron job or whatever. > > So not quite a automated conda-PyPi bridge, but not a bad start. Yep, this is around the same level the Linux distros are generally at - a distro level config gets generated *once* (perhaps with a tool like pyp2rpm), but the result of that process then needs to be hand edited when new upstream releases come out, even if nothing significant has changed except the application code itself (no new build dependencies, no new build steps, etc). Utilities like Fedora's rebase-helper can help automate those updates, but I still consider the ideal arrangement to be for the upstream metadata to be of a sufficient standard that post-generation manual tweaking ceases to be necessary in most cases. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From randy at thesyrings.us Tue May 10 08:42:23 2016 From: randy at thesyrings.us (Randy Syring) Date: Tue, 10 May 2016 08:42:23 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731900B.3020200@nextday.fi> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <5731D72F.3070101@thesyrings.us> For what it's worth, I've been following this thread, and I like the idea of using TOML for all the "pro" reasons posted so far. It's newness or not reaching 1.0 yet don't bother me as I believe the plans to specify TOML 0.4 or optionally support the later versions if they don't cause problems makes a lot of sense. The fact that the parser is 300+ lines of code and can be easily vendored is also a big plus. Given that rust is using TOML, if Python adopts it as well, that is big enough "market share" for me and people will get used to it soon enough. *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" 
(Mark 8:36 ESV)/ On 05/10/2016 03:38 AM, Alex Gr?nholm wrote: > A few facts: > > * YAML is good enough for Salt, Ansible and numerous other common tools > * The YAML standard has been stable for many years, unlike TOML > which still hasn't even reached 1.0 > * YAML has widespread tooling support, unlike TOML > > We all agree that JSON is not the solution. No comments, trailing > commas etc. > TOML isn't much better than ConfigParser in terms of representing > nested structures. > So far the ONLY objective problems with YAML seems to be the > problematic implementation named PyYAML. If this is really the case, > I'd gladly help build a better one just to prevent TOML from being > chosen for this task. That we're even /considering/ building something > as important as this on an unstable standard is pretty horrifying to > me in itself. > > 10.05.2016, 06:37, Chris Barker kirjoitti: >> Really? >> >> writing Yet Another Markup Language (YAML :-) ) CAN'T be the >> simplest, best option. >> >> > After further consideration, and pytoml's author's comment about >> the spec changing without a version increase, I think we might be >> better off rolling our own. >> >> >> I like the general simplicity, and would stick with that, but I'd >> be a lot more comfortable if we had our spec that was more >> consistent. >> >> >> If we're going to do that, then why not the 'simple part of yaml'. >> >> or Python literals. (if I recall, the main reason not to do that was >> that no other language has a lib to read it -- rolling out own does >> not solve that!) >> >> Or just go with JSON -- I'm annoyed by it at times, but it's not SO bad. >> >> (and you can kinda-sorta simulate comments with useless keys :-) >> >> { "comment": "this is just something i wanted to say here", >> ... >> } >> >> or we could do "JSON with comments" -- not hard to write a tiny >> pre-processor before passing it off to the json lib. >> >> Anyway -- let's avoid the temptation to role your own everything, and >> use something standard! >> >> -CHB >> >> -- >> >> Christopher Barker, Ph.D. >> Oceanographer >> >> Emergency Response Division >> NOAA/NOS/OR&R (206) 526-6959 voice >> 7600 Sand Point Way NE (206) 526-6329 fax >> Seattle, WA 98115 (206) 526-6317 main reception >> >> Chris.Barker at noaa.gov >> >> >> _______________________________________________ >> Distutils-SIG maillist -Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue May 10 09:03:31 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 10 May 2016 23:03:31 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On 10 May 2016 at 18:54, Paul Moore wrote: > Well, IMO, the state of things in terms of config file formats (and > not just in Python) is itself pretty dreadful - every time I write an > application, I am astounded that there are no good options for > something as basic as a configuration file format. This is pretty normal for software - no good options, but a plethora of "good enough" ones. 
Hence https://xkcd.com/927/ :) We just have a particularly exacting use case here, since we want: - a format that's attractive for folks just learning to program in 2016 - a format that's attractive for folks that have been programming for 50+ years - a format that's easy to parse even in Python 2.6 - a format that's version control friendly - a format that's text editor friendly (syntax highlighting, etc) For me, the two leading contenders out of the current discussion have been: - the poyo subset of YAML 1.1 - TOML 0.4.0, as implemented by pytoml The "subset of PyYAML" approach turns this into a documentation exercise (which subset?), and also runs into the problem that poyo doesn't handle serialisation yet, only *reading* YAML files. For TOML, the main questions being asked are around how widely supported it is, and it's actually in a pretty good state on that front. 0.4.0 (rather than TOML in general) is stable by definition, and there are quite a few implementations of that across different languages: https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md#implementations The other big thing someone might want is a schema validator, and there I think it may be possible to just use jsonschema and validate the result of parsing the file, rather than validating the serialised form directly. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From leorochael at gmail.com Tue May 10 09:26:34 2016 From: leorochael at gmail.com (Leonardo Rochael Almeida) Date: Tue, 10 May 2016 13:26:34 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: In all this talk about using a YAML subset, I'm surprised no one mentioned YAMLish: https://pypi.python.org/pypi/yamlish It is a well defined subset of YAML and there are implementations in other programming languages. The problem with the 200+-lines-single-file library above is that it depends on PyYAML itself so, vendoring it will be challenging. Anyway, I think TOML 0.4.0 is good enough for our needs. On Tue, 10 May 2016 10:03 Nick Coghlan, wrote: > On 10 May 2016 at 18:54, Paul Moore wrote: > > Well, IMO, the state of things in terms of config file formats (and > > not just in Python) is itself pretty dreadful - every time I write an > > application, I am astounded that there are no good options for > > something as basic as a configuration file format. > > This is pretty normal for software - no good options, but a plethora > of "good enough" ones. Hence https://xkcd.com/927/ :) > > We just have a particularly exacting use case here, since we want: > > - a format that's attractive for folks just learning to program in 2016 > - a format that's attractive for folks that have been programming for 50+ > years > - a format that's easy to parse even in Python 2.6 > - a format that's version control friendly > - a format that's text editor friendly (syntax highlighting, etc) > > For me, the two leading contenders out of the current discussion have been: > > - the poyo subset of YAML 1.1 > - TOML 0.4.0, as implemented by pytoml > > The "subset of PyYAML" approach turns this into a documentation > exercise (which subset?), and also runs into the problem that poyo > doesn't handle serialisation yet, only *reading* YAML files. > > For TOML, the main questions being asked are around how widely > supported it is, and it's actually in a pretty good state on that > front. 
0.4.0 (rather than TOML in general) is stable by definition, > and there are quite a few implementations of that across different > languages: > https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md#implementations > > The other big thing someone might want is a schema validator, and > there I think it may be possible to just use jsonschema and validate > the result of parsing the file, rather than validating the serialised > form directly. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue May 10 09:52:18 2016 From: barry at python.org (Barry Warsaw) Date: Tue, 10 May 2016 08:52:18 -0500 Subject: [Distutils] comparison of configuration languages References: <572E74D2.9040308@nextday.fi> <5731297D.6090300@stoneleaf.us> <7EF22EDA-3D94-438F-84F8-15C0B6975F21@stufft.io> Message-ID: <20160510085218.1507f1bd@anarchist.wooz.org> On May 09, 2016, at 08:30 PM, Donald Stufft wrote: >How hard is it to bundle it with pip by copying the source files into >pip._vendor.* Every time another package is vendored, a kitten falls off a unicorn. ;) Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From solipsis at pitrou.net Tue May 10 10:16:57 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 10 May 2016 16:16:57 +0200 Subject: [Distutils] comparison of configuration languages References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <20160510161657.64c03837@fsol> On Tue, 10 May 2016 10:38:51 +0300 Alex Gr?nholm wrote: > TOML isn't much better than ConfigParser in terms of representing nested > structures. Indeed, that seems to be a strong point against TOML. If we don't care about nested structures that much, then ConfigParser should be more or less ok... Regards Antoine. From ethan at stoneleaf.us Tue May 10 10:20:57 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 May 2016 07:20:57 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <5731EE49.4030500@stoneleaf.us> On 05/10/2016 01:54 AM, Paul Moore wrote: > Writing our own is simply a way to end up with additional maintenance > work, that we really don't have the resources for. I like writing tools. If the format is one I can get behind I'm happy to be the resource for it. This rules out JSON and YAML, but leaves TOML in the running (as in: I'm happy to take over pytoml if its current author is agreeable). I'm also happy to create one: The Sane/Simple/Super Config Language (or .scl for short). It would be very similar to TOML, possibly a superset. 
Tools written so far:
- dbf [https://pypi.python.org/pypi/dbf] project I learned Python with (so some rough edges, but very serviceable)
- scription [https://pypi.python.org/pypi/scription] opinionated command-line parser
- antipathy [https://pypi.python.org/pypi/antipathy] file system path library
- aenum [https://pypi.python.org/pypi/aenum] totally awesome Enum library ;) (scaled-down version is the stdlib Enum)
- xaml [https://pypi.python.org/pypi/xaml] xml processor similar to Ruby's haml
PEPs (co)authored so far have also been tool/library oriented:
- PEP 409 [https://www.python.org/dev/peps/pep-0409] raise from None (dbf inspired)
- PEP 435 [https://www.python.org/dev/peps/pep-0435] Enum
- PEP 461 [https://www.python.org/dev/peps/pep-0461] %-interpolation for bytes & bytearrays
In other words: this is a serious offer. ;) -- ~Ethan~ From donald at stufft.io Tue May 10 10:24:10 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 May 2016 10:24:10 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160510161657.64c03837@fsol> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> Message-ID: > On May 10, 2016, at 10:16 AM, Antoine Pitrou wrote: > > On Tue, 10 May 2016 10:38:51 +0300 > Alex Gr?nholm wrote: >> TOML isn't much better than ConfigParser in terms of representing nested >> structures. > > Indeed, that seems to be a strong point against TOML. If we don't care > about nested structures that much, then ConfigParser should be more or > less ok... > TOML is infinitely better at nested structures than ConfigParser, given that TOML actually *supports* nested structures beyond a level of 1. The only way to get anything like:

[package.build]
dependencies = ["setuptools", "wheel"]

In ConfigParser is to add post-processing to the values, which then you're no longer a "ConfigParser" file, you're a "ConfigParser + Whatever random one off code you wrote to do post processing" file. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue May 10 10:30:54 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 10 May 2016 16:30:54 +0200 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> Message-ID: <20160510163054.72aa15e8@fsol> On Tue, 10 May 2016 10:24:10 -0400 Donald Stufft wrote: > > TOML is infinitely better at nested structures than ConfigParser, given that > TOML actually *supports* nested structures beyond a level of 1. The only way > to get anything like: > > [package.build] > dependencies = ["setuptools", "wheel"] > > In ConfigParser is to add post-processing to the values, which then you're no > longer a "ConfigParser" file, you're a "ConfigParser + Whatever random one off > code you wrote to do post processing" file. The post-processing doesn't seem difficult enough to make any fuss about it, IMHO. The most important concern here should be how usable the format is for end users, not whether implementations need 20 additional lines of code to work with it.
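To make the trade-off concrete, here is a minimal sketch of both sides (pytoml is assumed to be installed; the comma-splitting convention for the INI version is exactly the kind of ad-hoc post-processing being discussed, not any standard):

import configparser
import pytoml  # any TOML 0.4.0 parser would do

TOML_SRC = """
[package.build]
dependencies = ["setuptools", "wheel"]
"""

INI_SRC = """
[package.build]
dependencies = setuptools, wheel
"""

# TOML hands back real nesting and a real list with no extra conventions.
data = pytoml.loads(TOML_SRC)
print(data["package"]["build"]["dependencies"])    # ['setuptools', 'wheel']

# ConfigParser stops at one level of nesting and returns a plain string,
# so every consumer has to agree on the same post-processing step.
cp = configparser.ConfigParser()
cp.read_string(INI_SRC)
deps = [d.strip() for d in cp["package.build"]["dependencies"].split(",")]
print(deps)                                        # ['setuptools', 'wheel']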
(also, what is wrong with providing a pypa-specific library for parsing required configuration? are distlib / distil / pkgutil / the distutils-competitor-du-jour still alive?) Regards Antoine. From donald at stufft.io Tue May 10 10:55:38 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 May 2016 10:55:38 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160510163054.72aa15e8@fsol> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> Message-ID: > On May 10, 2016, at 10:30 AM, Antoine Pitrou wrote: > > On Tue, 10 May 2016 10:24:10 -0400 > Donald Stufft wrote: >> >> TOML is infinitely better at nested structured that ConfigParser, given that >> TOML actually *supports* nested structures beyond a level of 1. The only way >> to get anything like: >> >> [package.build] >> dependencies = ["setuptools", "wheel"] >> >> In ConfigParser is to add post-processing to the values, which then you're no >> longer a "ConfigParser" file, you're a "ConfigParser + Whatever random one off >> code you wrote to do post processing" file. > > The post-processing doesn't seem difficult enough to make any fuss > about it, IMHO. The most important concern here should be how usable > the format is for end users, not whether implementations need 20 > additional lines of code to work with it. I think TOML is more usable than ConfigParser and in particular I think that the adhoc post processing step makes ConfigParser inherently less usable because it forces a special syntax that is specific to this one file. It also means that there's no "right" answer for when you have two different implementations that interpret the same file differently. There's no spec to work off of so it ends up regressing to a smaller version of one of our current problems with the toolchain-- Everything is implementation defined by whatever the most popular tool at the time is. > > (also, what is wrong with providing a pypa-specific library for parsing > required configuration? are distlib / distil / pkgutil / the > distutils-competitor-du-jour still alive?) While we are providing unopinionated bindings like that for the PEPS, we don't want to make that "special" any more than it's just a reference implementation. Tools in languages other than Python will be expected to parse and read these files. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From solipsis at pitrou.net Tue May 10 11:00:02 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 10 May 2016 17:00:02 +0200 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> Message-ID: <20160510170002.22c44ff6@fsol> On Tue, 10 May 2016 10:55:38 -0400 Donald Stufft wrote: > > I think TOML is more usable than ConfigParser and in particular I think that > the adhoc post processing step makes ConfigParser inherently less usable > because it forces a special syntax that is specific to this one file. 
It also > means that there's no "right" answer for when you have two different > implementations that interpret the same file differently. That's true. OTOH, the question is how much better it is for users that it's worthwhile bothering them with a syntax change that will require (at one point or another) migrating existing files. TOML doesn't seem that compelling to me in that regard (quite less than YAML, and I'm not a YAML fan). (as an aside, if there's the question of forking an existing parser implementation for better vendorability, forking a YAML parser may be more useful to third-party folks than forking a TOML parser :-)) Regards Antoine. From alex.gronholm at nextday.fi Tue May 10 11:06:16 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 18:06:16 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160510170002.22c44ff6@fsol> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> Message-ID: <5731F8E8.8080402@nextday.fi> 10.05.2016, 18:00, Antoine Pitrou kirjoitti: > On Tue, 10 May 2016 10:55:38 -0400 > Donald Stufft wrote: >> I think TOML is more usable than ConfigParser and in particular I think that >> the adhoc post processing step makes ConfigParser inherently less usable >> because it forces a special syntax that is specific to this one file. It also >> means that there's no "right" answer for when you have two different >> implementations that interpret the same file differently. > That's true. OTOH, the question is how much better it is for users > that it's worthwhile bothering them with a syntax change that will > require (at one point or another) migrating existing files. TOML doesn't > seem that compelling to me in that regard (quite less than YAML, and I'm > not a YAML fan). > > (as an aside, if there's the question of forking an existing parser > implementation for better vendorability, forking a YAML parser may be > more useful to third-party folks than forking a TOML parser :-)) Amen to that, and that's exactly what I'd like to do. What should the parser be capable of to be accepted for this task? What are the requirements? > Regards > > Antoine. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Tue May 10 11:14:35 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 May 2016 11:14:35 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160510170002.22c44ff6@fsol> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> Message-ID: <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> > On May 10, 2016, at 11:00 AM, Antoine Pitrou wrote: > > (as an aside, if there's the question of forking an existing parser > implementation for better vendorability, forking a YAML parser may be > more useful to third-party folks than forking a TOML parser :-)) I?m seeing what I can come up with (https://github.com/dstufft/yaml) though I don?t know that I feel like actually maintaining whatever it is I end up figuring out there. 
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From alex.gronholm at nextday.fi Tue May 10 11:16:36 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 18:16:36 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> Message-ID: <5731FB54.3010709@nextday.fi> 10.05.2016, 18:14, Donald Stufft kirjoitti: >> On May 10, 2016, at 11:00 AM, Antoine Pitrou wrote: >> >> (as an aside, if there's the question of forking an existing parser >> implementation for better vendorability, forking a YAML parser may be >> more useful to third-party folks than forking a TOML parser :-)) > > I?m seeing what I can come up with (https://github.com/dstufft/yaml) though > I don?t know that I feel like actually maintaining whatever it is I end up > figuring out there. What exactly are your goals here? > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Tue May 10 11:26:47 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 May 2016 08:26:47 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> Message-ID: <5731FDB7.6090406@stoneleaf.us> On 05/10/2016 08:14 AM, Donald Stufft wrote: >> On May 10, 2016, at 11:00 AM, Antoine Pitrou wrote: >> (as an aside, if there's the question of forking an existing parser >> implementation for better vendorability, forking a YAML parser may be >> more useful to third-party folks than forking a TOML parser :-)) > > I?m seeing what I can come up with (https://github.com/dstufft/yaml) though > I don?t know that I feel like actually maintaining whatever it is I end up > figuring out there. Please no. I'd rather do xml than yaml. 
-- ~Ethan~ From alex.gronholm at nextday.fi Tue May 10 11:41:21 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 18:41:21 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5731FDB7.6090406@stoneleaf.us> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> <5731FDB7.6090406@stoneleaf.us> Message-ID: <57320121.4080908@nextday.fi> 10.05.2016, 18:26, Ethan Furman kirjoitti: > On 05/10/2016 08:14 AM, Donald Stufft wrote: >>> On May 10, 2016, at 11:00 AM, Antoine Pitrou wrote: > >>> (as an aside, if there's the question of forking an existing parser >>> implementation for better vendorability, forking a YAML parser may be >>> more useful to third-party folks than forking a TOML parser :-)) >> >> I?m seeing what I can come up with (https://github.com/dstufft/yaml) >> though >> I don?t know that I feel like actually maintaining whatever it is I >> end up >> figuring out there. > > Please no. I'd rather do xml than yaml. > Why do you hate it so much? I strongly prefer YAML to anything else I've seen here. > -- > ~Ethan~ > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From p.f.moore at gmail.com Tue May 10 12:12:10 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 10 May 2016 17:12:10 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: <20160510170002.22c44ff6@fsol> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> Message-ID: On 10 May 2016 at 16:00, Antoine Pitrou wrote: > On Tue, 10 May 2016 10:55:38 -0400 > Donald Stufft wrote: >> >> I think TOML is more usable than ConfigParser and in particular I think that >> the adhoc post processing step makes ConfigParser inherently less usable >> because it forces a special syntax that is specific to this one file. It also >> means that there's no "right" answer for when you have two different >> implementations that interpret the same file differently. > > That's true. OTOH, the question is how much better it is for users > that it's worthwhile bothering them with a syntax change that will > require (at one point or another) migrating existing files. TOML doesn't > seem that compelling to me in that regard (quite less than YAML, and I'm > not a YAML fan). The one aspect that's missing from this discussion (largely because the PEP's still in the process of being written) is a clear statement of what capabilities we *need* from a config file format. I suspect we'll need: - Lists, for dependencies - Nesting (although this is more for "future expansion" by allowing us to namespace the keys) If we're looking to put package metadata in this file, we may also need multi-line string capabilities (for things like descriptions). Before we decide on a format based solely on tool support ("how easy is it for us to write the code?") we should probably see examples of the expected file config using the various different formats ("how easy is the format for users to use?"). 
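As one purely illustrative data point (hypothetical key names, not a proposed schema), TOML 0.4.0 covers all three of those capabilities fairly directly:

[package]
name = "example-project"
description = """
Multi-line strings are available
for things like descriptions.
"""

[package.build-system]
requires = ["setuptools >= 27", "wheel"]

A YAML-subset equivalent would be about as short; the real question, as noted, is which of them casual packagers find easier to read and to get right.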
Paul From mniedzielski at nasuni.com Tue May 10 12:15:59 2016 From: mniedzielski at nasuni.com (Mark Niedzielski) Date: Tue, 10 May 2016 12:15:59 -0400 Subject: [Distutils] deprecating pip install --target In-Reply-To: References: Message-ID: <608eef16-4691-4e23-61cd-bfdcf338723a@nasuni.com> A new, important use of --target is in creating installation bundles for AWS Lambda functions using a Python runtime. AWS expects a zip file containing the primary source file and all dependent packages. 'pip install -t' creates exactly the structure needed. Further, because a 'pip install -t .' has the same behavior as an 'npm install' we can easily use the same packaging code for Python and Node Lambda function bundles. Thanks. -Mark This e-mail message and all attachments transmitted with it may contain privileged and/or confidential information intended solely for the use of the addressee(s). If the reader of this message is not the intended recipient, you are hereby notified that any reading, dissemination, distribution, copying, forwarding or other use of this message or its attachments is strictly prohibited. If you have received this message in error, please notify the sender immediately and delete this message, all attachments and all copies and backups thereof. From tds333 at mailbox.org Tue May 10 08:40:33 2016 From: tds333 at mailbox.org (Wolfgang) Date: Tue, 10 May 2016 14:40:33 +0200 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: Hi, I have done a lot of configuration stuff. I also used the Python ConfigParser (backported version from 3.x). It can be improved, yes. ;-) But the INI style syntax is known and there are tools and parsers available. It is a simple format and this is good. Because it is still human readable and simple configurations can be done with it. Configurations should be simple! But if there are complex requirements the ConfigParser can be extended. (done this for custom interpolation and config includes) Also the allowed comment characters can be customized and limited to '#'. So why not use the ConfigParser available with Python and extend it to meet the requirements. Custom getters can be written and for the complex stuff ast.literal_eval() can be used to safely parse the complex list of requirements with comments. Like registering a new converter:

converters = {"literal": ast.literal_eval}

used then with:

config.getliteral("bootstrap", "requirements")

Config example:

[bootstrap]
requirements = [
    "setuptools >= 27",
    "numpy >= 1.10",
    "the-versionator == 0.13",
    ]

My opinion is: keep the configuration and the syntax used for this simple. YAML is not simple; even the spec is complex. TOML is not mature enough and not well known enough. INI style is already used for setup.cfg. Use it in other places too. Extend it where the need is to extend it and document this. If extended with Python syntax, everyone should be familiar with it. The tools are used for/with Python.
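Spelled out as a runnable sketch (Python 3.5+, since the converters= keyword is needed; section and option names follow the example above):

import ast
import configparser

CONFIG = """
[bootstrap]
requirements = [
    "setuptools >= 27",
    "numpy >= 1.10",
    "the-versionator == 0.13",
    ]
"""

# Registering the converter adds a getliteral() method that runs the raw
# option value through ast.literal_eval(), turning the indented block
# above into an ordinary Python list -- without eval()'s safety problems.
config = configparser.ConfigParser(converters={"literal": ast.literal_eval})
config.read_string(CONFIG)
print(config.getliteral("bootstrap", "requirements"))
# ['setuptools >= 27', 'numpy >= 1.10', 'the-versionator == 0.13']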
Regards, Wolfgang From ethan at stoneleaf.us Tue May 10 12:35:14 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 May 2016 09:35:14 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <57320121.4080908@nextday.fi> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> <5731FDB7.6090406@stoneleaf.us> <57320121.4080908@nextday.fi> Message-ID: <57320DC2.5000604@stoneleaf.us> On 05/10/2016 08:41 AM, Alex Gr?nholm wrote: > 10.05.2016, 18:26, Ethan Furman kirjoitti: >> Please no. I'd rather do xml than yaml. > > Why do you hate it so much? I strongly prefer YAML to anything else I've > seen here. It's too complicated and error-prone. If we want buy-in from casual packagers then our configuration language needs to be simple to understand and simple to get right. (The amount of leading whitespace on a single line changes your data type? Really?? 0644 and 0x644 both map to 420? and 644 maps to 644?) https://docs.saltstack.com/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html https://ciaranm.wordpress.com/2009/03/01/yaml-sucks-gems-sucks-syck-sucks/ While YAML is more easily readable than XML, with XML you already know you're in hell so you tread more carefully. ;) If we want to take the good ideas of YAML and make our own thing I'm okay with that -- but not YAML itself. -- ~Ethan~ From alex.gronholm at nextday.fi Tue May 10 12:38:33 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 10 May 2016 19:38:33 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: <57320DC2.5000604@stoneleaf.us> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> <5731FDB7.6090406@stoneleaf.us> <57320121.4080908@nextday.fi> <57320DC2.5000604@stoneleaf.us> Message-ID: <57320E89.40006@nextday.fi> 10.05.2016, 19:35, Ethan Furman kirjoitti: > On 05/10/2016 08:41 AM, Alex Gr?nholm wrote: >> 10.05.2016, 18:26, Ethan Furman kirjoitti: > >>> Please no. I'd rather do xml than yaml. >> >> Why do you hate it so much? I strongly prefer YAML to anything else I've >> seen here. > > It's too complicated and error-prone. If we want buy-in from casual > packagers then our configuration language needs to be simple to > understand and simple to get right. (The amount of leading whitespace > on a single line changes your data type? Really?? 0644 and 0x644 both > map to 420? and 644 maps to 644?) Do you actually expect to use these in your project's configuration? No? Then why is this a problem for you in this case? > > https://docs.saltstack.com/en/latest/topics/troubleshooting/yaml_idiosyncrasies.html > > > https://ciaranm.wordpress.com/2009/03/01/yaml-sucks-gems-sucks-syck-sucks/ > > > While YAML is more easily readable than XML, with XML you already know > you're in hell so you tread more carefully. ;) > > If we want to take the good ideas of YAML and make our own thing I'm > okay with that -- but not YAML itself. 
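For anyone who hasn't been bitten by it, the implicit-typing behaviour being complained about here is easy to demonstrate with PyYAML's own safe_load (assuming PyYAML is installed):

import yaml  # PyYAML

print(yaml.safe_load("mode: 0644"))   # {'mode': 420}     -- octal integer
print(yaml.safe_load("answer: no"))   # {'answer': False} -- YAML 1.1 boolean
print(yaml.safe_load("time: 22:22"))  # {'time': 1342}    -- sexagesimal integer

Unquoted scalars are silently resolved to whatever type the YAML 1.1 rules guess, which is exactly the kind of surprise a packaging config format needs to avoid.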
> > -- > ~Ethan~ > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From ethan at stoneleaf.us Tue May 10 12:47:34 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 May 2016 09:47:34 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: <57320E89.40006@nextday.fi> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> <5731FDB7.6090406@stoneleaf.us> <57320121.4080908@nextday.fi> <57320DC2.5000604@stoneleaf.us> <57320E89.40006@nextday.fi> Message-ID: <573210A6.30909@stoneleaf.us> On 05/10/2016 09:38 AM, Alex Gr?nholm wrote: > 10.05.2016, 19:35, Ethan Furman kirjoitti: >> It's too complicated and error-prone. If we want buy-in from casual >> packagers then our configuration language needs to be simple to >> understand and simple to get right. (The amount of leading whitespace >> on a single line changes your data type? Really?? 0644 and 0x644 both >> map to 420? and 644 maps to 644?) > > Do you actually expect to use these in your project's configuration? I might. A couple in-house projects have scripts that only root should run. > No? Don't assume for me, and don't assume for the hundreds (thousands? tens of thousands?) of others who will be using this. > Then why is this a problem for you in this case? See above. -- ~Ethan~ From p.f.moore at gmail.com Tue May 10 13:09:39 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 10 May 2016 18:09:39 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: On 10 May 2016 at 13:40, Wolfgang wrote: > So why not use the ConfigParser available with Python and extend it to meet > the requirements. Custom getters can be written and for the complex > stuff ast.literal_eval() can be used to safely parse the complex list > of requirements with comments. Sadly, pip needs to work on Python 2.6+, so we can't use these new features of configparser. Paul From brett at python.org Tue May 10 13:33:49 2016 From: brett at python.org (Brett Cannon) Date: Tue, 10 May 2016 17:33:49 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: Message-ID: Just so everyone knows, I'm ignoring this thread as the PEP I'm drafting with Donald and Nathaniel is nearly finished and thus has already settled on the file format discussion and I haven't heard a new point made on any file format proposal that has already been brought up previously. On Tue, 10 May 2016 at 10:10 Paul Moore wrote: > On 10 May 2016 at 13:40, Wolfgang wrote: > > So why not use the ConfigParser available with Python and extend it to > meet > > the requirements. Custom getters can be written and for the complex > > stuff ast.literal_eval() can be used to safely parse the complex list > > of requirements with comments. > > Sadly, pip needs to work on Python 2.6+, so we can't use these new > features of configparser. > > Paul > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Tue May 10 20:39:31 2016 From: brett at python.org (Brett Cannon) Date: Wed, 11 May 2016 00:39:31 +0000 Subject: [Distutils] PEP for specifying build dependencies Message-ID: Donald, Nathaniel, and I have finished our proposed PEP for specifying a projects' build dependencies. The PEP is being kept at https://github.com/brettcannon/build-deps-pep, so if you find spelling mistakes and grammatical errors please feel free to send a PR to fix them. The only open issue in the PEP at the moment is the bikeshedding topic of what to name the sub-section containing the requirements: `[package.build]` or `[package.build-system]` (we couldn't reach consensus among the three of us on this). Otherwise the three of us are rather happy with the way the PEP has turned out and look forward to this being the first step towards allowing projects to customize their build process better! ----- PEP: NNN Title: Specifying build dependencies for Python Software Packages Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Nathaniel Smith , Donald Stufft BDFL-Delegate: Nick Coghlan Discussions-To: distutils-sig Status: Draft Type: Standards Track Content-Type: text/x-rst Created: NN-Mmm-2016 Post-History: NN-Mmm-2016 Abstract ======== This PEP specifies how Python software packages should specify their build dependencies (i.e. what dependencies are required to go from source checkout to built wheel). As part of this specification, a new configuration file is introduced for software packages to use to specify their build dependencies (with the expectation that the same configuration file will be used for future configuration details). Rationale ========= When Python first developed its tooling for building distributions of software for projects, distutils [#distutils]_ was the chosen solution. As time went on, setuptools [#setuptools]_ gained popularity to add some features on top of distutils. Both used the concept of a ``setup.py`` file that project maintainers executed to build distributions of their software (as well as users to install said distribution). Using an executable file to specify build requirements under distutils isn't an issue as distutils is part of Python's standard library. Having the build tool as part of Python means that a ``setup.py`` has no external dependency that a project maintainer needs to worry about to build a distribution of their project. There was no need to specify any dependency information as the only dependency is Python. But when a project chooses to use setuptools, the use of an executable file like ``setup.py`` becomes an issue. You can't execute a ``setup.py`` file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the ``setup.py`` file where that information is stored. It's a catch-22 of a file not being runnable without knowing its own contents which can't be known programmatically unless you run the file. Setuptools tried to solve this with a ``setup_requires`` argument to its ``setup()`` function [#setup_args]_. This solution has a number of issues, such as: * No tooling (besides setuptools itself) can access this information without executing the ``setup.py``, but ``setup.py`` can't be executed without having these items installed. 
* While setuptools itself will install anything listed in this, they won't be installed until *during* the execution of the ``setup()`` function, which means that the only way to actually use anything added here is through increasingly complex machinations that delay the import and usage of these modules until later on in the execution of the ``setup()`` function. * This cannot include ``setuptools`` itself nor can it include a replacement to ``setuptools``, which means that projects such as ``numpy.distutils`` are largely incapable of utilizing it and projects cannot take advantage of newer setuptools features until their users naturally upgrade the version of setuptools to a newer one. * The items listed in ``setup_requires`` get implicitly installed whenever you execute the ``setup.py`` but one of the common ways that the ``setup.py`` is executed is via another tool, such as ``pip``, which is already managing dependencies. This means that a command like ``pip install spam`` might end up having both pip and setuptools downloading and installing packages and end users needing to configure *both* tools (and for ``setuptools`` without being in control of the invocation) to change settings like which repository it installs from. It also means that users need to be aware of the discovery rules for both tools, as one may support different package formats or determine the latest version differently. This has culminated in a situation where use of ``setup_requires`` is rare, where projects tend to either simply copy and paste snippets between ``setup.py`` files or they eschew it altogether in favor of simply documenting elsewhere what they expect the user to have manually installed prior to attempting to build or install their project. All of this has led pip [#pip]_ to simply assume that setuptools is necessary when executing a ``setup.py`` file. The problem with this, though, is that it doesn't scale if another project begins to gain traction in the community as setuptools has. It also prevents other projects from gaining traction due to the friction required to use it with a project when pip can't infer the fact that something other than setuptools is required. This PEP attempts to rectify the situation by specifying a way to list the build dependencies of a project in a declarative fashion in a specific file. This allows a project to list what build dependencies it has to go from e.g. source checkout to wheel, while not falling into the catch-22 trap that a ``setup.py`` has where tooling can't infer what a project needs to build itself. Implementing this PEP will allow projects to specify what they depend on upfront so that tools like pip can make sure that they are installed in order to build the project (the actual driving of the build process is not within the scope of this PEP). Specification ============= The build dependencies will be stored in a file named ``pyproject.toml`` that is written in the TOML format [#toml]_. This format was chosen as it is human-usable (unlike JSON [#json]_), it is flexible enough (unlike configparser [#configparser]_), stems from a standard (also unlike configparser [#configparser]_), and it is not overly complex (unlike YAML [#yaml]_). The TOML format is already in use by the Rust community as part of their Cargo package manager [#cargo]_, and in private email its developers stated they have been quite happy with their choice of TOML. A more thorough discussion as to why various alternatives were not chosen can be read in the `Other file formats`_ section.
A top-level ``[package]`` table will represent details specific to the package that the project contains. A ``semantics-version`` key within the ``[package]`` table will represent the semantic version that the ``[package]`` table represents. It will always be set to an integer and will default to a value of ``1`` if unspecified. The version will only be updated when the semantics or absence of a key or sub-table in the ``[package]`` table cannot be interpreted in a backwards-compatible fashion (e.g. the version does not need to change if a new table is added to the semantics of the file, but if a pre-existing field changes its meaning or the behavior when a field is absent changes then the semantic version will need to change). Changes to the meaning of the versions are expected to occur through a PEP. Projects are expected to check the value of the ``semantics-version`` field in their code appropriately. The expectation is tools will do something along the lines of:: if data["package"].get("semantics-version", 1) != 1: raise Exception # Whatever exception is appropriate for the tool. There will be a ``[package.build-system]`` sub-table in the configuration file to store build-related data (although the exact name of the sub-table is an `open issue <#name-of-the-build-related-sub-table>`__ as ``[package.build]`` is another possibility). Initially only one key of the table will be valid: ``requires``. That key will have a value of a list of strings representing the PEP 508 dependencies required to build the project as a wheel [#wheel]_ (currently that means what dependencies are required to execute a ``setup.py`` file to generate a wheel). For the vast majority of Python projects that rely upon setuptools, the ``pyproject.toml`` file will be:: [package.build-system] requires = ['setuptools', 'wheel'] # PEP 508 specifications. Or, the equivalent but more verbose:: [package] semantics-version = 1 # Optional; defaults to 1. # Indentation is optional in TOML and has no semantic meaning. [package.build-system] requires = ['setuptools', 'wheel'] # PEP 508 specifications. Because the use of setuptools and wheel are so expansive in the community at the moment, build tools are expected to use the example configuration file above as their default semantics when a ``pyproject.toml`` file is not present. All other top-level keys and tables are reserved for future use by other PEPs except for the ``[tool]`` table. Within that table, tools can have users specify configuration data as long as they use a sub-table within ``[tool]``, e.g. the `flit `_ tool might store its configuration in ``[tool.flit]``. We need some mechanism to allocate names within the ``tool.*`` namespace, to make sure that different projects don't attempt to use the same sub-table and collide. Our rule is that a project can use the subtable ``tool.$NAME`` if, and only if, they own the entry for ``$NAME`` in the Cheeseshop/PyPI. Open Issues =========== Name of the build-related sub-table ----------------------------------- The authors of this PEP couldn't decide between the names ``[package.build]`` and ``[package.build-system]``, and so it is an open issue over which one to go with. Rejected Ideas ============== Other semantic version key names -------------------------------- Names other than ``semantics-version`` were considered to represent the version of semantics that the configuration file was written for. 
Both ``configuration-version`` and ``metadata-version`` were considered, but were rejected due to how people may confuse the key as representing a version of the file's contents instead of the version of semantics that the file is interpreted under. A flatter namespace ------------------- An earlier draft of this PEP lacked the ``[package]`` table and had all of its contained values one level higher. In the end it was decided it would be better to scope package-related details to its own table for clearer scoping and easier expansion of this file for future use. Other file formats ------------------ Several other file formats were put forward for consideration, all rejected for various reasons. Key requirements were that the format be editable by human beings and have an implementation that can be vendored easily by projects. This outright excluded certain formats like XML which are not friendly towards human beings and were never seriously discussed. JSON '''' The JSON format [#json]_ was initially considered but quickly rejected. While great as a human-readable, string-based data exchange format, the syntax does not lend itself to easy editing by a human being (e.g. the syntax is more verbose than necessary while not allowing for comments). An example JSON file for the proposed data would be:: { "build": { "requires": [ "setuptools", "wheel>=0.27" ] } } YAML '''' The YAML format [#yaml]_ was designed to be a superset of JSON [#json]_ while being easier to work with by hand. There are three main issues with YAML. One is that the specification is large: 86 pages if printed on letter-sized paper. That leaves the possibility that someone may use a feature of YAML that works with one parser but not another. It has been suggested to standardize on a subset, but that basically means creating a new standard specific to this file which is not tractable long-term. Two is that YAML itself is not safe by default. The specification allows for the arbitrary execution of code which is best avoided when dealing with configuration data. It is of course possible to avoid this behavior -- for example, PyYAML provides a ``safe_load`` operation -- but if any tool carelessly uses ``load`` instead then it opens itself up to arbitrary code execution. While this PEP is focused on the building of projects which inherently involves code execution, other configuration data such as project name and version number may end up in the same file someday where arbitrary code execution is not desired. And finally, the most popular Python implementation of YAML is PyYAML [#pyyaml]_ which is a large project of a few thousand lines of code and an optional C extension module. While in and of itself this isn't necessarily an issue, this becomes more of a problem for projects like pip where they would most likely need to vendor PyYAML as a dependency so as to be fully self-contained (otherwise you end up with your install tool needing an install tool to work). A proof-of-concept re-working of PyYAML has been done to see how easy it would be to potentially vendor a simpler version of the library which shows it is a possibility. An example YAML file is:: build: requires: - setuptools - wheel>=0.27 configparser '''''''''''' An INI-style configuration file based on what configparser [#configparser]_ accepts was considered. Unfortunately there is no specification of what configparser accepts, leading to support skew between versions.
For instance, what ConfigParser in Python 2.7 accepts is not the same as what configparser in Python 3 accepts. While one could standardize on what Python 3 accepts and simply vendor the backport of the configparser module, that does mean this PEP would have to codify that the backport of configparser must be used by all projects wishing to consume the metadata specified by this PEP. This is overly restrictive and could lead to confusion if someone is not aware that a specific version of configparser is expected. An example INI file is:: [build] requires = setuptools wheel>=0.27 Python literals ''''''''''''''' Someone proposed using Python literals as the configuration format. All Python programmers would be used to the format, there would implicitly be no third-party dependency to read the configuration data, and it can be safe if something like ``ast.literal_eval()`` [#ast_literal_eval]_ is used. The problem is that to use Python literals you either end up with something no better than JSON, or you end up with something like what Bazel [#bazel]_ uses. In the former the issues are the same as JSON. In the latter, you end up with people consistently asking for more flexibility as users have a hard time ignoring the desire to use some feature of Python that they think they need (one of the co-authors has direct experience with this from the internal usage of Bazel at Google). There is no example format as one was never put forward for consideration. Other file names ---------------- Several other file names were considered and rejected (although this is very much a bikeshedding topic, and so the decision comes down to mostly taste). pysettings.toml Most reasonable alternative. pypa.toml While it makes sense to reference the PyPA [#pypa]_, it is a somewhat niche term. It's better to have the file name make sense without having domain-specific knowledge. pybuild.toml From the restrictive perspective of this PEP this filename makes sense, but if any non-build metadata ever gets added to the file then the name ceases to make sense. pip.toml Too tool-specific. meta.toml Too generic; project may want to have its own metadata file. setup.toml While keeping with tradition thanks to ``setup.py``, it does not necessarily match what the file may contain in the future (e.g. is knowing the name of a project inherently part of its setup?). pymeta.toml Not obvious to newcomers to programming and/or Python. pypackage.toml & pypackaging.toml Name conflation of what a "package" is (project versus namespace). pydevelop.toml The file may contain details not specific to development. pysource.toml Not directly related to source code. pytools.toml Misleading as the file is (currently) aimed at project management. dstufft.toml Too person-specific. ;) References ========== .. [#distutils] distutils (https://docs.python.org/3/library/distutils.html#module-distutils) .. [#setuptools] setuptools (https://pypi.python.org/pypi/setuptools) .. [#setup_args] setuptools: New and Changed setup() Keywords (http://pythonhosted.org/setuptools/setuptools.html#new-and-changed-setup-keywords) .. [#pip] pip (https://pypi.python.org/pypi/pip) .. [#wheel] wheel (https://pypi.python.org/pypi/wheel) .. [#toml] TOML (https://github.com/toml-lang/toml) .. [#json] JSON (http://json.org/) .. [#yaml] YAML (http://yaml.org/) .. [#configparser] configparser (https://docs.python.org/3/library/configparser.html#module-configparser) .. [#pyyaml] PyYAML (https://pypi.python.org/pypi/PyYAML) .. [#pypa] PyPA (https://www.pypa.io) ..
[#bazel] Bazel (http://bazel.io/) .. [#ast_literal_eval] ``ast.literal_eval()`` (https://docs.python.org/3/library/ast.html#ast.literal_eval) .. [#cargo] Cargo, Rust's package manager (http://doc.crates.io/) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue May 10 21:24:47 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 May 2016 18:24:47 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Tue, May 10, 2016 at 5:39 PM, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. The PEP is being kept at > https://github.com/brettcannon/build-deps-pep, so if you find spelling > mistakes and grammatical errors please feel free to send a PR to fix them. Thanks Brett! > The only open issue in the PEP at the moment is the bikeshedding topic of > what to name the sub-section containing the requirements: `[package.build]` > or `[package.build-system]` (we couldn't reach consensus among the three of > us on this). To maybe help nudge initial bikeshedding on this in useful directions, the main arguments (IIUC) were: In favor of "build-system": setup.py is used for more than just the strict "build" (source tree/sdist -> wheel) phase. For example, setup.py is also used to do VCS checkout -> sdist. And it seems likely that the new build system abstraction thing will grow similar capabilities at some point. So calling the section just "build" might be misleading. In favor of "build": it's just shorter and reads better. Maybe there's a third option that's even better -- [package.automation] ? Maybe it doesn't matter that much :-) -n -- Nathaniel J. Smith -- https://vorpus.org From robertc at robertcollins.net Tue May 10 21:27:18 2016 From: robertc at robertcollins.net (Robert Collins) Date: Wed, 11 May 2016 13:27:18 +1200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 11 May 2016 at 12:39, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. The PEP is being kept at > https://github.com/brettcannon/build-deps-pep, so if you find spelling > mistakes and grammatical errors please feel free to send a PR to fix them. > > The only open issue in the PEP at the moment is the bikeshedding topic of > what to name the sub-section containing the requirements: `[package.build]` > or `[package.build-system]` (we couldn't reach consensus among the three of > us on this). Otherwise the three of us are rather happy with the way the PEP > has turned out and look forward to this being the first step towards > allowing projects to customize their build process better! I find calling this build dependencies confusing, but perhaps thats just me. Right now the work flow is this: unpack tarball run setup.py egg-info introspect that for dependencies download and install such dependencies (recursively) [ in future, it would resolve ] run setup.py install || setup.py wheel install 1) What would pip do when it encounters one of these files? 
unpack tarball introspect and prepare an isolated environment with the specified dependencies run setup.py egg_info download and install such runtime dependencies (recursively) [ in future, it would resolve ] run setup.py install || setup.py wheel install ? Right now setup.py dependencies are turing complete, and its a useful escape valve - this design seems to have no provision for that (previous designs we've had here have had that). If the declared dependencies are merely those needed to be able to invoke the build system, rather than those needed to be able to successfully build, it would preserve that escape valve. 2) what do we expect setuptools to do here? Is it reasonable to introspect this file and union it with setup_requires? 3) Why does this specify ' what dependencies are required to go from source checkout to built wheel' ? - build systems also run tests, generate docs and man pages, produce other artifacts. Perhaps making either the target more specific (wheel_requires = ...) or the description less specific ('dependencies required to invoke the build system') would be good. I am not suggesting that we model all the things a build system might do: thats YAGNI at best, scope creep at worst. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From graffatcolmingov at gmail.com Tue May 10 21:43:07 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 10 May 2016 20:43:07 -0500 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Tue, May 10, 2016 at 8:24 PM, Nathaniel Smith wrote: > On Tue, May 10, 2016 at 5:39 PM, Brett Cannon wrote: >> Donald, Nathaniel, and I have finished our proposed PEP for specifying a >> projects' build dependencies. The PEP is being kept at >> https://github.com/brettcannon/build-deps-pep, so if you find spelling >> mistakes and grammatical errors please feel free to send a PR to fix them. > > Thanks Brett! > >> The only open issue in the PEP at the moment is the bikeshedding topic of >> what to name the sub-section containing the requirements: `[package.build]` >> or `[package.build-system]` (we couldn't reach consensus among the three of >> us on this). > > To maybe help nudge initial bikeshedding on this in useful directions, > the main arguments (IIUC) were: > > In favor of "build-system": setup.py is used for more than just the > strict "build" (source tree/sdist -> wheel) phase. For example, > setup.py is also used to do VCS checkout -> sdist. And it seems likely > that the new build system abstraction thing will grow similar > capabilities at some point. So calling the section just "build" might > be misleading. > > In favor of "build": it's just shorter and reads better. > > Maybe there's a third option that's even better -- [package.automation] ? > > Maybe it doesn't matter that much :-) I think "build-system" is more descriptive and the more descriptive we can be, the better. (Think of choosing descriptive method and attribute names as well as variables.) From donald at stufft.io Tue May 10 21:45:43 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 May 2016 21:45:43 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <4FE891C6-23C2-4CAE-A461-F2A8A1D3AC68@stufft.io> > On May 10, 2016, at 9:27 PM, Robert Collins wrote: > > On 11 May 2016 at 12:39, Brett Cannon wrote: >> Donald, Nathaniel, and I have finished our proposed PEP for specifying a >> projects' build dependencies. 
The PEP is being kept at >> https://github.com/brettcannon/build-deps-pep, so if you find spelling >> mistakes and grammatical errors please feel free to send a PR to fix them. >> >> The only open issue in the PEP at the moment is the bikeshedding topic of >> what to name the sub-section containing the requirements: `[package.build]` >> or `[package.build-system]` (we couldn't reach consensus among the three of >> us on this). Otherwise the three of us are rather happy with the way the PEP >> has turned out and look forward to this being the first step towards >> allowing projects to customize their build process better! > > I find calling this build dependencies confusing, but perhaps thats just me. > > Right now the work flow is this: > > unpack tarball > run setup.py egg-info > introspect that for dependencies > download and install such dependencies (recursively) [ in future, it > would resolve ] > run setup.py install || setup.py wheel > install > > 1) What would pip do when it encounters one of these files? > unpack tarball > introspect and prepare an isolated environment with the specified dependencies > run setup.py egg_info > download and install such runtime dependencies (recursively) [ in > future, it would resolve ] > run setup.py install || setup.py wheel > install > > ? Yes > > Right now setup.py dependencies are turing complete, and its a useful > escape valve - this design seems to have no provision for that > (previous designs we've had here have had that). If the declared > dependencies are merely those needed to be able to invoke the build > system, rather than those needed to be able to successfully build, it > would preserve that escape valve. I think the way this will eventually end up going, is that when we get to the point of adding that next PEP that allows us to swap out setup.py for a totally different interface, is that we'll add something like bootstrap_requires or something, and do something like "If package.build.requires exists, we'll use that for build dependencies, else we'll invoke some API in the ABS to get them". Thus this lets us do the simple thing now, and in the future we can layer on additional things in a backwards compatabile way that *also* keeps the simple case simple and allows the advanced case to happen. > > 2) what do we expect setuptools to do here? Is it reasonable to > introspect this file and union it with setup_requires? I think setuptools should just ignore this file when invoking setup.py directly (maybe raise an error if something doesn?t exist) because I think the common pattern is going to be using real, top level imports of the stuff depended on here so it won?t have a chance to setup_requires them prior to executing. I think that easy_install should behave similarly to pip though. > > 3) Why does this specify ' what dependencies are required to go from > source checkout to built wheel' ? - build systems also run tests, > generate docs and man pages, produce other artifacts. Perhaps making > either the target more specific (wheel_requires = ...) or the > description less specific ('dependencies required to invoke the build > system') would be good. I think the description just wasn't fully thought out. I believe that "dependencies to invoke the build system" is closer to what I've been viewing this as (particularly since with setup.py the way it is, it's hard to differentiate between the different "roles" of why you might be invoking setup.py). 
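To make the top level import point concrete, a hypothetical project that needs Cython to build could ship something like this (the names are purely illustrative, nothing here is mandated by the PEP):

# pyproject.toml ([package.build] or [package.build-system], whichever name wins)
[package.build]
requires = ["setuptools", "wheel", "cython"]

# setup.py
from setuptools import setup
from Cython.Build import cythonize  # top level import, so it has to already be installed

setup(
    name="spam",
    version="1.0",
    ext_modules=cythonize("spam.pyx"),
)

With the static requires list pip can install setuptools, wheel and Cython into the build environment *before* it ever runs setup.py, so that top level import just works instead of needing a setup_requires dance.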
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Tue May 10 21:47:48 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 10 May 2016 21:47:48 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <5DF75157-2678-460A-B0C9-B7F0DFB6CFBF@stufft.io> > On May 10, 2016, at 9:43 PM, Ian Cordasco wrote: > > On Tue, May 10, 2016 at 8:24 PM, Nathaniel Smith wrote: >> On Tue, May 10, 2016 at 5:39 PM, Brett Cannon wrote: >>> Donald, Nathaniel, and I have finished our proposed PEP for specifying a >>> projects' build dependencies. The PEP is being kept at >>> https://github.com/brettcannon/build-deps-pep, so if you find spelling >>> mistakes and grammatical errors please feel free to send a PR to fix them. >> >> Thanks Brett! >> >>> The only open issue in the PEP at the moment is the bikeshedding topic of >>> what to name the sub-section containing the requirements: `[package.build]` >>> or `[package.build-system]` (we couldn't reach consensus among the three of >>> us on this). >> >> To maybe help nudge initial bikeshedding on this in useful directions, >> the main arguments (IIUC) were: >> >> In favor of "build-system": setup.py is used for more than just the >> strict "build" (source tree/sdist -> wheel) phase. For example, >> setup.py is also used to do VCS checkout -> sdist. And it seems likely >> that the new build system abstraction thing will grow similar >> capabilities at some point. So calling the section just "build" might >> be misleading. >> >> In favor of "build": it's just shorter and reads better. >> >> Maybe there's a third option that's even better -- [package.automation] ? >> >> Maybe it doesn't matter that much :-) > > I think "build-system" is more descriptive and the more descriptive we > can be, the better. (Think of choosing descriptive method and > attribute names as well as variables.) I?m in favor of ?build?, mostly because I think [package.build-system] requires = [?setuptools?, ?wheel?] is uglier than [package.build] requires = [?setuptools, ?wheel?] and I think for 99% of people the distinction is going to be lost anyways. That being said, I think either one is OK. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Tue May 10 21:49:16 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 May 2016 18:49:16 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Tue, May 10, 2016 at 6:27 PM, Robert Collins wrote: [...] > If the declared > dependencies are merely those needed to be able to invoke the build > system, rather than those needed to be able to successfully build, it > would preserve that escape valve. My understanding is that this is exactly the intention -- these are the requirements you need before you can run setup.py or whatever replaces setup.py; it doesn't stop us from providing something like PEP 516/517's build requirements. 
[Reminder for those following along: in the current PEP 516/517 drafts, the sequence goes like: (1) pip consults the static list of "bootstrap" requirements, and makes those available, (2) pip invokes the build backend to ask it if it has any dynamic "build" requirements, (3) pip makes those available, (4) pip invokes the build backend again to run the actual build. So Robert's being bothered at the implication that this PEP might do away with steps (2)-(3).] The awkward bit is that this distinction really only makes sense in "phase 2", once we add the pluggable build backend stuff. The PEP's written to target "phase 1", where we're still assuming setup.py as the only build system, so we're going from "no working build requirements" to "working static build requirements". That doesn't stop us from taking the next step later. I guess the PEP text introduction could be clearer about this somehow. -n -- Nathaniel J. Smith -- https://vorpus.org From graffatcolmingov at gmail.com Tue May 10 21:57:05 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Tue, 10 May 2016 20:57:05 -0500 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <5DF75157-2678-460A-B0C9-B7F0DFB6CFBF@stufft.io> References: <5DF75157-2678-460A-B0C9-B7F0DFB6CFBF@stufft.io> Message-ID: On Tue, May 10, 2016 at 8:47 PM, Donald Stufft wrote: > >> On May 10, 2016, at 9:43 PM, Ian Cordasco wrote: >> I think "build-system" is more descriptive and the more descriptive we >> can be, the better. (Think of choosing descriptive method and >> attribute names as well as variables.) > > I?m in favor of ?build?, mostly because I think > > [package.build-system] > requires = [?setuptools?, ?wheel?] > > is uglier than Oh, absolutely it's ugly, but reading it aloud is what makes me prefer it. "The package's build-system requires setuptools, and wheel" Granted, Ruby is the perfect example of trying to make something read perfectly like English being a terrible idea when taken to extremes, but I don't think that will happen here. Other than that, the PEP looks fine. I'm anti-TOML but for reasons that don't deserve being brought up on the mailing list. It's the best option of a bunch of bad options. The rest of the PEP looks great. From ethan at stoneleaf.us Tue May 10 22:02:26 2016 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 10 May 2016 19:02:26 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <573292B2.7030709@stoneleaf.us> On 05/10/2016 05:39 PM, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. Looks great! Thanks to all three of you! +1 to build-system. It's entirely possible to have other sections with the word 'build' in them, so it's better to be more explicit now. -- ~Ethan~ From njs at pobox.com Tue May 10 22:18:56 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 May 2016 19:18:56 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <4FE891C6-23C2-4CAE-A461-F2A8A1D3AC68@stufft.io> References: <4FE891C6-23C2-4CAE-A461-F2A8A1D3AC68@stufft.io> Message-ID: On Tue, May 10, 2016 at 6:45 PM, Donald Stufft wrote: [...] 
> I think the way this will eventually end up going, is that when we get to the > point of adding that next PEP that allows us to swap out setup.py for a totally > different interface, is that we'll add something like bootstrap_requires or > something, and do something like "If package.build.requires exists, we'll use > that for build dependencies, else we'll invoke some API in the ABS to get > them". Hmm, it's not something we have to decide now, but given this roadmap: Now: - a way to specify static requirements before running setup.py Later: - (definitely) a static method for specifying requirements that need to be fulfilled before running the build backend at all ("bootstrap requirements") - (probably) a way to dynamically query the build backend for extra requirements before building a wheel - (maybe) some sort of optimization where we skip the dynamic query if the right static configuration is provided ...I think the simplest way to manage the transition is if we make the-thing-we're-adding-now map to the future "bootstrap requirements". That way we won't have to change the semantics at all. And if we decide we want some sort of static-build-requirements-optimization thing we can make that the new key that we add later. -n -- Nathaniel J. Smith -- https://vorpus.org From randy at thesyrings.us Tue May 10 23:53:33 2016 From: randy at thesyrings.us (Randy Syring) Date: Tue, 10 May 2016 23:53:33 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <5732ACBD.10200@thesyrings.us> On 05/10/2016 08:39 PM, Brett Cannon wrote: > The build dependencies will be stored in a file named > ``pyproject.toml`` > > [...snip...] > > A top-level ``[package]`` table will represent details specific to the > package that the project contains. > > [...snip...] > > Rejected Ideas > ============== > > pypackage.toml & pypackaging.toml > Name conflation of what a "package" is (project versus namespace). I know this is in the rejected ideas, but I'd like to point out that there is an inconsistency here that bothers me. If the top-level namespace in the file is going to be "package" then it makes sense to me that the file would also be named "py*package*.yaml". It seems like the term "package", while possibly running into the the "conflation issue", is a pretty solid term to build on. * Python *Packaging* Authority (PyPA) * http://python-*packaging*-user-guide.readthedocs.io/en/latest/ * http://the-hitchhikers-guide-to-*packaging*.readthedocs.io/en/latest/introduction.html * docs.python-guide.org/en/latest/shipping/*packaging*/ * Top-level [*package*] namespace for the pep * From the PEP "This PEP specifies how Python software*packages* should specify their build dependencies" * "As of March 2013, the Python *packaging* ecosystem is currently in a rather confusing state." [1] Given the proliferation of the term "package" in describing exactly what this file is about, I'd really like to see the file name reconsidered to be "pypackag[e|ing].toml". Python "packaging", at least in the discussions I follow and am involved is, is becoming _the term_ to refer to this part of the Python ecosystem. IMO, the term "project" is more ambiguous, it could possibly refer to a lot of things. 
So, IMO, if you are just weighing the two terms on possibility for misunderstanding, "packaging" may get docked a point for being conflated with a python top-level module namespace but project should equally get docked a point for just being too generic AND another point for not matching the top-level config namespace of the file. If you add in the fact that "packaging" is used frequently in the Python ecosystem now to refer to the process of bundling up and installing a python source tree, that's like +5 for "pypackag[e|ing].toml". And, if the case of conflation still feels strong, then surly using "pypackaging.toml" would have to remove most of the confusion. Although, I'd prefer "pypackage.toml" myself. I suppose, if its possible the file would ever have a different top-level config namespace other than "package" so that information related to non-packaging issues could possible end up here (like maybe tox or flake8 starting to read config from here instead of from setup.cfg) then I don't think I'd feel as strongly about "pyproject.toml" being too generic. Although, we could be a bit cagy and make a play on "crate" by using a synonym "carton" (carton.toml, pycarton.toml) which, interestingly, sometimes hold eggs. ;) Thanks. [1] http://python-notes.curiousefficiency.org/en/latest/pep_ideas/core_packaging_api.html *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed May 11 00:24:23 2016 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 10 May 2016 21:24:23 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <5732ACBD.10200@thesyrings.us> References: <5732ACBD.10200@thesyrings.us> Message-ID: On May 10, 2016 20:53, "Randy Syring" wrote: > [...] > > I suppose, if its possible the file would ever have a different top-level config namespace other than "package" so that information related to non-packaging issues could possible end up here (like maybe tox or flake8 starting to read config from here instead of from setup.cfg) then I don't think I'd feel as strongly about "pyproject.toml" being too generic. Look again :-) "All other top-level keys and tables are reserved for future use by other PEPs except for the ``[tool]`` table. Within that table, tools can have users specify configuration data as long as they use a sub-table within ``[tool]``, e.g. the `flit `_ tool might store its configuration in ``[tool.flit]``. We need some mechanism to allocate names within the ``tool.*`` namespace, to make sure that different projects don't attempt to use the same sub-table and collide. Our rule is that a project can use the subtable ``tool.$NAME`` if, and only if, they own the entry for ``$NAME`` in the Cheeseshop/PyPI. " (Maybe should be more prominent I guess?) -n -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From greg.ewing at canterbury.ac.nz Wed May 11 01:47:46 2016 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 11 May 2016 17:47:46 +1200 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> Message-ID: <5732C782.3090305@canterbury.ac.nz> Having looked over the TOML spec, I like the simplicity of it (and I cringe from the complexity of YAML). The only thing I don't like about TOML is the way it cops out on nesting. The only reason it does that as far as I can see is because of a dislike for significant indentation. But that doesn't apply to us -- we're Python folks, we're not afraid of indentation! On the contrary, for us it's the One Obvious Way to do nesting. So I'd like a format that is very similar to TOML except that you're allowed to represent nesting using indented blocks. -- Greg From ncoghlan at gmail.com Wed May 11 03:11:54 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 11 May 2016 17:11:54 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 11 May 2016 at 10:39, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. The PEP is being kept at > https://github.com/brettcannon/build-deps-pep, so if you find spelling > mistakes and grammatical errors please feel free to send a PR to fix them. > > The only open issue in the PEP at the moment is the bikeshedding topic of > what to name the sub-section containing the requirements: `[package.build]` > or `[package.build-system]` (we couldn't reach consensus among the three of > us on this). Otherwise the three of us are rather happy with the way the PEP > has turned out and look forward to this being the first step towards > allowing projects to customize their build process better! I prefer [package.build-system]. My rationale for that is that the build system may have *its own* configuration mechanism, separate from this file, and I don't want people to get confused between a project's "build settings" and its "build system identification". Take the default case: for a distutils/setuptools based project, the actual build settings are the arguments to setup() in setup.py, *not* these new settings in pyproject.toml. By contrast, the settings in [package.build-system] are the ones that tell pip and other installers what is needed to make "setup.py bdist_wheel" work (and, in the future, will tell them when to invoke something other than "setup.py bdist_wheel" to do the binary build) > For the vast majority of Python projects that rely upon setuptools, > the ``pyproject.toml`` file will be:: > > [package.build-system] > requires = ['setuptools', 'wheel'] # PEP 508 specifications. It would be worthwhile showing an example of using the new mechanism to bootstrap a project that relies on numpy.distutils. > Open Issues > =========== > > Name of the build-related sub-table > ----------------------------------- > > The authors of this PEP couldn't decide between the names > ``[package.build]`` and ``[package.build-system]``, and so it is an > open issue over which one to go with. "package.build-system", for the reason given above (i.e. "build" conflicts with the project's actual build settings). 
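To make that distinction concrete with a deliberately trivial (and purely illustrative) example:

# setup.py -- the project's actual *build settings* stay here (or in setup.cfg)
from setuptools import setup, Extension

setup(
    name="example",
    version="1.0",
    ext_modules=[Extension("example._speedups", ["example/_speedups.c"])],
)

# pyproject.toml -- only the *build system identification* an installer needs up front
[package.build-system]
requires = ["setuptools", "wheel"]

An installer only has to read the second file before it can run the first; everything passed to setup() remains a matter between the project and its build system.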
> Rejected Ideas > ============== > > Other semantic version key names > -------------------------------- > > Names other than ``semantics-version`` were considered to represent > the version of semantics that the configuration file was written for. > Both ``configuration-version`` and ``metadata-version`` were both > considered, but were rejected due to how people may confuse the > key as representing a version of the files contents instead of the > version of semantics that the file is interpreted under. Would you be open to using schema-version rather than semantic-version, and then formally defining the format via jsonschema and/or JSL [1]? Cheers, Nick. [1] The latter is like an ORM for jsonschema: https://pypi.python.org/pypi/jsl -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From njs at pobox.com Wed May 11 03:28:41 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 May 2016 00:28:41 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Wed, May 11, 2016 at 12:11 AM, Nick Coghlan wrote: > On 11 May 2016 at 10:39, Brett Cannon wrote: [...] >> For the vast majority of Python projects that rely upon setuptools, >> the ``pyproject.toml`` file will be:: >> >> [package.build-system] >> requires = ['setuptools', 'wheel'] # PEP 508 specifications. > > It would be worthwhile showing an example of using the new mechanism > to bootstrap a project that relies on numpy.distutils. It's just that with "numpy" added, but sure. @Brett: I also just noticed reading the example above that you're using single-quotes for strings in the TOML instead of double-quote strings, which is a bit odd -- single quote strings in TOML are the same as raw strings in Python, which does work for this case but probably isn't the example we want to set. >> Rejected Ideas >> ============== >> >> Other semantic version key names >> -------------------------------- >> >> Names other than ``semantics-version`` were considered to represent >> the version of semantics that the configuration file was written for. >> Both ``configuration-version`` and ``metadata-version`` were both >> considered, but were rejected due to how people may confuse the >> key as representing a version of the files contents instead of the >> version of semantics that the file is interpreted under. > > Would you be open to using schema-version rather than > semantic-version, and then formally defining the format via jsonschema > and/or JSL [1]? I kinda like the semantics-version name (schema = structure, semantics = structure + interpretation), and I'm not sure what the name of that key has to do with defining a json schema, but anyway, here's a first-pass json schema :-) https://gist.github.com/njsmith/89021cd9ef1a6724579229de164d02d2 (NOTE that that schema's written to check that a file matches the currently defined spec, and should NOT be used to validate real pyproject.toml files, because the additionalProperties: false keys will cause it to error out on future backwards-compatible changes.) -n -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Wed May 11 04:37:52 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 11 May 2016 09:37:52 +0100 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 11 May 2016 at 01:39, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. 
The PEP is being kept at > https://github.com/brettcannon/build-deps-pep, so if you find spelling > mistakes and grammatical errors please feel free to send a PR to fix them. > > The only open issue in the PEP at the moment is the bikeshedding topic of > what to name the sub-section containing the requirements: `[package.build]` > or `[package.build-system]` (we couldn't reach consensus among the three of > us on this). Otherwise the three of us are rather happy with the way the PEP > has turned out and look forward to this being the first step towards > allowing projects to customize their build process better! +1 on the PEP as a whole - good work, all of you! On the requirements sub-section, I have a mild preference for [package.build-system] (but a stronger preference for not bikeshedding, so I'm OK with either :-)) One thought, I understand that many projects assume they can import particular Python modules in setup.py (numpy is a common one AIUI, for getting the C header location). Would it be worth specifically calling that out as a legitimate usage (it's not just to define the tools you need to do the build, but also to specify the build environment in general) and giving an example? Paul From solipsis at pitrou.net Wed May 11 05:27:56 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 11 May 2016 11:27:56 +0200 Subject: [Distutils] PEP for specifying build dependencies References: Message-ID: <20160511112756.68a08a81@fsol> On Wed, 11 May 2016 17:11:54 +1000 Nick Coghlan wrote: > > Take the default case: for a distutils/setuptools based project, the > actual build settings are the arguments to setup() in setup.py, *not* > these new settings in pyproject.toml. By contrast, the settings in > [package.build-system] are the ones that tell pip and other installers > what is needed to make "setup.py bdist_wheel" work (and, in the > future, will tell them when to invoke something other than "setup.py > bdist_wheel" to do the binary build) Side question: if the build system needs configuring, is a user-provided configuration file really the best place to do so? People will end up having to copy and paste a bunch of configuration directives that are not directly relevant to their own project (also those directives will need maintaining as a build tool may evolve its dependencies over time). Alternative: have a single "build system" configuration directive: [package.build-system] tool = "foopackage:fooexe" ... which instructs the runner to install dependency "foopackage", and then invoking "fooexe" with a single well-known option (e.g. "--pybuild-bootstrap-config") produces a JSON file on stdout describing how to invoke precisely the tool for each build scenario (sdist, wheel, etc.). Regards Antoine. 
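PS: for concreteness, the runner side of that idea could be little more than the following sketch (the option name, the JSON keys and "fooexe" itself are all hypothetical, of course):

import json
import subprocess

# the runner has already installed "foopackage" into the build environment
raw = subprocess.check_output(["fooexe", "--pybuild-bootstrap-config"])
scenarios = json.loads(raw.decode("utf-8"))

# e.g. scenarios == {"wheel": ["fooexe", "build-wheel", "--dest", "{dest}"],
#                    "sdist": ["fooexe", "build-sdist", "--dest", "{dest}"]}
subprocess.check_call([part.format(dest="dist/") for part in scenarios["wheel"]])

Only the bootstrap step needs standardizing; everything else is described by the build tool itself.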
From nicholas.chammas at gmail.com Wed May 11 09:46:22 2016 From: nicholas.chammas at gmail.com (Nicholas Chammas) Date: Wed, 11 May 2016 13:46:22 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> Message-ID: On Tue, May 10, 2016 at 11:15 AM Donald Stufft donald at stufft.io wrote: > > On May 10, 2016, at 11:00 AM, Antoine Pitrou > wrote: > > > > (as an aside, if there's the question of forking an existing parser > > implementation for better vendorability, forking a YAML parser may be > > more useful to third-party folks than forking a TOML parser :-)) > > > I?m seeing what I can come up with (https://github.com/dstufft/yaml) > though > I don?t know that I feel like actually maintaining whatever it is I end up > figuring out there. > Did you intend to fork from https://github.com/yaml/pyyaml? I believe the latest PyYAML is actually hosted on BitBucket: https://bitbucket.org/xi/pyyaml/ Also, a better starting point than PyYAML may be this existing fork of the library, ruamel.yaml: https://bitbucket.org/ruamel/yaml The fork makes several changes which may be relevant to you. Nick ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed May 11 09:49:22 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 May 2016 09:49:22 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <06C0E66B-C409-4AE9-A456-5B9D44DC85AE@stufft.io> Message-ID: > On May 11, 2016, at 9:46 AM, Nicholas Chammas wrote: > > On Tue, May 10, 2016 at 11:15 AM Donald Stufft donald at stufft.io wrote: > > > > > > On May 10, 2016, at 11:00 AM, Antoine Pitrou > wrote: > > > > (as an aside, if there's the question of forking an existing parser > > implementation for better vendorability, forking a YAML parser may be > > more useful to third-party folks than forking a TOML parser :-)) > > > I?m seeing what I can come up with (https://github.com/dstufft/yaml ) though > I don?t know that I feel like actually maintaining whatever it is I end up > figuring out there. > > > Did you intend to fork from https://github.com/yaml/pyyaml ? I believe the latest PyYAML is actually hosted on BitBucket: > > https://bitbucket.org/xi/pyyaml/ > Yea, there was only a very tiny, mostly meaningless delta between Github and bitbucket and I?m lazy. > Also, a better starting point than PyYAML may be this existing fork of the library, ruamel.yaml: > > https://bitbucket.org/ruamel/yaml > The fork makes several changes which may be relevant to you. > > To be clear, my thing was a proof of concept to see how hard it would be to help inform the decision about what file format to use. The PEP has decided that TOML best represents the tradeoffs we want to make so my lib will likely just die on my GitHub :) ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Wed May 11 10:20:32 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 00:20:32 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <20160511112756.68a08a81@fsol> References: <20160511112756.68a08a81@fsol> Message-ID: On 11 May 2016 at 19:27, Antoine Pitrou wrote: > On Wed, 11 May 2016 17:11:54 +1000 > Nick Coghlan wrote: >> >> Take the default case: for a distutils/setuptools based project, the >> actual build settings are the arguments to setup() in setup.py, *not* >> these new settings in pyproject.toml. By contrast, the settings in >> [package.build-system] are the ones that tell pip and other installers >> what is needed to make "setup.py bdist_wheel" work (and, in the >> future, will tell them when to invoke something other than "setup.py >> bdist_wheel" to do the binary build) > > Side question: if the build system needs configuring, is a > user-provided configuration file really the best place to do so? > People will end up having to copy and paste a bunch of configuration > directives that are not directly relevant to their own project (also > those directives will need maintaining as a build tool may evolve > its dependencies over time). When I say "build system configuration" in the context of distutils/setuptools, I mean things like: * MANIFEST.in * non-dependency related setup() arguments (packages, package_dir, py_modules, ext_modules, namespace_packages, entry_points, include_package_data, zip_safe, etc) * the Extension class and its parameters: https://docs.python.org/2/distutils/setupscript.html#describing-extension-modules Those are the settings that actually tell the build system what to build and (in some cases) how to build it. > Alternative: have a single "build system" configuration directive: > > [package.build-system] > > tool = "foopackage:fooexe" > > ... which instructs the runner to install dependency "foopackage", and > then invoking "fooexe" with a single well-known option (e.g. > "--pybuild-bootstrap-config") produces a JSON file on stdout describing > how to invoke precisely the tool for each build scenario (sdist, wheel, > etc.). Specifying alternate build systems likely won't be far behind this PEP (since that's what PEPs 516 and 517 were about), but we decided it made sense to split the two discussions (i.e. figuring out the static configuration basics, then figuring out how to eliminate the need for setup.py shims) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Wed May 11 10:26:31 2016 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 11 May 2016 16:26:31 +0200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <20160511112756.68a08a81@fsol> Message-ID: <20160511162631.7531c856@fsol> On Thu, 12 May 2016 00:20:32 +1000 Nick Coghlan wrote: > > When I say "build system configuration" in the context of > distutils/setuptools, I mean things like: > > * MANIFEST.in > * non-dependency related setup() arguments (packages, package_dir, > py_modules, ext_modules, namespace_packages, entry_points, > include_package_data, zip_safe, etc) > * the Extension class and its parameters: > https://docs.python.org/2/distutils/setupscript.html#describing-extension-modules > > Those are the settings that actually tell the build system what to > build and (in some cases) how to build it. 
That's confusing :-) You should really call it "build configuration". Regards Antoine. From dholth at gmail.com Wed May 11 11:25:20 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 11 May 2016 15:25:20 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <20160511162631.7531c856@fsol> References: <20160511112756.68a08a81@fsol> <20160511162631.7531c856@fsol> Message-ID: Thanks for the suggested bikeshed topic. Clever! I like build also. "package build requires". I see that build is what you used in the example [rejected] file formats. To me further segmentation between build systems and the actual build that the build system effects is something you might do to avoid the cost of installing a dependency that might not actually be needed during that phase. IMO installation of an occasional extra package is a fair price to pay if it reduces the number of concepts in the system. On Wed, May 11, 2016 at 10:26 AM Antoine Pitrou wrote: > On Thu, 12 May 2016 00:20:32 +1000 > Nick Coghlan wrote: > > > > When I say "build system configuration" in the context of > > distutils/setuptools, I mean things like: > > > > * MANIFEST.in > > * non-dependency related setup() arguments (packages, package_dir, > > py_modules, ext_modules, namespace_packages, entry_points, > > include_package_data, zip_safe, etc) > > * the Extension class and its parameters: > > > https://docs.python.org/2/distutils/setupscript.html#describing-extension-modules > > > > Those are the settings that actually tell the build system what to > > build and (in some cases) how to build it. > > That's confusing :-) You should really call it "build configuration". > > Regards > > Antoine. > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed May 11 14:32:30 2016 From: brett at python.org (Brett Cannon) Date: Wed, 11 May 2016 18:32:30 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: Since you can easily read the PEP at https://github.com/brettcannon/build-deps-pep in a nice, rendered format I won't repost the whole thing, but here are the changes I have made based on the comments so far. 1. I rephrased throughout the PEP to not explicitly say this is for building wheels (for Robert). I was just trying to tie the PEP into what happens today, not what we plan to have happen in the future. 2. Fixed the quotation marks on the TOML examples (for Nathaniel). It's just habit from Python why I did it the way I did. 3. Reworked the Abstract to the following (for Robert): """ This PEP specifies how Python software packages should specify what dependencies they have in order to execute their chosen build system. As part of this specification, a new configuration file is introduced for software packages to use to specify their build dependencies (with the expectation that the same configuration file will be used for future configuration details). """ 4. Added a bit to the end of the Rationale section about where this fits in future plans (for Robert): """ To provide more context and motivation for this PEP, think of the (rough) steps required to produce a built artifact for a project: 1. The source checkout of the project. 2. Installation of the build system. 3. Execute the build system. This PEP covers step #2. 
It is fully expected that a future PEP will cover step #3, including how to have the build system dynamically specify more dependencies that the build system requires to perform its job. The purpose of this PEP, though, is to specify the minimal set of requirements for the build system to simply begin execution.
"""

5. Added a JSON Schema for the resulting data (for Nick because he must be writing a lot of specs at work recently if he needs typing information for 2 fields :). This is based on Nathaniel's initial work, but I made the "requires" key a requirement when [package.build-system] is defined. I did not change the name of the key to "schema-version" for the reasons Nathaniel outlined in his response to Nick on the topic.

{
    "$schema": "http://json-schema.org/schema#",
    "type": "object",
    "additionalProperties": false,
    "properties": {
        "package": {
            "type": "object",
            "additionalProperties": false,
            "properties": {
                "semantics-version": {
                    "type": "integer",
                    "default": 1
                },
                "build-system": {
                    "type": "object",
                    "additionalProperties": false,
                    "properties": {
                        "requires": {
                            "type": "array",
                            "items": {
                                "type": "string"
                            }
                        }
                    },
                    "required": ["requires"]
                }
            }
        },
        "tool": {
            "type": "object"
        }
    }
}

I didn't add an example using 'distutils.numpy' as Nick asked because it's just the same example as in the PEP but with one field changed; IOW it doesn't really add anything.

I also didn't rename the file as Randy argued for because the file is for the project, not just the package(s) the project contains ("package" is an overloaded term and I don't want to contribute to that with the filename; I can live with the build details being in relation to a package in the project and thus named [package], but other things that may end up in this file might not relate to any package in the project).

On Tue, 10 May 2016 at 17:39 Brett Cannon wrote:

> Donald, Nathaniel, and I have finished our proposed PEP for specifying a
> projects' build dependencies. The PEP is being kept at
> https://github.com/brettcannon/build-deps-pep, so if you find spelling
> mistakes and grammatical errors please feel free to send a PR to fix them.
>
> The only open issue in the PEP at the moment is the bikeshedding topic of
> what to name the sub-section containing the requirements: `[package.build]`
> or `[package.build-system]` (we couldn't reach consensus among the three of
> us on this).
Otherwise the three of us are rather happy with the way the > > PEP has turned out and look forward to this being the first step towards > > allowing projects to customize their build process better! > > > > So far the votes for this open issue are: > > build-system: > > 1. Nathaniel > 2. Ian > 3. Ethan > 4. Nick > 5. Paul > > build: > > 1. Donald > 2. Daniel > > build-configuration (write-in candidate): > > 1. Antoine Ha :) I like "build" actually. Regards Antoine. From robertc at robertcollins.net Wed May 11 15:37:04 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 May 2016 07:37:04 +1200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 12 May 2016 at 06:32, Brett Cannon wrote: > Since you can easily read the PEP at > https://github.com/brettcannon/build-deps-pep in a nice, rendered format I > won't repost the whole thing, but here are the changes I have made based on > the comments so far. > > 1. I rephrased throughout the PEP to not explicitly say this is for building > wheels (for Robert). I was just trying to tie the PEP into what happens > today, not what we plan to have happen in the future. > > 2. Fixed the quotation marks on the TOML examples (for Nathaniel). It's just > habit from Python why I did it the way I did. > > 3. Reworked the Abstract to the following (for Robert): Thanks Brett. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From brett at python.org Wed May 11 16:06:07 2016 From: brett at python.org (Brett Cannon) Date: Wed, 11 May 2016 20:06:07 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Wed, 11 May 2016 at 12:37 Robert Collins wrote: > On 12 May 2016 at 06:32, Brett Cannon wrote: > > Since you can easily read the PEP at > > https://github.com/brettcannon/build-deps-pep in a nice, rendered > format I > > won't repost the whole thing, but here are the changes I have made based > on > > the comments so far. > > > > 1. I rephrased throughout the PEP to not explicitly say this is for > building > > wheels (for Robert). I was just trying to tie the PEP into what happens > > today, not what we plan to have happen in the future. > > > > 2. Fixed the quotation marks on the TOML examples (for Nathaniel). It's > just > > habit from Python why I did it the way I did. > > > > 3. Reworked the Abstract to the following (for Robert): > > Thanks Brett. > Welcome! Just like all of you, I want to get this PEP right as it's the foundational one to start us down the road of *finally* getting building untangled in a way we're all happy and excited about! -Brett > > -Rob > > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doko at ubuntu.com Wed May 11 16:22:38 2016 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 11 May 2016 22:22:38 +0200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <5733948E.9080301@ubuntu.com> On 11.05.2016 02:39, Brett Cannon wrote: > Donald, Nathaniel, and I have finished our proposed PEP for specifying a > projects' build dependencies. The PEP is being kept at > https://github.com/brettcannon/build-deps-pep, so if you find spelling > mistakes and grammatical errors please feel free to send a PR to fix them. probably too late or out of scope for this pep. 
However the distro packaging for python packages recommends to run the testsuite during the package build. Would it be possible to extend this pep to test depends, or maybe track this as a separate pep? Matthias From robertc at robertcollins.net Wed May 11 16:27:50 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 May 2016 08:27:50 +1200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <5733948E.9080301@ubuntu.com> References: <5733948E.9080301@ubuntu.com> Message-ID: On 12 May 2016 at 08:22, Matthias Klose wrote: > On 11.05.2016 02:39, Brett Cannon wrote: >> >> Donald, Nathaniel, and I have finished our proposed PEP for specifying a >> projects' build dependencies. The PEP is being kept at >> https://github.com/brettcannon/build-deps-pep, so if you find spelling >> mistakes and grammatical errors please feel free to send a PR to fix them. > > > probably too late or out of scope for this pep. However the distro packaging > for python packages recommends to run the testsuite during the package > build. Would it be possible to extend this pep to test depends, or maybe > track this as a separate pep? It's covered as well- this is general purpose 'what is needed to run setup.py' - once you can run setup.py, you can machine interrogate any further dependencies. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Wed May 11 16:46:20 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 May 2016 16:46:20 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <5733948E.9080301@ubuntu.com> Message-ID: <56891B3E-96A2-466C-9653-FC5539C0AB0C@stufft.io> > On May 11, 2016, at 4:27 PM, Robert Collins wrote: > >> probably too late or out of scope for this pep. However the distro packaging >> for python packages recommends to run the testsuite during the package >> build. Would it be possible to extend this pep to test depends, or maybe >> track this as a separate pep? > > It's covered as well- this is general purpose 'what is needed to run > setup.py' - once you can run setup.py, you can machine interrogate any > further dependencies. It?s not *exactly* covered? I mean people could stick test dependencies in this new field but I don?t think that setuptools actually exposes the test_requires in any meaningful way (nor do I think people actually use setuptools test support that consistently). So setuptools could get better support for testing dependencies independently of this PEP *or* another PEP could add a similar section that layered ontop of this. It?s definitely out of scope for this particular PEP though. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Wed May 11 16:51:47 2016 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 12 May 2016 08:51:47 +1200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <56891B3E-96A2-466C-9653-FC5539C0AB0C@stufft.io> References: <5733948E.9080301@ubuntu.com> <56891B3E-96A2-466C-9653-FC5539C0AB0C@stufft.io> Message-ID: On 12 May 2016 at 08:46, Donald Stufft wrote: > >> On May 11, 2016, at 4:27 PM, Robert Collins wrote: >> >>> probably too late or out of scope for this pep. 
However the distro packaging >>> for python packages recommends to run the testsuite during the package >>> build. Would it be possible to extend this pep to test depends, or maybe >>> track this as a separate pep? >> >> It's covered as well- this is general purpose 'what is needed to run >> setup.py' - once you can run setup.py, you can machine interrogate any >> further dependencies. > > > It?s not *exactly* covered? I mean people could stick test dependencies in > this new field but I don?t think that setuptools actually exposes the > test_requires in any meaningful way (nor do I think people actually use > setuptools test support that consistently). So setuptools could get better > support for testing dependencies independently of this PEP *or* another PEP > could add a similar section that layered ontop of this. It?s definitely out > of scope for this particular PEP though. Right - in more detail though: tox using projects: - set it to setuptools, wheel and any other 'needed to run setup.py sdist' things. Then tox will work - and its list is generally plain text that packagers can introspect, install separately and run the tox commands directly. setup.py test using projects: - a small setuptools entrypoint can be written to allow introspecting test_requires, so we just need to set the requires as for the tox case etc. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From randy at thesyrings.us Wed May 11 18:41:18 2016 From: randy at thesyrings.us (Randy Syring) Date: Wed, 11 May 2016 18:41:18 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <5733B50E.7030808@thesyrings.us> On 05/11/2016 02:32 PM, Brett Cannon wrote: > I also didn't rename the file as Randy argued for because the file is > for the project, not just the package(s) the project contains > ("package" is an overloaded term and I don't want to contribute to > that with the filename; I can live with the build details being in > relation to a package in the project and thus named [package], but > other things that may end up in this file might not relate to any > package in the project). This makes sense, thanks for your consideration. *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed May 11 19:01:49 2016 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 11 May 2016 16:01:49 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Wed, May 11, 2016 at 11:32 AM, Brett Cannon wrote: [...] > the file is for the project, not just the package(s) the project > contains ("package" is an overloaded term and I don't want to contribute to > that with the filename; I can live with the build details being in relation > to a package in the project and thus named [package], but other things that > may end up in this file might not relate to any package in the project). We went back and forth on the overloaded "package" name a bit while drafting too, and eventually just gave up and went ahead because it's not that important. To me this feels similar to situations I've encountered in the past, where I've spent a bunch of time debating between two things, and it turned out that the reason we couldn't agree was because both proposals were wrong and a third solution was much better :-). 
I still don't think the [package] name part is worth arguing about much, but throwing this out there in case it turns out to be that "third way" that suddenly makes everyone go "a-ha!": If you think about it, really the stuff in [package.build-system] is there to tell pip how to run the build system. It's associated with building the project/package, sure, but that's not what makes it special -- everything that goes in this file will be associated with building or developing the project/package: [tool.flit], [tool.coverage], [tool.pytest], [tool.tox], whatever. The build-system stuff could easily and comfortably have gone into [tool.pip.build-system] instead... *except* that we don't want it to be specific to pip, we want it to be a piece of shared/common configuration that's defined by a shared process (PEPs) and used by *multiple* tools. That's why it doesn't belong in [tool.pip]. Or another way to put it, contrasting [tool.*] versus [package.*] is kinda weird, because those categories aren't actually contradictory -- it's like having categories [red] versus [square]. So, proposal: maybe we should rename the [package] namespace to something that reflects what distinguishes it from [tool], like: [standard.build-system] or [common.build-system] or [shared.build-system] -n -- Nathaniel J. Smith -- https://vorpus.org From brett at python.org Wed May 11 21:08:55 2016 From: brett at python.org (Brett Cannon) Date: Thu, 12 May 2016 01:08:55 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Wed, 11 May 2016 at 16:01 Nathaniel Smith wrote: > On Wed, May 11, 2016 at 11:32 AM, Brett Cannon wrote: > [...] > > the file is for the project, not just the package(s) the project > > contains ("package" is an overloaded term and I don't want to contribute > to > > that with the filename; I can live with the build details being in > relation > > to a package in the project and thus named [package], but other things > that > > may end up in this file might not relate to any package in the project). > > We went back and forth on the overloaded "package" name a bit while > drafting too, and eventually just gave up and went ahead because it's > not that important. > > To me this feels similar to situations I've encountered in the past, > where I've spent a bunch of time debating between two things, and it > turned out that the reason we couldn't agree was because both > proposals were wrong and a third solution was much better :-). > > I still don't think the [package] name part is worth arguing about > much, but throwing this out there in case it turns out to be that > "third way" that suddenly makes everyone go "a-ha!": > > If you think about it, really the stuff in [package.build-system] is > there to tell pip how to run the build system. It's associated with > building the project/package, sure, but that's not what makes it > special -- everything that goes in this file will be associated with > building or developing the project/package: [tool.flit], > [tool.coverage], [tool.pytest], [tool.tox], whatever. The build-system > stuff could easily and comfortably have gone into > [tool.pip.build-system] instead... *except* that we don't want it to > be specific to pip, we want it to be a piece of shared/common > configuration that's defined by a shared process (PEPs) and used by > *multiple* tools. That's why it doesn't belong in [tool.pip]. 
> > Or another way to put it, contrasting [tool.*] versus [package.*] is > kinda weird, because those categories aren't actually contradictory -- > it's like having categories [red] versus [square]. > > So, proposal: maybe we should rename the [package] namespace to > something that reflects what distinguishes it from [tool], like: > > [standard.build-system] > > or > > [common.build-system] > > or > > [shared.build-system] > > or [base.build-system] or [super.build-system] I'm +1 on base, super, or common, +0 on standard, -0 on shared. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Wed May 11 21:33:04 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 May 2016 21:33:04 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: > On May 11, 2016, at 9:08 PM, Brett Cannon wrote: > > > > On Wed, 11 May 2016 at 16:01 Nathaniel Smith > wrote: > On Wed, May 11, 2016 at 11:32 AM, Brett Cannon > wrote: > [...] > > the file is for the project, not just the package(s) the project > > contains ("package" is an overloaded term and I don't want to contribute to > > that with the filename; I can live with the build details being in relation > > to a package in the project and thus named [package], but other things that > > may end up in this file might not relate to any package in the project). > > We went back and forth on the overloaded "package" name a bit while > drafting too, and eventually just gave up and went ahead because it's > not that important. > > To me this feels similar to situations I've encountered in the past, > where I've spent a bunch of time debating between two things, and it > turned out that the reason we couldn't agree was because both > proposals were wrong and a third solution was much better :-). > > I still don't think the [package] name part is worth arguing about > much, but throwing this out there in case it turns out to be that > "third way" that suddenly makes everyone go "a-ha!": > > If you think about it, really the stuff in [package.build-system] is > there to tell pip how to run the build system. It's associated with > building the project/package, sure, but that's not what makes it > special -- everything that goes in this file will be associated with > building or developing the project/package: [tool.flit], > [tool.coverage], [tool.pytest], [tool.tox], whatever. The build-system > stuff could easily and comfortably have gone into > [tool.pip.build-system] instead... *except* that we don't want it to > be specific to pip, we want it to be a piece of shared/common > configuration that's defined by a shared process (PEPs) and used by > *multiple* tools. That's why it doesn't belong in [tool.pip]. > > Or another way to put it, contrasting [tool.*] versus [package.*] is > kinda weird, because those categories aren't actually contradictory -- > it's like having categories [red] versus [square]. > > So, proposal: maybe we should rename the [package] namespace to > something that reflects what distinguishes it from [tool], like: > > [standard.build-system] > > or > > [common.build-system] > > or > > [shared.build-system] > > > or > > [base.build-system] > > or > > [super.build-system] > > I'm +1 on base, super, or common, +0 on standard, -0 on shared. 
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

I don't like any of these options nearly as much as [package] TBH. I don't think that base, super, common, standard, or shared are any less ambiguous than package (in fact I think they are _more_ ambiguous).

I don't really think of it as package vs tool, I think of it as an implicit [standard] namespace vs an explicit one. I think it makes the file uglier to have the [standard] namespace explicit, particularly since I think the example should really be something like:

[standard.package.build-system]
requires = ["setuptools", "wheel"]

[tool.flake8]
...

Because the value of the [package] namespace isn't that it separates us from the [tool] namespace (we could get that easily without it), but that it separates us from *other*, non-packaging-related but "standard" stuff that might be added in the future. The value of the [tool] namespace isn't that it doesn't make sense for a [flit] and a [package] to be at the same level, but rather that we have no idea what keys people might use there (and indeed, `package` is taken on PyPI), but that it allows us to separate the wild west of "anything goes" from the strictly defined rest of the file.

IOW, the reason to omit the [standard] namespace and the reason to include the [tool] namespace is practicality beating purity and designing this file first for humans to edit it, and only second for machines to access it, with some sort of semantic purity a distant third.

-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 842 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From njs at pobox.com Wed May 11 21:45:22 2016
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 11 May 2016 18:45:22 -0700
Subject: [Distutils] PEP for specifying build dependencies
In-Reply-To:
References:
Message-ID:

On May 11, 2016 6:33 PM, "Donald Stufft" wrote:
> [...]
>
> I don't like any of these options nearly as much as [package] TBH. I don't
> think that base, super, common, standard, or shared are any less ambiguous than
> package (in fact I think they are _more_ ambiguous).
>
> I don't really think of it as package vs tool, I think of it as an implicit
> [standard] namespace vs an explicit one. I think it makes the file
> uglier to have the [standard] namespace explicit, particularly since I think the
> example should really be something like:
>
> [standard.package.build-system]
> requires = ["setuptools", "wheel"]
>
> [tool.flake8]
> ...
>
> Because the value of the [package] namespace isn't that it separates us from
> the [tool] namespace (we could get that easily without it), but that it
> separates us from *other*, non-packaging-related but "standard" stuff that
> might be added in the future.

Can you give an example of something that would go in your hypothetical implicit pyproject.toml [standard] section, but that would not be related to configuring that project's package/packages and thus go into [package]? Partly asking because I'm not sure what the difference is between a "project" and a "package", partly because if we can articulate a clear guideline then that'd be useful for the future.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From donald at stufft.io Wed May 11 22:22:13 2016 From: donald at stufft.io (Donald Stufft) Date: Wed, 11 May 2016 22:22:13 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <33ABD4E7-05FA-47E4-B1DE-C13BC8C8913D@stufft.io> > On May 11, 2016, at 9:45 PM, Nathaniel Smith wrote: > > On May 11, 2016 6:33 PM, "Donald Stufft" > wrote: > > > [...] > > > > I don't like any of these options nearly as much as [package] TBH. I don?t > > think that base, super, common, standard, or shared are any less ambiguous than > > package (in fact I think they are _more_ ambigious). > > > > > > I don't really think of it as package vs tool, I think of it as an implicit > > vs an explicit . I think it makes the file > > uglier to have the explicit, particularly since I think the > > example should really be something like: > > > > [standard.package.build-system] > > requires = ["setuptools", "wheel"] > > > > [tool.flake8] > > ... > > > > Because the value of the [package] namespace isn't that it separates us from > > the [tool] namespace (we could get that easily without it), but that it > > separates us from *other*, non packaging related but "standard" stuff that > > might be added in the future. > > Can you give an example of something that would go in your hypothetical implicit a pyproject.tml [standard] section, but that would not be related to configuring that project's package/packages and thus go into [package]? Partly asking because I'm not sure what the difference is between a "project" and a "package", partly because if we can articulate a clear guideline then that'd be useful for the future. > > -n > This is somewhat of a contrived example because I?m not sure how useful it would be to have a standard representation of it, but one possible example is PEP8 (the actual PEP not the tool on PyPI) linters and what rules they follow that would allow people to use arbitrary linters against a code base (which may or may not be a ?package? in the PyPI/pip/PyPA sense) but is just a chunk of code sitting in a directory. A less contrived answer is that I simply don?t know, but it feels like the ?cost? of introducing the [package] top level is pretty low (a total of 8 additional characters per table) and in my head it has some meaning (this is the stuff for a Python distributable package, that you could, but maybe don?t, ship on PyPI and install with pip). I view the ?project? as a superset of that, where part of configuring a ?project? may include configuring the package side of things (assuming it is even a package and it isn?t just some arbitrary code in a dir) but might also include other things. On the other hand, I feel like `[standard]` or whatever isn?t really meaningful as anything other than ?the stuff that isn?t in [tool]?, which makes me feel like adding it in is mostly a purity thing and the cost (a total of 9 extra characters per table) doesn?t seem worth it since any human will be able to trivially identify the set of things which are not in the tool namespace, and computers can also do that pretty easily (although slightly clumsily). Now, you could argue that the [package] keyword is superfluous and in reality it?s highly unlikely that we ever get anything major that would ever sit as a sibling to it (besides tool) and thus it doesn?t make sense to pay the cost of those extra 8 characters when it is probably going to be the only non-tool value ever. 
Personally I think hedging our bets and leaving the door open for that possibility is a nice thing to do when the cost is so low. However, I don?t think it?d be unreasonable or silly to make the other trade off and just say that having it isn?t valuable and just stick [build-system] at the top level along with [tool.*] and say that if we ever come up with something that is not related to a package (in the PyPA sense) that it really won?t be that big of a deal to just have it live beside stuff like [build-system]. So I think we should either have: [package.build-system] requires = [?setuptools?, ?wheel?] [tool.flake8] ? OR [build-system] requires = [?setuptools?, ?wheel?] [tool.flake8] ? but I don?t think trying to make the parsed tree fit some ?correct? hierarchy of data types when you consider the [tool] section (which really only exists to prevent collisions, otherwise we?d just let people stick [flake8] etc at the top level) is worth it. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu May 12 02:38:57 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 16:38:57 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <20160511162631.7531c856@fsol> References: <20160511112756.68a08a81@fsol> <20160511162631.7531c856@fsol> Message-ID: On 12 May 2016 at 00:26, Antoine Pitrou wrote: > On Thu, 12 May 2016 00:20:32 +1000 > Nick Coghlan wrote: >> >> When I say "build system configuration" in the context of >> distutils/setuptools, I mean things like: >> >> * MANIFEST.in >> * non-dependency related setup() arguments (packages, package_dir, >> py_modules, ext_modules, namespace_packages, entry_points, >> include_package_data, zip_safe, etc) >> * the Extension class and its parameters: >> https://docs.python.org/2/distutils/setupscript.html#describing-extension-modules >> >> Those are the settings that actually tell the build system what to >> build and (in some cases) how to build it. > > That's confusing :-) You should really call it "build configuration". Gah, that's what I *meant* to write. Unfortunately, I put the extra word in there without noticing, rendering the entire message thoroughly confusing. To be clear: * "build system configuration": telling other tools what's needed to invoke the build system * "build configuration": actually telling the build system what it needs to do The build system config is the part the PEP proposes to start moving to pyproject.toml (with support for non-setup.py based invocation deferred to a later PEP). The build config itself will remain in tool dependent locations (e.g. setup.py, setup.cfg, flit.ini). If any build config were to move to pyproject.toml, it would be via the [tool.] mechanism, and be a decision for the developers of the build tool concerned, rather than needing to be the topic of a PEP. Cheers, Nick. 
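To make that split concrete, here is a minimal sketch of the consumer side: the only thing an installer front end would read out of pyproject.toml is the "what do I need before I can invoke the build system" list, while the build configuration proper (ext_modules, MANIFEST.in, entry_points, and so on) stays in setup.py/setup.cfg/flit.ini. The third-party `toml` parser, the helper name, the [package.build-system] spelling from the current draft, and the setuptools/wheel fallback for projects without the new file are all illustrative assumptions, not anything the PEP mandates.

# Hedged sketch: what an installer would read from pyproject.toml.
import os
import toml

def build_system_requires(project_dir):
    path = os.path.join(project_dir, "pyproject.toml")
    if not os.path.exists(path):
        # Legacy project without the new file: behave as if it had asked
        # for setuptools + wheel (an assumed fallback for this sketch).
        return ["setuptools", "wheel"]
    data = toml.load(path)
    table = data.get("package", {}).get("build-system", {})
    return table.get("requires", ["setuptools", "wheel"])

# The build configuration itself never passes through this function; the
# installer only learns what to install before running the build tool.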
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu May 12 03:01:12 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 17:01:12 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 12 May 2016 at 11:33, Donald Stufft wrote: > I don't really think of it as package vs tool, I think of it as an implicit > vs an explicit . I think it makes the > file > uglier to have the explicit, particularly since I think the > example should really be something like: > > [standard.package.build-system] > requires = ["setuptools", "wheel"] > > [tool.flake8] > ... > > Because the value of the [package] namespace isn't that it separates us from > the [tool] namespace (we could get that easily without it), but that it > separates us from *other*, non packaging related but "standard" stuff that > might be added in the future. In that case though: 1. semantics-version isn't about the package, it's about the pyproject.toml file itself. 2. build-system feels like it could readily be top level as well, regardless of what other sections we added later That would make the example in the PEP =============== semantics-version = 1 # Optional; defaults to 1. [build-system] requires = ["setuptools", "wheel"] # PEP 508 specifications. =============== So I'm not clear on what the [package] namespace is buying us over just having [build-system] as a top level namespace (it would be different with a section name of "build" - for that, [package.build] reads nicely, and you can mostly ignore that it creates a nested namespace TOML. As noted elsewhere, I don't like "build" though - we're not configuring the build, we're specifying what's needed to run the build system in the first place). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu May 12 03:20:21 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 17:20:21 +1000 Subject: [Distutils] comparison of configuration languages In-Reply-To: <5732C782.3090305@canterbury.ac.nz> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <5732C782.3090305@canterbury.ac.nz> Message-ID: On 11 May 2016 at 15:47, Greg Ewing wrote: > Having looked over the TOML spec, I like the simplicity > of it (and I cringe from the complexity of YAML). > The only thing I don't like about TOML is the way it > cops out on nesting. > > The only reason it does that as far as I can see is > because of a dislike for significant indentation. This is a feature, not a bug. The trick is that TOML allows *implicit* namespace creation when you only care about the innermost level: [this.is.nested] nice = "As a human reader & writer, you don't care about the nesting" [this-is-not-nested] nice = "As a human reader & writer, you don't care about the lack of nesting" The only folks that need to care about whether the configuration is actually nested or flat are those working with the config file *programmatically* - from the point of view of the folks hand editing the config file, it's always just the flat list of named key=value sections that they're used to in traditional ini files. Cheers, Nick. 
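The implicit-table behaviour described above is easy to check with any TOML parser; a quick sketch using the third-party `toml` package (the parser choice is an assumption, nothing here mandates a particular library):

import toml

doc = """
[this.is.nested]
nice = "As a human reader & writer, you don't care about the nesting"

[this-is-not-nested]
nice = "As a human reader & writer, you don't care about the lack of nesting"
"""

data = toml.loads(doc)
# Programmatic consumers see the implicit hierarchy the dotted header created...
print(data["this"]["is"]["nested"]["nice"])
# ...while the hand-edited file never spelled out [this] or [this.is] at all.
print(data["this-is-not-nested"]["nice"])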
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From xav.fernandez at gmail.com Thu May 12 04:32:42 2016 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Thu, 12 May 2016 10:32:42 +0200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: Thanks for your work ! For what it's worth, I also think that: - semantics-version (or maybe pyproject-version ? to mimic the Wheel-Version of the WHEEL file) should be a top level value; - [build-system] requires = ["setuptools", "wheel"] reads nicely and better than [package.build-system] Regards, Xavier On Thu, May 12, 2016 at 9:01 AM, Nick Coghlan wrote: > On 12 May 2016 at 11:33, Donald Stufft wrote: > > I don't really think of it as package vs tool, I think of it as an > implicit > > vs an explicit . I think it makes the > > file > > uglier to have the explicit, particularly since I think > the > > example should really be something like: > > > > [standard.package.build-system] > > requires = ["setuptools", "wheel"] > > > > [tool.flake8] > > ... > > > > Because the value of the [package] namespace isn't that it separates us > from > > the [tool] namespace (we could get that easily without it), but that it > > separates us from *other*, non packaging related but "standard" stuff > that > > might be added in the future. > > In that case though: > > 1. semantics-version isn't about the package, it's about the > pyproject.toml file itself. > 2. build-system feels like it could readily be top level as well, > regardless of what other sections we added later > > That would make the example in the PEP > =============== > semantics-version = 1 # Optional; defaults to 1. > > [build-system] > requires = ["setuptools", "wheel"] # PEP 508 specifications. > =============== > > So I'm not clear on what the [package] namespace is buying us over > just having [build-system] as a top level namespace (it would be > different with a section name of "build" - for that, [package.build] > reads nicely, and you can mostly ignore that it creates a nested > namespace TOML. As noted elsewhere, I don't like "build" though - > we're not configuring the build, we're specifying what's needed to run > the build system in the first place). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Thu May 12 05:07:53 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 12 May 2016 02:07:53 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On Thu, May 12, 2016 at 12:01 AM, Nick Coghlan wrote: > On 12 May 2016 at 11:33, Donald Stufft wrote: >> I don't really think of it as package vs tool, I think of it as an implicit >> vs an explicit . I think it makes the >> file >> uglier to have the explicit, particularly since I think the >> example should really be something like: >> >> [standard.package.build-system] >> requires = ["setuptools", "wheel"] >> >> [tool.flake8] >> ... >> >> Because the value of the [package] namespace isn't that it separates us from >> the [tool] namespace (we could get that easily without it), but that it >> separates us from *other*, non packaging related but "standard" stuff that >> might be added in the future. > > In that case though: > > 1. 
semantics-version isn't about the package, it's about the > pyproject.toml file itself. > 2. build-system feels like it could readily be top level as well, > regardless of what other sections we added later > > That would make the example in the PEP > =============== > semantics-version = 1 # Optional; defaults to 1. > > [build-system] > requires = ["setuptools", "wheel"] # PEP 508 specifications. > =============== > > So I'm not clear on what the [package] namespace is buying us over > just having [build-system] as a top level namespace (it would be > different with a section name of "build" - for that, [package.build] > reads nicely, and you can mostly ignore that it creates a nested > namespace TOML. As noted elsewhere, I don't like "build" though - > we're not configuring the build, we're specifying what's needed to run > the build system in the first place). When we were spitballing the draft, I think where [package] originally came from was the idea that having semantics-version at the top level is not actually useful -- most tools will only care about the semantics of the [tool.whatever] table and the only PEP change that would affect them is if we for some reason redefined the [tool] table itself. Which we aren't going to do. But if semantics-version is top-level, then presumably everyone has to check for it and error out if it changes. So bumping semantics-version would cause all these tools to error out, even though the part of the file that they actually care about didn't change, which would mean in practice we would just never actually bump the semantics-version because the flag day would be too painful. Introducing a [package] level and pushing semantics-version down inside [package] insulates from that. ...Given how complicated this is ending up being, I'm sorta inclined to just drop semantics-version. It's only in there as a "hey why not it doesn't hurt" thing. I can't imagine any situation in which we'd actually bump the semantics version. If we need to make some incompatible change we'll actually do it by adding a [build-system-2] or something, and specify that [build-system] and [build-system-2] are both allowed in the same file, and if both are present then new tools should ignore [build-system] -- way smoother and backwards-compatible. -n -- Nathaniel J. Smith -- https://vorpus.org From p.f.moore at gmail.com Thu May 12 05:21:22 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 12 May 2016 10:21:22 +0100 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 12 May 2016 at 10:07, Nathaniel Smith wrote: > ...Given how complicated this is ending up being, I'm sorta inclined > to just drop semantics-version. It's only in there as a "hey why not > it doesn't hurt" thing. I can't imagine any situation in which we'd > actually bump the semantics version. If we need to make some > incompatible change we'll actually do it by adding a [build-system-2] > or something, and specify that [build-system] and [build-system-2] are > both allowed in the same file, and if both are present then new tools > should ignore [build-system] -- way smoother and backwards-compatible. That does seem like a simpler solution. I'd like to think that most changes we'd make to the file format could be done in a backward-compatible manner - new values that default appropriately, detectable changes to existing values (like a single string value becomes a list, just allow a string and treat it as a 1-element list). 
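The forgiving reader Paul describes is only a couple of lines; a sketch, where the function name and the choice of a requires-style field are purely illustrative:

def as_requires_list(value):
    # Accept either "setuptools" or ["setuptools", "wheel"], so a field can
    # grow from a single string into a list without breaking older files.
    if isinstance(value, str):
        return [value]
    return list(value)

assert as_requires_list("setuptools") == ["setuptools"]
assert as_requires_list(["setuptools", "wheel"]) == ["setuptools", "wheel"]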
If we have to make breaking changes, using a new name (either at the section level like you suggest, or at the individual item level) seems perfectly acceptable. And if we don't need semantics-version in the [package] section, we can promote [build-system] to top level and just have 2 top level items, [build-system] and [tools]. Seems clean and manageable to me. Paul From njs at pobox.com Thu May 12 05:26:57 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 12 May 2016 02:26:57 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <33ABD4E7-05FA-47E4-B1DE-C13BC8C8913D@stufft.io> References: <33ABD4E7-05FA-47E4-B1DE-C13BC8C8913D@stufft.io> Message-ID: On Wed, May 11, 2016 at 7:22 PM, Donald Stufft wrote: [...] > Now, you could argue that the [package] keyword is superfluous and in > reality it?s highly unlikely that we ever get anything major that would ever > sit as a sibling to it (besides tool) and thus it doesn?t make sense to pay > the cost of those extra 8 characters when it is probably going to be the > only non-tool value ever. Personally I think hedging our bets and leaving > the door open for that possibility is a nice thing to do when the cost is so > low. However, I don?t think it?d be unreasonable or silly to make the other > trade off and just say that having it isn?t valuable and just stick > [build-system] at the top level along with [tool.*] and say that if we ever > come up with something that is not related to a package (in the PyPA sense) > that it really won?t be that big of a deal to just have it live beside stuff > like [build-system]. > > So I think we should either have: > > [package.build-system] > requires = [?setuptools?, ?wheel?] > > [tool.flake8] > ? > > OR > > [build-system] > requires = [?setuptools?, ?wheel?] > > [tool.flake8] > ? > > but I don?t think trying to make the parsed tree fit some ?correct? > hierarchy of data types when you consider the [tool] section (which really > only exists to prevent collisions, otherwise we?d just let people stick > [flake8] etc at the top level) is worth it. FTR, to the extent that I object to [package] it's nothing to do with character count and purity, and instead to do with it being a bit confusing / poor UI, because as we've seen no-one's really sure what a "package" is. It's not a huge deal, but it might create some user confusion and future-PEP-author confusion. My preference ordering: [common.build-system] = [build-system] > [package.build-system] > nothing -n -- Nathaniel J. Smith -- https://vorpus.org From donald at stufft.io Thu May 12 05:44:46 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 05:44:46 -0400 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <20160510161657.64c03837@fsol> <20160510163054.72aa15e8@fsol> <20160510170002.22c44ff6@fsol> <5732C782.3090305@canterbury.ac.nz> Message-ID: <5956B890-68CD-4598-AFA6-529CCC11FBC3@stufft.io> > On May 12, 2016, at 3:20 AM, Nick Coghlan wrote: > > On 11 May 2016 at 15:47, Greg Ewing wrote: >> Having looked over the TOML spec, I like the simplicity >> of it (and I cringe from the complexity of YAML). >> The only thing I don't like about TOML is the way it >> cops out on nesting. >> >> The only reason it does that as far as I can see is >> because of a dislike for significant indentation. > > This is a feature, not a bug. 
It did occur to me yesterday an unambiguous way to reduce the repetition of nested tables without introducing significant whitespace. I opened an issue on the TOML Github repo [1] but the gist of it is similar to how Python uses . to do relative imports. They may or may not like it though, who knows :) [1] https://github.com/toml-lang/toml/issues/413 ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From donald at stufft.io Thu May 12 07:41:21 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 07:41:21 -0400 Subject: [Distutils] PyPI and GPG Signatures Message-ID: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Currently, PyPI allows you to upload a GPG signature along with your package file as well as associate a GPG Short ID with your user. Theoretically this allows end users to not trust PyPI and instead validate end to end signatures from the original author. I've written [1] previously about package signing, and about a number of common suggestions for achieving it that don't actually do much, if anything, to increase to the security of things. The current implementation on PyPI falls into such a trap. The main problem with GPG and package signing is that a GPG key provides some guarantees (ignoring issues with the concept of a WOT) about the *identity* of the person possessing the key, however it provides no mechanism for providing any guarantees about what *capabilities* should be granted to that person. More concretely, while you can use GPG as is to verify that yes, "Donald Stufft" signed a particular package, you cannot use it to determine if "Donald Stufft" is *allowed* to sign for that package, a valid signature from me on the requests project should be just as invalid as an invalid signature from anyone on the requests project. The only namespacing provided by GPG itself is "trusted key" vs "not trusted key". PyPI offers a work around for this in the form of allowing users to associate their GPG short ID with their user profile. However this is not actually very useful because it doesn't really provide much benefit overall. The goal of signing a package on PyPI is generally to allow you to safely download the file without trusting PyPI, but if you need to trust PyPI to determine what key is allowed to sign a package, then you've not really added much in the way of additional assurances, you've just added another possible point of failure. Beyond the inherent issues with attempting to use GPG support for anything useful on PyPI there are also a number of implementation specific issues with this support. Currently you *must* use a GPG Short ID with PyPI, however the GPG Short IDs are not actually secure and can be pretty easily brute forced to have a collision, which means that people can make keys that come out to the same short ID as one that an author has in their profile. In addition, uploading a signature to PyPI is not actually validated upon upload, allowing people to upload a signature that doesn't actually validate, causing a persist failure mode (though nobody validates our signatures, so nobody ever runs into this problem). On top of all of that, I believe that there will never be a case where a tool like pip supports these GPG signatures [2]. 
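As an aside on why the short IDs mentioned a few paragraphs up are cheap to collide: a "short" key ID carries only the low 32 bits of a V4 key's fingerprint, so an attacker can simply generate keys until one matches. The fingerprint below is the one from the signature block earlier in this digest; the snippet only illustrates the relationship between the values, it is not an attack.

# Short and long GPG key IDs are just suffixes of the V4 fingerprint.
fingerprint = "7C6B7C5D5E2B6356A926F04F6E3CBCE93372DCFA"
print(fingerprint[-8:])    # 3372DCFA -- the 32-bit short ID PyPI records today
print(fingerprint[-16:])   # 6E3CBCE93372DCFA -- the 64-bit long ID; better, still not the full print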
Even if you wiped away all of the above problems, GPG is still a complex standard without great support for building tooling around it. The most reasonable way of implementing support for that would be to ship a copy of the gpg binary around for different platforms and shell out to it. However, GPG is GPL licensed, which means that's not something we could actually do, and even if it were, shipping binaries is not generally a reasonable thing to do for pip, and anything besides pure Python is a no-go.

I am aware of only a single tool anywhere that actively supports verifying the signatures that people upload to PyPI, and that is Debian's uscan program. Even in that case the people writing the Debian watch file have to hardcode a signing key into it, and in my experience, when faced with a validation error it's not unusual for Debian to simply disable signature checking for that project and/or just blindly update the key to whatever the new key is.

All in all, I think that there is not a whole lot of point to having this feature in PyPI: it is predicated on a bunch of invalid assumptions (as detailed above) and I do not believe end users are actually even using the keys that are being uploaded. Last time I looked, pip, easy_install, and bandersnatch represented something like 99% of all download traffic on PyPI, and none of those will do anything with the .asc files being uploaded to PyPI (other than bandersnatch just blindly mirroring them). When looking at the number of projects actively using this feature on PyPI, I can see that 27931/591919 files on PyPI have the ``has_signature`` database field set to true, or roughly 4% of all files on PyPI, which roughly holds up when you look at the number of distinct projects that have ever uploaded a signature as well (3559/80429).

Thus, I would like to remove this feature from PyPI (but not from PEP 503; if other repositories want to continue to support it they are free to). Doing this would allow simplifying code we have in Warehouse anyplace we touch uploaded files (since we almost always end up needing to branch into special behavior for files ending with .asc). It will allow us to reduce the number of concepts in the UI (what is a PGP signature? What do I do with it? etc.) without simply hiding a feature (which is likely to cause confusion: why do you support it if you won't show it, etc.). I think it will also make releasing slightly easier for developers, since I personally know a number of authors on PyPI who don't really believe there is any value in signing their packages on PyPI, but they do it anyways because of a vague notion that they should.

If we do it, an open question would be what we do with all of the *existing* signatures on PyPI. We could just leave them in place and stop accepting new signatures, though that still means we end up needing to branch on .asc anyplace we handle files because they'll still be a valid code path. Another option is to simply get rid of them and act as if nobody ever uploaded them in the first place, which is my preferred option.

What do folks think? Would anyone be particularly against getting rid of the GPG support in PyPI?

[1] https://caremad.io/2013/07/packaging-signing-not-holy-grail/
[2] When we do implement package signing in pip, it will almost certainly be via TUF, most likely using ed25519 signatures but perhaps using RSA.
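For readers who have not met the ed25519 mentioned in footnote [2]: the primitive itself is small. A minimal sketch using PyNaCl, where the library choice, the payload, and the key handling are illustrative assumptions only; TUF specifies the actual metadata, delegation, and key-rotation layers on top of signatures like these, and none of this is pip's concrete design.

from nacl.signing import SigningKey

signing_key = SigningKey.generate()      # private half, held by whoever signs
verify_key = signing_key.verify_key      # public half, published for verifiers

payload = b"example-1.0.tar.gz"
signed = signing_key.sign(payload)
assert verify_key.verify(signed) == payload   # raises BadSignatureError on tampering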
----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From p.f.moore at gmail.com Thu May 12 08:05:43 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 12 May 2016 13:05:43 +0100 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: On 12 May 2016 at 12:41, Donald Stufft wrote: > What do folks think? Would anyone be particularly against getting rid of the > GPG support in PyPI? 28K projects is too many to do a mailshot, but would it be worth asking this question more widely than on distutils-sig? Just "Do you maintain a project on PyPI that has GPG sigs and would you care if we removed them? If so, please let us know on the thread on distutils-sig." On an unrelated note, it might be a good feature for Warehouse to add some means of notifying project owners for cases like this. Paul From donald at stufft.io Thu May 12 08:28:16 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 08:28:16 -0400 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: <87E38E1D-41DA-42DE-9B99-2DD55FD06A68@stufft.io> > On May 12, 2016, at 8:05 AM, Paul Moore wrote: > > On 12 May 2016 at 12:41, Donald Stufft wrote: >> What do folks think? Would anyone be particularly against getting rid of the >> GPG support in PyPI? > > 28K projects is too many to do a mailshot, but would it be worth > asking this question more widely than on distutils-sig? Just "Do you > maintain a project on PyPI that has GPG sigs and would you care if we > removed them? If so, please let us know on the thread on > distutils-sig.? It's 28k *files* but a single project can have more than one file. The total number of projects that have *ever* uploaded a file with a signature is 3.5k and of that 3.5k, only 2.7k projects have their *latest* release uploaded with signatures. > > On an unrelated note, it might be a good feature for Warehouse to add > some means of notifying project owners for cases like this. > Paul ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu May 12 08:31:18 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 22:31:18 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: On 12 May 2016 at 19:07, Nathaniel Smith wrote: > When we were spitballing the draft, I think where [package] originally > came from was the idea that having semantics-version at the top level > is not actually useful -- most tools will only care about the > semantics of the [tool.whatever] table and the only PEP change that > would affect them is if we for some reason redefined the [tool] table > itself. Which we aren't going to do. But if semantics-version is > top-level, then presumably everyone has to check for it and error out > if it changes. 
So bumping semantics-version would cause all these > tools to error out, even though the part of the file that they > actually care about didn't change, which would mean in practice we > would just never actually bump the semantics-version because the flag > day would be too painful. Introducing a [package] level and pushing > semantics-version down inside [package] insulates from that. > > ...Given how complicated this is ending up being, I'm sorta inclined > to just drop semantics-version. It's only in there as a "hey why not > it doesn't hurt" thing. I can't imagine any situation in which we'd > actually bump the semantics version. If we need to make some > incompatible change we'll actually do it by adding a [build-system-2] > or something, and specify that [build-system] and [build-system-2] are > both allowed in the same file, and if both are present then new tools > should ignore [build-system] -- way smoother and backwards-compatible. We could also keep semantics-version, and just put it inside [build-system]. Either way, by allowing access to the [tool.*] namespace without any other version check, the key constraint we're placing on ourselves is a commitment to only making backwards compatible changes at the top level of the schema definition, and that should be a feasible promise to keep. While I can't conceive of an eventuality where we'd need to break such a promise, even if we did, the change could be indicated by switching to a different filename. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From xav.fernandez at gmail.com Thu May 12 08:30:46 2016 From: xav.fernandez at gmail.com (Xavier Fernandez) Date: Thu, 12 May 2016 14:30:46 +0200 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: I'm not sure that is an issue: if the version is bumped, this won't happen overnight. Why would projects/tools not have the time to update and support semantic-version 1 and 2 ? On Thu, May 12, 2016 at 11:07 AM, Nathaniel Smith wrote: > On Thu, May 12, 2016 at 12:01 AM, Nick Coghlan wrote: > > On 12 May 2016 at 11:33, Donald Stufft wrote: > >> I don't really think of it as package vs tool, I think of it as an > implicit > >> vs an explicit . I think it makes > the > >> file > >> uglier to have the explicit, particularly since I > think the > >> example should really be something like: > >> > >> [standard.package.build-system] > >> requires = ["setuptools", "wheel"] > >> > >> [tool.flake8] > >> ... > >> > >> Because the value of the [package] namespace isn't that it separates us > from > >> the [tool] namespace (we could get that easily without it), but that it > >> separates us from *other*, non packaging related but "standard" stuff > that > >> might be added in the future. > > > > In that case though: > > > > 1. semantics-version isn't about the package, it's about the > > pyproject.toml file itself. > > 2. build-system feels like it could readily be top level as well, > > regardless of what other sections we added later > > > > That would make the example in the PEP > > =============== > > semantics-version = 1 # Optional; defaults to 1. > > > > [build-system] > > requires = ["setuptools", "wheel"] # PEP 508 specifications. 
> > =============== > > > > So I'm not clear on what the [package] namespace is buying us over > > just having [build-system] as a top level namespace (it would be > > different with a section name of "build" - for that, [package.build] > > reads nicely, and you can mostly ignore that it creates a nested > > namespace TOML. As noted elsewhere, I don't like "build" though - > > we're not configuring the build, we're specifying what's needed to run > > the build system in the first place). > > When we were spitballing the draft, I think where [package] originally > came from was the idea that having semantics-version at the top level > is not actually useful -- most tools will only care about the > semantics of the [tool.whatever] table and the only PEP change that > would affect them is if we for some reason redefined the [tool] table > itself. Which we aren't going to do. But if semantics-version is > top-level, then presumably everyone has to check for it and error out > if it changes. So bumping semantics-version would cause all these > tools to error out, even though the part of the file that they > actually care about didn't change, which would mean in practice we > would just never actually bump the semantics-version because the flag > day would be too painful. Introducing a [package] level and pushing > semantics-version down inside [package] insulates from that. > > ...Given how complicated this is ending up being, I'm sorta inclined > to just drop semantics-version. It's only in there as a "hey why not > it doesn't hurt" thing. I can't imagine any situation in which we'd > actually bump the semantics version. If we need to make some > incompatible change we'll actually do it by adding a [build-system-2] > or something, and specify that [build-system] and [build-system-2] are > both allowed in the same file, and if both are present then new tools > should ignore [build-system] -- way smoother and backwards-compatible. > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Thu May 12 08:42:53 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 08:42:53 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: Message-ID: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> > On May 12, 2016, at 8:31 AM, Nick Coghlan wrote: > > We could also keep semantics-version, and just put it inside [build-system]. > > Either way, by allowing access to the [tool.*] namespace without any > other version check, the key constraint we're placing on ourselves is > a commitment to only making backwards compatible changes at the top > level of the schema definition, and that should be a feasible promise > to keep. While I can't conceive of an eventuality where we'd need to > break such a promise, even if we did, the change could be indicated by > switching to a different filename. I don't think we should put it inside of [build-system], largely because I think the chances we ever need to increment the version is very small, and I feel like putting it inside of [build-system] means we'll then need one for any other top level key we put in. Each additional one is additional complexity and increases the chance that some tool doesn't accurately check every single one of them. 
Putting it inside of [package] made some sense, because that was going to be a container for all of the things that one particular group of people (distutils-sig / PyPA) "managed" or cared about but I think that putting it on each individual sub section is just overkill. We can easily stick it at the top level of the file and just explicitly state that the [tool.*] namespace is exempt from deriving any sort of meaning from the semantics-version value. I think that is easier to not screw up (only one check, vs N checks) and I think that it looks nicer too from a human writing/editing POV if we ever do bump the version and force people to write it out: semantics-version = 2 [build-system] requires = [ "setuptools", "pip", ] [test-runner] # Just an example command = "py.test --strict" requires = [ "pytest", ] [tool.pip] index-url = "https://index.example.com/simple/" But honestly, I'm of the opinion we could probably just ditch it. I don't think it'll be hard to maintain compatibility within the keywords we pick in this file and I worry that by including it in something that we expect humans to write, we provide an incentive to using it when perhaps we could think up a better, backwards compatible syntax. The main argument in favor of adding it now with an implicit default of `1` is that if I'm wrong and we end up needing it, including it now will mean that projects are actively checking the version number so we can safely increase it with the desired effect. If we don't include it now, then even if we add it at a later date nothing will be checking to see if that changed or not. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu May 12 08:56:16 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 12 May 2016 22:56:16 +1000 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: On 12 May 2016 at 21:41, Donald Stufft wrote: > Thus, I would like to remove this feature from PyPI (but not from PEP 503, if > other repositories want to continue to support it they are free to). Doing this > would allow simplifying code we have in Warehouse anyplace we touch uploaded > files (since we almost always end up needing to branch into special behavior > for files ending with .asc). It will allow us to reduce the number of concepts > in the UI (what is a pgp signature? What do I do with it? etc) without simply > hiding a feature (which is likely to cause confusion, why do you support it if > you won't show it etc). I think it will also make releasing slightly easier for > developers, since I personally know a number of authors on PyPI who don't > really believe there is any value in signing their packages on PyPI, but they > do it anyways because of a vague notion that they should do it. 
I'm a fan of most ideas that lower barriers to migration from the legacy PyPI codebase to Warehouse :) However, I think one of the core points here is that even if GPG keys are removed from the *public* API, we still don't want the migration to break things for folks with automated upload processes that supply signature information - there'll be enough teething problems for the migration without a potential coincident breakage for a couple of thousands projects. That means the trade-off to consider is whether or not dropping this feature will get to the point of being able to deploy Warehouse sooner or not. - if it's already implemented & tested in Warehouse, then leave it alone - you can take it out later, some time *after* the migration - if it's implemented, but not well tested, decide what to do based on whether testing or the below idea seems like less work - if it's not implemented in Warehouse yet, that's when the key requirement becomes "don't break automated signed uploads" If we drop the feature, then my suggestion for a future upload UX is to let uploads with signatures succeed, but implicitly discard the signatures, and *automatically email the maintainer explaining why their uploaded signatures aren't appearing*. Similarly, projects that *had* signatures uploaded should gain a database flag indicating previously uploaded keys have been discarded, such that: 1. The admin page for the project explains why the signatures are gone 2. The affected package maintainers are emailed with a list of projects they maintain from which signatures have been removed The aim here would be to ensure that package maintainers tempted to ask "Where did my package signatures go?" are able to have that question answered *within the context of the service itself*, even if they've never heard of distutils-sig. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Thu May 12 09:21:50 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 09:21:50 -0400 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: <469A3298-DC9B-4FE3-A9AB-FB60EF3DCDBA@stufft.io> > On May 12, 2016, at 8:56 AM, Nick Coghlan wrote: > > On 12 May 2016 at 21:41, Donald Stufft wrote: >> Thus, I would like to remove this feature from PyPI (but not from PEP 503, if >> other repositories want to continue to support it they are free to). Doing this >> would allow simplifying code we have in Warehouse anyplace we touch uploaded >> files (since we almost always end up needing to branch into special behavior >> for files ending with .asc). It will allow us to reduce the number of concepts >> in the UI (what is a pgp signature? What do I do with it? etc) without simply >> hiding a feature (which is likely to cause confusion, why do you support it if >> you won't show it etc). I think it will also make releasing slightly easier for >> developers, since I personally know a number of authors on PyPI who don't >> really believe there is any value in signing their packages on PyPI, but they >> do it anyways because of a vague notion that they should do it. 
> > I'm a fan of most ideas that lower barriers to migration from the > legacy PyPI codebase to Warehouse :) > > However, I think one of the core points here is that even if GPG keys > are removed from the *public* API, we still don't want the migration > to break things for folks with automated upload processes that supply > signature information - there'll be enough teething problems for the > migration without a potential coincident breakage for a couple of > thousands projects. That means the trade-off to consider is whether or > not dropping this feature will get to the point of being able to > deploy Warehouse sooner or not. Both PyPI and Warehouse silently ignore additional fields so we get this for free unless we go out of our way to add a check for the existence of it in the POST and manually raise an error. > > - if it's already implemented & tested in Warehouse, then leave it > alone - you can take it out later, some time *after* the migration > - if it's implemented, but not well tested, decide what to do based on > whether testing or the below idea seems like less work > - if it's not implemented in Warehouse yet, that's when the key > requirement becomes "don't break automated signed uploads? It is implemented for uploads (and like all code in Warehouse it has 100% coverage) and I've had people successfully upload signatures to Warehouse with it... but then I've also had "weird" errors where people attempt to upload a signature to Warehouse and the `request.POST["gpg_signature"]` is a completely different type of object than it normally is. I have no idea *why* it was suddenly a different object for those people and that comes from ``cgi.py`` in the standard library which is... well not super easy to follow what's going on. So I'm moderately concerned about what is going to happen if more people start trying to use that, particularly since I can't really decipher what what is triggering this behavior to know how to guard against it (other than just blindly writing the code to accept the additional type I've observed with no way to reproduce the error and just hope that it's enough to fix it). To make it worse, the problem was persistent until the person in questioned reinstalled twine and it's dependencies (but I can't repro with any version of twine). It's not implemented the UI nor is the ability to associate a GPG key with a particular user implemented nor is the ability to see what GPG key a user has associated with their account implemented, and I sort of don't want to implement those for all the reasons I listed in my original post. To do that would require Nicole's time to properly design it, and her time is more limited than mine is, so if I can reduce the things she needs to do by putting in some effort on my part (like removing the code that's there, or implementing your mail solution) that seems like a good trade off to me. Some folks who uploaded GPG keys to Warehouse noticed they didn't show up in the UI and asked about it (since they had uploaded them) and they felt that either Warehouse should fully implement the GPG feature or it should get rid of it, but a half implementation isn't great for anyone. > > If we drop the feature, then my suggestion for a future upload UX is > to let uploads with signatures succeed, but implicitly discard the > signatures, and *automatically email the maintainer explaining why > their uploaded signatures aren't appearing*. That seems reasonable. 
> > Similarly, projects that *had* signatures uploaded should gain a > database flag indicating previously uploaded keys have been discarded, > such that: > > 1. The admin page for the project explains why the signatures are gone > 2. The affected package maintainers are emailed with a list of > projects they maintain from which signatures have been removed > > The aim here would be to ensure that package maintainers tempted to > ask "Where did my package signatures go?" are able to have that > question answered *within the context of the service itself*, even if > they've never heard of distutils-sig. That seems reasonable to, and I don?t even really need to add a new database column for it (though it might make sense for performance sake) but this should be trivially query able using: SELECT EXISTS( SELECT 1 FROM release_files WHERE name = %s AND has_signature = 't' ); ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From fungi at yuggoth.org Thu May 12 09:32:12 2016 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 May 2016 13:32:12 +0000 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: <20160512133211.GC15295@yuggoth.org> On 2016-05-12 07:41:21 -0400 (-0400), Donald Stufft wrote: [...] > What do folks think? Would anyone be particularly against getting > rid of the GPG support in PyPI? We have plans[*] in the OpenStack community to start autosigning our sdist and wheel builds (and similar release artifacts we build for other package ecosystems), so that we can track provenance and integrity through part of our release pipeline. I'm hoping to have that implemented in the next few months. While also uploading these signatures to PyPI was seen as useful, we do already have another primary location we can publish detached signatures along with our release artifacts so I would probably just ignore the PyPI/twine-specific part of the work if this goes away. [*] http://specs.openstack.org/openstack-infra/infra-specs/specs/artifact-signing.html -- Jeremy Stanley From brett at python.org Thu May 12 12:33:33 2016 From: brett at python.org (Brett Cannon) Date: Thu, 12 May 2016 16:33:33 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On Thu, 12 May 2016 at 05:43 Donald Stufft wrote: > > > On May 12, 2016, at 8:31 AM, Nick Coghlan wrote: > > > > We could also keep semantics-version, and just put it inside > [build-system]. > > > > Either way, by allowing access to the [tool.*] namespace without any > > other version check, the key constraint we're placing on ourselves is > > a commitment to only making backwards compatible changes at the top > > level of the schema definition, and that should be a feasible promise > > to keep. While I can't conceive of an eventuality where we'd need to > > break such a promise, even if we did, the change could be indicated by > > switching to a different filename. 
> > I don't think we should put it inside of [build-system], largely because I > think the chances we ever need to increment the version is very small, and > I > feel like putting it inside of [build-system] means we'll then need one for > any other top level key we put in. Each additional one is additional > complexity > and increases the chance that some tool doesn't accurately check every > single > one of them. Putting it inside of [package] made some sense, because that > was > going to be a container for all of the things that one particular group of > people (distutils-sig / PyPA) "managed" or cared about but I think that > putting > it on each individual sub section is just overkill. > Everything that Nathaniel and Donald said is accurate about the discussion we had offline while drafting the PEP. Top-of-file was originally proposed by me but was viewed as too broad, hence the namespacing of the bits PyPA controlled and putting the field in there. We also considered per-table, but that seemed like overkill. > > We can easily stick it at the top level of the file and just explicitly > state > that the [tool.*] namespace is exempt from deriving any sort of meaning > from > the semantics-version value. I think that is easier to not screw up (only > one > check, vs N checks) and I think that it looks nicer too from a human > writing/editing POV if we ever do bump the version and force people to > write it > out: > I had originally proposed that but I think we didn't like the wording of it or the possibility of someone not realizing that the scoping of the field was special-cased to everything *not* in [tool]. > > semantics-version = 2 > > [build-system] > requires = [ > "setuptools", > "pip", > ] > > [test-runner] # Just an example > command = "py.test --strict" > requires = [ > "pytest", > ] > > [tool.pip] > index-url = "https://index.example.com/simple/" > > But honestly, I'm of the opinion we could probably just ditch it. I don't > think > it'll be hard to maintain compatibility within the keywords we pick in this > file and I worry that by including it in something that we expect humans to > write, we provide an incentive to using it when perhaps we could think up a > better, backwards compatible syntax. The main argument in favor of adding > it > now with an implicit default of `1` is that if I'm wrong and we end up > needing > it, including it now will mean that projects are actively checking the > version > number so we can safely increase it with the desired effect. If we don't > include it now, then even if we add it at a later date nothing will be > checking > to see if that changed or not. > Both Donald and Nathaniel say to drop it, and since I put it in just to be overly cautious I'm fine with dropping it. So unless Nick says "semantics-version or death!", I agree w/ my co-authors and would update the PEP to say: 1. no semantics-version 2. [build-system] instead of [package.build-system] -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Thu May 12 13:31:57 2016 From: randy at thesyrings.us (Randy Syring) Date: Thu, 12 May 2016 13:31:57 -0400 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: <5734BE0D.3020107@thesyrings.us> On 05/12/2016 12:33 PM, Brett Cannon wrote: > Both Donald and Nathaniel say to drop it, and since I put it in just > to be overly cautious I'm fine with dropping it. 
So unless Nick says > "semantics-version or death!", I agree w/ my co-authors and would > update the PEP to say: > > 1. no semantics-version > 2. [build-system] instead of [package.build-system] > FWIW, as a someone who is not highly involved, but following this discussion b/c I maintain projects/packages, I like the simplicity and readability of this. +1 *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu May 12 15:05:45 2016 From: barry at python.org (Barry Warsaw) Date: Thu, 12 May 2016 15:05:45 -0400 Subject: [Distutils] PyPI and GPG Signatures References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: <20160512150545.7f5afdc3@anarchist.wooz.org> On May 12, 2016, at 07:41 AM, Donald Stufft wrote: >I am aware of a single tool anywhere that actively supports verifying the >signatures that people upload to PyPI, and that is Debian's uscan >program. Even in that case the people writing the Debian watch file have to >hardcode in a signing key into it and in my experience, when faced with a >validation error it's not unusual for Debian to simply disable signature >checking for that project and/or just blindly update the key to whatever the >new key is. I like that uscan provides this feature, but I don't know how many packages actually use it, either within the Debian Python teams, or in the larger Debian community. I'd like to use it more often on packages I maintain but it's kind of difficult to find your way back to an authoritative signing key. For my own packages that I also maintain in Debian, it's of course trivial, so I have that enabled for them. I sign all my package uploads to PyPI, and I mostly trust myself . If it's possible to get signing keys from PyPI, I really have no idea how to do that. The web ui doesn't at all make it obvious (to me, at least). I understand the implementation dilemma for Warehouse, but rather than ditch this feature, I'd rather see it improve by making the signing keys more discoverable and verifiable. I wonder if keybase.io could be used somehow. Or perhaps a prominent link in the package metadata pointing to a pubkey location. Then it would be up to projects to utilize these mechanisms to make their signing keys obvious, and tools like uscan can increase their usage of such features. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From donald at stufft.io Thu May 12 16:34:16 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 12 May 2016 16:34:16 -0400 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: <20160512150545.7f5afdc3@anarchist.wooz.org> References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> <20160512150545.7f5afdc3@anarchist.wooz.org> Message-ID: > On May 12, 2016, at 3:05 PM, Barry Warsaw wrote: > > On May 12, 2016, at 07:41 AM, Donald Stufft wrote: > >> I am aware of a single tool anywhere that actively supports verifying the >> signatures that people upload to PyPI, and that is Debian's uscan >> program. 
Even in that case the people writing the Debian watch file have to >> hardcode in a signing key into it and in my experience, when faced with a >> validation error it's not unusual for Debian to simply disable signature >> checking for that project and/or just blindly update the key to whatever the >> new key is. > > I like that uscan provides this feature, but I don't know how many packages > actually use it, either within the Debian Python teams, or in the larger > Debian community. I'd like to use it more often on packages I maintain but > it's kind of difficult to find your way back to an authoritative signing key. > For my own packages that I also maintain in Debian, it's of course trivial, so > I have that enabled for them. I sign all my package uploads to PyPI, and I > mostly trust myself . > > If it's possible to get signing keys from PyPI, I really have no idea how to > do that. The web ui doesn't at all make it obvious (to me, at least). You know, I went and poked at it again because I knew I had see it previously, and I realized the only place you can see the GPG key is: * Your own details page. * The "Roles" page for a project that you're the maintainer or owner of. So even as implemented on PyPI the ability to record what a particular user's GPG key is, is practically worthless. > > I understand the implementation dilemma for Warehouse, but rather than ditch > this feature, I'd rather see it improve by making the signing keys more > discoverable and verifiable. I wonder if keybase.io could be used somehow. > Or perhaps a prominent link in the package metadata pointing to a pubkey > location. Then it would be up to projects to utilize these mechanisms to make > their signing keys obvious, and tools like uscan can increase their usage of > such features. So my response to this is, let's pretend for a minute that we have the greatest and most amazing setup for verifying that the key 0x6E3CBCE93372DCFA belongs to me. What's your next step? How do you verify that I'm allowed to release for pip? What happens if tomorrow I decide I'm no longer going to use key 0x6E3CBCE93372DCFA because it got compromised (remembering that key revocation is hilariously broken [1]). What if we add a new signing key because I'm tired of releasing pip and someone else is going to take over, what path is Debian going to take for verifying that some new key is allowed to sign for it that doesn't put "Whatever PyPI says" in the path of trust? The problem isn't just that it's a bit annoying to implement in Warehouse, but also that GPG's trust model is not really useful at all for package signing except in very specific scenarios which we don't have (basically just Linux distros or some other thing, and even then the trust model isn't that useful they just hack it with a custom trust keychain). If I find some time I might try and get some data on it, but I greatly suspect that approaching 100% of the traffic downloading the .asc files from PyPI are Bandersnatch automatically mirroring them and Debian's uscan. I tried to search codesearch.debian.net but I can't seem to make it search over multiple lines so all I can get is there are 14 packages that have the strings `pgpsigurlmangle` and `pypi` on a single line, but I suspect more people extend that out over two lines so I don't think it's a meaningful number for this decision. 
Overall though, I really don't think it's worth it to keep it around when the trust model is broken and the only real consumer is Debian's uscan (and even then, most of it's value is already taken care of by TLS). [1] https://www.imperialviolet.org/2012/02/05/crlsets.html - Ok it's about CRL which is a x509 thing, but the overall point applies to GPG too. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From njs at pobox.com Thu May 12 17:18:31 2016 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 12 May 2016 14:18:31 -0700 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> Message-ID: On May 12, 2016 4:41 AM, "Donald Stufft" wrote: > [...] > All in all, I think that there is not a whole lot of point to having this > feature in PyPI, it is predicated a bunch of invalid assumptions (as detailed > above) and I do not believe end users are actually even using the keys that > are being uploaded. Last time I looked, pip, easy_install, and bandersnatch > represented something like 99% of all download traffic on PyPI and none of > those will do anything with the .asc files being uploaded to PyPI (other than > bandersnatch just blindly mirroring it). When looking at the number of projects > actively using this feature on PyPI, I can see that 27931/591919 files on PyPI > have the ``has_signature`` database field set to true, or roughly 4% of all > files on PyPI, which roughly holds up when you look at the number of distinct > projects that have ever uploaded a signature as well (3559/80429). Numpy is one of those projects that signs uploads, and it's utterly useless like you say -- we switch release managers on a regular basis, we don't have any communication of what the key is supposed to be... there are a few edge cases where I guess it could help a little (I actually would trust a version of requests signed by your key more than I would trust one signed by a just-invented key to no provenance, and there probably are a few people out there who check keys manually), but I agree it's almost pure cargo cult security. My main concern when implementing this is how to communicate it to users, given that almost none of them understand security either, and that if this ends up on news sites like "pypi is dropping package security" then that hurts long term good will plus means confused folk showing up with pitchforks and eating up everyone's time. And honestly they would kind of have a point -- "we're getting rid of this thing that only kinda works now in favor of something amazing that doesn't exist yet" is just not a popular move. I guess the maximally time-efficient approach would be to be to just not mention gpg signatures or touch the relevant code again until TUF is ready, and then we can frame it as "we're switching to something better". Not sure how viable that is in practice though! Maybe one way to kick this down the road a bit until there's time to deal with it properly would be to leave signature upload off on warehouse and tell people that for now that's still an imported legacy pypi only feature, you can continue uploading them there but we're still evaluating what we want to do with warehouse. 
I guess at some point this would become the last blocker to turning off legacy pypi and then some decision will have to be made, but maybe by that point things will be clearer? Obviously you should do whatever you think will make the transition as fast and easy as possible though! -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu May 12 17:38:18 2016 From: barry at python.org (Barry Warsaw) Date: Thu, 12 May 2016 17:38:18 -0400 Subject: [Distutils] PyPI and GPG Signatures In-Reply-To: References: <2BFEA647-BE8C-42D8-8DF4-F438D9E15040@stufft.io> <20160512150545.7f5afdc3@anarchist.wooz.org> Message-ID: <20160512173818.6f395f1e@subdivisions.wooz.org> On May 12, 2016, at 04:34 PM, Donald Stufft wrote: >So my response to this is, let's pretend for a minute that we have the >greatest and most amazing setup for verifying that the key 0x6E3CBCE93372DCFA >belongs to me. What's your next step? How do you verify that I'm allowed to >release for pip? I'd hope that the project home page would say that. I sheepishly admit that we don't have that information on the Mailman home page, but you *could* follow the link from me (described as the lead developer) to my own home page and then grab the key from there, verified from keybase.io. >What happens if tomorrow I decide I'm no longer going to use key >0x6E3CBCE93372DCFA because it got compromised (remembering that key >revocation is hilariously broken [1]). What if we add a new signing key >because I'm tired of releasing pip and someone else is going to take over, >what path is Debian going to take for verifying that some new key is allowed >to sign for it that doesn't put "Whatever PyPI says" in the path of trust? uscan would complain and then I'd have to try to figure out the new signing credentials. It's not wonderful, but for platform and package maintainers who care, I think it does provide value, and the signing credentials likely don't change that often. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From phil at riverbankcomputing.com Thu May 12 18:40:55 2016 From: phil at riverbankcomputing.com (Phil Thompson) Date: Thu, 12 May 2016 23:40:55 +0100 Subject: [Distutils] Does pip Honour "Obsoletes"? Message-ID: I may be doing something wrong, but... The METADATA in my wheel uses Obsoletes but pip does not remove the obsoleted package on an install of my wheel. Is it supposed to? Thanks, Phil From ncoghlan at gmail.com Thu May 12 23:52:09 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 13 May 2016 13:52:09 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On 13 May 2016 at 02:33, Brett Cannon wrote: > Both Donald and Nathaniel say to drop it, and since I put it in just to be > overly cautious I'm fine with dropping it. So unless Nick says > "semantics-version or death!", I agree w/ my co-authors and would update the > PEP to say: > > 1. no semantics-version > 2. [build-system] instead of [package.build-system] > I think both of those changes are improvements - having to change the filename is a reasonable disincentive against making breaking changes, and with just the two sections initially being defined ([build-system] and [tool.*]) it makes sense to have them both at the top level. Cheers, Nick. 
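To make the agreed shape concrete, a front end would discover the build requirements with something like the sketch below. It is only an illustration: it assumes the pyproject.toml file name discussed earlier in the thread and the availability of some TOML 0.4 parser (the third-party "toml" module is used here as an example):

    import toml

    with open("pyproject.toml") as f:
        config = toml.loads(f.read())

    # The [build-system] table and its "requires" list of PEP 508 specifiers.
    build_requires = config.get("build-system", {}).get("requires", [])
    print(build_requires)   # e.g. ['setuptools', 'wheel']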
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From graffatcolmingov at gmail.com Fri May 13 09:56:29 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 13 May 2016 08:56:29 -0500 Subject: [Distutils] Does pip Honour "Obsoletes"? In-Reply-To: References: Message-ID: On Thu, May 12, 2016 at 5:40 PM, Phil Thompson wrote: > I may be doing something wrong, but... > > The METADATA in my wheel uses Obsoletes but pip does not remove the obsoleted package on an install of my wheel. > > Is it supposed to? No. Unlike some other package managers, pip doesn't keep track of every package that depends on another, so it will not remove the obsoleted package unless you specifically do `pip uninstall obsoleted-package`. This prevents silent/unintentional breakage. Cheers, Ian From brett at python.org Fri May 13 11:19:09 2016 From: brett at python.org (Brett Cannon) Date: Fri, 13 May 2016 15:19:09 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On Thu, 12 May 2016 at 20:52 Nick Coghlan wrote: > On 13 May 2016 at 02:33, Brett Cannon wrote: > > Both Donald and Nathaniel say to drop it, and since I put it in just to > be > > overly cautious I'm fine with dropping it. So unless Nick says > > "semantics-version or death!", I agree w/ my co-authors and would update > the > > PEP to say: > > > > 1. no semantics-version > > 2. [build-system] instead of [package.build-system] > > > > I think both of those changes are improvements - having to change the > filename is a reasonable disincentive against making breaking changes, > and with just the two sections initially being defined ([build-system] > and [tool.*]) it makes sense to have them both at the top level. > Seems everyone is in agreement then. I'll update the PEP and send out a new draft later today. -------------- next part -------------- An HTML attachment was scrubbed... URL: From holger at merlinux.eu Fri May 13 12:05:42 2016 From: holger at merlinux.eu (holger krekel) Date: Fri, 13 May 2016 18:05:42 +0200 Subject: [Distutils] devpi-server-4.0: fixing the pip-8.1.2 issue / pep 503 compliance Message-ID: <20160513160542.GE13519@uwanda> devpi-server-4.0: fixing the pip-8.1.2 problem / PEP503 compliance ============================================================================ We've made available critically important releases of the devpi private packaging available. If you are not using "devpi" yet then you can may just read http://doc.devpi.net and forget about the rest of this announcement. This is for the many who experienced the "pip doesn't install packages anymore with devpi" problem. First of all, you may temporarily pin "pip" to avoid the problem on the client side: pip install pip==8.1.1 This is obviously a crutch but gives you some time to perform the export/import cycle required for hosting private packages via devpi-server-4.0 and being compatible with pip-8.1.2. If you are using devpi-server as a pure pypi.python.org cache you don't need to perform export/import and can just delete your server directory ($HOME/.devpi/server by default) before you install and start up devpi-server-4.0. If you are hosting private packages on devpi you will need to perform an export/import cycle of your server state in order to run devpi-server-4.0. The "4.0" in this case only signals this export/import need -- no other big changes are coming with 4.0. 
At the end of this announcement we explain some details of why we needed to go for a 4.0 and not just a micro bugfix release. To export from devpi-server-3.X -------------------------------- upgrade to the new devpi-server-3.1.2 before you export, like this: pip install "devpi-server<4.0" Now stop your server and run: devpi-server --export EXPORTDIR --serverdir SERVERDIR where EXPORTDIR should be a fresh new directory and SERVERDIR should be the server state directory ($HOME/.devpi/server by default). To export from devpi-server-2.X -------------------------------- Upgrade to the latest devpi-server-2.X release: pip install "devpi-server<3.0" devpi-common>=2.0.10 Here we force the devpi-common dependency to not accidentally be "devpi-common==2.0.9" which could lead to problems. Now stop your server and run: devpi-server --export EXPORTDIR --serverdir SERVERDIR where EXPORTDIR should be a fresh new directory and SERVERDIR should be the server state directory ($HOME/.devpi/server by default). to import state into devpi-server-4.0 ---------------------------------------- Upgrade to the latest devpi-server-4.X release: pip install "devpi-server<5.0" devpi-web If you don't use "devpi-web" you can leave it out from the pip command. Check you have the right version: devpi-server --version Now import from your previously created EXPORTDIR: devpi-server --serverdir SERVERDIR_NEW --import EXPORTDIR This will take a while if you have many indexes or lots of documentation -- devpi-web will create a search index over all of it during import. You are now good to go -- pip works again! devpi-client also has a 2.6.3 -------------------------------- We also published a minor bugfix "devpi-client-2.6.3" release which should work with both devpi-server-2.6 and devpi-server-4.0 as we are generally trying to keep devpi-client forward/backward compatible. You only need to install devpi-client-2.6.3 if you also install devpi-server into the same virtual environment. Otherwise using devpi-client-2.6.2 with both devpi-server-2.6 and devpi-server-4.0 probably works fine as well. background on the pip/devpi bug for the curious ----------------------------------------------- Besides devpi, also artifactory and other private index servers have experienced failures with pip-8.1.2. The change from 8.1.1 was that pip now asks for PEP503-normalized names when requesting the simple page from an index. Previously "-" and "." would be allowed but with the new normalization "." is substituted with "-". Now "pip install zope.interface" triggers a request to "+simple/zope-interface" and devpi in turns asks pypi.python.org/simple/zope-interface and gets an answer with lots of "zope.interface-*.tar.gz" release links. But those are not matched because without PEP503 "zope.interface" and "zope-interface" are different things. Moreover, pypi.python.org used to redirect to the "true" name but does not do this anymore which contributed to the eventual problem. We decided to go for 4.0 because since 3.0 we base database keys on normalized project names -- and this normalization is used in like 20-30 code places across the devpi system and plugins. Trying to be clever and avoid the export/import and trick "pip-8.1.2" into working looked like a can of worms. Now with devpi-server-4.0 we are using proper PEP503 specified normalization so should be safe. 
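For the curious, the PEP 503 normalization rule itself is tiny -- runs of "-", "_" and "." collapse to a single "-" and the result is lowercased -- which is why "zope.interface" and "zope-interface" now refer to the same project:

    import re

    def normalize(name):
        # Normalization exactly as specified in PEP 503.
        return re.sub(r"[-_.]+", "-", name).lower()

    print(normalize("zope.interface"))   # -> zope-interface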
best, holger and florian P.S.: we offer support contracts btw and thank in particular Dolby Laboratories, YouGov Inc and BlueYonder GmbH who funded a lot of the last year's devpi work and now agreed to be named in public - and no, we didn't get around to make a flashy web site yet. For now, just mail holger at merlinux to discuss support and training options. From chris.barker at noaa.gov Fri May 13 13:15:53 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 May 2016 10:15:53 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Tue, May 10, 2016 at 1:54 AM, Paul Moore wrote: > I would love to use YAML. I really would. But for pip, we need a > robust, easy to vendor Python implementation conda has been using yaml forever (with pyyaml) , and whatever problem is has (and there are many), I don't think I've ever seen a single issue cause by yaml or pyyaml. Might be worth asking the conda devs. I think we're freaking out way too much about what *could* go wrong. Oh, and why not "JSON with comments and trailing commas" - it would be well defined and easy to implement. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri May 13 13:46:42 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 13 May 2016 18:46:42 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On 13 May 2016 at 18:15, Chris Barker wrote: > I think we're freaking out way too much about what *could* go wrong. It's more that:pip would have to vendor pyyaml, and it's not small. I have no idea whether it's easy to vendor, either (does it have separate code paths for Python 2 and 3? I don't know, I've never looked). Also, we'd have to forego the C implementation - I have no idea how much of a performance hit that would give (of course, performance is hardly a key issue here) or how likely it is that bugs exist in the Python version that aren't normally noticed because people use the C implementation (which is definitely just speculation, conceded). But I think the decision is made, so it's not worth debating any more, honestly. Paul From njs at pobox.com Fri May 13 13:47:36 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 13 May 2016 10:47:36 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Fri, May 13, 2016 at 10:15 AM, Chris Barker wrote: > On Tue, May 10, 2016 at 1:54 AM, Paul Moore wrote: >> >> I would love to use YAML. I really would. But for pip, we need a >> robust, easy to vendor Python implementation > > > conda has been using yaml forever (with pyyaml) , and whatever problem is > has (and there are many), I don't think I've ever seen a single issue cause > by yaml or pyyaml. Might be worth asking the conda devs. "vendor" here means "munge the library so that it becomes a pure-python subpackage of pip named pip._vendor.whatever". 
This is a specific technical challenge that conda doesn't attempt... -n -- Nathaniel J. Smith -- https://vorpus.org From brett at python.org Fri May 13 14:22:51 2016 From: brett at python.org (Brett Cannon) Date: Fri, 13 May 2016 18:22:51 +0000 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Fri, 13 May 2016 at 10:47 Paul Moore wrote: > On 13 May 2016 at 18:15, Chris Barker wrote: > > I think we're freaking out way too much about what *could* go wrong. > > It's more that:pip would have to vendor pyyaml, and it's not small. I > have no idea whether it's easy to vendor, either (does it have > separate code paths for Python 2 and 3? I don't know, I've never > looked). Also, we'd have to forego the C implementation - I have no > idea how much of a performance hit that would give (of course, > performance is hardly a key issue here) or how likely it is that bugs > exist in the Python version that aren't normally noticed because > people use the C implementation (which is definitely just speculation, > conceded). > > But I think the decision is made, so it's not worth debating any more, > honestly. > No need to think; the decision is made and it's TOML. I know Chris doesn't mean to stir up trouble, but at this point if someone wants to propose something other than TOML they are going to have to write their own PEP. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 13 14:33:52 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 May 2016 11:33:52 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: One other question: Is it just examples or is "build" being defined as "build a wheel"? i.e. there are times one might want to build a package without building a wheel -- just it install it yourself, or to put it in another package format -- conda, rpm, what have you. In conda's case, building a wheel, and then installing it would work fine, but I'm not sure we want to lock that down as the only way to build a package. Granted, if all it means is that someone will download an unnecessary dependency, big deal. I'm also a bit confused about whether we're trying to specify the dependencies required simply to run the build tool itself, or the dependencies required to actually do the build -- or the latter being saved for another day? -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri May 13 14:57:29 2016 From: brett at python.org (Brett Cannon) Date: Fri, 13 May 2016 18:57:29 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On Fri, 13 May 2016 at 11:34 Chris Barker wrote: > One other question: > > Is it just examples or is "build" being defined as "build a wheel"? > Just an example (clarified previously). -Brett > > i.e. there are times one might want to build a package without building a > wheel -- just it install it yourself, or to put it in another package > format -- conda, rpm, what have you. 
> > In conda's case, building a wheel, and then installing it would work fine, > but I'm not sure we want to lock that down as the only way to build a > package. > > Granted, if all it means is that someone will download an unnecessary > dependency, big deal. > > I'm also a bit confused about whether we're trying to specify the > dependencies required simply to run the build tool itself, or the > dependencies required to actually do the build -- or the latter being saved > for another day? > The minimal requirements to execute the build system. Providing some way to specify more dependencies in some dynamic fashion and the like is for another PEP. -Brett > > -CHB > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Fri May 13 16:09:31 2016 From: njs at pobox.com (Nathaniel Smith) Date: Fri, 13 May 2016 13:09:31 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On May 13, 2016 11:34 AM, "Chris Barker" wrote: > > One other question: > > Is it just examples or is "build" being defined as "build a wheel"? > > i.e. there are times one might want to build a package without building a wheel -- just it install it yourself, or to put it in another package format -- conda, rpm, what have you. > > In conda's case, building a wheel, and then installing it would work fine, but I'm not sure we want to lock that down as the only way to build a package. As Brett already clarified, this pep is just about how you get to the point of being able to start the build system; it doesn't care what the build system actually outputs. But, the plan *is* to make wheels the standard way to build packages -- that will be in the next pep :-). I'm not sure I'd call it "lock down", because there's nothing that will stop you running setup.py bdist_rpm or whatever. But our goal is to reach the point where package authors get a choice of what build system to use, and there's no guarantee that every build system will implement bdist_rpm. So, the plan is to require all build systems to be able to output wheels, and then debian or conda-build or whoever will convert the wheel into whatever final package format they want. This is way more scalable than requiring N different build systems to each be able to output M different formats for N*M code paths. And if wheels aren't sufficient, well, we can add stuff to the spec :-) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 13 16:31:22 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 May 2016 13:31:22 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On Fri, May 13, 2016 at 1:09 PM, Nathaniel Smith wrote: > But, the plan *is* to make wheels the standard way to build packages -- > that will be in the next pep :-). I'm not sure I'd call it "lock down", > because there's nothing that will stop you running setup.py bdist_rpm or > whatever. 
But our goal is to reach the point where package authors get a > choice of what build system to use, and there's no guarantee that every > build system will implement bdist_rpm. > > hmm -- this really feels like mingling packaging and building. Does making sure everything builds a wheel help systems like rpm and the like? Honestly I have no idea. I do know that conda is very is very much designed to not care at all how a package is build or installed, as long as it can be installed -- so if a wheel is built and then that wheel is installed, that all the same to conda. But is that the case for everything else? I absolutely agree that we shouldn't expect a bdist_rpm and the like -- in fact, those should all be deprecated. but maybe a "install" that goes from source to installed package, without passing through a wheel? or maybe not -- I really don't know rpm or deb or anything else well enough to know. > So, the plan is to require all build systems to be able to output wheels, > and then debian or conda-build or whoever will convert the wheel into > whatever final package format they want. > easy for conda -- not sure about the others.... hmm -- homebrew builds from source, so as long as you have a way to install the wheel you built, it'll be fine (much like conda, but without ever making a package) -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 13 16:33:26 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 13 May 2016 13:33:26 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Fri, May 13, 2016 at 11:22 AM, Brett Cannon wrote: > No need to think; the decision is made and it's TOML. I know Chris doesn't > mean to stir up trouble, > I got a bit out of sync with the conversation -- sorry for the noise. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat May 14 03:25:22 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 14 May 2016 17:25:22 +1000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: On 14 May 2016 at 06:31, Chris Barker wrote: > On Fri, May 13, 2016 at 1:09 PM, Nathaniel Smith wrote: >> >> But, the plan *is* to make wheels the standard way to build packages -- >> that will be in the next pep :-). I'm not sure I'd call it "lock down", >> because there's nothing that will stop you running setup.py bdist_rpm or >> whatever. But our goal is to reach the point where package authors get a >> choice of what build system to use, and there's no guarantee that every >> build system will implement bdist_rpm. > > hmm -- this really feels like mingling packaging and building. > > Does making sure everything builds a wheel help systems like rpm and the > like? Honestly I have no idea. Yes, it does. 
The reason is that separating the build system from the deployment system is normal practice in those tools, but not the case with approaches like "./setup.py install". Direct invocation of the latter also loses semantic information about the package contents that may be relevant for Filesystem Hierarchy Standard compliance purposes. When the upstream installation process is instead broken up into "build a binary artifact" and "install a binary artifact", that brings a few benefits: - the fact that the build system and the deployment target may be different machines can be handled upstream - we get a new metadata source (the binary artifact format) that can be used in tools like pyp2rpm - the semantic information about file types is captured in the binary artifact Over time, what this means is that distros can move away from the current practice of defining a packaging configuration once, and then updating it manually when rebasing to a new upstream release, in favour of automatically generating the downstream packaging config, and submitting patches to the upstream project to improve the built binary artifact as necessary (patching it locally in the meantime). That's not something that's going to happen quickly, but you can see the foundations of it being laid as people start trying to do things like automatically rebuild all of PyPI as RPMs. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Fri May 13 15:03:25 2016 From: brett at python.org (Brett Cannon) Date: Fri, 13 May 2016 19:03:25 +0000 Subject: [Distutils] build system requirements PEP, 3rd draft Message-ID: Biggest changes since the initial draft: 1. No more semantics-version 2. No more [package] table 3. Settled on [build-system] as the table name 4. The "requires" key is required if [build-system] is defined 5. Changed the title and clarified that this is all about the minimum requirements for the build system to execute (some added support for things like dynamic dependencies for producing build artifacts is for another PEP) 6. Added a JSON Schema for the resulting data from the table because Nick likes his specs :) ---------- PEP: NNN Title: Specifying Minimum Build System Requirements for Python Projects Version: $Revision$ Last-Modified: $Date$ Author: Brett Cannon , Nathaniel Smith , Donald Stufft BDFL-Delegate: Nick Coghlan Discussions-To: distutils-sig Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 10-May-2016 Post-History: 10-May-2016, 11-May-2016, 13-May-2016 Abstract ======== This PEP specifies how Python software packages should specify what dependencies they have in order to execute their chosen build system. As part of this specification, a new configuration file is introduced for software packages to use to specify their build dependencies (with the expectation that the same configuration file will be used for future configuration details). Rationale ========= When Python first developed its tooling for building distributions of software for projects, distutils [#distutils]_ was the chosen solution. As time went on, setuptools [#setuptools]_ gained popularity to add some features on top of distutils. Both used the concept of a ``setup.py`` file that project maintainers executed to build distributions of their software (as well as users to install said distribution). Using an executable file to specify build requirements under distutils isn't an issue as distutils is part of Python's standard library. 
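For illustration, a ``setup.py`` for a hypothetical project that only uses distutils needs nothing beyond the standard library::

    from distutils.core import setup

    # Everything imported here ships with Python itself, so no build
    # dependency needs to be declared anywhere.
    setup(
        name="example-project",    # hypothetical project name
        version="1.0",
        py_modules=["example"],    # assumes an example.py next to setup.py
    )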
Having the build tool as part of Python means that a ``setup.py`` has no external dependency that a project maintainer needs to worry about to build a distribution of their project. There was no need to specify any dependency information as the only dependency is Python. But when a project chooses to use setuptools, the use of an executable file like ``setup.py`` becomes an issue. You can't execute a ``setup.py`` file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the ``setup.py`` file where that information is stored. It's a catch-22 of a file not being runnable without knowing its own contents which can't be known programmatically unless you run the file. Setuptools tried to solve this with a ``setup_requires`` argument to its ``setup()`` function [#setup_args]_. This solution has a number of issues, such as: * No tooling (besides setuptools itself) can access this information without executing the ``setup.py``, but ``setup.py`` can't be executed without having these items installed. * While setuptools itself will install anything listed in this, they won't be installed until *during* the execution of the ``setup()`` function, which means that the only way to actually use anything added here is through increasingly complex machinations that delay the import and usage of these modules until later on in the execution of the ``setup()`` function. * This cannot include ``setuptools`` itself nor can it include a replacement for ``setuptools``, which means that projects such as ``numpy.distutils`` are largely incapable of utilizing it and projects cannot take advantage of newer setuptools features until their users naturally upgrade the version of setuptools to a newer one. * The items listed in ``setup_requires`` get implicitly installed whenever you execute the ``setup.py``, but one of the common ways that the ``setup.py`` is executed is via another tool, such as ``pip``, which is already managing dependencies. This means that a command like ``pip install spam`` might end up having both pip and setuptools downloading and installing packages and end users needing to configure *both* tools (and for ``setuptools`` without being in control of the invocation) to change settings like which repository it installs from. It also means that users need to be aware of the discovery rules for both tools, as one may support different package formats or determine the latest version differently. This has culminated in a situation where use of ``setup_requires`` is rare, where projects tend to either simply copy and paste snippets between ``setup.py`` files or they eschew it altogether in favor of simply documenting elsewhere what they expect the user to have manually installed prior to attempting to build or install their project. All of this has led pip [#pip]_ to simply assume that setuptools is necessary when executing a ``setup.py`` file. The problem with this, though, is that it doesn't scale if another project begins to gain traction in the community as setuptools has. It also prevents other projects from gaining traction due to the friction required to use them with a project when pip can't infer the fact that something other than setuptools is required. This PEP attempts to rectify the situation by specifying a way to list the minimal dependencies of the build system of a project in a declarative fashion in a specific file. This allows a project to list what build dependencies it has to go from e.g.
source checkout to wheel, while not falling into the catch-22 trap that a ``setup.py`` has where tooling can't infer what a project needs to build itself. Implementing this PEP will allow projects to specify what build system they depend on upfront so that tools like pip can make sure that they are installed in order to run the build system to build the project. To provide more context and motivation for this PEP, think of the (rough) steps required to produce a built artifact for a project: 1. The source checkout of the project. 2. Installation of the build system. 3. Execute the build system. This PEP covers step #2. It is fully expected that a future PEP will cover step #3, including how to have the build system dynamically specify more dependencies that the build system requires to perform its job. The purpose of this PEP though, is to specify the minimal set of requirements for the build system to simply begin execution. Specification ============= The build system dependencies will be stored in a file named ``pyproject.toml`` that is written in the TOML format [#toml]_. This format was chosen as it is human-usable (unlike JSON [#json]_), it is flexible enough (unlike configparser [#configparser]_), stems from a standard (also unlike configparser [#configparser]_), and it is not overly complex (unlike YAML [#yaml]_). The TOML format is already in use by the Rust community as part of their Cargo package manager [#cargo]_ and in private email stated they have been quite happy with their choice of TOML. A more thorough discussion as to why various alternatives were not chosen can be read in the `Other file formats`_ section. There will be a ``[build-system]`` table in the configuration file to store build-related data. Initially only one key of the table will be valid and mandatory: ``requires``. That key will have a value of a list of strings representing the PEP 508 dependencies required to execute the build system (currently that means what dependencies are required to execute a ``setup.py`` file). To provide a type-specific representation of the resulting data from the TOML file for illustrative purposes only, the following JSON Schema [#jsonschema]_ would match the data format:: { "$schema": "http://json-schema.org/schema#", "type": "object", "additionalProperties": false, "properties": { "build-system": { "type": "object", "additionalProperties": false, "properties": { "requires": { "type": "array", "items": { "type": "string" } } }, "required": ["requires"] }, "tool": { "type": "object" } } } For the vast majority of Python projects that rely upon setuptools, the ``pyproject.toml`` file will be:: [build-system] # Minimum requirements for the build system to execute. requires = ["setuptools", "wheel"] # PEP 508 specifications. Because the use of setuptools and wheel are so expansive in the community at the moment, build tools are expected to use the example configuration file above as their default semantics when a ``pyproject.toml`` file is not present. All other top-level keys and tables are reserved for future use by other PEPs except for the ``[tool]`` table. Within that table, tools can have users specify configuration data as long as they use a sub-table within ``[tool]``, e.g. the `flit `_ tool would store its configuration in ``[tool.flit]``. We need some mechanism to allocate names within the ``tool.*`` namespace, to make sure that different projects don't attempt to use the same sub-table and collide. 
Our rule is that a project can use the subtable ``tool.$NAME`` if, and only if, they own the entry for ``$NAME`` in the Cheeseshop/PyPI. Rejected Ideas ============== A semantic version key ---------------------- For future-proofing the structure of the configuration file, a ``semantics-version`` key was initially proposed. Defaulting to ``1``, the idea was that if any semantics changes to previously defined keys or tables occurred which were not backwards-compatible, then the ``semantics-version`` would be incremented to a new number. In the end, though, it was decided that this was a premature optimization. The expectation is that changes to what is pre-defined semantically in the configuration file will be rather conservative. And in the instances where a backwards-incompatible change would have occurred, different names can be used for the new semantics to avoid breaking older tools. A more nested namespace ----------------------- An earlier draft of this PEP had a top-level ``[package]`` table. The idea was to impose some scoping for a semantics versioning scheme (see `A semantic version key`_ for why that idea was rejected). With the need for scoping removed, the point of having a top-level table became superfluous. Other table names ----------------- Another name proposed for the ``[build-system]`` table was ``[build]``. The alternative name is shorter, but doesn't convey as much of the intention of what information is stored in the table. After a vote on the distutils-sig mailing list, the current name won out. Other file formats ------------------ Several other file formats were put forward for consideration, all rejected for various reasons. Key requirements were that the format be editable by human beings and have an implementation that can be vendored easily by projects. This outright excluded certain formats like XML which are not friendly towards human beings and were never seriously discussed. JSON '''' The JSON format [#json]_ was initially considered but quickly rejected. While great as a human-readable, string-based data exchange format, the syntax does not lend itself to easy editing by a human being (e.g. the syntax is more verbose than necessary while not allowing for comments). An example JSON file for the proposed data would be:: { "build": { "requires": [ "setuptools", "wheel>=0.27" ] } } YAML '''' The YAML format [#yaml]_ was designed to be a superset of JSON [#json]_ while being easier to work with by hand. There are three main issues with YAML. One is that the specification is large: 86 pages if printed on letter-sized paper. That leaves the possibility that someone may use a feature of YAML that works with one parser but not another. It has been suggested to standardize on a subset, but that basically means creating a new standard specific to this file which is not tractable long-term. Two is that YAML itself is not safe by default. The specification allows for the arbitrary execution of code which is best avoided when dealing with configuration data. It is of course possible to avoid this behavior -- for example, PyYAML provides a ``safe_load`` operation -- but if any tool carelessly uses ``load`` instead then it opens itself up to arbitrary code execution. While this PEP is focused on the building of projects which inherently involves code execution, other configuration data such as project name and version number may end up in the same file someday where arbitrary code execution is not desired.
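To make the safety issue concrete, here is a minimal sketch of the difference (it assumes PyYAML with its default, unsafe loader; the document below is purely illustrative)::

    import yaml

    # A YAML document that abuses PyYAML's Python-specific tags.
    document = "cwd: !!python/object/apply:os.getcwd []"

    # ``load`` honours ``!!python/...`` tags, so merely parsing the document
    # calls ``os.getcwd()`` -- code runs while reading "configuration" data.
    print(yaml.load(document))

    # ``safe_load`` only understands plain YAML types and rejects the tag
    # instead of executing anything.
    try:
        yaml.safe_load(document)
    except yaml.constructor.ConstructorError:
        print("safe_load refused the Python-specific tag")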
And finally, the most popular Python implementation of YAML is PyYAML [#pyyaml]_ which is a large project of a few thousand lines of code and an optional C extension module. While in and of itself this isn't necessarily an issue, this becomes more of a problem for projects like pip where they would most likely need to vendor PyYAML as a dependency so as to be fully self-contained (otherwise you end up with your install tool needing an install tool to work). A proof-of-concept re-working of PyYAML has been done to see how easy it would be to potentially vendor a simpler version of the library which shows it is a possibility. An example YAML file is:: build: requires: - setuptools - wheel>=0.27 configparser '''''''''''' An INI-style configuration file based on what configparser [#configparser]_ accepts was considered. Unfortunately there is no specification of what configparser accepts, leading to support skew between versions. For instance, what ConfigParser in Python 2.7 accepts is not the same as what configparser in Python 3 accepts. While one could standardize on what Python 3 accepts and simply vendor the backport of the configparser module, that does mean this PEP would have to codify that the backport of configparser must be used by all projects wishing to consume the metadata specified by this PEP. This is overly restrictive and could lead to confusion if someone is not aware that a specific version of configparser is expected. An example INI file is:: [build] requires = setuptools wheel>=0.27 Python literals ''''''''''''''' Someone proposed using Python literals as the configuration format. All Python programmers would be used to the format, there would implicitly be no third-party dependency to read the configuration data, and it can be safe if something like ``ast.literal_eval()`` [#ast_literal_eval]_ is used. The problem is that to use Python literals you either end up with something no better than JSON, or you end up with something like what Bazel [#bazel]_ uses. In the former the issues are the same as JSON. In the latter, you end up with people consistently asking for more flexibility as users have a hard time ignoring the desire to use some feature of Python that they think they need (one of the co-authors has direct experience with this from the internal usage of Bazel at Google). There is no example format as one was never put forward for consideration. Other file names ---------------- Several other file names were considered and rejected (although this is very much a bikeshedding topic, and so the decision comes down to mostly taste). pysettings.toml Most reasonable alternative. pypa.toml While it makes sense to reference the PyPA [#pypa]_, it is a somewhat niche term. It's better to have the file name make sense without having domain-specific knowledge. pybuild.toml From the restrictive perspective of this PEP this filename makes sense, but if any non-build metadata ever gets added to the file then the name ceases to make sense. pip.toml Too tool-specific. meta.toml Too generic; project may want to have its own metadata file. setup.toml While in keeping with tradition thanks to ``setup.py``, it does not necessarily match what the file may contain in the future (e.g. is knowing the name of a project inherently part of its setup?). pymeta.toml Not obvious to newcomers to programming and/or Python. pypackage.toml & pypackaging.toml Name conflation of what a "package" is (project versus namespace). pydevelop.toml The file may contain details not specific to development.
pysource.toml Not directly related to source code. pytools.toml Misleading as the file is (currently) aimed at project management. dstufft.toml Too person-specific. ;) References ========== .. [#distutils] distutils (https://docs.python.org/3/library/distutils.html#module-distutils) .. [#setuptools] setuptools (https://pypi.python.org/pypi/setuptools) .. [#setup_args] setuptools: New and Changed setup() Keywords ( http://pythonhosted.org/setuptools/setuptools.html#new-and-changed-setup-keywords ) .. [#pip] pip (https://pypi.python.org/pypi/pip) .. [#wheel] wheel (https://pypi.python.org/pypi/wheel) .. [#toml] TOML (https://github.com/toml-lang/toml) .. [#json] JSON (http://json.org/) .. [#yaml] YAML (http://yaml.org/) .. [#configparser] configparser (https://docs.python.org/3/library/configparser.html#module-configparser) .. [#pyyaml] PyYAML (https://pypi.python.org/pypi/PyYAML) .. [#pypa] PyPA (https://www.pypa.io) .. [#bazel] Bazel (http://bazel.io/) .. [#ast_literal_eval] ``ast.literal_eval()`` (https://docs.python.org/3/library/ast.html#ast.literal_eval) .. [#cargo] Cargo, Rust's package manager (http://doc.crates.io/) .. [#jsonschema] JSON Schema (http://json-schema.org/) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -------------- next part -------------- An HTML attachment was scrubbed... URL: From lele at metapensiero.it Sat May 14 10:41:11 2016 From: lele at metapensiero.it (Lele Gaifax) Date: Sat, 14 May 2016 16:41:11 +0200 Subject: [Distutils] comparison of configuration languages References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <877few8zy0.fsf@metapensiero.it> Chris Barker writes: > Oh, and why not "JSON with comments and trailing commas" - it would be well > defined and easy to implement. And mostly done, even: https://bitbucket.org/intellimath/pyaxon ciao, lele. -- nickname: Lele Gaifax | Quando vivr? di quello che ho pensato ieri real: Emanuele Gaifas | comincer? ad aver paura di chi mi copia. lele at metapensiero.it | -- Fortunato Depero, 1929. From contact at ionelmc.ro Sat May 14 16:21:33 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Sat, 14 May 2016 23:21:33 +0300 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On Fri, May 13, 2016 at 9:22 PM, Brett Cannon wrote: > No need to think; the decision is made and it's TOML. I know Chris doesn't > mean to stir up trouble, but at this point if someone wants to propose > something other than TOML they are going to have to write their own PEP. ?Not asking for any change but has anyone looked at libconfig ? ?It looks quite interesting: simple grammar and nesting support. What do you think of it? Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.barker at noaa.gov Sat May 14 18:33:26 2016 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Sat, 14 May 2016 15:33:26 -0700 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> Message-ID: <-6193373753335887999@unknownmsgid> > When the upstream installation process is instead broken up into > "build a binary artifact" and "install a binary artifact", that brings > a few benefits: Great -- thanks for the detailed explanation. Sounds like a good plan, then. -CHB From donald at stufft.io Sun May 15 21:41:55 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 15 May 2016 21:41:55 -0400 Subject: [Distutils] Download Counts Temporarily Disabled Message-ID: <677AE35D-4BBE-4761-A90F-ED767031E702@stufft.io> Hey, Just an FYI, I've disabled download counts on PyPI for the time being. The statistics stack is broken and needs engineering effort to fix it back up to deal with changes to PyPI. It was suggested that hiding the counts would help prevent user confusion when they see things like "downloaded 0 times" making people believe that a library has no users, even if it is a significantly downloaded library. I'm unlikely to get around to fixing the current stack since, as part of Warehouse, I'm working on a *new* statistics stack which is much better. The data collection and storage parts of that stack are already done and I just need to get querying done (made more difficult by the fact that the new system queries can take 10+ seconds to complete, but can be queried on any dimension) and a tool to process the historical data and put it into the new storage engine. Anyways, this is just to let folks know that this isn't a permanent loss of the feature and we won't lose any data. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From robertc at robertcollins.net Sun May 15 23:55:53 2016 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 16 May 2016 15:55:53 +1200 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: On 15 May 2016 at 08:21, Ionel Cristian M?rie? wrote: > > On Fri, May 13, 2016 at 9:22 PM, Brett Cannon wrote: >> >> No need to think; the decision is made and it's TOML. I know Chris doesn't >> mean to stir up trouble, but at this point if someone wants to propose >> something other than TOML they are going to have to write their own PEP. > > > Not asking for any change but has anyone looked at libconfig? It looks quite > interesting: simple grammar and nesting support. What do you think of it? I hadn't, but its certainly irrelevant here, as vendoring it suitably for pip would be excruciating due to the C dependency. -Rob From ncoghlan at gmail.com Mon May 16 03:02:03 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 16 May 2016 17:02:03 +1000 Subject: [Distutils] build system requirements PEP, 3rd draft In-Reply-To: References: Message-ID: On 14 May 2016 at 05:03, Brett Cannon wrote: > Biggest changes since the initial draft: > > 1. No more semantics-version > 2. No more [package] table > 3. Settled on [build-system] as the table name > 4. 
The "requires" key is required if [build-system] is defined > 5. Changed the title and clarified that this is all about the minimum > requirements for the build system to execute (some added support for things > like dynamic dependencies for producing build artifacts is for another PEP) And lo, the colour of the bikeshed was chosen, and the chosen colour was "teal" :) More seriously, as BDFL-Delegate, I'm entirely happy with this version. Thanks for your work in pulling this reduced scope PEP together, as well as to Robert, Nathaniel and Donald in driving the preceding build system invocation PEPs. While it's a deviation from the nominal PEP process, I figure you can check this into the PEPs repo in an already Accepted state, with a reference back to this email for the Resolution. Once the PEPs repo itself allows for pull requests, I except we'll adjust the numeric identifier allocation process to include checking for open PRs and using the next number not used in a current PR, rather than the next number not used in the repo itself. > Added a JSON Schema for the resulting data from the table because Nick likes > his specs :) Working for Boeing for so long left its mark :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From agroszer at gmail.com Mon May 16 03:20:46 2016 From: agroszer at gmail.com (Adam GROSZER) Date: Mon, 16 May 2016 09:20:46 +0200 Subject: [Distutils] windows SSL: CERTIFICATE_VERIFY_FAILED Message-ID: <573974CE.1030408@gmail.com> Hi, I have here an old windows server 2003R2 (winbot.zope.org). Recently updated python 2.7.0 to 2.7.11. Now running into SSL: CERTIFICATE_VERIFY_FAILED with buildout bootstrap. I installed DigiCertHighAssuranceEVRootCA.crt to "Certificates (Local Computer)\Trusted Root Certification Authorities\Certificates. As detailed at: http://www.databasemart.com/howto/SQLoverssl/How_To_Install_Trusted_Root_Certification_Authority_With_MMC.aspx But still get: C:\buildslave\zope.testing\build>c:\Python27_32\python.exe bootstrap.py Traceback (most recent call last): File "bootstrap.py", line 92, in exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez) File "c:\Python27_32\lib\urllib2.py", line 154, in urlopen return opener.open(url, data, timeout) File "c:\Python27_32\lib\urllib2.py", line 431, in open response = self._open(req, data) File "c:\Python27_32\lib\urllib2.py", line 449, in _open '_open', req) File "c:\Python27_32\lib\urllib2.py", line 409, in _call_chain result = func(*args) File "c:\Python27_32\lib\urllib2.py", line 1240, in https_open context=self._context) File "c:\Python27_32\lib\urllib2.py", line 1197, in do_open raise URLError(err) urllib2.URLError: -- Best regards, Adam GROSZER -- Quote of the day: The Church has many critics but few rivals. - Anonymous From p.f.moore at gmail.com Mon May 16 04:02:45 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 16 May 2016 09:02:45 +0100 Subject: [Distutils] build system requirements PEP, 3rd draft In-Reply-To: References: Message-ID: On 16 May 2016 at 08:02, Nick Coghlan wrote: > More seriously, as BDFL-Delegate, I'm entirely happy with this > version. Thanks for your work in pulling this reduced scope PEP > together, as well as to Robert, Nathaniel and Donald in driving the > preceding build system invocation PEPs. Excellent news! Thanks to all for moving this forward. 
Paul From brett at python.org Mon May 16 10:43:30 2016 From: brett at python.org (Brett Cannon) Date: Mon, 16 May 2016 14:43:30 +0000 Subject: [Distutils] build system requirements PEP, 3rd draft In-Reply-To: References: Message-ID: And the PEP is checked in! https://hg.python.org/peps/file/tip/pep-0518.txt I'm at OSCON and in a tutorial on my work laptop so I didn't have a chance to verify there weren't any reST errors, so if someone with commit privileges can just quickly run `make` on the peps repo to make sure I didn't botch something that would be appreciated. :) On Mon, 16 May 2016 at 01:02 Paul Moore wrote: > On 16 May 2016 at 08:02, Nick Coghlan wrote: > > More seriously, as BDFL-Delegate, I'm entirely happy with this > > version. Thanks for your work in pulling this reduced scope PEP > > together, as well as to Robert, Nathaniel and Donald in driving the > > preceding build system invocation PEPs. > > Excellent news! Thanks to all for moving this forward. > > Paul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald at stufft.io Mon May 16 10:48:51 2016 From: donald at stufft.io (Donald Stufft) Date: Mon, 16 May 2016 10:48:51 -0400 Subject: [Distutils] build system requirements PEP, 3rd draft In-Reply-To: References: Message-ID: <6071BDB5-128D-4569-AC7E-D873DFADCEA5@stufft.io> > On May 16, 2016, at 10:43 AM, Brett Cannon wrote: > > I'm at OSCON and in a tutorial on my work laptop so I didn't have a chance to verify there weren't any reST errors, so if someone with commit privileges can just quickly run `make` on the peps repo to make sure I didn't botch something that would be appreciated. :) Seems to work: $ make pep-0518.html pep-0518.txt (text/x-rst) -> pep-0518.html ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From chris.barker at noaa.gov Mon May 16 11:41:02 2016 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Mon, 16 May 2016 08:41:02 -0700 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: <7800899700581903@unknownmsgid> ?Not asking for any change but has anyone looked at libconfig ? ?It looks quite interesting: simple grammar and nesting support. What do you think of it As pointed out, it's a C lib. But as we all like writing tools, it wouldn't be very hard to write a Python parser for the format. But it's a bit C-y for my taste, and yet another configure language? Really? -CHB Thanks, -- Ionel Cristian M?rie?, http://blog.ionelmc.ro -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Mon May 16 11:47:13 2016 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 16 May 2016 10:47:13 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: 1. again, ConfigObj is pure Python, supports nesting, and is read/write (so there is no need to template injectable config files (e.g., with \n, =, #;etc. 
must be escaped)) | PyPI: https://pypi.python.org/pypi/configobj/5.0.6 | Src: https://github.com/DiffSK/configobj | Docs: https://configobj.readthedocs.io/en/latest/ 2. A grammar for configparser would make this more apparent https://hg.python.org/cpython/file/tip/Lib/configparser.py On Sunday, May 15, 2016, Robert Collins wrote: > On 15 May 2016 at 08:21, Ionel Cristian M?rie? > wrote: > > > > On Fri, May 13, 2016 at 9:22 PM, Brett Cannon > wrote: > >> > >> No need to think; the decision is made and it's TOML. I know Chris > doesn't > >> mean to stir up trouble, but at this point if someone wants to propose > >> something other than TOML they are going to have to write their own PEP. > > > > > > Not asking for any change but has anyone looked at libconfig? It looks > quite > > interesting: simple grammar and nesting support. What do you think of it? > > I hadn't, but its certainly irrelevant here, as vendoring it suitably > for pip would be excruciating due to the C dependency. > > -Rob > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From graffatcolmingov at gmail.com Mon May 16 11:54:25 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Mon, 16 May 2016 10:54:25 -0500 Subject: [Distutils] comparison of configuration languages In-Reply-To: References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> Message-ID: The PEP has been accepted by Nick. Let's turn our thoughts to more productive topics rather than talking about a moot point. Cheers, Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon May 16 12:45:01 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 16 May 2016 17:45:01 +0100 Subject: [Distutils] comparison of configuration languages In-Reply-To: <7800899700581903@unknownmsgid> References: <5731292E.4030601@stoneleaf.us> <573145FF.50108@stoneleaf.us> <5731900B.3020200@nextday.fi> <7800899700581903@unknownmsgid> Message-ID: On 16 May 2016 at 16:41, Chris Barker - NOAA Federal wrote: > As pointed out, it's a C lib. But as we all like writing tools, it wouldn't > be very hard to write a Python parser for the format. There is one (on PyPI - can't recall the name now, sorry). > But it's a bit C-y for my taste, and yet another configure language? Really? Also, the spec seems to imply that it doesn't support Unicode (no Unicode escapes in strings). Anyway, as noted this is very off-topic now, so that's all I'll say. Paul From agroszer at gmail.com Mon May 16 14:12:41 2016 From: agroszer at gmail.com (Adam GROSZER) Date: Mon, 16 May 2016 20:12:41 +0200 Subject: [Distutils] windows SSL: CERTIFICATE_VERIFY_FAILED In-Reply-To: <573974CE.1030408@gmail.com> References: <573974CE.1030408@gmail.com> Message-ID: <573A0D99.8030907@gmail.com> DOH, To answer my own question: it's bootstrap.pypa.io not pypi... needed a different root cert. On 05/16/2016 09:20 AM, Adam GROSZER wrote: > Hi, > > I have here an old windows server 2003R2 (winbot.zope.org). > Recently updated python 2.7.0 to 2.7.11. > Now running into SSL: CERTIFICATE_VERIFY_FAILED with buildout bootstrap. > > I installed DigiCertHighAssuranceEVRootCA.crt to "Certificates (Local > Computer)\Trusted Root Certification Authorities\Certificates. 
> As detailed at: > http://www.databasemart.com/howto/SQLoverssl/How_To_Install_Trusted_Root_Certification_Authority_With_MMC.aspx > > > But still get: > > > C:\buildslave\zope.testing\build>c:\Python27_32\python.exe bootstrap.py > > Traceback (most recent call last): > > File "bootstrap.py", line 92, in > > exec(urlopen('https://bootstrap.pypa.io/ez_setup.py').read(), ez) > > File "c:\Python27_32\lib\urllib2.py", line 154, in urlopen > > return opener.open(url, data, timeout) > > File "c:\Python27_32\lib\urllib2.py", line 431, in open > > response = self._open(req, data) > > File "c:\Python27_32\lib\urllib2.py", line 449, in _open > > '_open', req) > > File "c:\Python27_32\lib\urllib2.py", line 409, in _call_chain > > result = func(*args) > > File "c:\Python27_32\lib\urllib2.py", line 1240, in https_open > > context=self._context) > > File "c:\Python27_32\lib\urllib2.py", line 1197, in do_open > > raise URLError(err) > > urllib2.URLError: certificate verify failed (_ssl.c:590)> > -- Best regards, Adam GROSZER -- Quote of the day: It's hard to get ivory in Africa, but in Alabama the Tuscaloosa. From brett at python.org Mon May 16 15:56:18 2016 From: brett at python.org (Brett Cannon) Date: Mon, 16 May 2016 19:56:18 +0000 Subject: [Distutils] build system requirements PEP, 3rd draft In-Reply-To: <6071BDB5-128D-4569-AC7E-D873DFADCEA5@stufft.io> References: <6071BDB5-128D-4569-AC7E-D873DFADCEA5@stufft.io> Message-ID: Cool, thanks! On Mon, 16 May 2016 at 07:49 Donald Stufft wrote: > > On May 16, 2016, at 10:43 AM, Brett Cannon wrote: > > I'm at OSCON and in a tutorial on my work laptop so I didn't have a chance > to verify there weren't any reST errors, so if someone with commit > privileges can just quickly run `make` on the peps repo to make sure I > didn't botch something that would be appreciated. :) > > > > Seems to work: > > $ make pep-0518.html > pep-0518.txt (text/x-rst) -> pep-0518.html > > ----------------- > Donald Stufft > PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 > DCFA > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luis.de.sousa at protonmail.ch Fri May 20 10:57:42 2016 From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=) Date: Fri, 20 May 2016 10:57:42 -0400 Subject: [Distutils] PyPi upload fails with TypeError Message-ID: Dear all, I am trying to upload a new package to PyPi with this command: $ python3 setup.py sdist upload The process seems to go on well until the running upload bit, at which point I get this exception: Traceback (most recent call last): File "setup.py", line 33, in "Topic :: Scientific/Engineering :: GIS", File "/usr/lib/python3.4/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/lib/python3.4/distutils/dist.py", line 955, in run_commands self.run_command(cmd) File "/usr/lib/python3.4/distutils/dist.py", line 974, in run_command cmd_obj.run() File "/usr/lib/python3.4/distutils/command/upload.py", line 65, in run self.upload_file(command, pyversion, filename) File "/usr/lib/python3.4/distutils/command/upload.py", line 139, in upload_file user_pass = (self.username + ":" + self.password).encode('ascii') TypeError: Can't convert 'NoneType' object to str implicitly I verified that Topic :: Scientific/Engineering :: GIS is a valid topic. What else am I missing? Thank you, Lu?s Sent from [ProtonMail](https://protonmail.ch), encrypted email based in Switzerland. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tseaver at palladion.com Fri May 20 12:43:01 2016 From: tseaver at palladion.com (Tres Seaver) Date: Fri, 20 May 2016 12:43:01 -0400 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 05/20/2016 10:57 AM, Lu??s de Sousa via Distutils-SIG wrote: > File "/usr/lib/python3.4/distutils/command/upload.py", line 139, in > upload_file user_pass = (self.username + ":" + > self.password).encode('ascii') TypeError: Can't convert 'NoneType' > object to str implicitly > > I verified that Topic :: Scientific/Engineering :: GIS is a valid > topic. What else am I missing? The TypeError is about the *last* line in the traceback: One of `self.username' or 'self.password' is set to 'None'. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAEBAgAGBQJXPz6FAAoJEPKpaDSJE9HYV68P/jlp6buHOZeaGdEVcSiPV/V6 l1FCk56Wgu8/IOjIA/APLH5USMfABESt9YHKBzMENfRQpIbaMs0CmhdE5ONeJs3F LSOGEiY4i5QWNjOlw1oE5GaG9Hj31XuzmCMs39kIQ3C+gOXq/inGTIF4fsvGmK/l dMfVEWmbG0OJkcclrgrTAiJ79La0408ci+N62hsz4s3l+d0F9PUZXd6gpl6AOzMm ORr1IMsGHLfhyWFVfmWoJxyRUJVeGE+t3wY6CCOJEJEckbezOVXyxkBnkSxhVpwd fwffKpNTwFRXvggH4qPcDmmf4T6sAS/hOkBBw0WjcbNMRlFzCaU7Ehr2aw3+wxjU H8GAsn3Xh2H1T3c4qHUYWe7X78/mPN0Pri9Rykg9NVSJ9m1awq94tJgR7tH1om3U +4UvzzkyIKIr8a8YikqSjKnTrJWhqtnSJ+08D4zlBoA1R7lQMdD079wOnmtaLKYj ogYf9xG1jFpVWUeCOdh6qHlQikBHWFWRFSMootKg/tRd6qrsLsbaoaIlEGefPEtJ NgOQ1wMYWthzGglkiktGF7Fv6qHit0rYVSOOSu7gIxQbpkgOylaIwGDuvkArELk3 WD1AJ/F2gc409/IhrvJITP5rP+sdJVMtJswEqYvB6zxaUURvt+Qf4crzqPUCvrAp aKxqMdnFeKZ5jqKRrXge =Ov3n -----END PGP SIGNATURE----- From luis.de.sousa at protonmail.ch Fri May 20 14:00:15 2016 From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=) Date: Fri, 20 May 2016 14:00:15 -0400 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: The TypeError is about the *last* line in the traceback: One of `self.username' or 'self.password' is set to 'None'. That being the case, how can I correct the bug? Must I upgrade setuptools? Or some other package? Thank you, Lu?s -------------- next part -------------- An HTML attachment was scrubbed... URL: From berker.peksag at gmail.com Fri May 20 14:12:01 2016 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Fri, 20 May 2016 21:12:01 +0300 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: On Fri, May 20, 2016 at 9:00 PM, Lu??s de Sousa wrote: > > The TypeError is about the *last* line in the traceback: One of > `self.username' or 'self.password' is set to 'None'. > > > That being the case, how can I correct the bug? Must I upgrade setuptools? > Or some other package? Is there a .pypirc file in your $HOME directory? If there is one, can you compare its content with the example at https://docs.python.org/3/distutils/packageindex.html#pypirc ? --Berker From berker.peksag at gmail.com Fri May 20 14:18:34 2016 From: berker.peksag at gmail.com (=?UTF-8?Q?Berker_Peksa=C4=9F?=) Date: Fri, 20 May 2016 21:18:34 +0300 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: On Fri, May 20, 2016 at 9:12 PM, Berker Peksa? 
wrote: > On Fri, May 20, 2016 at 9:00 PM, Lu??s de Sousa > wrote: >> >> The TypeError is about the *last* line in the traceback: One of >> `self.username' or 'self.password' is set to 'None'. >> >> >> That being the case, how can I correct the bug? Must I upgrade setuptools? >> Or some other package? > > Is there a .pypirc file in your $HOME directory? If there is one, can > you compare its content with the example at > https://docs.python.org/3/distutils/packageindex.html#pypirc ? There is an open issue about this on bugs.python.org: http://bugs.python.org/issue18454 I will try to fix it at PyCon US sprints. --Berker From graffatcolmingov at gmail.com Fri May 20 14:22:02 2016 From: graffatcolmingov at gmail.com (Ian Cordasco) Date: Fri, 20 May 2016 13:22:02 -0500 Subject: [Distutils] PyPi upload fails with TypeError In-Reply-To: References: Message-ID: Until then, try using twine (https://github.com/pypa/twine). You'll have to make your sdist with `python setup.py sdist` and then upload it with `twine upload dist/mypackage.sdist`. But twine will prompt you when it can't find a credential for you instead of proceeding onward nobly. On Fri, May 20, 2016 at 1:18 PM, Berker Peksa? wrote: > On Fri, May 20, 2016 at 9:12 PM, Berker Peksa? wrote: >> On Fri, May 20, 2016 at 9:00 PM, Lu??s de Sousa >> wrote: >>> >>> The TypeError is about the *last* line in the traceback: One of >>> `self.username' or 'self.password' is set to 'None'. >>> >>> >>> That being the case, how can I correct the bug? Must I upgrade setuptools? >>> Or some other package? >> >> Is there a .pypirc file in your $HOME directory? If there is one, can >> you compare its content with the example at >> https://docs.python.org/3/distutils/packageindex.html#pypirc ? > > There is an open issue about this on bugs.python.org: > http://bugs.python.org/issue18454 > > I will try to fix it at PyCon US sprints. > > --Berker > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From donald at stufft.io Sat May 21 14:21:04 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 May 2016 14:21:04 -0400 Subject: [Distutils] Publicly Queryable Statistics Message-ID: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Hey, One thing I?ve been working on as part of Warehouse, is a subproject that I call ?Linehaul?. This is essentially a little statistics daemon that will take specially formatted syslog messages coming off of Fastly and shove them inside of a BigQuery database. I?m happy to report that I?ve just finished the production deployment of this and we?re now sending every download event that hits Fastly into BigQuery. First off, I?d like to thank Felipe Hoffa, Will Curran, and Preston Holmes over at Google for helping me get credits for this sorted out so that we can actually get this going! They?ve been a big help. So onto what this means. Basically, BigQuery gives us the ability to relatively quickly (typically < 60s) query very large datasets using something that is very similar to SQL. Unlike typical time series databases we don?t have to know ahead of time what we want to query on, we can just insert data into rows in a table (and our tables are sharded by days) and then using the SQLlike query language, you can do any sort of query you like. 
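As a sketch of the kind of question this makes easy to answer, something along these lines counts downloads per installer for a single day (the column names come from the table schema mentioned below and should be treated as illustrative; adjust the table suffix to the day you care about):

    # Illustrative only -- paste the SQL into the BigQuery web UI mentioned
    # below; keeping it in a Python string is just a convenience.
    QUERY = """
    SELECT
      details.installer.name AS installer,
      COUNT(*) AS downloads
    FROM
      [the-psf:pypi.downloads20160522]
    GROUP BY
      installer
    ORDER BY
      downloads DESC
    """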
On top of all of this, BigQuery gives me the ability to share the dataset publicly with anyone who is logged into a Google account, which means that *anyone* can query this data and look for any sort of interesting information they can find in it. The cost of any queries you run will be associated with your own account (but the first 1TB of data a month that you query is free I believe, nor are you charged for queries that error out or return cached results). Anyways, you can query these BigQuery tables whose names match the pattern `the-psf:pypi.downloadsYYYYMMDD` and you can see whatever data you want. We?ve only just started recorded the data in this spot so right now there isn?t a whole lot of data available (but it?s constantly streaming in). Once the 22nd rolls over I?m going to delete what data we have available for the 21st, and then start backfilling historical data starting with the 21st and going backwards. You should be able to run queries in a Web UI by navigating to https://bigquery.cloud.google.com/dataset/the-psf:pypi (you might have to accept a Terms of Service). The table schema looks like: https://s.caremad.io/lPpTF6rxWZ/ but it should also be visible on the big query page. Some example queries you might want to run are located at https://gist.github.com/alex/4f100a9592b05e9b4d63 (but note, that?s currently using the *old* not publicly available table name, you?ll need to replace [long-stack-762:pypi.downloads] with [the-psf.pypi:downloads]). If you want to write your own queries, you should be able to find the syntax here: https://cloud.google.com/bigquery/query-reference Anyways, new data should constantly be streaming in, and I should be able to backfill data all the way to Jan of 2014 or so. Hopefully this is useful to folks, and if you find any interesting queries or numbers, please share them! ? Donald Stufft From donald at stufft.io Sat May 21 14:24:41 2016 From: donald at stufft.io (Donald Stufft) Date: Sat, 21 May 2016 14:24:41 -0400 Subject: [Distutils] Publicly Queryable Statistics In-Reply-To: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> References: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Message-ID: <7147E06F-3A1C-4CCA-9B83-9C8A7F62409C@stufft.io> > On May 21, 2016, at 2:21 PM, Donald Stufft wrote: > > So onto what this means. > Oh, one additional tidbit of information- Most of the data is coming from parsing user agents, which means that the ability to get this information depends a lot on what is actually downloading the file. Generally most of the information is only available on pip (and newer pip versions added progressively more information). In cases where we didn?t know we have inserted (null) values (such as bandersnatch mirroring and not knowing what the ?python version? is). This means that you?ll often times see (null) values in fields, which just represents downloads using something where we couldn?t determine that information from. ? Donald Stufft From wes.turner at gmail.com Sun May 22 03:39:51 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 22 May 2016 02:39:51 -0500 Subject: [Distutils] Publicly Queryable Statistics In-Reply-To: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> References: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Message-ID: - to query, say, a month's worth of data, what would need to be done? - "sharded by day" ... UTC? On Saturday, May 21, 2016, Donald Stufft wrote: > Hey, > > One thing I?ve been working on as part of Warehouse, is a subproject that > I call ?Linehaul?. 
This is essentially a little statistics daemon that will > take specially formatted syslog messages coming off of Fastly and shove > them inside of a BigQuery database. I?m happy to report that I?ve just > finished the production deployment of this and we?re now sending every > download event that hits Fastly into BigQuery. > > First off, I?d like to thank Felipe Hoffa, Will Curran, and Preston Holmes > over at Google for helping me get credits for this sorted out so that we > can actually get this going! They?ve been a big help. > > So onto what this means. > > Basically, BigQuery gives us the ability to relatively quickly (typically > < 60s) query very large datasets using something that is very similar to > SQL. Unlike typical time series databases we don?t have to know ahead of > time what we want to query on, we can just insert data into rows in a table > (and our tables are sharded by days) and then using the SQLlike query > language, you can do any sort of query you like. > > On top of all of this, BigQuery gives me the ability to share the dataset > publicly with anyone who is logged into a Google account, which means that > *anyone* can query this data and look for any sort of interesting > information they can find in it. The cost of any queries you run will be > associated with your own account (but the first 1TB of data a month that > you query is free I believe, nor are you charged for queries that error out > or return cached results). > > Anyways, you can query these BigQuery tables whose names match the pattern > `the-psf:pypi.downloadsYYYYMMDD` and you can see whatever data you want. > > We?ve only just started recorded the data in this spot so right now there > isn?t a whole lot of data available (but it?s constantly streaming in). > Once the 22nd rolls over I?m going to delete what data we have available > for the 21st, and then start backfilling historical data starting with the > 21st and going backwards. You should be able to run queries in a Web UI by > navigating to https://bigquery.cloud.google.com/dataset/the-psf:pypi (you > might have to accept a Terms of Service). > > The table schema looks like: https://s.caremad.io/lPpTF6rxWZ/ but it > should also be visible on the big query page. Some example queries you > might want to run are located at > https://gist.github.com/alex/4f100a9592b05e9b4d63 (but note, that?s > currently using the *old* not publicly available table name, you?ll need to > replace [long-stack-762:pypi.downloads] with [the-psf.pypi:downloads]). > If you want to write your own queries, you should be able to find the > syntax here: https://cloud.google.com/bigquery/query-reference > > Anyways, new data should constantly be streaming in, and I should be able > to backfill data all the way to Jan of 2014 or so. Hopefully this is useful > to folks, and if you find any interesting queries or numbers, please share > them! > > ? > Donald Stufft > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun May 22 03:48:37 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 22 May 2016 17:48:37 +1000 Subject: [Distutils] Publicly Queryable Statistics In-Reply-To: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> References: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Message-ID: On 22 May 2016 at 04:21, Donald Stufft wrote: > Hey, > > One thing I?ve been working on as part of Warehouse, is a subproject that I call ?Linehaul?. This is essentially a little statistics daemon that will take specially formatted syslog messages coming off of Fastly and shove them inside of a BigQuery database. I?m happy to report that I?ve just finished the production deployment of this and we?re now sending every download event that hits Fastly into BigQuery. > > First off, I?d like to thank Felipe Hoffa, Will Curran, and Preston Holmes over at Google for helping me get credits for this sorted out so that we can actually get this going! They?ve been a big help. Great work, folks! It will be interesting to see what kinds of metrics and dashboards folks build on this over time :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Sun May 22 10:23:39 2016 From: donald at stufft.io (Donald Stufft) Date: Sun, 22 May 2016 10:23:39 -0400 Subject: [Distutils] Publicly Queryable Statistics In-Reply-To: References: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Message-ID: > On May 22, 2016, at 3:39 AM, Wes Turner wrote: > > - to query, say, a month's worth of data, what would need to be done? > - "sharded by day" ... UTC? > You use a TABLE_DATE_RANGE() function, like this: TABLE_DATE_RANGE([the-psf:pypi.downloads], TIMESTAMP("20160114"), TIMESTAMP("20160214?)) Or, if you wanted to get fancier you could do something like this for the ?last 30 days?: TABLE_DATE_RANGE([the-psf:pypi.downloads], DATE_ADD(CURRENT_TIMESTAMP(), -1, "month"), CURRENT_TIMESTAMP()) You can see examples of it in use at https://gist.github.com/alex/4f100a9592b05e9b4d63 or see the query docs at https://cloud.google.com/bigquery/query-reference. ? Donald Stufft -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Sun May 22 15:52:06 2016 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 22 May 2016 14:52:06 -0500 Subject: [Distutils] Publicly Queryable Statistics In-Reply-To: References: <99E9C3D2-4A19-4B27-81B0-8CAF09948866@stufft.io> Message-ID: thanks! On Sunday, May 22, 2016, Donald Stufft wrote: > > On May 22, 2016, at 3:39 AM, Wes Turner > wrote: > > - to query, say, a month's worth of data, what would need to be done? > - "sharded by day" ... UTC? > > > > You use a TABLE_DATE_RANGE() function, like this: > > > TABLE_DATE_RANGE([the-psf:pypi.downloads], TIMESTAMP("20160114"), > TIMESTAMP("20160214?)) > > Or, if you wanted to get fancier you could do something like this for the > ?last 30 days?: > > TABLE_DATE_RANGE([the-psf:pypi.downloads], > DATE_ADD(CURRENT_TIMESTAMP(), -1, "month"), CURRENT_TIMESTAMP()) > > > You can see examples of it in use at > https://gist.github.com/alex/4f100a9592b05e9b4d63 or see the query docs > at https://cloud.google.com/bigquery/query-reference. > > > ? > Donald Stufft > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
From luis.de.sousa at protonmail.ch  Tue May 24 14:13:16 2016
From: luis.de.sousa at protonmail.ch (=?UTF-8?Q?Lu=C3=AD=C2=ADs_de_Sousa?=)
Date: Tue, 24 May 2016 14:13:16 -0400
Subject: [Distutils] PyPi upload fails with TypeError
In-Reply-To:
References:
Message-ID:

Hi there Ian. Twine is also failing; apparently it cannot find setuptools.
Please check the log below.

Cheers.

$ twine upload dist/hex-utils-0.2.sdist
Traceback (most recent call last):
  File "/usr/bin/twine", line 9, in <module>
    load_entry_point('twine==1.5.0', 'console_scripts', 'twine')()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 542, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2569, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2229, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2235, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/lib/python3/dist-packages/twine/__main__.py", line 20, in <module>
    from twine.cli import dispatch
  File "/usr/lib/python3/dist-packages/twine/cli.py", line 19, in <module>
    import setuptools
ImportError: No module named 'setuptools'

$ pip install setuptools
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/dist-packages

$ dpkg -l | grep setuptools
ii python-setuptools 20.7.0-1 all Python Distutils Enhancements


-------- Original Message --------
Subject: Re: [Distutils] PyPi upload fails with TypeError
Local Time: 20 May 2016 8:22 PM
UTC Time: 20 May 2016 18:22
From: graffatcolmingov at gmail.com
To: berker.peksag at gmail.com
CC: luis.de.sousa at protonmail.ch, Distutils-Sig at python.org

Until then, try using twine (https://github.com/pypa/twine). You'll
have to make your sdist with `python setup.py sdist` and then upload
it with `twine upload dist/mypackage.sdist`. But twine will prompt you
when it can't find a credential for you instead of proceeding onward
nobly.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From graffatcolmingov at gmail.com  Tue May 24 14:16:46 2016
From: graffatcolmingov at gmail.com (Ian Cordasco)
Date: Tue, 24 May 2016 13:16:46 -0500
Subject: [Distutils] PyPi upload fails with TypeError
In-Reply-To:
References:
Message-ID:

Luis, it looks like you're running twine on Python 3 and setuptools is
installed for Python 2. Try doing:

python3 -m pip install setuptools

or

apt-get install -y python3-setuptools

On Tue, May 24, 2016 at 1:13 PM, Luís de Sousa wrote:
> Hi there Ian. Twine is also failing; apparently it cannot find
> setuptools. Please check the log below.
>
> Cheers.
> > $ twine upload dist/hex-utils-0.2.sdist > Traceback (most recent call last): > File "/usr/bin/twine", line 9, in > load_entry_point('twine==1.5.0', 'console_scripts', 'twine')() > File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 542, > in load_entry_point > return get_distribution(dist).load_entry_point(group, name) > File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line > 2569, in load_entry_point > return ep.load() > File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line > 2229, in load > return self.resolve() > File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line > 2235, in resolve > module = __import__(self.module_name, fromlist=['__name__'], level=0) > File "/usr/lib/python3/dist-packages/twine/__main__.py", line 20, in > > from twine.cli import dispatch > File "/usr/lib/python3/dist-packages/twine/cli.py", line 19, in > import setuptools > ImportError: No module named 'setuptools' > > $ pip install setuptoolsRequirement already satisfied (use --upgrade to > upgrade): setuptools in /usr/lib/python2.7/dist-packages > > $ dpkg -l | grep setuptools > ii python-setuptools 20.7.0-1 all Python Distutils Enhancements > > > > -------- Original Message -------- > Subject: Re: [Distutils] PyPi upload fails with TypeError > Local Time: 20 May 2016 8:22 PM > UTC Time: 20 May 2016 18:22 > From: graffatcolmingov at gmail.com > To: berker.peksag at gmail.com > CC: luis.de.sousa at protonmail.ch,Distutils-Sig at python.org > > Until then, try using twine (https://github.com/pypa/twine). You'll > have to make your sdist with `python setup.py sdist` and then upload > it with `twine upload dist/mypackage.sdist`. But twine will prompt you > when it can't find a credential for you instead of proceeding onward > nobly. > > From guettliml at thomas-guettler.de Wed May 25 03:13:58 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 25 May 2016 09:13:58 +0200 Subject: [Distutils] If you want wheel to be successful, provide a build server. Message-ID: <574550B6.60607@thomas-guettler.de> If you want wheel to be successful, **provide a build server**. Quoting the author of psutil: https://github.com/giampaolo/psutil/issues/824#issuecomment-221359292 {{{ On Linux / Unix the only way you have to install psutil right now is via source / tarball. I don't want to provide wheels for Linux (or other UNIX platforms). I would have to cover all supported python versions (7) both 32 and 64 bits, meaning 14 extra packages to compile and upload on PYPI on every release. I do that for Windows because installing VS is an order of magnitude more difficult than installing gcc on Linux/UNIX but again: not willing to do extra work on that front (sorry). What you could do is create a wheel yourself with python setup.py build bdist_wheel by using the same python/arch version you have on the server, upload it on the server and install it with pip. }}} What do you think? Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From alex.gronholm at nextday.fi Wed May 25 03:57:16 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Wed, 25 May 2016 10:57:16 +0300 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <574550B6.60607@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> Message-ID: <57455ADC.5010608@nextday.fi> Amen to that, but who will pay for it? I imagine a great deal of processing power would be required for this. 
How do implementors of other languages handle this?

25.05.2016, 10:13, Thomas Güttler kirjoitti:
> If you want wheel to be successful, **provide a build server**.
>
> Quoting the author of psutil:
>
> https://github.com/giampaolo/psutil/issues/824#issuecomment-221359292
>
> {{{
> On Linux / Unix the only way you have to install psutil right now is
> via source / tarball. I don't want to provide wheels for Linux (or
> other UNIX platforms). I would have to cover all supported python
> versions (7) both 32 and 64 bits, meaning 14 extra packages to compile
> and upload on PYPI on every release. I do that for Windows because
> installing VS is an order of magnitude more difficult than installing
> gcc on Linux/UNIX but again: not willing to do extra work on that
> front (sorry).
> What you could do is create a wheel yourself with python setup.py
> build bdist_wheel by using the same python/arch version you have on
> the server, upload it on the server and install it with pip.
> }}}
>
> What do you think?
>
> Regards,
> Thomas Güttler

From contact at ionelmc.ro  Wed May 25 03:57:47 2016
From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=)
Date: Wed, 25 May 2016 10:57:47 +0300
Subject: Re: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: <574550B6.60607@thomas-guettler.de>
References: <574550B6.60607@thomas-guettler.de>
Message-ID:

On Wed, May 25, 2016 at 10:13 AM, Thomas Güttler <
guettliml at thomas-guettler.de> wrote:

> I do that for Windows because installing VS is an order of magnitude more
> difficult than installing gcc on Linux/UNIX but again: not willing to do
> extra work on that front (sorry).

He may accept a PR with Travis configuration to build the wheels though ...
Using that manylinux docker image should be easy enough on Travis.

On the other hand, there might be weird dependencies and the wheels don't
build well on the manylinux container - so there's still some extra effort
for the maintainer. A simpler package would probably make a better
discussion.

PS. To build all the manylinux wheels for a package you only need to run
something like:

docker run -itv $(pwd):/code quay.io/pypa/manylinux1_x86_64 bash -c 'set -eux; cd code; rm -rf wheelhouse; for variant in /opt/python/*; do rm -rf dist build *.egg-info && $variant/bin/python setup.py clean --all bdist_wheel; auditwheel repair dist/*.whl; done; rm -rf dist build *.egg-info'

Thanks,
-- Ionel Cristian Mărieș, http://blog.ionelmc.ro

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From p.f.moore at gmail.com  Wed May 25 04:01:28 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 25 May 2016 09:01:28 +0100
Subject: Re: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To:
References: <574550B6.60607@thomas-guettler.de>
Message-ID:

On 25 May 2016 at 08:57, Ionel Cristian Mărieș wrote:
> He may accept a PR with Travis configuration to build the wheels though ...
> Using that manylinux docker image should be easy enough on Travis.

A PR to the Packaging User Guide explaining how to build Linux (manylinux)
wheels using Travis would be a worthwhile addition (it would sit nicely
alongside https://packaging.python.org/en/latest/appveyor/ which explains
how to do the same for Windows using Appveyor).

Paul
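For anyone who wants to turn the one-liner above into something a CI job
(Travis, or any other service that can run Docker) could call, here is a
rough Python sketch of the same loop. It is illustrative only: the image
name, the /opt/python layout and the use of auditwheel come straight from
Ionel's command, while the script name and structure are made up for this
example, and auditwheel's default output directory is believed to be
./wheelhouse.

# build_manylinux_wheels.py -- an illustrative sketch, not an official recipe.
# Runs the same loop as the docker one-liner above: build a wheel with every
# Python installed under /opt/python in the manylinux1 image, then run
# auditwheel over each result.
import os
import subprocess

IMAGE = "quay.io/pypa/manylinux1_x86_64"

INNER_SCRIPT = """
set -eux
cd /code
rm -rf wheelhouse
for variant in /opt/python/*; do
    rm -rf dist build *.egg-info
    "$variant/bin/python" setup.py clean --all bdist_wheel
    auditwheel repair dist/*.whl
done
rm -rf dist build *.egg-info
"""

def main():
    # Mount the current checkout at /code inside the container, just as the
    # docker run command above does with $(pwd).
    subprocess.check_call([
        "docker", "run", "--rm",
        "-v", "{0}:/code".format(os.getcwd()),
        IMAGE, "bash", "-c", INNER_SCRIPT,
    ])

if __name__ == "__main__":
    main()

A CI configuration would then only need Docker available on the worker (on
Travis that is, I believe, a matter of enabling the Docker service) plus a
single step that runs this script from the project root.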
From noah at coderanger.net  Wed May 25 04:08:25 2016
From: noah at coderanger.net (Noah Kantrowitz)
Date: Wed, 25 May 2016 01:08:25 -0700
Subject: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: <574550B6.60607@thomas-guettler.de>
References: <574550B6.60607@thomas-guettler.de>
Message-ID: <3BCC35B6-198F-4877-9F43-C9665256CB3B@coderanger.net>

> On May 25, 2016, at 12:13 AM, Thomas Güttler wrote:
>
> If you want wheel to be successful, **provide a build server**.
>
> Quoting the author of psutil:
>
> https://github.com/giampaolo/psutil/issues/824#issuecomment-221359292
>
> {{{
> On Linux / Unix the only way you have to install psutil right now is via
> source / tarball. I don't want to provide wheels for Linux (or other UNIX
> platforms). I would have to cover all supported python versions (7) both
> 32 and 64 bits, meaning 14 extra packages to compile and upload on PYPI
> on every release. I do that for Windows because installing VS is an order
> of magnitude more difficult than installing gcc on Linux/UNIX but again:
> not willing to do extra work on that front (sorry).
> What you could do is create a wheel yourself with python setup.py build
> bdist_wheel by using the same python/arch version you have on the server,
> upload it on the server and install it with pip.
> }}}
>
> What do you think?

The problems haven't really changed every time someone brings this up.
Running untrusted code from the internet isn't impossible (e.g. Travis,
Heroku, Lambda) but it requires serious care and feeding at a scale we
don't currently have the resources for. Until something in that equation
changes, the best we can do is try to piggyback on an existing sandbox
environment like Travis.

--Noah

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 801 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From guettliml at thomas-guettler.de  Wed May 25 09:42:16 2016
From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=)
Date: Wed, 25 May 2016 15:42:16 +0200
Subject: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: <57455ADC.5010608@nextday.fi>
References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi>
Message-ID: <5745ABB8.6090906@thomas-guettler.de>

Am 25.05.2016 um 09:57 schrieb Alex Grönholm:
> Amen to that, but who will pay for it? I imagine a great deal of
> processing power would be required for this.
> How do implementors of other languages handle this?

I talked with someone who is a member of the Python Software Foundation,
and he said that money for projects like this is available. Of course this
was not an official statement.

Regards,
Thomas Güttler

--
Thomas Guettler http://www.thomas-guettler.de/

From p.f.moore at gmail.com  Wed May 25 09:55:34 2016
From: p.f.moore at gmail.com (Paul Moore)
Date: Wed, 25 May 2016 14:55:34 +0100
Subject: [Distutils] If you want wheel to be successful, provide a build server.
Are you volunteering to do this? Paul From ncoghlan at gmail.com Wed May 25 09:56:30 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 25 May 2016 23:56:30 +1000 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <5745ABB8.6090906@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> Message-ID: On 25 May 2016 at 23:42, Thomas G?ttler wrote: > Am 25.05.2016 um 09:57 schrieb Alex Gr?nholm: >> >> Amen to that, but who will pay for it? I imagine a great deal of >> processing power would be required for this. >> How do implementors of other languages handle this? > > I talked with someone who is member of the python software foundation, and > he said that > money for projects like this is available. Of course this was no official > statement. No, money for a project of this scale is not available (the person you spoke to may have been thinking of PSF development grants, which typically only cover 4-12 weeks of dedicated development work by a single developer on a project with specific near term objectives, not running on ongoing service that would dwarf PyPI itself in complexity) Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From mail at cbaines.net Wed May 25 09:50:13 2016 From: mail at cbaines.net (Christopher Baines) Date: Wed, 25 May 2016 15:50:13 +0200 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <574550B6.60607@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> Message-ID: <5745AD95.4040602@cbaines.net> On 25/05/16 09:13, Thomas G?ttler wrote: > If you want wheel to be successful, **provide a build server**. > > Quoting the author of psutil: > > https://github.com/giampaolo/psutil/issues/824#issuecomment-221359292 > > {{{ > On Linux / Unix the only way you have to install psutil right now is via > source / tarball. I don't want to provide wheels for Linux (or other > UNIX platforms). I would have to cover all supported python versions (7) > both 32 and 64 bits, meaning 14 extra packages to compile and upload on > PYPI on every release. I do that for Windows because installing VS is an > order of magnitude more difficult than installing gcc on Linux/UNIX but > again: not willing to do extra work on that front (sorry). > What you could do is create a wheel yourself with python setup.py build > bdist_wheel by using the same python/arch version you have on the > server, upload it on the server and install it with pip. > }}} > > What do you think? This is something best left to operating systems/package managers, e.g. Debian, Ubuntu, Fedora, RedHat, Guix(SD), Nix(OS). Contrary to the quote above, I believe psutil is available from all of the projects above (in a binary and source format). These projects have the expressiveness to reason about packages that are not just pure Python (e.g. that depend on GCC), and the software and infrastructure (e.g. build servers) to manage this. In my mind providing a build server is just not in scope for Python, or PyPI. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 949 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Wed May 25 09:52:42 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 25 May 2016 23:52:42 +1000 Subject: [Distutils] If you want wheel to be successful, provide a build server. 
In-Reply-To: <574550B6.60607@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> Message-ID: On 25 May 2016 at 17:13, Thomas G?ttler wrote: > If you want wheel to be successful, **provide a build server**. Thomas, aside from that statement being demonstrably untrue (since the wheel format has already proven to be wildly successful, even with developers coping with Linux ABI fragmentation), an attitude of "give me more free stuff, or your project will fail" is not an acceptable tone to adopt on this list. The contributors here are (mainly) volunteers working on infrastructure provided by a public interest charity, not your personal servants to be ordered about as you feel inclined. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From guettliml at thomas-guettler.de Wed May 25 11:11:53 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 25 May 2016 17:11:53 +0200 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> Message-ID: <5745C0B9.4090107@thomas-guettler.de> Am 25.05.2016 um 15:52 schrieb Nick Coghlan: > On 25 May 2016 at 17:13, Thomas G?ttler wrote: >> If you want wheel to be successful, **provide a build server**. > > Thomas, aside from that statement being demonstrably untrue (since the > wheel format has already proven to be wildly successful, even with > developers coping with Linux ABI fragmentation), an attitude of "give > me more free stuff, or your project will fail" is not an acceptable > tone to adopt on this list. > > The contributors here are (mainly) volunteers working on > infrastructure provided by a public interest charity, not your > personal servants to be ordered about as you feel inclined. You seem to be angry. Why? I am just bringing information from A to B (from github issue to this list). I guess the author of psutil is just a volunteer like you are. What is the problem? I think the problem is that creating wheels for psutil is difficult. I think we all agree on this. If not, if you think it is easy, then please tell us. My intention was, to find a solution. Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From guettliml at thomas-guettler.de Wed May 25 11:22:19 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 25 May 2016 17:22:19 +0200 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> Message-ID: <5745C32B.6030005@thomas-guettler.de> Am 25.05.2016 um 15:55 schrieb Paul Moore: > On 25 May 2016 at 14:42, Thomas G?ttler wrote: >> Am 25.05.2016 um 09:57 schrieb Alex Gr?nholm: >>> >>> Amen to that, but who will pay for it? I imagine a great deal of >>> processing power would be required for this. >>> How do implementors of other languages handle this? >> >> >> I talked with someone who is member of the python software foundation, and >> he said that >> money for projects like this is available. Of course this was no official >> statement. > > The other aspect of this is who has sufficient time/expertise to set > something like this up? Are you volunteering to do this? I am volunteering for doing coordination work: - communication - layout of datastructures - interchange of datastructures. 
- no coding But we need at least ten people how say "I'm willing to help" Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From guettliml at thomas-guettler.de Wed May 25 11:27:02 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Wed, 25 May 2016 17:27:02 +0200 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> Message-ID: <5745C446.2040907@thomas-guettler.de> Am 25.05.2016 um 15:56 schrieb Nick Coghlan: > On 25 May 2016 at 23:42, Thomas G?ttler wrote: >> Am 25.05.2016 um 09:57 schrieb Alex Gr?nholm: >>> >>> Amen to that, but who will pay for it? I imagine a great deal of >>> processing power would be required for this. >>> How do implementors of other languages handle this? >> >> I talked with someone who is member of the python software foundation, and >> he said that >> money for projects like this is available. Of course this was no official >> statement. > > No, money for a project of this scale is not available (the person you > spoke to may have been thinking of PSF development grants, which > typically only cover 4-12 weeks of dedicated development work by a > single developer on a project with specific near term objectives, not > running on ongoing service that would dwarf PyPI itself in complexity) Yes, running a permanent build-server might be too high regular expenses. I think providing a self hostable build server which can be started with one command would be such a project. Regards, Thomas G?ttler -- Thomas Guettler http://www.thomas-guettler.de/ From p.f.moore at gmail.com Wed May 25 11:48:33 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 25 May 2016 16:48:33 +0100 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <5745C0B9.4090107@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: On 25 May 2016 at 16:11, Thomas G?ttler wrote: >> The contributors here are (mainly) volunteers working on >> infrastructure provided by a public interest charity, not your >> personal servants to be ordered about as you feel inclined. > > You seem to be angry. Why? I don't know about Nick, but your tone is certainly annoying to me. I do this work in my (very limited!) spare time, and in most cases, what I do is of no benefit to me personally, but is purely to help others who raise issues. Your comment "If you want wheel to be successful, **provide a build server**" is both patronising (do you think no-one here is aware that a build server would be beneficial?), inaccurate (wheels *are* successful - ask anyone who's got benefit from them, it's not hard to find such people) and demanding (you suggest others need to do work, but offer none of your own time to help). Many of your postings here have had a similar tone - insisting that things "have to be" done a certain way, or that there "must" be a solution in the form you prefer. I've tended to assume that this has simply been unintentional, and maybe getting exaggerated by the lack of non-verbal cues available in email, but you need to consider your words more carefully if you don't want to start seriously offending people. And that's not behaviour that's acceptable on this list (or indeed, any of the other Python lists). > I am just bringing information from A to B (from github issue > to this list). 
Your information was phrased as a demand. If you don't think that's the case, I encourage you to re-read your own post. > I guess the author of psutil is just a volunteer like you are. Yes, and (on the issue you linked to) he politely explained why he didn't provide Linux binaries and offered some approaches you could take. You seemed to take that as somehow implying that he was dissatisfied and posted "information" here implying that. I hope you didn't mean it that way, but in doing so, as far as I can tell you misrepresented him. > What is the problem? I think the problem is that > creating wheels for psutil is difficult. Hardly. The problem is that Unix systems are very diverse, and trying to maintain compatible builds for all the variations is more work than a single volunteer can manage. The people working on manylinux on this mailing list have put a *lot* of work into trying to alleviate that issue. Maybe the psutil author hasn't heard of this, or maybe he's not had the time to investigate or take advantage of it. Or maybe he doesn't feel (or indeed, he's sure - after all, psutil is a very low-level package) that manylinux isn't the right answer for his project. But just because you personally weren't willing to set up the prerequisites to build a copy of psutil for yourself, doesn't mean that "creating wheels for psutil is difficult". > I think we all agree on this. No. > If not, if you think it is easy, then please tell us. Building and publishing wheels for every platform variation your user base might be interested *is* difficult, simply because of the scale of the problem and the fact that most volunteer developers don't have access to all those platforms. And yes, a build farm would help those people. But it's not anyone's *responsibility* to provide that. Indeed, the whole open source ecosystem is built on people seeing a need and *putting their own effort into making it happen, and then sharing the results*. You seem to think that your contribution should be to nag people who are already providing freely of their time, to do more, so that you don't have to. > My intention was, to find a solution. Well, it didn't come across that way. >From another post: > I am volunteering for doing coordination work: > - communication > - layout of datastructures > - interchange of datastructures. > - no coding > > But we need at least ten people how say "I'm willing to help" We don't need project managers for this task. We need people willing to code, build servers, etc. If (and it's a big if) the management of the work becomes too much of an overhead for those people, then (and *only* then) they might appreciate some help with the administrative tasks. But volunteering to co-ordinate resources that don't exist is not useful. Nor is trying to find such resources from the pool of people who are *already* doing plenty. If you want to find resources, I suggest you look among people who are not yet contributing to the packaging ecosystem. Paul From matthew.brett at gmail.com Wed May 25 11:49:23 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 25 May 2016 11:49:23 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. 
In-Reply-To: <5745C32B.6030005@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> <5745C32B.6030005@thomas-guettler.de> Message-ID: On Wed, May 25, 2016 at 11:22 AM, Thomas G?ttler wrote: > > > Am 25.05.2016 um 15:55 schrieb Paul Moore: >> >> On 25 May 2016 at 14:42, Thomas G?ttler >> wrote: >>> >>> Am 25.05.2016 um 09:57 schrieb Alex Gr?nholm: >>>> >>>> >>>> Amen to that, but who will pay for it? I imagine a great deal of >>>> processing power would be required for this. >>>> How do implementors of other languages handle this? >>> >>> >>> >>> I talked with someone who is member of the python software foundation, >>> and >>> he said that >>> money for projects like this is available. Of course this was no official >>> statement. >> >> >> The other aspect of this is who has sufficient time/expertise to set >> something like this up? Are you volunteering to do this? > > > I am volunteering for doing coordination work: > - communication > - layout of datastructures > - interchange of datastructures. > - no coding > > But we need at least ten people how say "I'm willing to help" We (scientific Python folks) have been thinking about this too [1]. We're getting to the stage where the informal methods we have been using are difficult to coordinate. As more people provide wheels, it gets more common for packages to release source before wheels, and cause a cry of pain from users whose installation suddenly fails or changes. The situation got particularly bad when Python 3.5 was released, because none or very few of us had Python 3.5 packages ready, and so nearly all new installs were suddenly not getting wheels, and suffering. We all really need some system that can, from some simple trigger, like a push of a git tag, build wheels for Windows 32 + 64, OSX, and manylinux1_x86_64, for Pythons 2.7, 3.4 and 3.5 (and 3.6?), test these installs, and, if the tests pass, and push these either to an accessible spot (we have been using donated rackspace hosting) or to pypi directly. We're fairly close to that at the moment. Many of us are building and testing binaries on Appveyor, Travis by default. We have lots of projects set up with repos MacPython github org, where the repositories only exist to build, test OSX wheels on travis-ci OSX VMs, and push the wheels to rackspace - e.g. [2]. We have been setting up similar systems for manylinux - e.g. [3, 4]. The problem is that these systems are largely manual, in that a release for e.g. numpy involves: * Trigger build / test on separate Appveyor repo; * Trigger build / test on MacPython/numpy-wheels; * Trigger build on manylinux-builds / test on manylinux testing; * When all these are done and tests passing, locate generated binaries on rackspace / Appveyor, and upload to pypi. This is complicated, and it's relatively hard for a given package to set this up for themselves. When Python 3.6 comes out, we'll all have to do this release procedure at more or less the same time. So, I would love to have a system that could either collate these different services (Appveyor, Travis, Circle-CI, Rackspace, AWS) into something coherent, or generate something new that is more streamlined. See conda-forge [5] for an example of collating build services. I think it's fine for each package to specify its own build and test recipes as long as they can do it in a way that is well defined, with examples to work from. 
The success of travis-ci is a testament to the ingenuity of packagers in
getting their packages built and tested.

Maybe this could be a PEP of its own. It would certainly help to have some
idea of what kind of support the PSF can give - the spec would look
different for a new custom system and a system collating Appveyor / Travis
etc.

I'm certainly happy to devote time to this (in the hope of saving a lot of
time later).

Best,

Matthew

[1] https://github.com/scipy/scipy/issues/6157#issuecomment-219314029
[2] https://github.com/MacPython/numpy-wheels
[3] https://github.com/matthew-brett/manylinux-builds
[4] https://github.com/matthew-brett/manylinux-testing
[5] https://conda-forge.github.io/

From chris.barker at noaa.gov  Wed May 25 11:51:12 2016
From: chris.barker at noaa.gov (Chris Barker)
Date: Wed, 25 May 2016 08:51:12 -0700
Subject: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: <5745C446.2040907@thomas-guettler.de>
References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> <5745C446.2040907@thomas-guettler.de>
Message-ID:

On Wed, May 25, 2016 at 8:27 AM, Thomas Güttler <
guettliml at thomas-guettler.de> wrote:

> I think providing a self-hostable build server which can be started with
> one command would be such a project.

The manylinux Docker container is heading in that direction already. I
suggest you consider helping out with that effort.

There is also the conda ecosystem, with the conda-forge project using
public CI systems to build packages:

https://conda-forge.github.io/

I imagine a similar system could be adopted for wheels....

-CHB

> Regards,
> Thomas Güttler
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
> _______________________________________________
> Distutils-SIG maillist - Distutils-SIG at python.org
> https://mail.python.org/mailman/listinfo/distutils-sig

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sylvain.corlay at gmail.com  Wed May 25 12:01:51 2016
From: sylvain.corlay at gmail.com (Sylvain Corlay)
Date: Wed, 25 May 2016 12:01:51 -0400
Subject: [Distutils] Distutils improvements regarding header installation and building C extension modules
Message-ID:

Hello everyone,

This is my first post here so, apologies if I am breaking any rules.

Lately, I have been filing a few bug reports and patches to distutils on
bugs.python.org that all concern the installation and build of C++
extensions.

*1) The distutils.ccompiler has_flag method.*
(http://bugs.python.org/issue26689)

When building C++ extension modules that require a certain compiler flag
(such as enabling C++11 features), you may want to check if the compiler
has such a flag available.

I proposed a patch adding a `has_flag` method to ccompiler. It is an
equivalent to cmake's CHECK_CXX_COMPILER_FLAG. The implementation is
similar to the one of has_function, which, by the way, has a pending patch
by minrk (http://bugs.python.org/issue25544).
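To make the idea concrete, here is a minimal sketch of what such a check can
look like with today's distutils API. It is not the patch attached to the
issue, just the general technique: try to compile a throwaway source file
with the flag and see whether the compiler raises CompileError. The helper
name and the trivial test program are illustrative.

# Sketch of a has_flag-style check; not the actual bpo-26689 patch.
import os
import tempfile
from distutils.ccompiler import new_compiler
from distutils.errors import CompileError

def has_flag(compiler, flag):
    """Return True if `compiler` accepts `flag` when compiling a trivial file."""
    with tempfile.NamedTemporaryFile('w', suffix='.cpp', delete=False) as f:
        f.write('int main(void) { return 0; }\n')
        source = f.name
    try:
        # The object file is written relative to the current directory and is
        # not cleaned up here; a real implementation would take care of that.
        compiler.compile([source], extra_postargs=[flag])
    except CompileError:
        return False
    finally:
        os.remove(source)
    return True

if __name__ == '__main__':
    compiler = new_compiler()
    print(has_flag(compiler, '-std=c++11'))

In a real setup.py this check would more naturally live inside a build_ext
subclass, where self.compiler has already been customised for the platform,
rather than on a freshly created compiler as in the __main__ block above.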
*2) On the need for something like pip.locations.distutils_scheme in distutils* (http://bugs.python.org/issue26955) When installing a python package that has a directive for the install_headers distutils command, these headers are usually installed under the main python include directory, which can be retrieved with sysconfig.get_path('include') or directly referred to as the 'include' string when setting the include directories of an extension module. However, on some systems like OS X, headers for extension modules are not located in under the python include directory /usr/local/Cellar/pythonX/X.Y.Z/Frameworks/Python.framework/Versions/X.Y/include/pythonX.Y but in /usr/local/include/pythonX.Y. Is there a generic way to find the location where headers are installed in a python install? pip.locations has a distutils_scheme function which seems to be returning the right thing, but it seems to be a bit overkill to require pip. On the other side, no path returned by sysconfig corresponds to `/usr/local/include/pythonX.Y`. *3) On allowing `install_headers` command to follow a specific directory structure* (http://bugs.python.org/issue27123) (instead of making a flat copy of the specified list of headers) Unlike wheel's data files, which can be specified as a list of tuples including the target directories [(target_directory, [list of files for target directory])] the `headers` setup keyword argument only let's you specify a list of files that will be copied over to a sub-directory of sysconfig.get_path('include'). It would be useful to enable the same feature for headers as we have for data files. If this is a feature that you would also like to see, I can propose a patch to the install_headers command. Thanks, Syllvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.corlay at gmail.com Wed May 25 12:07:22 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 25 May 2016 12:07:22 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> <5745C446.2040907@thomas-guettler.de> Message-ID: For the build, one could use continuous integration providers like appveyor, circleci and travisci to perform the builds. This is what has been done by conda folks with conda-forge. On Wed, May 25, 2016 at 11:51 AM, Chris Barker wrote: > On Wed, May 25, 2016 at 8:27 AM, Thomas G?ttler < > guettliml at thomas-guettler.de> wrote: > >> I think providing a self hostable build server which can be started with >> one command >> would be such a project. > > > The manylinux Docker container is heading in that direction already. I > suggest you consider helping out with that effort. > > There is also the conda ecosystem, with the conda-forge project using > public CI systems to build packages: > > https://conda-forge.github.io/ > > I imagine a similar system could be adopted for wheels.... > > -CHB > > > > > > > >> >> Regards, >> Thomas G?ttler >> >> >> -- >> Thomas Guettler http://www.thomas-guettler.de/ >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > > -- > > Christopher Barker, Ph.D. 
> Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed May 25 13:49:20 2016 From: dholth at gmail.com (Daniel Holth) Date: Wed, 25 May 2016 17:49:20 +0000 Subject: [Distutils] Distutils improvements regarding header installation and building C extension modules In-Reply-To: References: Message-ID: A fix for data_files would be very welcome. It has behaved inconsistently in setuptools and distutils and virtualenv or not-virtualenv. If a wheel archive has a subfolder package-1.0.data/headers/... then that tree will be copied into the headers directory as shown below: In [2]: import wheel; wheel.install.get_install_paths('package') Out[2]: {'data': '/home/vagrant/opt/pyenv', 'headers': '/home/vagrant/opt/pyenv/include/site/python2.7/package', 'platlib': '/home/vagrant/opt/pyenv/lib64/python2.7/site-packages', 'purelib': '/home/vagrant/opt/pyenv/lib/python2.7/site-packages', 'scripts': '/home/vagrant/opt/pyenv/bin'} pip's mapping of paths could differ. It would be wonderful if you are able to figure out how to actually convince distutils/setuptools to put the right files or tree of files into 'headers'. On Wed, May 25, 2016 at 1:24 PM Sylvain Corlay wrote: > Hello everyone, > > This is my first post here so, apologies if I am breaking any rules. > > Lately, I have been filing a few bug reports and patches to distutils on > bugs.python.org that all concern the installation and build of C++ > extensions. > > *1) The distutils.ccompiler has_flag method.* > (http://bugs.python.org/issue26689) > > When building C++ extension modules that require a certain compiler flag > (such as enabling C++11 features), you may want to check if the compiler > has such a flag available. > > I proposed a patch adding a `has_flag` method to ccompiler. It is an > equivalent to cmake' s CHECK_CXX_COMPILER_FLAG. > > The implementation is similar to the one of has_function which by the way > has a pending patch by minrk in issue (http://bugs.python.org/issue25544). > > *2) On the need for something like pip.locations.distutils_scheme in > distutils* > (http://bugs.python.org/issue26955) > > When installing a python package that has a directive for the install_headers > distutils command, these headers are usually installed under the main > python include directory, which can be retrieved with > sysconfig.get_path('include') or directly referred to as the 'include' string > when setting the include directories of an extension module. > > However, on some systems like OS X, headers for extension modules are not > located in under the python include directory > > > /usr/local/Cellar/pythonX/X.Y.Z/Frameworks/Python.framework/Versions/X.Y/include/pythonX.Y > > but in > > /usr/local/include/pythonX.Y. > > Is there a generic way to find the location where headers are installed in > a python install? pip.locations has a distutils_scheme function which > seems to be returning the right thing, but it seems to be a bit overkill to > require pip. On the other side, no path returned by sysconfig corresponds > to `/usr/local/include/pythonX.Y`. 
> > *3) On allowing `install_headers` command to follow a specific directory > structure* > (http://bugs.python.org/issue27123) > > (instead of making a flat copy of the specified list of headers) > > Unlike wheel's data files, which can be specified as a list of tuples > including the target directories > > [(target_directory, [list of files for target directory])] > > the `headers` setup keyword argument only let's you specify a list of > files that will be copied over to a sub-directory of > sysconfig.get_path('include'). > > It would be useful to enable the same feature for headers as we have for > data files. > > If this is a feature that you would also like to see, I can propose a > patch to the install_headers command. > > Thanks, > > Syllvain > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From randy at thesyrings.us Wed May 25 14:03:55 2016 From: randy at thesyrings.us (Randy Syring) Date: Wed, 25 May 2016 14:03:55 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: <5745E90B.5060501@thesyrings.us> On 05/25/2016 11:48 AM, Paul Moore wrote: > I don't know about Nick, but your tone is certainly annoying to me. I > do this work in my (very limited!) spare time, and in most cases, what > I do is of no benefit to me personally, but is purely to help others > who raise issues. I wonder if this is an issue of English being Thomas' second language. By his email address, he appears to reside in Germany. So maybe something has been lost in translation? *Randy Syring* Husband | Father | Redeemed Sinner /"For what does it profit a man to gain the whole world and forfeit his soul?" (Mark 8:36 ESV)/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed May 25 14:08:19 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 25 May 2016 19:08:19 +0100 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <5745E90B.5060501@thesyrings.us> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <5745E90B.5060501@thesyrings.us> Message-ID: On 25 May 2016 at 19:03, Randy Syring wrote: > On 05/25/2016 11:48 AM, Paul Moore wrote: > >> I don't know about Nick, but your tone is certainly annoying to me. I >> do this work in my (very limited!) spare time, and in most cases, what >> I do is of no benefit to me personally, but is purely to help others >> who raise issues. > > I wonder if this is an issue of English being Thomas' second language. By > his email address, he appears to reside in Germany. So maybe something has > been lost in translation? I've certainly been happy to assume that it is, for the most part. However, I do believe that there comes a point where even if there are language issues, it's important to let people know how their comments are coming across. I hope Thomas took my post in that vein. 
Paul From tim at tim-smith.us Wed May 25 14:35:29 2016 From: tim at tim-smith.us (Tim Smith) Date: Wed, 25 May 2016 11:35:29 -0700 Subject: [Distutils] Distutils improvements regarding header installation and building C extension modules Message-ID: On Wed, May 25, 2016 at 1:24 PM Sylvain Corlay wrote: > *2) On the need for something like pip.locations.distutils_scheme in > distutils* > (http://bugs.python.org/issue26955) > > When installing a python package that has a directive for the install_headers > distutils command, these headers are usually installed under the main > python include directory, which can be retrieved with > sysconfig.get_path('include') or directly referred to as the 'include' string > when setting the include directories of an extension module. > > However, on some systems like OS X, headers for extension modules are not > located in under the python include directory > > /usr/local/Cellar/pythonX/X.Y.Z/Frameworks/Python.framework/Versions/X.Y/include/pythonX.Y > > but in > > /usr/local/include/pythonX.Y. > > Is there a generic way to find the location where headers are installed in > a python install? pip.locations has a distutils_scheme function which seems > to be returning the right thing, but it seems to be a bit overkill to > require pip. On the other side, no path returned by sysconfig corresponds > to `/usr/local/include/pythonX.Y`. As a Homebrew maintainer this sounds like something that Homebrew could influence. Are there any packages in the wild that use this mechanism? It seems that headers are mostly installed beneath site-packages. I don't have strong feelings about whether Homebrew should have better support for install_headers or whether that would be straightforward to implement but IIRC we've had no prior reports of this causing trouble. Thanks, Tim From noah at coderanger.net Wed May 25 14:19:39 2016 From: noah at coderanger.net (Noah Kantrowitz) Date: Wed, 25 May 2016 11:19:39 -0700 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <5745C32B.6030005@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <57455ADC.5010608@nextday.fi> <5745ABB8.6090906@thomas-guettler.de> <5745C32B.6030005@thomas-guettler.de> Message-ID: > On May 25, 2016, at 8:22 AM, Thomas G?ttler wrote: > > > > Am 25.05.2016 um 15:55 schrieb Paul Moore: >> On 25 May 2016 at 14:42, Thomas G?ttler wrote: >>> Am 25.05.2016 um 09:57 schrieb Alex Gr?nholm: >>>> >>>> Amen to that, but who will pay for it? I imagine a great deal of >>>> processing power would be required for this. >>>> How do implementors of other languages handle this? >>> >>> >>> I talked with someone who is member of the python software foundation, and >>> he said that >>> money for projects like this is available. Of course this was no official >>> statement. >> >> The other aspect of this is who has sufficient time/expertise to set >> something like this up? Are you volunteering to do this? > > I am volunteering for doing coordination work: > - communication > - layout of datastructures > - interchange of datastructures. > - no coding > > But we need at least ten people how say "I'm willing to help" Short answer: this is not how anything works. Long answer: This is not a question of getting some number of people to help. If you can clone us a small army of Donalds, Nicks, and Richards then we _might_ be able to pull this off. The money isn't the problem per se, it's the human cost in upkeep for a system designed explicitly to run hostile code safely. 
--Noah -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: Message signed with OpenPGP using GPGMail URL: From sylvain.corlay at gmail.com Wed May 25 15:07:07 2016 From: sylvain.corlay at gmail.com (Sylvain Corlay) Date: Wed, 25 May 2016 21:07:07 +0200 Subject: [Distutils] Distutils improvements regarding header installation and building C extension modules In-Reply-To: References: Message-ID: On Wed, May 25, 2016 at 8:35 PM, Tim Smith wrote: > > As a Homebrew maintainer this sounds like something that Homebrew > could influence. Are there any packages in the wild that use this > mechanism? It seems that headers are mostly installed beneath > site-packages. I don't have strong feelings about whether Homebrew > should have better support for install_headers or whether that would > be straightforward to implement but IIRC we've had no prior reports of > this causing trouble. > > Thanks, > Tim Thanks Tim, The OS X python install is the only one I know of where headers installed with the install_headers command are not placed in a subdirectory of `sys.config('include')`. Besides, `pip.locations.distutils_scheme` returns the right include directory even in the case of homebrew. My point here was that this pip.locations function should probably be a feature of distutils itself. Although I would probably not have discovered the need for it if homebrew was placing extra headers in the same place as everyone else ! I don't think that using the package_data to place headers under site-package is the right thing to do in general. Then you need to rely on some python function to return the include directory... Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Wed May 25 20:22:13 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 26 May 2016 00:22:13 +0000 Subject: [Distutils] PEP for specifying build dependencies In-Reply-To: <-6193373753335887999@unknownmsgid> References: <8AD82792-0B53-4C37-AA24-9C79A75EE509@stufft.io> <-6193373753335887999@unknownmsgid> Message-ID: Thanks for working on this PEP. I've updated my setup-requires implementation to use the toml file instead of setup.cfg. It should match the PEP, give it a try! https://bitbucket.org/dholth/setup-requires Do you have a favorite toml implementation? On Sat, May 14, 2016 at 6:33 PM Chris Barker - NOAA Federal < chris.barker at noaa.gov> wrote: > > When the upstream installation process is instead broken up into > > "build a binary artifact" and "install a binary artifact", that brings > > a few benefits: > > Great -- thanks for the detailed explanation. Sounds like a good plan, > then. > > -CHB > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 26 01:57:25 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 26 May 2016 15:57:25 +1000 Subject: [Distutils] If you want wheel to be successful, provide a build server. 
In-Reply-To: <5745C0B9.4090107@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: On 26 May 2016 at 01:11, Thomas G?ttler wrote: > > > Am 25.05.2016 um 15:52 schrieb Nick Coghlan: >> >> On 25 May 2016 at 17:13, Thomas G?ttler >> wrote: >>> >>> If you want wheel to be successful, **provide a build server**. >> >> >> Thomas, aside from that statement being demonstrably untrue (since the >> wheel format has already proven to be wildly successful, even with >> developers coping with Linux ABI fragmentation), an attitude of "give >> me more free stuff, or your project will fail" is not an acceptable >> tone to adopt on this list. >> >> The contributors here are (mainly) volunteers working on >> infrastructure provided by a public interest charity, not your >> personal servants to be ordered about as you feel inclined. > > You seem to be angry. Why? Thomas, I am pointing out that your current exhibition of entitled behaviour across multiple venues (as represented by the specific sentence I quoted) is problematic. Please stop trying to crack the whip and generate an artificial sense of urgency - software publication and distribution is a complex problem, and most of the current challenges in the PyPI ecosystem stem from an ongoing lack of funding which requires organisational change moreso than technical change. While various folks are working on that, it's mostly not a distutils-sig level problem, but rather a question for the PSF and for commercial Python redistributors. If you're looking to provide information, or ask if a particular solution that seems obvious to you would be feasible in practice, then do that. The one thing I am asking you to STOP doing is combining your questions with exaggerated hyperbole that's designed to make volunteers feel bad. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From matthew.brett at gmail.com Thu May 26 14:20:10 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 26 May 2016 14:20:10 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: Hi, On Thu, May 26, 2016 at 1:57 AM, Nick Coghlan wrote: > On 26 May 2016 at 01:11, Thomas G?ttler wrote: >> >> >> Am 25.05.2016 um 15:52 schrieb Nick Coghlan: >>> >>> On 25 May 2016 at 17:13, Thomas G?ttler >>> wrote: >>>> >>>> If you want wheel to be successful, **provide a build server**. >>> >>> >>> Thomas, aside from that statement being demonstrably untrue (since the >>> wheel format has already proven to be wildly successful, even with >>> developers coping with Linux ABI fragmentation), an attitude of "give >>> me more free stuff, or your project will fail" is not an acceptable >>> tone to adopt on this list. >>> >>> The contributors here are (mainly) volunteers working on >>> infrastructure provided by a public interest charity, not your >>> personal servants to be ordered about as you feel inclined. >> >> You seem to be angry. Why? > > Thomas, I am pointing out that your current exhibition of entitled > behaviour across multiple venues (as represented by the specific > sentence I quoted) is problematic. 
Please stop trying to crack the > whip and generate an artificial sense of urgency - software > publication and distribution is a complex problem, and most of the > current challenges in the PyPI ecosystem stem from an ongoing lack of > funding which requires organisational change moreso than technical > change. While various folks are working on that, it's mostly not a > distutils-sig level problem, but rather a question for the PSF and for > commercial Python redistributors. > > If you're looking to provide information, or ask if a particular > solution that seems obvious to you would be feasible in practice, then > do that. The one thing I am asking you to STOP doing is combining your > questions with exaggerated hyperbole that's designed to make > volunteers feel bad. I just wanted to make sure that we didn't lose out on starting a discussion of this problem. The problem is of course caused by the runaway success of the wheel format, and I'm sure it can be solved in a sensible way, but however expressed, it's true that wheels have become so standard that we do need to think about automation for build and release, if we aren't going run into trouble. By trouble, I mean that users will often hit the situation where they don't get wheels when they expect to, and get turned off pypi / wheels as a result. I have personally put a great deal of work into building and releasing wheels, so that is something I'd really like to avoid. So - can I humbly ask - what is the best way to get that discussion going? Cheers, Matthew From dholth at gmail.com Thu May 26 14:28:26 2016 From: dholth at gmail.com (Daniel Holth) Date: Thu, 26 May 2016 18:28:26 +0000 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: Maybe there could be a way to say "the most recent release that has a wheel for my platform". That would help with the problem of binaries not being available concurrently with a new source distribution. On Thu, May 26, 2016 at 2:21 PM Matthew Brett wrote: > Hi, > > On Thu, May 26, 2016 at 1:57 AM, Nick Coghlan wrote: > > On 26 May 2016 at 01:11, Thomas G?ttler > wrote: > >> > >> > >> Am 25.05.2016 um 15:52 schrieb Nick Coghlan: > >>> > >>> On 25 May 2016 at 17:13, Thomas G?ttler > >>> wrote: > >>>> > >>>> If you want wheel to be successful, **provide a build server**. > >>> > >>> > >>> Thomas, aside from that statement being demonstrably untrue (since the > >>> wheel format has already proven to be wildly successful, even with > >>> developers coping with Linux ABI fragmentation), an attitude of "give > >>> me more free stuff, or your project will fail" is not an acceptable > >>> tone to adopt on this list. > >>> > >>> The contributors here are (mainly) volunteers working on > >>> infrastructure provided by a public interest charity, not your > >>> personal servants to be ordered about as you feel inclined. > >> > >> You seem to be angry. Why? > > > > Thomas, I am pointing out that your current exhibition of entitled > > behaviour across multiple venues (as represented by the specific > > sentence I quoted) is problematic. Please stop trying to crack the > > whip and generate an artificial sense of urgency - software > > publication and distribution is a complex problem, and most of the > > current challenges in the PyPI ecosystem stem from an ongoing lack of > > funding which requires organisational change moreso than technical > > change. 
While various folks are working on that, it's mostly not a > > distutils-sig level problem, but rather a question for the PSF and for > > commercial Python redistributors. > > > > If you're looking to provide information, or ask if a particular > > solution that seems obvious to you would be feasible in practice, then > > do that. The one thing I am asking you to STOP doing is combining your > > questions with exaggerated hyperbole that's designed to make > > volunteers feel bad. > > I just wanted to make sure that we didn't lose out on starting a > discussion of this problem. > > The problem is of course caused by the runaway success of the wheel > format, and I'm sure it can be solved in a sensible way, but however > expressed, it's true that wheels have become so standard that we do > need to think about automation for build and release, if we aren't > going run into trouble. By trouble, I mean that users will often hit > the situation where they don't get wheels when they expect to, and get > turned off pypi / wheels as a result. I have personally put a great > deal of work into building and releasing wheels, so that is something > I'd really like to avoid. > > So - can I humbly ask - what is the best way to get that discussion going? > > Cheers, > > Matthew From donald at stufft.io Thu May 26 14:36:12 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 26 May 2016 14:36:12 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: <0F8B7F7F-F15C-4A74-90DE-8A650FAFD5A6@stufft.io> > On May 26, 2016, at 2:28 PM, Daniel Holth wrote: > > Maybe there could be a way to say "the most recent release that has a wheel for my platform". That would help with the problem of binaries not being available concurrently with a new source distribution. pip install --binary-only :all: ... -- Donald Stufft From donald at stufft.io Thu May 26 14:36:50 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 26 May 2016 14:36:50 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <0F8B7F7F-F15C-4A74-90DE-8A650FAFD5A6@stufft.io> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <0F8B7F7F-F15C-4A74-90DE-8A650FAFD5A6@stufft.io> Message-ID: > On May 26, 2016, at 2:36 PM, Donald Stufft wrote: > > >> On May 26, 2016, at 2:28 PM, Daniel Holth > wrote: >> >> Maybe there could be a way to say "the most recent release that has a wheel for my platform". That would help with the problem of binaries not being available concurrently with a new source distribution. > > > > pip install --binary-only :all: ... > Sorry, it's --only-binary.
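Spelled out, with placeholder project names, that's roughly:

    # never fall back to building from an sdist, for any requirement
    pip install --only-binary :all: numpy scipy

    # or apply the wheels-only rule just to the projects you name
    pip install --only-binary numpy,scipy numpy scipy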
-- Donald Stufft From matthew.brett at gmail.com Thu May 26 14:41:22 2016 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 26 May 2016 14:41:22 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: On Thu, May 26, 2016 at 2:28 PM, Daniel Holth wrote: > Maybe there could be a way to say "the most recent release that has a wheel > for my platform". That would help with the problem of binaries not being > available concurrently with a new source distribution. Yes, that would certainly help get over some of the immediate problems. Sorry for my ignorance - but does ``--only-binary`` search for an earlier release with a binary or just bomb out if the latest release does not have a binary? It would also be good to have a flag to say "if this is pure Python go ahead and use the source, otherwise error". Otherwise I guess we'd have to rely on everyone with a pure Python package generating wheels. It would be very good to work out a plan for new Python releases as well. We really need to get wheels up to pypi a fair while before the release date, and it's easy to forget to do that, because, at the moment, we don't have much testing infrastructure to make sure that a range of wheel installs are working OK. Cheers, Matthew From donald at stufft.io Thu May 26 14:47:47 2016 From: donald at stufft.io (Donald Stufft) Date: Thu, 26 May 2016 14:47:47 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> > On May 26, 2016, at 2:41 PM, Matthew Brett wrote: > > On Thu, May 26, 2016 at 2:28 PM, Daniel Holth wrote: >> Maybe there could be a way to say "the most recent release that has a wheel >> for my platform". That would help with the problem of binaries not being >> available concurrently with a new source distribution. > > Yes, that would certainly help get over some of the immediate problems. > > Sorry for my ignorance - but does ``--only-binary`` search for an > earlier release with a binary or just bomb out if the latest release > does not have a binary? It would also be good to have a flag to say > "if this is pure Python go ahead and use the source, otherwise error". > Otherwise I guess we'd have to rely on everyone with a pure Python > package generating wheels. I believe it would find the latest version that has a wheel available, I could be misremembering though. > > It would be very good to work out a plan for new Python releases as > well. We really need to get wheels up to pypi a fair while before the > release date, and it's easy to forget to do that, because, at the > moment, we don't have much testing infrastructure to make sure that a > range of wheel installs are working OK. > I want to get something setup that would allow people to only need to upload a source release to PyPI and then have wheels automatically built for them (but not mandate that- Projects that wish it should always be able to control their wheel generation). I don't know what that would specifically look like, if someone is motivated to work on it I'm happy to help figure out what it should look like and provide guidance where I can, otherwise it'll wait until I get around to it. -- Donald Stufft From rmcgibbo at gmail.com Thu May 26 15:32:59 2016 From: rmcgibbo at gmail.com (Robert T. McGibbon) Date: Thu, 26 May 2016 15:32:59 -0400 Subject: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: > I want to get something setup that would allow people to only need to upload > a source release to PyPI and then have wheels automatically built for them > (but not mandate that- Projects that wish it should always be able to control > their wheel generation). I don't know what that would specifically look > like, if someone is motivated to work on it I'm happy to help figure out what > it should look like and provide guidance where I can, otherwise it'll wait > until I get around to it. One first step towards this that's a natural follow-on to the manylinux work might be to define an overall build configuration file / format and process for automating the whole wheel build cycle (I'm thinking of something modeled after conda-build) that would, among other things for potentially multiple versions of python: - run `pip wheel` (or setup.py bdist_wheel) to compile the wheel - run `auditwheel` (linux) or `delocate` (osx) to bundle any external libraries -Robert On Thu, May 26, 2016 at 2:47 PM, Donald Stufft wrote: > > > On May 26, 2016, at 2:41 PM, Matthew Brett > wrote: > > > > On Thu, May 26, 2016 at 2:28 PM, Daniel Holth wrote: > >> Maybe there could be a way to say "the most recent release that has a > wheel > >> for my platform". That would help with the problem of binaries not being > >> available concurrently with a new source distribution. > > > > Yes, that would certainly help get over some of the immediate problems. > > > > Sorry for my ignorance - but does ``--only-binary`` search for an > > earlier release with a binary or just bomb out if the latest release > > does not have a binary? It would also be good to have a flag to say > > "if this is pure Python go ahead and use the source, otherwise error". > > Otherwise I guess we'd have to rely on everyone with a pure Python > > package generating wheels. > > I believe it would find the latest version that has a wheel available, > I could be misremembering though. > > > > > It would be very good to work out a plan for new Python releases as > > well. We really need to get wheels up to pypi a fair while before the > > release date, and it's easy to forget to do that, because, at the > > moment, we don't have much testing infrastructure to make sure that a > > range of wheel installs are working OK. > > > > I want to get something setup that would allow people to only need to > upload > a source release to PyPI and then have wheels automatically built for them > (but not mandate that- Projects that wish it should always be able to > control > their wheel generation). I don't know what that would specifically look > like, if someone is motivated to work on it I'm happy to help figure out > what > it should look like and provide guidance where I can, otherwise it'll wait > until I get around to it. > > -- > Donald Stufft > -- -Robert From p.f.moore at gmail.com Thu May 26 15:49:32 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 26 May 2016 20:49:32 +0100 Subject: [Distutils] If you want wheel to be successful, provide a build server.
In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: On 26 May 2016 at 19:20, Matthew Brett wrote: > I just wanted to make sure that we didn't lose out on starting a > discussion of this problem. > > The problem is of course caused by the runaway success of the wheel > format, and I'm sure it can be solved in a sensible way, but however > expressed, it's true that wheels have become so standard that we do > need to think about automation for build and release, if we aren't > going run into trouble. By trouble, I mean that users will often hit > the situation where they don't get wheels when they expect to, and get > turned off pypi / wheels as a result. I have personally put a great > deal of work into building and releasing wheels, so that is something > I'd really like to avoid. > > So - can I humbly ask - what is the best way to get that discussion going? I would certainly like to promote projects using existing CI systems for building wheels. It seems like a pretty simple approach that would cover a decent proportion of the problem. There's already a section in the Packaging User Guide explaining how to use Appveyor to build Windows binary wheels. If you don't have dependencies to deal with (e.g., you're simply providing a C speedup module) it's pretty simple. If you *do* have dependencies to deal with, I suspect you could arrange for them to be uploaded when you build somehow without too much issue - but I don't have projects like that so I can't really add that to the document. The biggest problem with Appveyor is that it's less well known than Travis. However, I don't know of many (frankly, any!) projects using this approach - whether that's because of a lack of awareness, or because it's not as useful as it seemed to me when I wrote it, or some other reason, I don't honestly know. Maybe we could get some feedback on what can be done to improve that section? Is there going to be any sort of packaging discussions at PyCon? Maybe it's a question that could be asked? (I'm not going to be there myself, unfortunately). For Linux/Unix, the problem was always the plethora of binary formats. If we can assume that manylinux builds will constitute a 90% solution here, maybe someone could contribute a section explaining how to add the building of manylinux wheels into a project's Travis config. I know almost nothing about OSX, but doesn't Travis offer OSX builders? Could they be used to build extensions in the same way as the above? Going beyond this level (which is basically making sure projects know they have easy access to build environments for the key platforms, and encouraging them to use them) is when it gets harder. Integrating artifacts from travis/appveyor into a release process would need some work, for example. Maybe a way forward here would be for some projects to try doing so, and writing up a set of "how we did it" notes that could over time turn into a set of recommended practices in the PUG. Or an interested 3rd party could probably do most of the experimentation and development, simply by forking a few typical projects and seeing what was needed. With all of the above, we'd end up with some (hopefully!) pretty good documentation of how to (relatively) painlessly add building of wheels for all major platforms into *any* project (that didn't have specialised issues to deal with). I'm not sure where we'd go from there. 
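(For the simple cases, the recipe those documents would capture is only a handful of commands per CI job - roughly the sketch below; the repair step only matters when external shared libraries need bundling, and for manylinux it would run inside the manylinux docker image, which I'm glossing over here:)

    # build a wheel with whichever Python the CI job provides
    pip install --upgrade pip wheel
    python setup.py bdist_wheel

    # Linux: pull external shared libraries into the wheel and retag it
    for whl in dist/*.whl; do auditwheel repair "$whl" -w wheelhouse/; done
    # OSX equivalent: delocate-wheel -w wheelhouse/ dist/*.whl
    # (on Windows there is no repair step - upload dist/*.whl directly)

    # upload once the test suite has passed against the built wheel
    pip install twine
    twine upload wheelhouse/*.whl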
The next step seems to be to actually provide some sort of build service, or curated set of wheels for projects that don't/won't/can't produce their own. But that's a lot of work - the conda folks, and people like Christoph Gohlke, know how much. Maybe by making the process simple enough (as described above) might encourage someone to put together a wheel service along the lines of Christoph's (but for all platforms, and in a format that pip can access directly). Once something like that starts up, it may attract additional volunteers, and grow from there. But it's still quite a commitment from someone. Paul From ncoghlan at gmail.com Thu May 26 20:40:38 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 27 May 2016 10:40:38 +1000 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: On 27 May 2016 04:48, "Donald Stufft" wrote: > > On May 26, 2016, at 2:41 PM, Matthew Brett wrote: > > It would be very good to work out a plan for new Python releases as > > well. We really need to get wheels up to pypi a fair while before the > > release date, and it's easy to forget to do that, because, at the > > moment, we don't have much testing infrastructure to make sure that a > > range of wheel installs are working OK. > > I want to get something setup that would allow people to only need to upload > a source release to PyPI and then have wheels automatically built for them > (but not mandate that- Projects that wish it should always be able to control > their wheel generation). A possible preceding step for that might be to create a service that reports per-project information on clients downloading the sdist versions of a project. With the Big Query data publicly available, that shouldn't need to be part of PyPI itself (at least in the near term), so it should just need an interested volunteer, rather than being gated on Warehouse. The intent there would be to allow projects that decide to build their own wheels to prioritise their wheel creation, and be able to quantify the impact of each addition to their build matrix, rather than necessarily providing pre-built binaries for all supported platforms supported by the source release. The other thing that data could be used for is to start quantifying the throughput requirements for an "all of PyPI" build service by looking at release publication rates (rather than download rates). Again, likely pursuable by volunteers using the free tier of a public PaaS, rather than requiring ongoing investment. Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 26 22:02:28 2016 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 27 May 2016 12:02:28 +1000 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: On 27 May 2016 10:40, "Nick Coghlan" wrote: > > > On 27 May 2016 04:48, "Donald Stufft" wrote: > > > On May 26, 2016, at 2:41 PM, Matthew Brett wrote: > > > It would be very good to work out a plan for new Python releases as > > > well. 
We really need to get wheels up to pypi a fair while before the > > > release date, and it's easy to forget to do that, because, at the > > > moment, we don't have much testing infrastructure to make sure that a > > > range of wheel installs are working OK. > > > > I want to get something setup that would allow people to only need to upload > > a source release to PyPI and then have wheels automatically built for them > > (but not mandate that- Projects that wish it should always be able to control > > their wheel generation). > > A possible preceding step for that might be to create a service that reports per-project information on clients downloading the sdist versions of a project. With the Big Query data publicly available, that shouldn't need to be part of PyPI itself (at least in the near term), so it should just need an interested volunteer, rather than being gated on Warehouse. It belatedly occurs to me that experimenting with that myself would align pretty well with some ideas I'm exploring for $(day job), so if nobody else pursues this, I'll likely take a look myself some time in the next few weeks (and if they do take a look, I should be able to find the time to help). Cheers, Nick. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Fri May 27 02:22:23 2016 From: wes.turner at gmail.com (Wes Turner) Date: Fri, 27 May 2016 01:22:23 -0500 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: conda-forge/conda-smithy | Src: https://github.com/conda-forge/conda-smithy | Homepage: https://conda-forge.github.io/ : "conda-forge is a github organization containing repositories of conda recipes. Thanks to some awesome continuous integration providers (AppVeyor, CircleCI and TravisCI), each repository, also known as a feedstock, automatically builds its own recipe in a clean and repeatable way on Windows, Linux and OSX" https://conda-forge.github.io/feedstocks.html so, for psutil-feedstock: https://github.com/conda-forge/psutil-feedstock - https://github.com/conda-forge/psutil-feedstock/blob/master/recipe/meta.yaml - https://github.com/conda-forge/psutil-feedstock/blob/master/appveyor.yml - https://github.com/conda-forge/psutil-feedstock/blob/master/ci_support/run_docker_build.sh - https://github.com/conda-forge/psutil-feedstock/blob/master/.travis.yml ? https://github.com/conda-forge/conda-smithy/tree/master/conda_smithy/templates On Thursday, May 26, 2016, Robert T. McGibbon wrote: > > I want to get something setup that would allow people to only need to > upload > > a source release to PyPI and then have wheels automatically built for > them > > (but not mandate that- Projects that wish it should always be able to > control > > their wheel generation). I don?t know what that would specifically look > > like, if someone is motivated to work on it I?m happy to help figure out > what > > it should look like and provide guidance where I can, otherwise it?ll > wait > > until I get around to it. 
> > One first step towards this that's a natural follow-on to the manylinux > work > might be to define a overall build configuration file / format and process > for > automating the whole wheel build cycle (i'm thinking of something modeled > after > conda-build) that would, among other things > > for potentially multiple versions of python: > - run `pip wheel` (or setu.py bdist_wheel) to compile the wheel > - run `auditwheel` (linux) or `delocate` (osx) to bundle any external > libraries > > -Robert > > On Thu, May 26, 2016 at 2:47 PM, Donald Stufft > wrote: > >> >> > On May 26, 2016, at 2:41 PM, Matthew Brett > > wrote: >> > >> > On Thu, May 26, 2016 at 2:28 PM, Daniel Holth > > wrote: >> >> Maybe there could be a way to say "the most recent release that has a >> wheel >> >> for my platform". That would help with the problem of binaries not >> being >> >> available concurrently with a new source distribution. >> > >> > Yes, that would certainly help get over some of the immediate problems. >> > >> > Sorry for my ignorance - but does ``--only-binary`` search for an >> > earlier release with a binary or just bomb out if the latest release >> > does not have a binary? It would also be good to have a flag to say >> > "if this is pure Python go ahead and use the source, otherwise error". >> > Otherwise I guess we'd have to rely on everyone with a pure Python >> > package generating wheels. >> >> I believe it would find the latest version that has a wheel available, >> I could be misremembering though. >> >> > >> > It would be very good to work out a plan for new Python releases as >> > well. We really need to get wheels up to pypi a fair while before the >> > release date, and it's easy to forget to do that, because, at the >> > moment, we don't have much testing infrastructure to make sure that a >> > range of wheel installs are working OK. >> > >> >> I want to get something setup that would allow people to only need to >> upload >> a source release to PyPI and then have wheels automatically built for them >> (but not mandate that- Projects that wish it should always be able to >> control >> their wheel generation). I don?t know what that would specifically look >> like, if someone is motivated to work on it I?m happy to help figure out >> what >> it should look like and provide guidance where I can, otherwise it?ll wait >> until I get around to it. >> >> ? >> Donald Stufft >> >> >> >> _______________________________________________ >> Distutils-SIG maillist - Distutils-SIG at python.org >> >> https://mail.python.org/mailman/listinfo/distutils-sig >> > > > > -- > -Robert > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guettliml at thomas-guettler.de Fri May 27 07:25:51 2016 From: guettliml at thomas-guettler.de (=?UTF-8?Q?Thomas_G=c3=bcttler?=) Date: Fri, 27 May 2016 13:25:51 +0200 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: <57482EBF.7010601@thomas-guettler.de> Am 26.05.2016 um 20:28 schrieb Daniel Holth: > Maybe there could be a way to say "the most recent release that has a wheel for my platform". That would help with the > problem of binaries not being available concurrently with a new source distribution. I don't get what you want to say. If you are maintainer, then there is no "my platform". 
There is matrix: python_versions x supported_platforms. If you have optional dependencies this matrix gets a third dimension :-) Tell me if I speak nonsense, but AFAIK there are two ways to build for other platforms: 1. Cross compiling: you create code for platform B on platform A. 2. VM: create code for platform B on platform A by running a VM of platform B. I guess taking the second way is easier - but slower. Before coding a single line I would check what already exists (outside the "python world"). What do you think? Please tell me what's wrong with the above text. Regards, Thomas Güttler -- Thomas Guettler http://www.thomas-guettler.de/ From contact at ionelmc.ro Fri May 27 07:35:38 2016 From: contact at ionelmc.ro (=?UTF-8?Q?Ionel_Cristian_M=C4=83rie=C8=99?=) Date: Fri, 27 May 2016 14:35:38 +0300 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: <57482EBF.7010601@thomas-guettler.de> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: On Fri, May 27, 2016 at 2:25 PM, Thomas Güttler < guettliml at thomas-guettler.de> wrote: > I don't get what you want to say. > > If you are maintainer, then there is no "my platform". There is matrix:
There is matrix: >> > > ?Missing the context, but didn't he write that from user perspective? > > One could argue that? getting old versions just because there's a wheel > ain't the best idea. Plus users can always pin the dependency to the > "right" version (the one that has wheel for their platform). Functionally > nothing is missing, it's an argument about default behavior. > > > > > > Thanks, > -- Ionel Cristian M?rie?, http://blog.ionelmc.ro > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 27 12:22:08 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 27 May 2016 09:22:08 -0700 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <90E4DE6A-082B-45F3-8A8D-4A346F947A2E@stufft.io> Message-ID: On Thu, May 26, 2016 at 11:22 PM, Wes Turner wrote: > > "conda-forge is a github organization > containing repositories of conda recipes. > Just to be clear -- while conda-forge is about conda packages, a good deal of the CI integration stuff would apply just as well to building wheels. It would make a lot of sense to build off of what they've done. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri May 27 12:28:13 2016 From: chris.barker at noaa.gov (Chris Barker) Date: Fri, 27 May 2016 09:28:13 -0700 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: Not that this isn't an issue, but: On Fri, May 27, 2016 at 5:34 AM, Daniel Holth wrote: > So this was a problem with eggs too. Let's say ZODB 3.0.1 was just > released. You are happily using 3.0.0, the next version is a minor upgrade, > but there are no precompiled packages for 3.0.1, so your build breaks on > Friday morning when you are trying to deploy. > If you are using pip, etc. to DEPLOY, they you'd better darn be using explicit versions in your requirements.txt. Just sayin' Of course, you may not want to use explicit versions in development for CI testing, and then you run into the same issue. But having a test fail on commit is not nearly as big a deal.... If you find yourself in that situation then you would appreciate an easy > way to use the newest available binaries rather than the newest available > source, > I think the trick here is that you need "binaries" for packages with C extensions, but source is just fine for pure python. So I kind of like the idea of making wheels the default for distributing on PyPi always -- even for pure python modules. And wheels are trivial to build from pure python packages -- so why not? -CHB -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri May 27 12:37:52 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 27 May 2016 17:37:52 +0100 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: On 27 May 2016 at 17:28, Chris Barker wrote: > So I kind of like the idea of making wheels the default for distributing on > PyPi always -- even for pure python modules. And wheels are trivial to build > from pure python packages -- so why not? It would be *really* nice to have some sort of metadata/flag that said "this project is pure Python". Normally, what I want is not *quite* --only-binary, but rather "only binary except for pure Python where I'm happy to take a source distribution". But AFAIK, there's no way for pip to know that :-( Paul From donald at stufft.io Fri May 27 12:40:51 2016 From: donald at stufft.io (Donald Stufft) Date: Fri, 27 May 2016 12:40:51 -0400 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: > On May 27, 2016, at 12:37 PM, Paul Moore wrote: > > On 27 May 2016 at 17:28, Chris Barker wrote: >> So I kind of like the idea of making wheels the default for distributing on >> PyPi always -- even for pure python modules. And wheels are trivial to build >> from pure python packages -- so why not? > > It would be *really* nice to have some sort of metadata/flag that said > "this project is pure Python". Normally, what I want is not *quite* > --only-binary, but rather "only binary except for pure Python where > I'm happy to take a source distribution". But AFAIK, there's no way > for pip to know that :-( The flip side is it should be trivial for pure Python projects to release wheels, often requiring them to do nothing different except ensuring `wheel` is installed and running ``setup.py sdist bdist_wheel` instead of just `setup.py sdist`. ? Donald Stufft From alex.gronholm at nextday.fi Fri May 27 12:43:35 2016 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Fri, 27 May 2016 19:43:35 +0300 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: <57487937.20800@nextday.fi> This reminds me of something I wanted to ask ? how come the "virtualenv" tool installs wheel but the built-in "venv" tool does not? The latter is the currently recommended method of creating a virtualenv, isn't it? 27.05.2016, 19:40, Donald Stufft kirjoitti: >> On May 27, 2016, at 12:37 PM, Paul Moore wrote: >> >> On 27 May 2016 at 17:28, Chris Barker wrote: >>> So I kind of like the idea of making wheels the default for distributing on >>> PyPi always -- even for pure python modules. And wheels are trivial to build >>> from pure python packages -- so why not? >> It would be *really* nice to have some sort of metadata/flag that said >> "this project is pure Python". 
Normally, what I want is not *quite* >> --only-binary, but rather "only binary except for pure Python where >> I'm happy to take a source distribution". But AFAIK, there's no way >> for pip to know that :-( > > The flip side is it should be trivial for pure Python projects to release wheels, often requiring them to do nothing different except ensuring `wheel` is installed and running ``setup.py sdist bdist_wheel` instead of just `setup.py sdist`. > > ? > Donald Stufft > > > > _______________________________________________ > Distutils-SIG maillist - Distutils-SIG at python.org > https://mail.python.org/mailman/listinfo/distutils-sig From robin at reportlab.com Fri May 27 12:37:26 2016 From: robin at reportlab.com (Robin Becker) Date: Fri, 27 May 2016 17:37:26 +0100 Subject: [Distutils] If you want wheel to be successful, provide a build server. In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> Message-ID: <91b2a934-18a4-d553-6c67-84f68c2f4428@chamonix.reportlab.co.uk> As a lurker on this list and long time user of everything python I feel this is becoming appropriate https://xkcd.com/927/ -- Robin Becker From p.f.moore at gmail.com Fri May 27 12:46:10 2016 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 27 May 2016 17:46:10 +0100 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: On 27 May 2016 at 17:40, Donald Stufft wrote: >> On May 27, 2016, at 12:37 PM, Paul Moore wrote: >> >> On 27 May 2016 at 17:28, Chris Barker wrote: >>> So I kind of like the idea of making wheels the default for distributing on >>> PyPi always -- even for pure python modules. And wheels are trivial to build >>> from pure python packages -- so why not? >> >> It would be *really* nice to have some sort of metadata/flag that said >> "this project is pure Python". Normally, what I want is not *quite* >> --only-binary, but rather "only binary except for pure Python where >> I'm happy to take a source distribution". But AFAIK, there's no way >> for pip to know that :-( > > > The flip side is it should be trivial for pure Python projects to release wheels, often requiring them to do nothing different except ensuring `wheel` is installed and running ``setup.py sdist bdist_wheel` instead of just `setup.py sdist`. Well, yes. But it pretty much is trivial already, but there are still a reasonable number of projects that don't do so. My theories as to why: - The project hasn't done a new release yet, and doesn't want to upload a wheel *except* as part of a new release. - The project has tooling to do a release, and hasn't got round to changing it. - The project isn't aware of wheels. - The project doesn't see the value of wheels for pure-python code. Paul From donald at stufft.io Fri May 27 12:46:29 2016 From: donald at stufft.io (Donald Stufft) Date: Fri, 27 May 2016 12:46:29 -0400 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: <57487937.20800@nextday.fi> References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> <57487937.20800@nextday.fi> Message-ID: <06FB3D6B-DD1F-4D86-9A23-0CC7909E2A2F@stufft.io> > On May 27, 2016, at 12:43 PM, Alex Gr?nholm wrote: > > This reminds me of something I wanted to ask ? 
how come the "virtualenv" tool installs wheel but the built-in "venv" tool does not? The latter is the currently recommended method of creating a virtualenv, isn't it? > virtualenv, by nature of not being in the standard library, can be a bit more liberal. Once pyproject.toml support is in pip and we can automatically install wheel for wheel caching as part of the build process, I intend to remove both setuptools and wheel from virtualenv (and venv). ? Donald Stufft From dholth at gmail.com Fri May 27 13:10:34 2016 From: dholth at gmail.com (Daniel Holth) Date: Fri, 27 May 2016 17:10:34 +0000 Subject: [Distutils] matrix: python_versions x supported_plattforms, cross-compiling vs VM In-Reply-To: References: <574550B6.60607@thomas-guettler.de> <5745C0B9.4090107@thomas-guettler.de> <57482EBF.7010601@thomas-guettler.de> Message-ID: On Fri, May 27, 2016 at 12:28 PM Chris Barker wrote: > Not that this isn't an issue, but: > > On Fri, May 27, 2016 at 5:34 AM, Daniel Holth wrote: > >> So this was a problem with eggs too. Let's say ZODB 3.0.1 was just >> released. You are happily using 3.0.0, the next version is a minor upgrade, >> but there are no precompiled packages for 3.0.1, so your build breaks on >> Friday morning when you are trying to deploy. >> > > If you are using pip, etc. to DEPLOY, they you'd better darn be using > explicit versions in your requirements.txt. > > Just sayin' > Yes, it goes without saying that you can avoid these problems by some combination of best practices such as running your own devpi. But maybe we could figure out how to provide best practices by default. Of course, you may not want to use explicit versions in development for CI > testing, and then you run into the same issue. But having a test fail on > commit is not nearly as big a deal.... > > If you find yourself in that situation then you would appreciate an easy >> way to use the newest available binaries rather than the newest available >> source, >> > > I think the trick here is that you need "binaries" for packages with C > extensions, but source is just fine for pure python. > > So I kind of like the idea of making wheels the default for distributing > on PyPi always -- even for pure python modules. And wheels are trivial to > build from pure python packages -- so why not? > > -CHB > > > > > > -- > > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Tue May 31 03:33:23 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Tue, 31 May 2016 09:33:23 +0200 Subject: [Distutils] PIL install fails: that was to be expected? Message-ID: Hi, A colleague is trying to run the buildout for an *old* project that has an *old* dependency on "PIL". So not the "Pillow" replacement, but "PIL". Install fails, becauce https://pypi.python.org/simple/pil/ is empty. What I suspect is that this is because of the everything-should-be-hosted-on-pypi-itself change? Yes, it should be Pillow and yes I'm in complete agreement that everything should be on pypi and yes my colleague can fix this with a quick branch. But I just wanted to make sure that nothing major is broken right now. 
(And to make this potential problem google-able :-) ) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity" From donald at stufft.io Tue May 31 06:16:18 2016 From: donald at stufft.io (Donald Stufft) Date: Tue, 31 May 2016 06:16:18 -0400 Subject: [Distutils] PIL install fails: that was to be expected? In-Reply-To: References: Message-ID: <5F7C29BC-F22D-44C2-95D4-80F90C4F02E9@stufft.io> > On May 31, 2016, at 3:33 AM, Reinout van Rees wrote: > > What I suspect is that this is because of the everything-should-be-hosted-on-pypi-itself change? Yes. Sadly PIL fell to bitrot as PyPI progressed. ? Donald Stufft From waynejwerner at gmail.com Tue May 31 07:43:02 2016 From: waynejwerner at gmail.com (Wayne Werner) Date: Tue, 31 May 2016 06:43:02 -0500 Subject: [Distutils] PIL install fails: that was to be expected? In-Reply-To: References: Message-ID: On Tue, May 31, 2016 at 2:33 AM, Reinout van Rees wrote: > > Yes, it should be Pillow and yes I'm in complete agreement that everything > should be on pypi and yes my colleague can fix this with a quick branch. > But I just wanted to make sure that nothing major is broken right now. > (And to make this potential problem google-able :-) ) If it's possible, I'd also suggest opening a PR for the *old* project in question. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinout at vanrees.org Tue May 31 10:01:26 2016 From: reinout at vanrees.org (Reinout van Rees) Date: Tue, 31 May 2016 16:01:26 +0200 Subject: [Distutils] PIL install fails: that was to be expected? In-Reply-To: <5F7C29BC-F22D-44C2-95D4-80F90C4F02E9@stufft.io> References: <5F7C29BC-F22D-44C2-95D4-80F90C4F02E9@stufft.io> Message-ID: Op 31-05-16 om 12:16 schreef Donald Stufft: >> On May 31, 2016, at 3:33 AM, Reinout van Rees wrote: >> > >> >What I suspect is that this is because of the everything-should-be-hosted-on-pypi-itself change? > Yes. Sadly PIL fell to bitrot as PyPI progressed. Thanks, then I at least know what happened :-) (@Wayne: the old project is something internal, I've already fixed it :-) ) Reinout -- Reinout van Rees http://reinout.vanrees.org/ reinout at vanrees.org http://www.nelen-schuurmans.nl/ "Learning history by destroying artifacts is a time-honored atrocity"